2308.16525
Method for calculation of the beta exponent from the Heitler-Matthews model of hadronic air showers
The number of muons in an air shower is a strong indicator of the mass of the primary particle and increases with a small power of the cosmic ray mass by the $\beta$-exponent, $N_{\mu} \sim A^{(1-\beta)}$. This behaviour can be explained in terms of the Heitler-Matthews model of hadronic air showers. In this paper, we present a method for calculating $\beta$ from the Heitler-Matthews model. The method has been successfully verified with a series of simulated events observed by the Pierre Auger Observatory at $10^{19}$ eV. To follow real measurements of the mass composition at this energy, the generated sample consists of a certain fraction of events produced with p, He, N and Fe primaries. Since hadronic interactions at the highest energies can differ from those observed at energies reached by terrestrial accelerators, we generate a mock data set with $\beta =0.92$ (the canonical value) and $\beta =0.96$ (a more exotic scenario). The method can be applied to measured events to determine the muon signal for each primary particle as well as the muon scaling factor and the $\beta$-exponent. Determining the $\beta$-exponent can effectively constrain the parameters that govern hadronic interactions and help solve the so-called muon problem, where hadronic interaction models predict too few muons relative to observed events. In this paper, we lay the foundation for the future analysis of measured data from the Pierre Auger Observatory with a simulation study.
Kevin Almeida Cheminant, Dariusz Gora, Nataliia Borodai, Ralph Engel, Tanguy Pierog, Jan Pekala, Markus Roth, Jarosław Stasielak, Michael Unger, Darko Veberic, Henryk Wilczynski
2023-08-31T08:16:19Z
http://arxiv.org/abs/2308.16525v1
# Method for calculation of the beta exponent from the Heitler-Matthews model of hadronic air showers ###### Abstract: The number of muons in an air shower is a strong indicator of the mass of the primary particle and increases with a small power of the cosmic ray mass by the \(\beta\)-exponent, \(N_{\mu}\sim A^{(1-\beta)}\). This behaviour can be explained in terms of the Heitler-Matthews model of hadronic air showers. In this paper, we present a method for calculating \(\beta\) from the Heitler-Matthews model. The method has been successfully verified with a series of simulated events observed by the Pierre Auger Observatory at \(10^{19}\) eV. To follow real measurements of the mass composition at this energy, the generated sample consists of a certain fraction of events produced with p, He, N and Fe primaries. Since hadronic interactions at the highest energies can differ from those observed at energies reached by terrestrial accelerators, we generate a mock data set with \(\beta=0.92\) (the canonical value) and \(\beta=0.96\) (a more exotic scenario). The method can be applied to measured events to determine the muon signal for each primary particle as well as the muon scaling factor and the \(\beta\)-exponent. Determining the \(\beta\)-exponent can effectively constrain the parameters that govern hadronic interactions and help solve the so-called muon problem, where hadronic interaction models predict too few muons relative to observed events. In this paper, we lay the foundation for the future analysis of measured data from the Pierre Auger Observatory with a simulation study. ## 1 Introduction Simulations of extensive air showers using current hadronic interaction models predict too few muons compared to events observed in air-shower experiments; this is known as the muon-deficit problem. To study the muon deficit we use the top-down (TD) method [1, 2, 3, 4]. This chain of simulations and reconstructions enables us to calculate signals in the fluorescence (FD) and surface (SD) detectors of cosmic-ray experiments such as the Pierre Auger Observatory or the Telescope Array. For each observed hybrid shower 1, starting with a large number of simulated air showers with varying initial conditions, we select the one whose longitudinal profile is most similar to the profile of the observed shower (the reference profile). As a result of the simulation-reconstruction chain we obtain an event with complete information about the distributions of the signals in the detectors (including information on the specific components that contribute to these signals); these signals can then be compared with their reference counterparts. Since the results of the simulations depend on the properties of the hadronic interaction models included in the simulation software, by comparing the simulations with the corresponding observational results we should be able to verify these models at energies exceeding those available in any man-made accelerator. We expect to gain new information that will enable improvement of the interaction models and, in this way, to reduce the discrepancy between observations and simulations [1, 4]. Footnote 1: A hybrid event is seen simultaneously by the SD and FD detectors. In this note a method is proposed to calculate the \(\beta\)-exponent from the Heitler-Matthews model [5] while also taking the muon-deficit problem into account. 
The idea of the method is to find the set of muon rescaling parameters \(\epsilon_{k}\) for different primaries \(k\), which are functions of only two parameters: \(\epsilon_{\rm p}\) and \(\Delta\beta\). These two parameters indicate by how much we need to scale the proton signal (the \(\epsilon_{\rm p}\) term) and by how much to modify the \(\beta\)-exponent (\(\Delta\beta\)) in the Heitler-Matthews formula in order to match the observed numbers of muons in data and in simulations. The method requires that the first two moments of the individual so-called \(z_{k}\)-distributions (our model) and of the overall \(z\)-distribution (the measured observable) are matched. In addition, we require that the \(\epsilon_{k}\) parameters follow the Heitler-Matthews progression. The \(z_{k}\)-distribution is essentially the difference between the total signal at 1000 m of a real hybrid event and the total signal at 1000 m of the Monte Carlo (MC) dataset. In other words, the method tells us by how much each individual \(z_{k}\)-distribution must be shifted, rescaled, and then weighted and summed, in order to retrieve the \(z\)-distribution. In the TD analysis we have the _input dataset_, which consists of real or mock hybrid events, and the _matched dataset_, which is produced via CONEX/CORSIKA Monte Carlo simulations [9]. The input dataset contains \(N\) events and the events will be indexed as \(n=1,\ldots,N\). The multiple profile-matched MC events simulated with primary \(k\) corresponding to an input event \(n\) are indexed with \(i=1,\ldots,M_{nk}\) and are thus denoted with the triplet subscript \(nki\)2. Footnote 2: All \(S\) symbols refer to the signal at 1000 m from the shower core, so that the 1000 subscript can be dropped entirely. The signals at 1000 m for the input dataset will have no decorations, i.e. just \(S\), and the signals from the matched dataset will be denoted with \(\bar{S}\). ## 2 Two-parameter nonlinear scaling model Observations of air showers, and also simulations, have demonstrated that the number of muons \(N_{\rm\mu}\) grows almost linearly with the shower energy \(E\), and that it also increases with a small power of the primary mass \(A_{k}\). These relations can be reproduced in the framework of the Heitler-Matthews model of hadronic air showers [5]. This model predicts \[N_{\mu}^{A_{k}}=A_{k}\,\,\left(\frac{E/A_{k}}{\epsilon_{\rm c}^{\pi}}\right)^{\beta}\,, \tag{1}\] where \(\beta\approx 0.9\). More precisely, MC simulations yield \(\beta^{\rm mc}=0.927\pm 0.002\) for Epos-LHC and \(\beta^{\rm mc}=0.925\pm 0.002\) for QGSJetII-04 [10]. For any fixed energy, Eq. (1) describes how the muon number depends on the primary mass: \(N_{\mu}^{A_{k}}=N_{\mu}^{\rm p}\,A_{k}^{1-\beta}\).3 Simulations have shown that the muon number depends on various properties of hadronic interactions (e.g. multiplicity, charge ratio, baryon/anti-baryon pair production) [12]. Therefore, estimating the \(\beta\)-exponent from data would be helpful in constraining the parameters of hadronic interactions and improving the accuracy of models. On the other hand, results obtained from the Pierre Auger Observatory and other leading cosmic ray experiments indicate that simulations using LHC-tuned hadronic interaction models underestimate the number of muons in extensive air showers compared to experimental data. To account for this effect, we can formulate a scaling ansatz based on Eq. 
(1): Footnote 3: \(N_{\mu}^{\rm p}\) is the number of muons in a proton shower; \(\epsilon_{\rm c}^{\pi}\) is the critical energy at which pions decay into muons. \[N_{\mu}^{A_{k}}=\bar{N}_{\mu}^{\rm p}\,A_{k}^{1-\beta}\,e^{\varepsilon_{\rm p}}\,A_{k}^{-\Delta\beta}, \tag{2}\] where the scaling factor can be defined as \(r_{\mu,k}:=1+\varepsilon_{k}:=e^{\varepsilon_{\rm p}}\,A_{k}^{-\Delta\beta}=\exp(\varepsilon_{\rm p}-\Delta\beta\ln\,A_{k})\). Thus, having the MC value of \(\beta^{\rm mc}\) for the hadronic interaction model and the value of the parameter \(\Delta\beta\), we can calculate the \(\beta\)-exponent from \(\beta=\beta^{\rm mc}+\Delta\beta\). In the context of this work, this corresponds to saying that the number of muons \(N_{\mu}^{A_{k}}\) in the input dataset is proportional to the muon number \(\bar{N}_{\mu}^{A_{k}}\) in the matched dataset, with the usual Heitler-Matthews progression with mass \(A_{k}\), but with a slight scaling \(1+\varepsilon_{\rm p}\) and modification \(\Delta\beta\). In this work, the input dataset (the mock dataset) is constructed from Epos-LHC simulations, taken from the TD simulation chain obtained with Epos-LHC around \(10^{18}\) eV. The matched dataset is a dataset from QGSJetII-04 simulations. Details regarding these two datasets can also be found in Ref. [4]. Since these simulations were performed for p, He, N, and Fe primaries for both Epos-LHC and QGSJetII-04, we can plot the evolution of the average muon signal as a function of the primary mass for both hadronic models, as shown in Fig. 1. Since QGSJetII-04 has, on average, fewer muons than Epos-LHC, one can imagine that the muon problem can be recreated by comparing the two hadronic models. Therefore, in this work we try to figure out the best way to rescale QGSJetII-04 in order to match the muon signal of the mock dataset built with Epos-LHC. From Fig. 1 we can also see that the average muon signal increases as a function of the primary mass. As expected, both considered hadronic models display a similar ratio, with an average of about \(r_{\rm true}^{\rm mc}=\bar{S}_{\rm epos}^{\mu}/\bar{S}_{\rm qgsjet}^{\mu}=1.10\pm 0.04\); upon closer examination we also see that a larger rescaling is needed for protons (\(1.12\pm 0.03\)) than for iron (\(1.08\pm 0.03\)). In Fig. 1 we also show the linear fit to the MC muon signal from Epos-LHC and QGSJetII-04, motivated by the Heitler-Matthews model. The value of \(\beta^{\rm mc}\) calculated from the fit is about 0.92, which is close to the values from Ref. [10]. This cross-check of the \(\beta\) calculation validates our TD simulations. 
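To make the scaling model of Eqs. (1) and (2) concrete, a minimal Python sketch is given below; the values chosen for \(\varepsilon_{\rm p}\), \(\Delta\beta\) and the pion critical energy are illustrative placeholders and not results of the analysis.

```python
import numpy as np

# Minimal sketch of the Heitler-Matthews muon number, Eq. (1), and of the
# two-parameter rescaling factor r_mu,k of Eq. (2). All numbers below are
# illustrative placeholders.
BETA_MC = 0.92                      # beta from the hadronic-interaction MC
MASSES = {"p": 1, "He": 4, "N": 14, "Fe": 56}

def muon_scaling(A, eps_p, delta_beta):
    """r_mu,k = 1 + eps_k = exp(eps_p - delta_beta * ln A_k)."""
    return np.exp(eps_p - delta_beta * np.log(A))

def heitler_matthews_nmu(E_eV, A, beta=BETA_MC, eps_c_pi_eV=2.0e10):
    """N_mu^A = A * ((E/A) / eps_c_pi)**beta, Eq. (1); eps_c_pi ~ 20 GeV assumed."""
    return A * ((E_eV / A) / eps_c_pi_eV) ** beta

eps_p, delta_beta = 0.10, 0.0        # placeholder scaling parameters
beta = BETA_MC + delta_beta          # beta recovered as beta_mc + delta_beta
for name, A in MASSES.items():
    r = muon_scaling(A, eps_p, delta_beta)
    n_mu = heitler_matthews_nmu(1.0e19, A, beta)
    print(f"{name:2s}: A={A:3d}  r_mu={r:.3f}  N_mu(1e19 eV)={n_mu:.3e}")
```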
## 3 Fitting the \(z\)-histogram The mean signal \(\langle S\rangle\) of the input dataset is the sum of the mean electromagnetic (em) and muonic components \[\langle S\rangle=\sum_{k}f_{k}\ \langle S\rangle_{k}=\sum_{k}f_{k}\ (\langle S^{\rm em}\rangle_{k}+\langle S^{\mu}\rangle_{k})=\langle S^{\rm em}\rangle+\langle S^{\mu}\rangle, \tag{3}\] where \(\langle\cdot\rangle_{k}\) denotes a mean within a given primary class \(k\). Note that for the input dataset the averages for a given \(k\) are not really observable, but a sum over the composition fractions \(f_{k}\) gives the average over the whole input dataset, a quantity which is fully available. Equivalently, for the mean signal \(\langle\bar{S}\rangle\) in the matched dataset, where the quantities are known for the various primary groups \(k\), we can explicitly write \[\langle\bar{S}\rangle=\sum_{k}f_{k}\ \langle\bar{S}\rangle_{k}=\sum_{k}f_{k}\ \bigl(\langle\bar{S}^{\rm em}\rangle_{k}+\langle\bar{S}^{\mu}\rangle_{k}\bigr)=\langle\bar{S}^{\rm em}\rangle+\langle\bar{S}^{\mu}\rangle, \tag{4}\] where \(\langle\bar{S}\rangle_{k}:=(\sum_{n}^{N}\sum_{i}^{M_{nk}}\bar{S}_{nki})/\sum_{n}^{N}M_{nk}\) is the signal \(\bar{S}_{nki}\) of the matched dataset averaged over all \(n\) and \(i\) for a given \(k\). Figure 2: The \(z_{k}\)-distributions for stations at 1000 m from the shower core, from TD simulations at energy \(10^{19}\) eV for proton (left) and iron (right) induced air showers simulated with Epos-LHC and QGSJetII-04 for the so-called mock dataset, see Ref. [4] for more details. Since we assume a perfect matching of the longitudinal profile and thus of the EM component of the signal, all the \(\bar{S}^{\rm em}_{nki}\) are very close or identical to the corresponding input events with signals \(S^{\rm em}_{n}\). The mean difference \(\Delta S\) of the signals in the two datasets thus only depends on the muonic part \[\Delta S:=\langle S\rangle-\langle\bar{S}\rangle=\sum_{k}f_{k}\left(\langle S\rangle_{k}-\langle\bar{S}\rangle_{k}\right)=\sum_{k}f_{k}\left(\langle S^{\mu}\rangle_{k}-\langle\bar{S}^{\mu}\rangle_{k}\right)=\langle S^{\mu}\rangle-\langle\bar{S}^{\mu}\rangle=\Delta S^{\mu}. \tag{5}\] The mean muonic signals \(\langle S^{\mu}\rangle_{k}\) of the primary \(k\) in the input data can be obtained by rescaling the muonic signals \(\langle\bar{S}^{\mu}\rangle_{k}\) in the matched dataset with the corresponding scaling factors \(1+\varepsilon_{k}\), \[\langle S^{\mu}\rangle_{k}=\left(1+\varepsilon_{k}\right)\langle\bar{S}^{\mu}\rangle_{k}. \tag{6}\] With this scaling we can simplify the difference \(\Delta S\) from Eq. (5) into \[\Delta S^{\mu}=\sum_{k}f_{k}\,\varepsilon_{k}\,\langle\bar{S}^{\mu}\rangle_{k}. \tag{7}\] On the other hand, as is clear from Eq. (5), \(\Delta S\equiv\Delta S^{\mu}\). The third term of Eq. (5) can be rewritten as \[\sum_{k}f_{k}\,\left(\langle S\rangle_{k}-\langle\bar{S}\rangle_{k}\right)=\langle S\rangle-\sum_{k}f_{k}\,\langle\bar{S}\rangle_{k}, \tag{8}\] so that we can define for each event \(n\) and match \(i\) an observable \[z_{ni}=S_{n}-\sum_{k}f_{k}\,\bar{S}_{nki}. \tag{9}\] Equivalently, based on Eq. (7) we can define a scaling-dependent quantity \[\bar{z}_{ni}=\sum_{k}f_{k}\,\varepsilon_{k}\,\bar{S}^{\mu}_{nki}=\sum_{k}f_{k}\,\varepsilon_{k}\,g_{k}(\theta_{n})\,\bar{S}_{nki}, \tag{10}\] where \(\bar{S}^{\mu}_{nki}\) is obtained either directly from the MC events or, as here, by using a factor \(g\) from Universality, \(\bar{S}^{\mu}_{nki}=g_{k}(\theta_{n})\,\bar{S}_{nki}\). The average muon signal as a fraction of the total signal at the ground, \(g_{k}(\theta_{n})\), has been calculated in our previous analyses, see for example [4]4. Footnote 4: It is worth mentioning that this fraction depends on the shower zenith angle and the type of the primary cosmic ray, and only slightly on the hadronic interaction model [11]. For each event \(n\) and \(i\) we can also define a variable \(z_{nki}=S_{nki}-\bar{S}_{nki}\), which is a simple difference between the total signal for data and MC for a given primary \(k\). In Fig. 
2 we show the corresponding distributions of this variable for the considered primaries obtained from TD simulations with Epos-LHC and QGSJetII-04 (for simplicity we use the notation \(z_{k}\) for each individual histogram). As we can see from Fig. 2, for the considered number of events the \(z_{k}\)-distributions can be quite well described by a Gaussian function; the fits to the histograms give \(\chi^{2}/{\rm ndf}\approx 1.5\). From the fits to the individual distributions we obtain the mean value of the signal difference \(\langle z_{k}\rangle\) and the corresponding standard deviation \(\sigma(z_{k})\). These variables can be used to define the probability density function (PDF) for each primary \(k\), which is given by \[P_{k}(z_{k},\sigma(z_{k}))=\frac{1}{\sqrt{2\pi}\sigma(z_{k})}\exp\left[-\frac{(z_{nki}-\langle z_{k}\rangle)^{2}}{2\sigma^{2}(z_{k})}\right], \tag{11}\] where again the index \(k\) runs over the different primaries. Note that, according to Eq. (10), the mean position of the \(z_{k}\)-distribution should be connected with the average ground muon signal expected for a given primary. However, such a conversion is only possible if we already know the proportionality constants, i.e. the scaling factors \(\varepsilon_{k}\). In other words, if we plot the rescaled distributions shown in Fig. 2 in the \(\langle S^{\mu}\rangle\) phase space, the means of these distributions should give us the average muon signals on the ground for the considered masses. Moreover, from the physics of extensive air showers we expect the mean position for a lighter element to be smaller than for a heavier element, i.e. \(\langle S^{\mu}\rangle_{\rm p}<\langle S^{\mu}\rangle_{\rm He}<\langle S^{\mu}\rangle_{\rm N}<\langle S^{\mu}\rangle_{\rm Fe}\). Based on the Heitler-Matthews model it is also expected that the logarithm of the muon signal increases linearly with the logarithm of the primary mass; the corresponding linearity condition is introduced through the two-parameter scaling model for \(\varepsilon_{k}\). In order to find \(\varepsilon_{k}\), and thus to convert the means of the \(z_{k}\)-distributions to the \(S^{\mu}\) phase space, we use the Minuit minimization, where the fitted function is a combination of four Gaussian PDFs of the form \[F(\vec{\varepsilon},A_{\rm mpl})=A_{\rm mpl}\sum_{k}f_{k}\,\frac{1}{\sqrt{2\pi}\sigma(z_{k})}\exp\left[-\frac{(z_{nki}-\varepsilon_{k}\langle\bar{S}^{\mu}\rangle_{k})^{2}}{2\sigma^{2}(z_{k})}\right]\,, \tag{12}\] where \(\varepsilon_{k}=e^{\varepsilon_{p}-\Delta\beta\ln A_{k}}-1\). The \(f_{k}\) are the fractions of the \(N=68\) pure mass samples, and the amplitude \(A_{\rm mpl}\) makes it possible to rescale the normalized individual \(z_{k}\)-distributions to the overall \(z_{ni}\)-histogram. In this way, from the fit of Eq. (12) to the overall \(z_{ni}\)-histogram, the correction factors \(\varepsilon_{k}\) and \(\Delta\beta\) for the hadronic models can be calculated. In other words, Eq. (12) tells us by how much each \(z_{k}\)-distribution must be shifted, rescaled, and then weighted and summed, in order to retrieve the \(z_{ni}\)-distribution and also its first and second moments, see also Fig. 3. 
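As an illustration of the fit of Eq. (12), the sketch below builds the four-Gaussian model and fits it to a toy \(z_{ni}\)-histogram; scipy.optimize.curve_fit stands in for the MINUIT minimization used in the analysis, and the widths, matched muon signals and synthetic data are placeholders, so the printed numbers are not the results quoted in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder inputs: per-primary matched muon signals <S^mu>_k (VEM), widths
# sigma(z_k) of the individual z_k-distributions, and composition fractions f_k.
A_K      = np.array([1.0, 4.0, 14.0, 56.0])    # p, He, N, Fe
F_K      = np.array([0.15, 0.38, 0.46, 0.01])
S_MU_K   = np.array([15.6, 17.3, 19.4, 21.6])  # placeholder values
SIGMA_ZK = np.array([3.5, 3.6, 3.7, 3.8])      # placeholder widths

def model(z, eps_p, delta_beta, ampl):
    """Eq. (12): weighted sum of four Gaussians with means eps_k * <S^mu>_k."""
    eps_k = np.exp(eps_p - delta_beta * np.log(A_K)) - 1.0
    means = eps_k * S_MU_K
    comps = [f / (np.sqrt(2.0 * np.pi) * s) * np.exp(-(z - m) ** 2 / (2.0 * s ** 2))
             for f, m, s in zip(F_K, means, SIGMA_ZK)]
    return ampl * np.sum(comps, axis=0)

# Toy z_ni-histogram standing in for the 680 z_ni values of the mock analysis.
rng = np.random.default_rng(1)
z_values = rng.normal(2.8, 3.8, size=680)
counts, edges = np.histogram(z_values, bins=40)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, pcov = curve_fit(model, centers, counts, p0=[0.1, 0.0, counts.sum()], maxfev=10000)
eps_p_fit, delta_beta_fit, _ = popt
print(f"eps_p = {eps_p_fit:.3f}, delta_beta = {delta_beta_fit:.3f}, "
      f"beta = {0.92 + delta_beta_fit:.3f}")
```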
Figure 3: (Left): The \(z_{ni}\)-distribution as described by Eq. (9) with \(f_{\rm p}=0.15\), \(f_{\rm He}=0.38\), \(f_{\rm N}=0.46\), and \(f_{\rm Fe}=0.01\) for the mock dataset. Since we have 68 mock events (Epos-LHC) and 10 QGSJetII-04 events associated with each of the mock events, 680 events are contained in this histogram. The distribution is fitted (red line) with a Gaussian function in order to obtain its mean \(\langle z_{ni}\rangle=2.825\pm 0.16\) and the standard deviation \(\sigma(z_{ni})=3.80\pm 0.14\). (Right): Sketch showing the idea of the method, i.e. each \(z_{k}\)-distribution must be shifted, rescaled, and then weighted and summed, in order to retrieve the \(z_{ni}\)-histogram. ## 4 Results of the fit of four individual \(z_{k}\)-distributions to the \(z_{ni}\)-histogram The results are shown in Fig. 4 and Table 1. We see that the fit can reproduce the ratio of the muon signals of simulations using Epos-LHC (mock data) and QGSJetII-04 within \(\sim\)5%: as already mentioned, the MC-true ratio is \(r_{\rm true}^{\rm MC}=1.10\pm 0.04\), and the average reconstructed ratio (from Table 1) is \(1.15\pm 0.06\). The difference is caused by the fact that the signal for the mock dataset is not exactly equal to the one for Epos-LHC (Table 1). We also recover the \(\beta\) parameter (average \(\simeq\)0.92) for the studied set, because the parameter \(\Delta\beta\) is zero within its error, i.e. \(\Delta\beta=0.003\pm 0.035\). Finally, we can check our solution by comparing the mean given by Eq. (10) with that from a Gaussian fit to the \(z\)-histogram shown in Fig. 3. We get \(2.74\pm 0.49\,\)VEM vs. \(\langle z_{ni}\rangle=2.83\pm 0.16\,\)VEM, which agree well within the uncertainties. The standard deviations match by definition, because \(\sigma^{2}(z_{ni})=\int\,\sum_{k}\,f_{k}z_{nki}^{2}\,P_{k}(z_{k},\sigma(z_{k}))\,{\rm d}z_{nki}=\sum_{k}\,f_{k}\,\sigma^{2}(z_{k})\). \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \(k\) & \(r_{\rm\mu,k}\) & \(\langle\bar{S}_{k}^{\rm\mu}\rangle/\rm VEM\) & \(\langle S_{\rm\mu,k}^{\rm rec}\rangle/\rm VEM\) & \(\delta\) & \(r_{\rm\mu,k}\) & \(\langle S_{\rm\mu,k}^{\rm rec}\rangle/\rm VEM\) & \(\delta\) \\ \hline p & \(1.16\pm 0.06\) & \(15.57\pm 0.17\) & \(18.03\pm 0.18\) & \(4.2\%\) & \(1.13\pm 0.04\) & \(17.52\pm 0.17\) & \(1.0\%\) \\ He & \(1.15\pm 0.06\) & \(17.25\pm 0.19\) & \(19.90\pm 0.20\) & \(4.3\%\) & \(1.07\pm 0.07\) & \(18.11\pm 0.30\) & \(1.4\%\) \\ N & \(1.15\pm 0.06\) & \(19.37\pm 0.20\) & \(22.26\pm 0.21\) & \(5.3\%\) & \(1.01\pm 0.09\) & \(19.24\pm 0.38\) & \(1.9\%\) \\ Fe & \(1.14\pm 0.06\) & \(21.61\pm 0.23\) & \(24.73\pm 0.24\) & \(5.6\%\) & \(0.96\pm 0.10\) & \(20.23\pm 0.25\) & \(2.4\%\) \\ \hline \hline \end{tabular} \end{table} Table 1: Values of the muon rescaling factors obtained with the fitting procedure, of the MC muon signal, and of the reconstructed muon signals, for all primaries considered and with \(f_{\rm p}=0.15\), \(f_{\rm He}=0.38\), \(f_{\rm N}=0.46\), and \(f_{\rm Fe}=0.01\). The overestimation \(\delta=(\langle S_{\rm\mu,i}^{\rm rec}\rangle-\langle S_{\rm\mu,i}^{\rm mock}\rangle)/\langle S_{\rm\mu,i}^{\rm mock}\rangle\) of the reconstructed muon signal compared to the one from the mock dataset is also provided. The errors shown in the fourth column are the square root of the sum of the squares of the errors \(\delta r_{\rm\mu,k}\) and \(\delta\langle\bar{S}_{\rm\mu,k}^{\rm rec}\rangle\), i.e. 
those listed in the second and third columns, respectively. The last three columns show the results for the mock dataset with \(\beta=0.96\). Since the true value of \(\beta\) in the hybrid dataset may differ from that of the hadronic interaction models used in this analysis, it is interesting to perform the same analysis for a sample dataset built from the Epos-LHC sample but with a modified hadronic behaviour. We therefore constructed, from the Epos-LHC sample, a mock dataset with a modified evolution of the mean muon signal as a function of primary mass, leading to a significantly different exponent value \(\beta\simeq 0.96\). This allows us to investigate whether the fitting procedure is able to recover this value as well. The average muon signal of the new sample set as a function of primary mass is shown in Table 1. Two features of this mock dataset can be noticed: for nitrogen, only a slight rescaling from QGSJetII-04 to the mock dataset is needed (\(r_{\mu,\mathrm{N}}=1.01\)), and for iron the rescaling factor is lower than 1 (\(r_{\mu,\mathrm{Fe}}=0.96\)). The results of the fit are shown in Fig. 4 (right) and in Table 1. We can see that the negative scaling of the primary iron is slightly underestimated, while the signal is well recovered for all other elements. The muon signal from the mock dataset is recovered within 2.4%. Moreover, the fit to the reconstructed muon signal gives \(\Delta\beta=0.04\), which agrees quite well with the expectation \(\beta=0.955\pm 0.005\), although the error on \(\Delta\beta\) is quite large, 0.02. ## 5 Summary and Conclusion The method presented in this work recovers the mean muon signal and provides the ability to calculate the muon signals for each element in the considered sample of real-like events. In this work, we have performed calculations of the muon scaling factors and the \(\beta\)-exponent by fitting a four-component Gaussian distribution to the overall \(z\)-histogram, with a two-parameter scaling model that follows the Heitler-Matthews progression. This work shows that the \(z\)-method can be applied to hybrid events to determine the muon signal, the scaling factor (total and for each element), and the \(\beta\)-exponent. **Acknowledgments:** The authors are very grateful to the Pierre Auger Collaboration for providing the tools necessary for the simulations in this contribution. The authors would like to thank the colleagues from the Pierre Auger Collaboration for all the fruitful discussions 5. Footnote 5: We want to acknowledge the support in Poland from the National Science Centre, grant No. 2016/23/B/ST9/01635, grant No. 2020/39/B/ST9/01398, grant No. 2022/45/B/ST9/02163, and from the Ministry of Education and Science, grant No. DIR/WK/2018/11 and grant No. 2022/WK/12.
2305.19832
Problems of search and pursuit of unmanned aerial vehicles using the game-theoretic approach
Unmanned aerial vehicles (UAVs) have become increasingly prevalent in various domains, ranging from military operations to civilian applications. However, the proliferation of UAVs has also given rise to concerns regarding their potential misuse and security threats. As a result, the search and pursuit of UAVs have become crucial tasks for law enforcement agencies and security organizations. In this paper, we use a game theoretic approach to explore the problem of searching for and pursuing submarines and translate the problem into a UAV search and pursuit problem. Game theory provides a mathematical framework for modeling and analyzing strategic interactions among multiple decision makers. By applying game theoretic principles to the search and pursuit problem, we aim to improve the effectiveness of UAV detection and capture strategies. We begin by formulating the problem as a game, where the UAV represents the evader, and the search and pursuit team represents the pursuers. Each player's objective is to optimize their own utility while considering the actions and strategies of the other players. By leveraging game theory, we can gain insights into the optimal decision-making strategies for both the UAV and the pursuers, leading to improved search and pursuit outcomes and enhanced security in the face of UAV threats.
Oleg Malafeyev, Kun Zhang
2023-05-31T13:18:59Z
http://arxiv.org/abs/2305.19832v1
# Problems of search and pursuit of unmanned aerial vehicles using the game-theoretic approach ###### Abstract Unmanned aerial vehicles (UAVs) have become increasingly prevalent in various domains, ranging from military operations to civilian applications. However, the proliferation of UAVs has also given rise to concerns regarding their potential misuse and security threats. As a result, the search and pursuit of UAVs have become crucial tasks for law enforcement agencies and security organizations. In this paper, we use a game theoretic approach to explore the problem of searching for and pursuing submarines and translate the problem into a UAV search and pursuit problem. Game theory provides a mathematical framework for modeling and analyzing strategic interactions among multiple decision makers. By applying game theoretic principles to the search and pursuit problem, we aim to improve the effectiveness of UAV detection and capture strategies. We begin by formulating the problem as a game, where the UAV represents the evader, and the search and pursuit team represents the pursuers. Each player's objective is to optimize their own utility while considering the actions and strategies of the other players. By leveraging game theory, we can gain insights into the optimal decision-making strategies for both the UAV and the pursuers, leading to improved search and pursuit outcomes and enhanced security in the face of UAV threats. ## Dynamic Models of Inspections Currently, the method of mathematical modeling is widely used: mathematical models are constructed and studied in order to analyze and forecast various processes in the natural, technical, economic, and other sciences [1-7]. Mathematical theory can also be applied to the prevention of illegal actions. Terrorists and members of drug cartels use modern means of communication and transportation in their activities, so inspection measures are needed to counter the spread of these actions. To organize successful countermeasures, it is also necessary to use modern technical means, an apparatus for optimizing the use of resources, and dynamic models of inspections. Let us consider the following situation. An interceptor ship equipped with sonar has detected the periscope of a submarine, which immediately disappeared in an unknown direction. It is necessary to intercept the submarine in the shortest possible time. Let us assume that the interceptor ship does not know the exact speed of the submarine; however, a discrete set of speeds is known, one of which is the actual speed of the submarine. In what follows we will refer to the interceptor ship as P and to the submarine as E. First, let us present an algorithm for finding the search time under conditions where the speed of the escaping submarine is unknown to the interceptor. Suppose that the speed of the interceptor is much greater than the speed of the escaping submarine. At the initial moment of detection, the ship accurately determines the location of the submarine; thus, the distance between it and the escaping submarine, denoted by \(D_{0}\), is known. To find the interception time, it is necessary to determine the trajectory along which the interceptor ship should move. 
We introduce the polar coordinate system \((\rho,\varphi)\) in such a way that the pole, point O, is located at the point of detection of the submarine, and the polar axis passes through the point where the interceptor ship is located. Then the dynamics of the escaping submarine are described by the equations \[\dot{\rho}^{E}=v,\qquad\dot{\varphi}^{E}=0.\] The pursuer does not know the speed \(v\) with certainty, but it is known that it is chosen from a discrete set \(V^{E}\). The maximum possible speed of the pursuer ship is denoted by \(V^{P}\). The pursuer can guarantee the capture by trying all elements of the set \(V^{E}\). Initially, the ship assumes that the runaway has speed \(v_{1}\in V^{E}\). Starting at the moment of detection \(t_{0}\), the pursuer moves at speed \(V^{P}\) towards point O and continues until the time \(t_{1}\) at which the players are at the same distance from point O, i.e. \[\rho^{P}(t_{1})=\rho^{E}(t_{1})\] and \[\int_{t_{0}}^{t_{1}}v_{1}\,{\rm d}t+V^{P}(t_{1}-t_{0})=D_{0}.\] From time \(t_{1}\), the pursuer must move by selecting a speed such that it constantly remains at the same distance from point O as the fleeing ship. To achieve this, the speed of the intercepting ship is divided into two components: radial \(V_{\rho}\) and tangential \(V_{\varphi}\). The radial component is the speed at which the ship moves away from the pole, i.e. \[V_{\rho}=\dot{\rho},\] and the tangential component is the linear rotational velocity with respect to the pole, i.e. \[V_{\varphi}=\rho\dot{\varphi}.\] For the encounter to happen, the radial component of the pursuer's velocity is set equal to the velocity of the fugitive. Then, to find the trajectory of the pursuer, the following system of differential equations must be solved: \[\dot{\rho}=v_{1},\qquad\dot{\varphi}^{2}\rho^{2}=(V^{P})^{2}-(v_{1})^{2},\] with the initial conditions \[\varphi(t_{1})=0,\qquad\rho(t_{1})=v_{1}t_{1}.\] Solving it, we find \[\varphi(t)=\frac{\sqrt{(V^{P})^{2}-(v_{1})^{2}}}{v_{1}}\,\ln\frac{t}{t_{1}},\qquad\rho(t)=v_{1}t.\] Expressing time as a function of the polar angle gives \[t(\varphi)=t_{1}\,\exp\left(\frac{v_{1}\varphi}{\sqrt{(V^{P})^{2}-(v_{1})^{2}}}\right).\] Thus, the trajectory consists of linear segments and logarithmic spiral segments. In [2] it is proven that, during movement along the spiral, the encounter will occur in a time not exceeding the time of passing one turn. Therefore, if the ship, having traversed one turn of the spiral, does not find the submarine, then the initial assumption about the speed of the fleeing vessel was incorrect, and the next speed \(v_{2}\in V^{E}\) is chosen. 
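As a minimal illustration of these formulas, the Python sketch below computes the end of the straight-line phase and the worst-case capture time (one full spiral turn, per the result quoted from [2]) for a single guessed evader speed; all numbers and function names are illustrative.

```python
import math

def interception_time(D0, v_guess, VP, t0=0.0):
    """Return (t1, t_capture) for one guessed evader speed v_guess.

    Straight-line phase: v*(t1 - t0) + VP*(t1 - t0) = D0  gives t1.
    Spiral phase: t(phi) = t1 * exp(v*phi / sqrt(VP**2 - v**2)); phi = 2*pi is
    the worst case, since capture occurs within one turn of the spiral.
    """
    t1 = t0 + D0 / (VP + v_guess)
    t_capture = t1 * math.exp(v_guess * 2.0 * math.pi / math.sqrt(VP**2 - v_guess**2))
    return t1, t_capture

# Example-1-like numbers: D0 = 200 km, VP = 100 km/h, guessed speed 78 km/h.
t1, t_cap = interception_time(200.0, 78.0, 100.0)
print(f"straight phase ends at t1 = {t1:.2f} h, worst-case capture by t = {t_cap:.1f} h")
```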
Thus, during the time \(t_{2}\) the fleeing vessel has covered the distance \(\rho_{E}(t_{2})=v_{2}t_{2}\), and the pursuer \(\rho_{P}(t_{2})=v_{1}t_{2}\). If \(\rho_{P}(t_{2})>\rho_{E}(t_{2})\), then the distance between the players equals \(D_{2}=\rho_{P}(t_{2})-\rho_{E}(t_{2})\), and to find the moment of time \(t_{3}\) it is necessary to solve the equation \[\int_{t_{2}}^{t_{3}}v_{2}\,{\rm d}t+V^{P}(t_{3}-t_{2})=D_{2}.\] If \(\rho_{P}(t_{2})<\rho_{E}(t_{2})\), then the distance between the players equals \(D_{2}=\rho_{E}(t_{2})-\rho_{P}(t_{2})\), and to find the time \(t_{3}\) it is necessary to solve the equation \[V^{P}(t_{3}-t_{2})-\int_{t_{2}}^{t_{3}}v_{2}\,{\rm d}t=D_{2}.\] After moving along the straight section, the pursuer again moves along a spiral. To reduce the search time, it is expedient for the pursuer to try the speeds in descending order. However, if this becomes known to the evader, he can move at the minimum speed, which maximizes the search time. Thus, the following game is obtained. The set of strategies for the submarine is the set of combinations of its possible speeds \(v\) and directions of movement \(\alpha\). The set of strategies for the intercepting ship is the set of all possible permutations of the elements of \(V^{E}\). The matrix of the resulting game consists of the elements T, which represent the capture times. Now suppose that the intercepting ship needs to detect n submarines, where capturing the j-th submarine with the i-th boat requires \(\tau_{ij}\) hours. To carry out the interception there are m boats, each of which is directed to one submarine. The matrix \(A=\{\tau_{ij}\}\), the efficiency matrix of the search operation for the i-th boat and the j-th submarine, is known. The task is to construct an assignment plan \(X=\{x_{ij}\},\ i=1..m,\ j=1..n\), which minimizes the search time, while assigning each boat to search for no more than one submarine, and each submarine to be searched by no more than one boat. The values \(x_{ij}\) can take only two values: \[x_{ij}=\begin{cases}1,&\text{if boat }i\text{ is assigned to submarine }j,\\ 0,&\text{otherwise.}\end{cases}\] The mathematical formulation of the optimal assignment problem is \[\min z=\min\sum_{i=1}^{m}\sum_{j=1}^{n}\tau_{ij}\,x_{ij},\] \[\sum_{i=1}^{m}x_{ij}\leq 1,\ j=1..n,\qquad\sum_{j=1}^{n}x_{ij}\leq 1,\ i=1..m,\qquad x_{ij}\geq 0.\] In order for the optimal assignment problem to have an optimal solution, it is necessary and sufficient that the number of boats equals the number of submarines, i.e. \(n=m\). Under this condition, the inequality constraints become equalities: \[\min z=\min\sum_{i=1}^{n}\sum_{j=1}^{n}\tau_{ij}\,x_{ij},\qquad\sum_{i=1}^{n}x_{ij}=1,\ j=1..n,\qquad\sum_{j=1}^{n}x_{ij}=1,\ i=1..n,\qquad x_{ij}\geq 0.\] If \(n\neq m\), then the assignment problem is unbalanced. Any assignment problem can be balanced by introducing the necessary number of dummy boats or submarines. 
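For illustration, the optimal assignment problem above can be solved numerically. The sketch below uses scipy.optimize.linear_sum_assignment, which computes the same optimal assignment that the Hungarian method (applied in Example 2 below) produces; the time matrix \(\tau_{ij}\) is made up.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Made-up efficiency matrix tau[i, j]: search time for boat i and submarine j.
tau = np.array([
    [12.0,  7.5,  9.1,  4.2],
    [ 8.3,  6.4, 11.0,  5.7],
    [ 9.9, 10.2,  3.8,  6.1],
    [ 7.1,  5.5,  8.8,  2.9],
])

rows, cols = linear_sum_assignment(tau)   # minimizes the total assigned time
x = np.zeros_like(tau, dtype=int)
x[rows, cols] = 1                         # assignment plan X = {x_ij}

print("assignment matrix X:\n", x)
print("minimal total search time z =", tau[rows, cols].sum())
```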
The dual problem of the optimal assignment is \[\max\omega=\max\Bigl(\sum_{i=1}^{n}u_{i}+\sum_{j=1}^{n}v_{j}\Bigr),\qquad u_{i}+v_{j}\leq\tau_{ij},\ i=1..n,\ j=1..n.\] The solution algorithm (the Hungarian method) proceeds as follows: (1) in the original performance matrix A, determine the minimum element in each row and subtract it from all other elements of that row; (2) in the matrix obtained in step (1), find the minimum element in each column and subtract it from all other elements of that column; (3) if a feasible solution is not obtained after steps (1) and (2): (a) in the last matrix, draw the minimum number of horizontal and vertical lines across rows and columns needed to cross out all zero elements; (b) find the minimum non-crossed-out element, subtract it from all other non-crossed-out elements, and add it to all elements at the intersections of the lines drawn in the previous step; (c) if the new distribution of zero elements still does not allow a feasible solution, repeat step (3a), otherwise proceed to step (4); (4) the optimal assignments correspond to the zero elements obtained. Let us consider another case. Suppose that an interceptor ship sends n boats after a single submarine. The escaping submarine has a discrete set of speeds and directions of movement, and it needs to choose how to act so as to maximize the time of capture. In other words, the escaping submarine must choose the best course of action, or the best behavioural strategy. Let us use decision theory. Each boat tries to intercept the submarine one at a time, in a random order; therefore we have n steps. Suppose we are at step t. It is necessary to determine the probability of winning if strategy t is chosen, assuming that it is better than all the previous ones, i.e. the probability that it is the best one overall. Let us denote this probability by \(g_{t}\). In addition, let us define the probability of winning if the first t strategies are skipped and the escaping submarine then acts optimally; let us denote this probability by \(h_{t}\). According to the principle of dynamic programming, the escaping submarine knows how to act optimally starting from step t+1. The optimal behavioural strategy is the following: if the strategy at step t is not better than all the previous ones, it should be rejected; if it is indeed the best among the first t, then we need to compare \(g_{t}\) and \(h_{t}\). If \(g_{t}\geq h_{t}\), then the escaping submarine should accept the current strategy; otherwise it should skip it. Here \(g_{t}=t/n\) is monotonically non-decreasing, while \(h_{t}\) is monotonically non-increasing. If E skips the \((n-1)\)-th strategy, only the last one remains and the chance of winning is \(h_{n-1}=1/n\). Hence \[h_{n-2}=\frac{1}{n-1}\cdot\frac{n-1}{n}+\frac{n-2}{n-1}\cdot\frac{1}{n}=\frac{(n-2)+(n-1)}{n(n-1)}\] and, in general, \[h_{t}=\frac{t}{n}\Bigl(\frac{1}{t}+\frac{1}{t+1}+\cdots+\frac{1}{n-1}\Bigr),\] so that \[\frac{h_{t}}{g_{t}}=\frac{1}{t}+\frac{1}{t+1}+\cdots+\frac{1}{n-1}.\] The number corresponding to the intersection point of the two curves was found in [3] and equals \(t=n/e\). In this case \(h_{t}=g_{t}=t/n=1/e\), i.e. the probability of success for \(n\to\infty\) is \(1/e\approx 0.368\). 
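The threshold can be checked numerically; the short sketch below evaluates \(g_t\) and \(h_t\) for a hypothetical number of strategies n and locates the optimal stopping point, which should lie near n/e with a success probability close to 1/e.

```python
import math

# Numerical check of the stopping rule: skip the first t strategies, then take
# the first one that is better than all previous ones.
def g(t, n):
    """Probability that the best of the first t strategies is the best overall."""
    return t / n

def h(t, n):
    """Winning probability when the first t strategies are skipped: (t/n) * sum 1/j."""
    return (t / n) * sum(1.0 / j for j in range(t, n))

n = 50  # hypothetical number of boats/strategies
best_t = max(range(1, n), key=lambda t: h(t, n))
print("optimal threshold t =", best_t, " n/e =", round(n / math.e, 2))
print("success probability h_t =", round(h(best_t, n), 3), " 1/e =", round(1 / math.e, 3))
```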
Using the Maple software package, several examples were solved. **Example 1**. Let the initial distance between the pursuer and the fugitive be 200 kilometers. The fugitive chooses a speed from the set \(V^{E}=\{8,56,78\}\) and a direction from the set \(\alpha=\{23,137,182\}\). The maximum speed of the pursuer is \(V^{P}=100\) km/h. Then the set of fugitive strategies is \[(\alpha_{1},v_{1}),(\alpha_{1},v_{2}),(\alpha_{1},v_{3}),(\alpha_{2},v_{1}),(\alpha_{2},v_{2}),(\alpha_{2},v_{3}),(\alpha_{3},v_{1}),(\alpha_{3},v_{2}),(\alpha_{3},v_{3}),\] and the set of pursuer strategies is \[(v_{1},v_{2},v_{3}),(v_{1},v_{3},v_{2}),(v_{2},v_{1},v_{3}),(v_{2},v_{3},v_{1}),(v_{3},v_{1},v_{2}),(v_{3},v_{2},v_{1}).\] The matrix of the resulting game consists of the corresponding capture times. We now translate this example into a UAV setting. Let the distance between the fugitive UAV and the ground be 100 meters; the fugitive UAV selects its X-axis speed from the set \(V^{E}=\{8,56,78\}\) and its Y-axis direction value from the set \(\alpha=\{23,37,82\}\). The maximum speed of the chaser is \(V^{P}=120\) m/min. Then the fugitive policy set is \[(\alpha_{1},v_{1}),(\alpha_{1},v_{2}),(\alpha_{1},v_{3}),(\alpha_{2},v_{1}),(\alpha_{2},v_{2}),(\alpha_{2},v_{3}),(\alpha_{3},v_{1}),(\alpha_{3},v_{2}),(\alpha_{3},v_{3}).\] **Example 2**. Suppose an interceptor ship has detected 4 submarines. The initial distance to each of them is 100 kilometers, 200 kilometers, 50 kilometers, and 163 kilometers, respectively. The pursuer has 4 boats to catch the submarines. The maximum speed of each boat is 74 km/h, 90 km/h, 178 km/h, and 124 km/h, respectively. The first submarine moves along the straight line \(\alpha_{1}=23\) at a speed of \(v_{1}=23\) km/h, the second with \(\alpha_{2}=137\), \(v_{2}=50\) km/h, the third with \(\alpha_{3}=187\), \(v_{3}=67\) km/h, and the fourth with \(\alpha_{4}=50\), \(v_{4}=70\) km/h. Then the matrix for the assignment problem looks as follows: \begin{tabular}{c c c c} 1903 & 386 & 9.96 & 52 \\ \(1.15\cdot 10^{71}\) & \(6.4\cdot 10^{51}\) & \(1.3\cdot 10^{34}\) & \(1.89\cdot 10^{26}\) \\ \(5.6\cdot 10^{172}\) & \(1.13\cdot 10^{90}\) & \(2\cdot 10^{32}\) & \(3.7\cdot 10^{51}\) \\ \(2.4\cdot 10^{63}\) & \(7.56\cdot 10^{26}\) & \(1.28\cdot 10^{9}\) & \(5.96\cdot 10^{14}\) \\ \end{tabular} Figure 1: Nine strategies for simulating the runaway drone. The game can be solved using the Hungarian method. We now transform this problem into search and pursuit between quadrotor UAVs, modifying it slightly. Suppose an intercepting quadcopter detects 4 intruding quadcopters; the chaser has 4 interceptors to pursue them. The maximum speed of each interceptor along the XYZ axes is 74 km/h, 90 km/h, 178 km/h and 124 km/h, respectively. 
The first intruding quadrotor UAV has a maximum X-axis speed of v_1 = 23 m/min, a maximum Y-axis speed of a_1 = 23 m/min, and a height of 100 meters. The second intruding quadrotor UAV has a maximum X-axis speed of v_2 = 50 m/min, a maximum Y-axis speed of a_2 = 137 m/min, and a height of 200 meters. The third intruding quadrotor UAV has a maximum X-axis speed of v_3 = 67 m/min, a maximum Y-axis speed of a_3 = 7 m/min, and a height of 50 meters. The fourth intruding quadrotor UAV has a maximum X-axis speed of v_4 = 70 m/min, a maximum Y-axis speed of a_4 = 50 m/min, and a height of 163 meters. The matching matrix is: \begin{tabular}{c c c c} \(0\) & \(0\) & \(1\) & \(0\) \\ \(1\) & \(0\) & \(0\) & \(0\) \\ \(0\) & \(1\) & \(0\) & \(0\) \\ \(0\) & \(0\) & \(0\) & \(1\) \\ \end{tabular} Figure 2.2: Simulated changes of the X, Y, and Z axis values of the chaser and escaper drones. **Example 3**. Suppose the initial distance between the pursuer and the evader was 50 kilometers. The evader chooses a velocity from the set … 
**Example 4**. Suppose a ship-interceptor has detected 4 submarines. The initial distance to each of them is 30 kilometers, 11 kilometers, 62 kilometers, and 8 kilometers, respectively. The pursuer has 4 boats to catch the submarines. The maximum speed of each boat is 60 km/h, 65 km/h, 95 km/h, and 105 km/h, respectively. The first submarine moves along the line \(\alpha_{1}=7\) with a speed of … 
**Example 5**. Suppose a ship-interceptor has detected 5 submarines. The initial distance between it and each of the escaping submarines is the same and equals 20 km. After detection, each submarine moves in an unknown direction and at a different speed. The interceptor must intercept all the escaping submarines; to do this, it intercepts each submarine in turn. The first submarine moves …, the fourth with \(\alpha_{4}=45\), \(v_{4}=45\) km/h, and the fifth with \(\alpha_{5}=50\), \(v_{5}=50\) km/h. The speed of the ship-interceptor is 250 km/h. In the UAV version, suppose a drone interceptor detects 5 intruding quadcopter drones with heights of 20 m, 40 m, 60 m, 80 m and 100 m. The first intruder has an X-axis speed of v_1 = 120 m/min and a Y-axis speed of a_1 = 20 m/min; the second has an X-axis velocity of v_2 = 90 m/min and a Y-axis velocity of a_2 = 50 m/min; the third has an X-axis speed of v_3 = 70 m/min and a Y-axis speed of a_3 = 70 m/min; the fourth has an X-axis velocity of v_4 = 50 m/min and a Y-axis velocity of a_4 = 90 m/min; the fifth has an X-axis velocity of v_5 = 20 m/min and a Y-axis velocity of a_5 = 120 m/min. The drone interceptor has a speed of 250 m/min. ## 1 Introduction Research on the problem of search began intensively during World War II. Later, search theory methods found application in solving various other problems [8], [9]. Two monographs on the theory of search for moving objects can be distinguished, devoted to various aspects of search theory [10], [11]. All developments in the theory of object search can be conditionally divided into three groups: discrete search, continuous search, and search game problems. The first group includes works devoted to the search for an object with a finite or countable set of states. The second group includes works on the search for an object whose set of positions forms a certain (finite or infinite) area in the plane or in the search space. The tasks of the first and second groups are those in which the object being sought does not counteract the search system. Studies that consider search problems under counteraction form the third group, namely search game problems. The following articles can be distinguished from the works of the first group. In [12], Staroverova O.V. 
presents a search algorithm that minimizes the average search time under the condition that the probability \(\alpha_{t}\) does not depend on the cell number and the search ends only when the pursuer detects the fleeing object. Kelin investigated the possibility of solving the problem of optimal distribution of search efforts using a Markov model, when the fleeing object E moves randomly. He considered two different cases:

1. The movement of the evader does not depend on their location, and the pursuer does not receive any information about the location of the evader during the search process.

2. The movement of the evader depends on their last location, and at each moment in time the search system knows the location of the evader at the previous moment.

In both cases, it is assumed that the search system knows the probabilistic law of the evader's movement [13]. Koopman considered the problem of search on a surface, where the sought objects are uniformly distributed and their course angles are unknown (equally probable in \([0,\,2\pi]\)). Assuming that the searcher P moves at a constant speed, he determined the average number of targets that enter a given circle centered on P per unit time at an arbitrary angle \(\gamma\in(0,2\pi)\) and at a specified angle \(\gamma\in[\gamma^{\prime},\gamma^{\prime}+d\gamma^{\prime}]\), where \(\gamma\) is the angle between the searcher's velocity vector and the line connecting P and E [14]. In [15], Koopman solved the problem of optimal distribution of given search efforts, maximizing the probability of detecting a stationary target with a known a priori distribution of its location and an exponential conditional detection function; this was the first general result obtained in this field. Lapshin presented Koopman's theory and provided examples showing how to use it to solve specific problems [16]. Charnes and Cooper [17] formalized the problem of optimal distribution of search efforts as a convex programming problem. Mac-Kuen and Miller [18] investigated the search problem in which it is necessary to decide whether to start the search or not, and, after starting the search, whether to continue or stop it; they obtained a general functional equation. Pozner [19] solved the two-stage (preliminary and final) search problem for a lost satellite. Dubrovin and Sirotin [20] solved the problem of determining the average time of finding an escaping object in a rectangular search area, if the initial locations of the pursuer and the escaping object in the specified area follow a uniform probability distribution and their courses are arbitrary; the sought object tries to leave the search area after being detected by the pursuer.
One of the first solved search game problems is the following: a plane is searching for a submarine passing through a strait of varying width and considerable length; the submarine cannot remain submerged throughout the entire crossing time; the task is to determine the optimal distribution of search efforts for the plane and the probability distribution of the submarine's submergence or surfacing location [21]. Discrete search game problems can be formulated as follows: the sought object is hidden in one of the cells, and the searcher sequentially examines them. When examining cell \(i\) for a duration \(t_{i}\), the probability of detection, given that the target is in that cell, is equal to \(p_{i}\). The payoff is the sum of m and the time spent examining the cells, \(m+\sum_{k=1}^{m}t_{k}\), where m is the number of cells viewed before the desired object is detected. Bram [22] solved this problem for \(p_{i}=1\). In search games with a mobile evader, the loss of the pursuer P is taken to be the probability of not catching the evader E. In [30], Forman considers the problem of searching for "Princess and Monster" on a circle (with arbitrary initial positions and end time T) and in a certain area on a plane, assuming that the players are far from the boundary and cannot reach it in time T; the cost (for E) is the probability of being caught. Halpern considers a minimax problem that differs from the "Princess and Monster" problem in that E knows the trajectory of P [31]. For the case of a rectangular area, the "Princess and Monster" problem was solved by Gal [32]. Using this result, Fitzgerald obtained a solution for an arbitrary convex area, as well as for an area that is a finite union of convex areas [33]. Wilson [34] considers a wide class of differential search games of given duration, in which the players receive information about the initial position. He showed that such games have a solution in the class of mixed strategies and that a mixed strategy can be implemented by using a pure strategy whose choice depends on a randomly selected number from the unit interval.

This work is dedicated to the study of search problems where the target is mobile. Depending on the information available to the search participants, different approaches are proposed for finding a solution, i.e., determining optimal behavior.

## 2 Fundamentals of Object Search Theory (Background Information)

The subject of object search theory is the search for real objects in various environments. Search can be defined as the process of purposeful exploration of a specific area of space in order to detect an object located there. Detection refers to obtaining information about the location of an object by establishing direct energetic contact with it. Detection is carried out using detection means such as optical, radar, hydroacoustic, and other devices. One way to study the search process is to build and analyze mathematical models that reflect the objective laws of search and allow us to establish causal relationships between the conditions of search performance and its results. The search process involves two sides: the search object and the observer, each of which can be either an individual or a group. Search objects are various objects located in different environments, such as aircraft, objects on the surface of the Earth, ships and vessels, etc.
The search object has two characteristic features:

1. Its properties differ from the properties of the environment in which the search is carried out.

2. Information about the location of the object before the start of the search and during its execution is usually uncertain.

It is this uncertainty that causes search actions, the essence of which is to obtain information about the location of the object. The contrast of the search object against the background of the environment creates the possibility of its detection. Search objects can be characterized by the presence or absence of radiation. Therefore, the operation of detection tools is based either on the detection of a signal reflected from the search object or on the reception of the object's own radiation. The search process largely depends on the properties of the detection object, as well as on the parameters of the detection equipment and the characteristics of the surrounding environment. All these issues form the physical basis of the theory of search.

During the search process, the use of detection tools is combined with the active maneuvering of the observer who carries these tools. Therefore, the study of the patterns of mutual movement of the observer and the search object becomes especially important. These patterns constitute an integral part of the theory of object search, the kinematics of search. An important place in the theory of object search is occupied by the justification and methods of calculating the indicators of the success of the search, i.e., criteria of its effectiveness. The ultimate goal of the theory of search is to choose the optimal methods for performing search actions in a specific situation and under the conditions of the search, the so-called search situation. The choice of the optimal search method is based on an analysis of the mathematical model of the corresponding search situation and reduces to establishing control parameters of the search that ensure the solution of the search task in the shortest or specified time with minimal search efforts.

**Mathematical models for object search.** Constructing a mathematical model requires identifying all the essential factors and conditions that determine both the state and the development of the search process, as well as the possible control of this process. These factors and conditions are variables and are called elements of the model. The variables that can be changed are called controllable, while those that cannot be changed are called uncontrollable. Depending on the nature of the search process, its mathematical model may contain only uncontrollable variables, or both controllable and uncontrollable ones. Mathematical models of the first type are called descriptive, while models of the second type are normative. In a descriptive model, there is no observer deciding about the search, nor is there a search object deciding to evade. A normative model is characterized by the presence of at least one of the parties making a decision. Depending on the amount of information about the search situation and the regularities underlying it, such models can be classified into one of the following levels: deterministic, stochastic, and uncertain. At the deterministic level, a normative model is constructed when the outcome of the search situation is subject to regularities and the factors influencing this outcome can be accurately measured or estimated, while random factors are either absent or can be neglected.
In this case, it is quite difficult to collect and process data that fully characterize the search conditions, except in simple situations. In addition, purely mathematical difficulties arise in constructing and analyzing such a model. The task of choosing the optimal search method in the conditions of a normative model of the deterministic level is reduced to maximizing or minimizing the efficiency criterion. At the stochastic level, the normative model, in accordance with probabilistic regularities, is represented as a random process, the course and outcome of which are described by certain characteristics of random variables. Construction of a model at this level is possible if there is sufficient factual material to estimate the necessary probability distributions. In constructing a model at the stochastic level, the method of statistical trials is widely used in addition to the classical apparatus of probability theory, and the principle of optimality is based on the maximization of the mathematical expectation of the efficiency criterion. Thus, the task is practically transferred to the deterministic level. The indeterminate level is characterized by such a volume of information at which only a set of possible search situations is known, but without any a priori information about the probability of each of them. Usually, such a volume of information is characteristic of a conflict situation in which the object of the search and the observer pursue directly opposing goals, choosing a certain way of action to achieve them. Building a model and choosing the optimal way at the indeterminate level encounters some difficulties, since the principles of optimality in this case may not be entirely clear. Establishing such principles of optimality and finding solutions to such problems constitute the content of game theory. Game theory deals with the study of mathematical models of decision-making in conditions of conflict. To construct a formal mathematical model of decision-making in conditions of conflict, it is necessary to mathematically describe all possible actions of the participants in the conflict and the results of these actions. The results of the players' actions are evaluated using a numerical function called the payoff function.

## 3 Task formulation

Let us move on to the description of the search process, which is the focus of the main part of this work. A ship-interceptor equipped with a hydrolocator has detected the periscope of a submarine, which immediately disappeared in an unknown direction. A hydrolocator is a means of sound detection of underwater objects using acoustic radiation; it consists of a transceiver that sends sound pulses in the required direction and receives the pulses reflected from any object encountered on their path. After the initial detection of the submarine, the task of the ship-interceptor is to catch the submarine in the shortest possible time. It is assumed that although the ship does not know the exact speed of the submarine, it knows a discrete set of speeds, one of which is the actual speed of the submarine. The formulated problem is a problem of secondary search for a moving object. The ship-interceptor will be referred to as the pursuer and the submarine as the evader, denoted as P and E, respectively. The work considers the case of continuous search in which the counteraction of the submarine is not taken into account, game-theoretic search problems, and the case of searching for n submarines.
Thus, the goals of this work are the mathematical formalization of the process of search and interception of moving objects under various information conditions; the development of a procedure for finding the optimal solution; and the implementation of the algorithm using the MAPLE 17 software package.

## 4 One Pursuer and One Evader Scenario

Algorithm for finding the guaranteed capture time. Let us present an algorithm for finding the completion time of the search under conditions when the pursuer does not know the speed of the evader with certainty. To do this, let us first describe the strategy of the pursuer's behavior. Assume that the speed of the pursuer is so much greater than the speed of the evading submarine that the completion of the search is guaranteed. At the initial moment \(t_{0}\) the pursuer is at distance \(D_{0}\) from the point O at which the submarine submerged, and the set of possible speeds of the evader is \(V^{E}=\{v_{1},...,v_{n}\}\). Introduce a polar coordinate system with the pole at the point O. The motion of the pursuer is described by the equations

\[\dot{\rho}^{P}=\alpha,\quad|\alpha|\leq\nu_{\rho}\]

\[\dot{\varphi}^{P}=\beta,\quad|\beta|\leq\nu_{\varphi}\]

\[\nu^{P}=\sqrt{\left(\nu_{\rho}\right)^{2}+\left(\nu_{\varphi}\right)^{2}}\]

Since the speed of the fleeing vessel is not known with certainty, the pursuer makes the assumption that E has a speed of \(\nu_{1}\in V^{E}\). To capture the submarine, starting at time \(t_{0}\) the pursuer moves towards point O with speed \(\nu^{P}\) and continues until time \(t_{1}\), at which point both players are at the same distance from point O, i.e.

\[\rho_{1}^{P}=\rho_{1}^{E}\]

and

\[\int_{t_{0}}^{t_{1}}\nu_{1}dt+\nu^{P}(t_{1}-t_{0})=D_{0}\]

If the encounter did not occur by time \(t_{1}\), the pursuer, choosing a direction of circumnavigation, continues to move around point O in such a way as to constantly remain at the same distance from point O as the fleeing ship. Let us find the trajectory of motion corresponding to this behavior strategy. We will consider the direction of circumnavigation coinciding with the positive direction of the polar angle.
The speed of the interceptor ship can be decomposed into two components: radial \(\nu_{\rho}\) and tangential \(\nu_{\varphi}\). The radial component is the speed at which the ship moves away from the pole, i.e.

\[\nu_{\rho}=\dot{\rho}\]

The tangential component is the linear speed of rotation relative to the pole, i.e.

\[\nu_{\varphi}=\rho\dot{\varphi}\]

In order for the encounter to occur, the pursuer moves at maximum speed, keeping the radial component of its velocity equal to the speed of the fleeing vessel. Then, to find the trajectory of the pursuer, it is necessary to solve the system of differential equations

\[\dot{\rho}=\nu_{1}\]

\[\dot{\varphi}^{2}\rho^{2}=(\nu^{P})^{2}-(\nu_{1})^{2}\]

The initial conditions for this system are

\[\varphi(t_{1})=0\]

\[\rho(t_{1})=\nu_{1}t_{1}\]

Solving it, we find:

\[\varphi(t)=\frac{\sqrt{(\nu^{P})^{2}-(\nu_{1})^{2}}}{\nu_{1}}\ln\frac{t}{t_{1}}\]

\[\rho(t)=v_{1}t\]

Then the search time can be expressed as a function of the polar angle:

\[t(\varphi)=t_{1}\exp\left(\frac{v_{1}\varphi}{\sqrt{(v^{P})^{2}-(v_{1})^{2}}}\right)\]

Thus, the trajectory consists of straight-line segments and logarithmic spiral segments. By adhering to this behavior strategy, the pursuer will detect the submarine within a time not exceeding one spiral turn. If the ship, having completed the turn of the spiral, does not find the submarine, it means that the initial assumption about the speed of the evader was incorrect. Let \(t_{2}\) denote the moment at which this turn is completed. It is then necessary to choose the next speed \(v_{2}\in V^{E}\) and assume that it is the actual speed. During the time \(t_{2}\) the evader has covered the distance \(\rho_{E}(t_{2})=v_{2}t_{2}\), while the pursuer has covered \(\rho_{P}(t_{2})=v_{1}t_{2}\). There are two cases. If \(\rho_{P}(t_{2})>\rho_{E}(t_{2})\), then the distance between the players equals \(D_{2}=\rho_{P}(t_{2})-\rho_{E}(t_{2})\), and to find the time \(t_{3}\) the equation

\[\int_{t_{2}}^{t_{3}}v_{2}dt+v^{P}(t_{3}-t_{2})=D_{2}\]

must be solved. If \(\rho_{P}(t_{2})<\rho_{E}(t_{2})\), then the distance between the players equals \(D_{2}=\rho_{E}(t_{2})-\rho_{P}(t_{2})\), and to find the moment in time \(t_{3}\) we need to solve the equation

\[v^{P}(t_{3}-t_{2})-\int_{t_{2}}^{t_{3}}v_{2}dt=D_{2}\]

This algorithm for computing the guaranteed capture time is implemented using the software package MAPLE 17.
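For illustration, the strategy just described can be turned into a small computation of the guaranteed capture time for a fixed order of checking the candidate speeds. The sketch below is not the MAPLE 17 implementation mentioned in the text; the function name and the numbers in the example call are illustrative only, and straight-line escape of the evader from the point O is assumed, as above.

```python
from math import exp, pi, sqrt

def guaranteed_capture_time(order, d0, vp):
    """Guaranteed capture time when the candidate speeds are checked in the
    given order (a sketch of the strategy described above).

    order : sequence of assumed evader speeds, each strictly less than vp
    d0    : initial distance between the pursuer and the diving point O
    vp    : speed of the pursuer
    """
    t = 0.0                        # current time
    prev = None                    # previously checked (wrong) speed
    for v in order:
        if prev is None:
            # Phase 1: close in until both are at the same distance from O,
            # v*t1 + vp*t1 = d0.
            t = d0 / (v + vp)
        else:
            rho_e = v * t          # where E would be if its speed were v
            rho_p = prev * t       # pursuer radius after the last spiral turn
            gap = abs(rho_p - rho_e)
            if rho_p > rho_e:      # move inward, closing speed vp + v
                t += gap / (vp + v)
            else:                  # move outward, closing speed vp - v
                t += gap / (vp - v)
        # Phase 2: one full turn of the logarithmic spiral rho = v * t.
        t *= exp(2 * pi * v / sqrt(vp**2 - v**2))
        prev = v
    return t

# Example call with illustrative values (speeds in km/h, distance in km).
print(guaranteed_capture_time([10, 20, 30], d0=20, vp=100))
```

The returned value is the time after the last spiral turn, i.e., the time by which the evader is certainly found if its true speed belongs to the checked set.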
In our task, the number of speeds n is finite and known in advance, and each speed needs to be checked for validity. It is assumed that the checking can start with any of the speeds, and the duration of checking each speed depends on the constructed schedule. Using the algorithm for finding the guaranteed search time outlined above, we construct a matrix of times \(T=(t_{ij})\), where \(t_{ij}\) is the duration of checking speed \(v_{j}\) when speed \(v_{i}\) immediately precedes it. The time taken by the pursuer to check all the speeds, i.e., the guaranteed search time, depends on this order. Denote by \(F_{\max}=F_{[n]}=\sum_{i=1}^{n}t_{[i-1],[i]}\) the maximum duration of the check. The task is to check each of the n speeds once and only once, and the order should be chosen so as to minimize the maximum duration of the pass. It is necessary to find a matrix X of order n with elements

\[x_{ij}=\begin{cases}1,&\text{if speed }v_{j}\text{ is checked immediately after speed }v_{i},\\ 0,&\text{otherwise,}\end{cases}\]

that minimizes the total checking time while ensuring that every speed is checked exactly once. The problem is analogous to the travelling salesman problem. The first way to order the enumeration of the speeds is the branch-and-bound method: the set of all checking orders is successively split into subsets, for each subset a lower bound on the guaranteed search time is computed, and subsets whose bound exceeds the best time found so far are excluded from further consideration. Thus, the resulting matrices are characterized by an increasing lower bound and (or) a larger number of established steps. In addition, for each subsequent matrix, the number of checks is less than for the previous one, and eventually a state is reached where the permutation is fully defined. The situations where the solution is obtained immediately, or the matrix is excluded, are obvious. The essence of branching lies in the concepts of reduction and selection. The reduction aims to obtain at least one zero in each row and column of the original matrix T. Since each solution to the problem includes one and only one element from each row or column of matrix T, subtracting or adding a constant to each element of its column or row changes all solutions to the same degree and does not lead to a displacement of the optimum. Subtract a constant h from each element of a row or column of matrix T, and let the resulting matrix be \(T^{\prime}\). Then the optimal solution found from \(T^{\prime}\) is also optimal for T, i.e., both matrices have the same permutation that minimizes the time. We can choose \(\nu^{\prime}=h\) as the lower bound for solutions obtained from \(T^{\prime}\).
Subtraction can continue until each row and column contains at least one zero (i.e., the minimum element in each row and column is zero). The sum of all reduction constants determines the lower bound Y for the original problem. The matrix T is reduced if it cannot be reduced further. In this case, finding route options is associated with studying a particular transition, say from i to j. As a result, instead of the original matrix, we consider two matrices:

1. Matrix \(T_{ij}\), which is associated with finding the best of all solutions given by matrix T and including the order (i, j).

2. Matrix \(T_{n(ij)}\), which is associated with choosing the best of all solutions not including the order (i, j).

After fixing the transition from i to j, we need to exclude transitions from i to other speeds except j, and transitions to j from other speeds except i, by setting all elements of row i and column j, except \(t_{ij}\), to infinity. We also need to prohibit the order (j, i) in the future by setting \(t_{ji}=\infty\), because checking all the speeds during a single pass cannot include both (i, j) and (j, i) simultaneously. Since these prohibitions may lead to the elimination of some zeros in matrix T, further reduction of T and obtaining a new, larger lower bound for solutions associated with matrix \(T_{ij}\) is not excluded. In the matrix \(T_{n(ij)}\), the transition from i to j is prohibited, i.e., \(t_{ij}\) is set to infinity. In this case, too, the possibility of further reducing the matrix and thereby increasing the lower bound for solutions obtained from \(T_{n(ij)}\) is not excluded. The choice of (i, j) should be such as to maximize the lower bound for \(T_{n(ij)}\), which may allow for the elimination of a number of trajectories without further branching. To achieve this, all possible pairs (i, j) are examined, and the choice is made in such a way that the sum of the two consecutive reducing constants is maximal. Obviously, transitions (i, j) corresponding to zero elements of matrix T should be prohibited first, since the choice of nonzero elements does not contribute to further reduction of \(T_{n(ij)}\).
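The reduction and branching-choice steps described above can be sketched in Python as follows; the function names are illustrative, and forbidden transitions (including the diagonal of T) are assumed to be marked with math.inf before the call.

```python
import math

def reduce_matrix(T):
    """Row/column reduction of a time matrix (infinities mark forbidden
    transitions).  Returns the reduced matrix and the sum of the reduction
    constants, which increases the lower bound of the current subproblem."""
    n = len(T)
    T = [row[:] for row in T]          # work on a copy
    bound = 0.0
    for i in range(n):                 # subtract row minima
        m = min(T[i])
        if math.isfinite(m) and m > 0:
            bound += m
            T[i] = [x - m for x in T[i]]
    for j in range(n):                 # subtract column minima
        m = min(T[i][j] for i in range(n))
        if math.isfinite(m) and m > 0:
            bound += m
            for i in range(n):
                T[i][j] -= m
    return T, bound

def best_branching_zero(T):
    """Among the zero entries of a reduced matrix, pick the transition (i, j)
    whose prohibition raises the lower bound of T_n(ij) the most, i.e.
    maximizes the sum of the second-smallest elements of row i and column j."""
    n = len(T)
    best, best_gain = None, -1.0
    for i in range(n):
        for j in range(n):
            if T[i][j] == 0:
                row_min = min(T[i][k] for k in range(n) if k != j)
                col_min = min(T[k][j] for k in range(n) if k != i)
                if row_min + col_min > best_gain:
                    best, best_gain = (i, j), row_min + col_min
    return best, best_gain
```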
The second method for ordering the enumeration of the speeds is the dynamic programming approach. Without loss of generality, we choose some speed \(v_{0}\) as the initial speed. After that, we divide the whole set of speeds into four disjoint subsets: \(\{v_{0}\}\) is the set consisting only of the initial speed; \(\{v_{i}\}\) is the set consisting of a single non-initial speed; \(\{V_{k}\}\) is a set of k speeds other than \(v_{0}\) and \(v_{i}\); and \(\{V_{n-k-2}\}\) is the set of the remaining n-k-2 speeds. Let us assume that the optimal order of checking the speeds, starting with speed \(v_{0}\), is known. Then we can choose a speed \(v_{i}\) and a subset \(\{V_{k}\}\) of k speeds in such a way that this optimal permutation begins with \(\{v_{0}\}\), passes through the set \(\{V_{n-k-2}\}\), then \(\{v_{i}\}\), after which it checks the set \(\{V_{k}\}\). Now let us consider only the part of the permutation that lies between \(\{v_{i}\}\) and \(\{v_{0}\}\) with an intermediate check of \(\{V_{k}\}\). It can be noted that the minimum time for this segment is known. If this were not the case, then, without changing the part of the permutation up to speed \(v_{i}\), we could find a better guaranteed time for completing its check and, therefore, a smaller time for the whole permutation; this is impossible, since it contradicts the assumption that the permutation is optimal. Let \(f(v_{i};\{V_{k}\})\) be the time of the best permutation from \(v_{i}\) to \(v_{0}\) that includes the set \(\{V_{k}\}\). Note that when k = 0, \(f(v_{i};\{\emptyset\})=t_{i0}\), where \(t_{i0}\) is an element of the matrix T; if k = n-1 and \(v_{i}\) coincides with the starting speed, then \(f(v_{0};\{V_{n-1}\})\) is the time of the optimal permutation of the original problem. The idea of dynamic programming is to increment k step by step, starting from k = 0; starting from \(v_{0}\), the permutation is traversed in reverse order to find the optimal solution. For the problem under consideration, the main functional equation of dynamic programming is

\[f(v_{i};\{V_{k}\})=\min_{v_{j}\in\{V_{k}\}}\left[t_{ij}+f(v_{j};\{V_{k}\}-\{v_{j}\})\right]\]

This equation shows that to find the best permutation starting from \(v_{i}\) and ending with \(v_{0}\), with k intermediate speeds, one needs to choose the best among k permutations, each starting with a transition from \(v_{i}\) to one of the k speeds and then proceeding in the fastest way to \(v_{0}\) with intermediate visits to the k-1 others. Each of these k options, in turn, represents the fastest of k-1 permutations according to the same equation. Eventually, a point is reached where the right-hand side of the equation simply represents an element of T.
As an example, consider the solution of the problem for five speeds, with the fifth speed taken as the starting point. Then \(f(v_{5};\{v_{1},v_{2},v_{3},v_{4}\})\) represents the shortest time of the best permutation, and any sequence of checking the speeds that yields this time is optimal. At step 0, the solution is computed for four variants with k = 0:

\[f(v_{1};\{\emptyset\})=t_{15}\]
\[f(v_{2};\{\emptyset\})=t_{25}\]
\[f(v_{3};\{\emptyset\})=t_{35}\]
\[f(v_{4};\{\emptyset\})=t_{45}\]

At the first step, the solutions for k = 1 are expressed in terms of the known solutions for k = 0:

\[f(v_{1};\{v_{2}\})=t_{12}+f(v_{2};\{\emptyset\})\]
\[f(v_{1};\{v_{3}\})=t_{13}+f(v_{3};\{\emptyset\})\]
\[f(v_{1};\{v_{4}\})=t_{14}+f(v_{4};\{\emptyset\})\]
\[f(v_{2};\{v_{1}\})=t_{21}+f(v_{1};\{\emptyset\})\]
\[f(v_{2};\{v_{3}\})=t_{23}+f(v_{3};\{\emptyset\})\]
\[\ldots\]
\[f(v_{4};\{v_{3}\})=t_{43}+f(v_{3};\{\emptyset\})\]

At the second step, the solutions for k = 2 are expressed in terms of the known solutions for k = 1:

\[f(v_{1};\{v_{2},v_{3}\})=\min[t_{12}+f(v_{2};\{v_{3}\}),\ t_{13}+f(v_{3};\{v_{2}\})]\]
\[f(v_{1};\{v_{2},v_{4}\})=\min[t_{12}+f(v_{2};\{v_{4}\}),\ t_{14}+f(v_{4};\{v_{2}\})]\]
\[f(v_{1};\{v_{3},v_{4}\})=\min[t_{13}+f(v_{3};\{v_{4}\}),\ t_{14}+f(v_{4};\{v_{3}\})]\]
\[\ldots\]
\[f(v_{2};\{v_{1},v_{4}\})=\min[t_{21}+f(v_{1};\{v_{4}\}),\ t_{24}+f(v_{4};\{v_{1}\})]\]
\[\ldots\]
\[f(v_{4};\{v_{2},v_{3}\})=\min[t_{42}+f(v_{2};\{v_{3}\}),\ t_{43}+f(v_{3};\{v_{2}\})]\]

We proceed to the third step, using each of the solutions of the second step.
\[f(v_{1};\{v_{2},v_{3},v_{4}\})=\min[t_{12}+f(v_{2};\{v_{3},v_{4}\}),\ t_{13}+f(v_{3};\{v_{2},v_{4}\}),\ t_{14}+f(v_{4};\{v_{2},v_{3}\})]\]
\[f(v_{2};\{v_{1},v_{3},v_{4}\})=\min[t_{21}+f(v_{1};\{v_{3},v_{4}\}),\ t_{23}+f(v_{3};\{v_{1},v_{4}\}),\ t_{24}+f(v_{4};\{v_{1},v_{3}\})]\]
\[f(v_{3};\{v_{1},v_{2},v_{4}\})=\min[t_{31}+f(v_{1};\{v_{2},v_{4}\}),\ t_{32}+f(v_{2};\{v_{1},v_{4}\}),\ t_{34}+f(v_{4};\{v_{1},v_{2}\})]\]
\[f(v_{4};\{v_{1},v_{2},v_{3}\})=\min[t_{41}+f(v_{1};\{v_{2},v_{3}\}),\ t_{42}+f(v_{2};\{v_{1},v_{3}\}),\ t_{43}+f(v_{3};\{v_{1},v_{2}\})]\]

At the fourth step, the solution of the original problem is obtained:

\[f(v_{5};\{v_{1},v_{2},v_{3},v_{4}\})=\min[t_{51}+f(v_{1};\{v_{2},v_{3},v_{4}\}),\ t_{52}+f(v_{2};\{v_{1},v_{3},v_{4}\}),\ t_{53}+f(v_{3};\{v_{1},v_{2},v_{4}\}),\ t_{54}+f(v_{4};\{v_{1},v_{2},v_{3}\})]\]

At stage k there exist \(\frac{(n-1)!}{k!\,(n-k-2)!}\) values of the function f to be computed, and for \(k\geq 1\) each of them requires k comparisons. The total number of computations at all stages is therefore bounded by

\[2\sum_{k=1}^{n-2}\frac{k\,(n-1)!}{k!\,(n-k-2)!}+(n-1)<n^{2}2^{n}\]

As an example, let us consider solving the problem for six speeds \(V^{E}=\{10,20,30,40,50,60\}\). The initial matrix T is obtained by applying the algorithm for computing the guaranteed search time, implemented using the Maple software package:

\[T=\begin{pmatrix}0.13&0.42&1.13&3.05&9.13&34.1\\ 0.25&0.24&1.74&4.73&14.23&53.27\\ 0.54&1.29&0.44&7.6&22.94&86.04\\ 1.23&2.84&6&0.89&39.15&147.2\\ 3.14&7.04&14.7&31.4&2&277.2\\ 9.62&21.16&43.8&93.13&217.76&5.6\end{pmatrix}\]
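The functional equation can also be evaluated mechanically over all subsets of speeds. The following Python sketch follows the recursion written above and applies it to the matrix T of the six-speed example; taking the last speed as the starting point is an assumption made only to show a complete call, and the code is illustrative rather than the implementation used in the text.

```python
from itertools import combinations

def optimal_check_order(T, start):
    """Dynamic programming over subsets, following the functional equation
    f(v_i; V_k) = min over v_j in V_k of [t_ij + f(v_j; V_k - {v_j})].
    T[i][j] is the duration of checking speed j immediately after speed i,
    and `start` is the index of the speed taken as the starting point.
    Returns (guaranteed time, order of checking the speeds)."""
    others = [i for i in range(len(T)) if i != start]
    f = {}                                   # (i, frozenset) -> (time, next index)
    for i in others:                         # k = 0: last transition back to start
        f[(i, frozenset())] = (T[i][start], start)
    for k in range(1, len(others)):          # k = 1, ..., n-2
        for subset in combinations(others, k):
            s = frozenset(subset)
            for i in others:
                if i in s:
                    continue
                f[(i, s)] = min((T[i][j] + f[(j, s - {j})][0], j) for j in s)
    full = frozenset(others)
    best = min((T[start][j] + f[(j, full - {j})][0], j) for j in full)
    # Recover the checking order by walking through the stored decisions.
    order, s, j = [start], full, best[1]
    while j != start:
        order.append(j)
        s = s - {j}
        j = f[(j, s)][1]
    return best[0], order

# Matrix T from the six-speed example above; taking the last speed as the
# starting point is an assumption made for illustration only.
T = [[0.13, 0.42, 1.13, 3.05, 9.13, 34.1],
     [0.25, 0.24, 1.74, 4.73, 14.23, 53.27],
     [0.54, 1.29, 0.44, 7.6, 22.94, 86.04],
     [1.23, 2.84, 6, 0.89, 39.15, 147.2],
     [3.14, 7.04, 14.7, 31.4, 2, 277.2],
     [9.62, 21.16, 43.8, 93.13, 217.76, 5.6]]
print(optimal_check_order(T, start=5))
```

The returned pair contains the guaranteed search time and the corresponding order in which the speeds are checked.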
## 5 Theoretical game model of search and interception

To reduce the guaranteed interception time, it is advisable for the pursuer to order the checking of the escape speeds. However, if the escapee becomes aware of this, they can move at the speed that the pursuer intends to check last, which allows the escapee to maximize the search time. Thus, the search problem can be considered a game problem under conditions of opposition. The system G = (X, Y, K), where X and Y are non-empty sets and K: X \(\times\) Y \(\rightarrow\) \(\mathbb{R}\), is called an antagonistic game in normal form. Elements x \(\in\) X and y \(\in\) Y are called strategies of player 1 and player 2, respectively, in game G. The elements of the Cartesian product (i.e., strategy pairs (x, y), where x \(\in\) X and y \(\in\) Y) are called situations, and the function K is the payoff function of player 1. Player 2's payoff in an antagonistic game in situation (x, y) is assumed to be [-K(x, y)], so the function K is also called the payoff function of the game, and game G is a zero-sum game.

Let us define the game for the search problem under consideration. Let the escapee choose any speed from the set \(V^{E}=\{v_{1},...,v_{n}\}\) and any direction from the set \(\alpha=\{\alpha_{1},...,\alpha_{n}\}\). Then the set of pure strategies of the escapee (player 1) is the set of combinations of possible speeds \(v_{i}\) of their movement and movement directions \(\alpha_{i}\), and the set of pure strategies of the pursuer is the set of all possible permutations of the escapee's speeds. The payoff is the time it takes to catch the escapee, which is found using the algorithm described above. The game G is interpreted as follows: the players independently and simultaneously choose strategies x \(\in\) X and y \(\in\) Y. After that, player 1 receives a payoff equal to K(x, y), and player 2 receives a payoff equal to (-K(x, y)). Antagonistic games in which both players have finite sets of strategies are called matrix games. Let player 1 in a matrix game have a total of m strategies. We establish a one-to-one correspondence between the set X of strategies and the set M = {1, 2,..., m}. Similarly, if player 2 has n strategies, we can establish a one-to-one correspondence between the sets N = {1, 2,..., n} and Y. Then the game G is fully determined by the matrix A = \(\{\alpha_{ij}\}\), where \(\alpha_{ij}=K(x_{i},y_{j})\), \((i,j)\in M\times N\), \((x_{i},y_{j})\in X\times Y\), \(i\in M\), \(j\in N\). In this case, the game G is played as follows: player 1 chooses a row i \(\in\) M, and player 2 (simultaneously with player 1) chooses a column j \(\in\) N. After that, player 1 receives a payoff of \(\alpha_{ij}\), and player 2 receives (-\(\alpha_{ij}\)).

Each player aims to maximize their own winnings by choosing a strategy. However, for player 1 the winnings are determined by the function K(x, y), while for the second player by (-K(x, y)), i.e., the players' goals are directly opposite. The winnings of a player are determined by the situations (x, y) \(\in\) X \(\times\) Y that arise during the game. However, each situation, and therefore the winnings of a player, depend not only on their own choice but also on what strategy their opponent chooses. Therefore, in seeking to obtain the maximum possible winnings, each player must take into account the behavior of their opponent. In game theory, it is assumed that both players act rationally, i.e., strive to achieve the maximum winnings, assuming that their opponent acts in the best possible way for themselves. Let player 1 choose a strategy x. Then in the worst case they will win \(\min_{y}K(x,y)\). Therefore, player 1 can always guarantee themselves a win of \(\max_{x}\min_{y}K(x,y)\). If we abandon the assumption of the attainability of the extremum, then player 1 can always obtain winnings arbitrarily close to the value

\[\underline{v}=\sup_{x\in X}\inf_{y\in Y}K(x,y)\]

which is called the lower value of the game. If the external extremum is reached, then the value \(\underline{v}\) is also called the maximin; the principle of constructing the strategy x based on maximizing the minimum payoff is called the maximin principle, and the strategy x chosen in accordance with this principle is the maximin strategy of player 1. For player 2, similar reasoning can be applied. Suppose they choose strategy y.
Then in the worst case they will lose \(\max_{x}K(x,y)\). Therefore, the second player can always guarantee a loss of \(\min_{y}\max_{x}K(x,y)\). The number

\[\overline{v}=\inf_{y\in Y}\sup_{x\in X}K(x,y)\]

is called the upper value of the game G; when the external extremum is achieved, it is called the minimax. The principle of constructing the strategy y based on minimizing the maximum losses is called the minimax principle, and the strategy y chosen in accordance with this principle is the minimax strategy of player 2. It should be emphasized that the existence of a minimax (maximin) strategy is determined by the achievability of the external extremum. In the matrix game G the extrema are achieved, and the lower and upper values of the game are, respectively,

\[\underline{v}=\max_{1\leq i\leq m}\min_{1\leq j\leq n}\alpha_{ij}\]

\[\overline{v}=\min_{1\leq j\leq n}\max_{1\leq i\leq m}\alpha_{ij}\]

that is, the maximin is found by computing the minimum of each row of the matrix \((\alpha_{ij})\) and taking the largest of these minima, while the minimax is found by computing the maximum of each column and taking the smallest of these maxima.

Let us consider the question of the optimal behavior of the players in an antagonistic game. It is natural to consider a situation \((x^{*},y^{*})\in X\times Y\) in game G = (X, Y, K) optimal if neither player has an incentive to deviate from it. Such a situation \((x^{*},y^{*})\) is called an equilibrium, and the optimality principle based on constructing an equilibrium situation is called the principle of equilibrium. For antagonistic games, the principle of equilibrium is equivalent to the principles of minimax and maximin. In an antagonistic game G = (X, Y, K), a situation \((x^{*},y^{*})\) is called an equilibrium, or a saddle point, if

\[K(x,y^{*})\leq K(x^{*},y^{*})\]

\[K(x^{*},y)\geq K(x^{*},y^{*})\]

for all x \(\in\) X, y \(\in\) Y. For the matrix game G, we speak of the saddle points of the payoff matrix A, i.e., points \((i^{*},j^{*})\) such that for all i \(\in\) M, j \(\in\) N the inequalities

\[\alpha_{ij^{*}}\leq\alpha_{i^{*}j^{*}}\leq\alpha_{i^{*}j}\]

hold. Theorem. Let \((x_{1}^{*},y_{1}^{*})\) and \((x_{2}^{*},y_{2}^{*})\) be two arbitrary equilibrium situations in the antagonistic game G. Then

\[K(x_{1}^{*},y_{1}^{*})=K(x_{2}^{*},y_{2}^{*}),\qquad K(x_{1}^{*},y_{2}^{*})=K(x_{2}^{*},y_{1}^{*})\]

and \((x_{1}^{*},y_{2}^{*})\in Z(G)\), \((x_{2}^{*},y_{1}^{*})\in Z(G)\), where \(Z(G)\) is the set of all equilibrium situations. Let \((x^{*},y^{*})\) be an equilibrium situation in game G. The number \(v=K(x^{*},y^{*})\) is called the value of game G. Now we establish a connection between the principle of equilibrium and the minimax principles in an antagonistic game.
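As a small illustration of the maximin and minimax principles, the following sketch computes the lower and upper values of a payoff matrix and reports a saddle point when one exists; the matrix in the example call is made up for illustration and is not taken from the examples in the text.

```python
def game_values(A):
    """Lower and upper values of a matrix game with payoff matrix A
    (player 1 chooses a row and maximizes, player 2 chooses a column and
    minimizes).  Returns (lower value, upper value, saddle point or None)."""
    m, n = len(A), len(A[0])
    row_mins = [min(row) for row in A]
    col_maxs = [max(A[i][j] for i in range(m)) for j in range(n)]
    lower = max(row_mins)              # maximin, the lower value of the game
    upper = min(col_maxs)              # minimax, the upper value of the game
    saddle = None
    if lower == upper:                 # a saddle point exists in pure strategies
        saddle = (row_mins.index(lower), col_maxs.index(upper))
    return lower, upper, saddle

# Illustrative matrix: here maximin = minimax = 4, so the entry in row 2,
# column 0 is a saddle point.
print(game_values([[3, 5, 3],
                   [2, 1, 0],
                   [4, 6, 5]]))
```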
Theorem. In order for an equilibrium situation to exist in the game G = (X, Y, K), it is necessary and sufficient that the minimax and maximin

\[\min_{y}\sup_{x}K(x,y),\qquad\max_{x}\inf_{y}K(x,y)\]

exist and that the equality

\[\underline{v}=\max_{x}\inf_{y}K(x,y)=\min_{y}\sup_{x}K(x,y)=\overline{v}\]

is satisfied. If there exists an equilibrium situation in a matrix game, then the minimax is equal to the maximin, and, according to the definition of an equilibrium situation, each player can communicate their optimal (maximin) strategy to the opponent, and neither player can gain any additional advantage from this. Now suppose that in game G there is no equilibrium situation. Then

\[\min_{j}\max_{i}\alpha_{ij}-\max_{i}\min_{j}\alpha_{ij}>0\]

In this case, the maximin and minimax strategies are not optimal. Moreover, it may not be beneficial for the players to adhere to them, as they may obtain a greater gain. However, informing the opponent about the choice of strategy can lead to even greater losses than in the case of the maximin or minimax strategy. In this case, it is reasonable for the players to act randomly, which provides the greatest secrecy in choosing a strategy: the result of the choice cannot become known to the opponent, since the player themselves does not know it until the random mechanism is realized. A random variable whose values are the player's strategies is called their mixed strategy. Since a random variable is characterized by its distribution, we will identify a mixed strategy x of player 1 with an m-dimensional vector

\[x=(\xi_{1},...,\xi_{m})\in R^{m},\quad\sum_{i=1}^{m}\xi_{i}=1,\quad\xi_{i}\geq 0,\ i=1,...,m\]

Similarly, player 2's mixed strategy y is the n-dimensional vector

\[y=(\eta_{1},...,\eta_{n})\in R^{n},\quad\sum_{j=1}^{n}\eta_{j}=1,\quad\eta_{j}\geq 0,\ j=1,...,n\]

In this case, \(\xi_{i}\geq 0\) and \(\eta_{j}\geq 0\) are the probabilities of choosing the pure strategies i \(\in\) M and j \(\in\) N, respectively, when the players use mixed strategies x and y. Let X and Y denote the sets of mixed strategies of the first and second players, respectively, and let x = (\(\xi_{1}\),..., \(\xi_{m}\)) \(\in\) X be a mixed strategy. The set of mixed strategies of a player is an extension of their pure strategy space. A pair (x, y) of mixed strategies of the players in the matrix game G is called a situation in mixed strategies. Let us define the payoff of player 1 in the situation (x, y) in mixed strategies for the matrix game G as the mathematical expectation of their payoff given that the players use mixed strategies x and y, respectively.
The players choose their strategies independently of each other; therefore, the expected payoff K(x, y) in the situation (x, y) in mixed strategies x = (\(\xi_{1}\),..., \(\xi_{m}\)) and y = (\(\eta_{1}\),..., \(\eta_{n}\)) is equal to

\[K(x,y)=\sum_{i=1}^{m}\sum_{j=1}^{n}\alpha_{ij}\xi_{i}\eta_{j}\]

The situation \((x^{*},y^{*})\) is called an equilibrium situation if

\[K(x,y^{*})\leq K(x^{*},y^{*})\]

\[K(x^{*},y)\geq K(x^{*},y^{*})\]

for all x \(\in\) X, y \(\in\) Y. Theorem. Every matrix game has an equilibrium situation in mixed strategies. A common way to solve a matrix game is to reduce it to a linear programming problem. However, difficulties arise when solving matrix games of large dimensions; therefore, the iterative Brown-Robinson method is often used to find a solution. The idea of the method is to repeatedly play a fictitious game with the given payoff matrix. One repetition of the game is called a round. Let A = \(\{\alpha_{ij}\}\) be an (m \(\times\) n)-matrix game. In the first round, both players choose their pure strategies arbitrarily. In the k-th round, each player chooses the pure strategy that is best against the empirical distribution of the opponent's moves observed in the previous (k-1) rounds. So, suppose that in the first k rounds player 1 used the i-th strategy \(\xi_{i}^{k}\) times and player 2 used the j-th strategy \(\eta_{j}^{k}\) times. Then in the (k+1)-th round player 1 will use the \(i_{k+1}\)-th strategy and player 2 will use the \(j_{k+1}\)-th strategy, where

\[\overline{v}^{k}=\max_{i}\sum_{j}\alpha_{ij}\eta_{j}^{k}=\sum_{j}\alpha_{i_{k+1}j}\eta_{j}^{k}\]

\[\underline{v}^{k}=\min_{j}\sum_{i}\alpha_{ij}\xi_{i}^{k}=\sum_{i}\alpha_{ij_{k+1}}\xi_{i}^{k}\]

Let v be the value of the matrix game G and consider the quantities

\[\overline{v}^{k}/k=\max_{i}\sum_{j}\alpha_{ij}\eta_{j}^{k}/k=\sum_{j}\alpha_{i_{k+1}j}\eta_{j}^{k}/k\]

\[\underline{v}^{k}/k=\min_{j}\sum_{i}\alpha_{ij}\xi_{i}^{k}/k=\sum_{i}\alpha_{ij_{k+1}}\xi_{i}^{k}/k\]

The vectors \(x^{k}=(\xi_{1}^{k}/k,...,\xi_{m}^{k}/k)\) and \(y^{k}=(\eta_{1}^{k}/k,...,\eta_{n}^{k}/k)\) are mixed strategies of players 1 and 2, and as the number of rounds grows the quantities \(\underline{v}^{k}/k\) and \(\overline{v}^{k}/k\) approach the value v of the game, so the empirical mixed strategies \(x^{k}\) and \(y^{k}\) give an approximate solution of the game.
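A minimal sketch of the Brown-Robinson iterations described above is given below; the function and variable names are illustrative, and it is assumed that the rows of the payoff matrix correspond to the maximizing player.

```python
def brown_robinson(A, rounds=10000):
    """Fictitious play (Brown-Robinson) for a matrix game A, where player 1
    (rows) maximizes and player 2 (columns) minimizes.  Returns the empirical
    mixed strategies and the current bounds on the value of the game."""
    m, n = len(A), len(A[0])
    row_counts = [0] * m           # how many times each row was played
    col_counts = [0] * n           # how many times each column was played
    row_score = [0.0] * m          # cumulative payoff of each row vs. played columns
    col_score = [0.0] * n          # cumulative payoff of each column vs. played rows
    i, j = 0, 0                    # arbitrary choices in the first round
    for _ in range(rounds):
        row_counts[i] += 1
        col_counts[j] += 1
        for jj in range(n):        # update column scores with the new row i
            col_score[jj] += A[i][jj]
        for ii in range(m):        # update row scores with the new column j
            row_score[ii] += A[ii][j]
        i = max(range(m), key=lambda r: row_score[r])   # best reply of player 1
        j = min(range(n), key=lambda c: col_score[c])   # best reply of player 2
    k = rounds
    x = [c / k for c in row_counts]
    y = [c / k for c in col_counts]
    lower = min(col_score) / k     # the quantity written above as the lower bound
    upper = max(row_score) / k     # the quantity written above as the upper bound
    return x, y, lower, upper
```

Applied to a payoff matrix such as the one given for Example 2 below, the returned bounds bracket the value of the game, and the empirical strategies approximate the optimal mixed strategies.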
We now transfer the problem to search and pursuit between quadrotor UAVs, modifying it slightly. Let the distance between the fleeing UAV and the ground be 100 meters; the fleeing UAV selects its X-axis speed from the set \(V^{E}=\{8,56,78\}\) and its Y-axis direction value from the set \(\alpha=\{23,37,82\}\). The maximum speed of the chaser is \(V^{P}=100\) m/min. The strategy set of the fleeing UAV is then
\[(\alpha_{1},v_{1}),(\alpha_{1},v_{2}),(\alpha_{1},v_{3}),(\alpha_{2},v_{1}),(\alpha_{2},v_{2}),(\alpha_{2},v_{3}),(\alpha_{3},v_{1}),(\alpha_{3},v_{2})\]

**Example 2.** Let the initial distance between the pursuer and the evader be 50 kilometers. The evader chooses a speed from the set \(V^{E}=\{4,10,16\}\) and a direction from the set \(\alpha=\{8,10,16\}\). The maximum speed of the pursuer is \(V^{P}=80\) km/h. Then the set of strategies for the evader is
\[(\alpha_{1},v_{1}),(\alpha_{1},v_{2}),(\alpha_{1},v_{3}),(\alpha_{2},v_{1}),(\alpha_{2},v_{2}),(\alpha_{2},v_{3}),(\alpha_{3},v_{1}),(\alpha_{3},v_{2})\]
and the set of strategies for the pursuer is
\[(v_{1},v_{2},v_{3}),(v_{1},v_{3},v_{2}),(v_{2},v_{1},v_{3}),(v_{2},v_{3},v_{1}),(v_{3},v_{1},v_{2}),(v_{3},v_{2},v_{1})\]
The resulting game matrix is:
\begin{tabular}{r r r r r r}
0,6 & 0,6 & 1,32 & 5,56 & 2,161 & 4,77 \\
0,9 & 3,79 & 0,57 & 0,57 & 3,249 & 2,039 \\
2,2 & 0,9 & 2,19 & 1,38 & 0,536 & 0,536 \\
0,6 & 0,6 & 1,33 & 5,57 & 2,165 & 4,778 \\
0,9 & 3,8 & 0,57 & 0,568 & 3,263 & 2,048 \\
2,21 & 1 & 2,21 & 1,39 & 0,54 & 0,54 \\
\end{tabular}

Figure 6: All optional paths and pursuit results of the quadrotor UAV

We again transfer the problem to search and pursuit between quadrotor UAVs, modifying it slightly. Let the distance between the fleeing UAV and the ground be 100 meters; the fleeing UAV selects its X-axis speed from the set \(V^{E}=\{4,10,16\}\) and its Y-axis direction value from the set \(\alpha=\{8,10,16\}\). The maximum speed of the chaser is \(V^{P}=100\) m/min. The strategy set of the fleeing UAV is then
\[(\alpha_{1},v_{1}),(\alpha_{1},v_{2}),(\alpha_{1},v_{3}),(\alpha_{2},v_{1}),(\alpha_{2},v_{2}),(\alpha_{2},v_{3}),(\alpha_{3},v_{1}),(\alpha_{3},v_{2})\]

Figure 7: All optional paths and pursuit results of the quadrotor UAV

The game was solved by the Brown-Robinson method; the value of the game is 1.57. The strategy of the evader is (1/20, 0, 0, 0, 0, 0, 1/10, 1/4, 3/5) and the strategy of the pursuer is (9/20, 1/20, 3/20, 1/20, 1/4, 1/20). The solutions of the examples show that the most probable speed for the evader is the maximum of the possible speeds; therefore, the pursuer should start checking the speeds from the maximum possible one.

## 6 Task of one pursuer and a group of fugitives

Given a set \(\mathbf{J}\) of n submarines; for each fugitive \(J\in\mathbf{J}\) the time \(|J|\) needed by the pursuer to capture it is known, as well as a due time \(\overline{D}(J)\) (possibly the time at which the fugitive reaches a place where the pursuit can no longer continue). For each fugitive a weight coefficient w(J) is given, which enters the objective function to be optimized. The end time of the search for the i-th fugitive is denoted by \(t^{i}\); thus \(t^{i}=t_{i}+T_{i}\). Examples of criterion functions:

1. Minimize the total penalty for delays:
\[f_{1}=\sum_{k=1}^{n-1}w(J_{k})(t^{k}-\overline{D}_{k})^{+}\]
2. Minimize the maximum penalty for delays:
\[f_{2}=\max_{k}w(J_{k})(t^{k}-\overline{D}_{k})^{+}\]
3. Minimize the total amount of penalties:
\[f_{3}=\sum_{k=1}^{n-1}w(J_{k})(t^{k}-\overline{D}_{k})\]
4. Minimize the amount of tied-up funds:
\[f_{4}=\sum_{k=1}^{n-1}w(J_{k})t^{k}\]
Here \(x^{+}\) denotes the positive part of x, defined by \(x^{+}=\frac{1}{2}(x+|x|)\); the delay in catching the k-th evader is thus \((t^{k}-\overline{D}_{k})^{+}\).

**Solution by the criterion \(f_{4}\).** Let us consider the solution for the function \(f_{4}\) only in the case \(\overline{D}(J)=0\) for every \(J\in\mathbf{J}\). Consider the optimal sequence and swap two adjacent elements \(J_{k}\) and \(J_{k+1}\) in it. The capture time of the later of the two may increase (otherwise the considered solution would not be optimal), and the difference of the criterion between searching for the first k+1 fleeing members in the modified and in the optimal order is non-negative:
\[\big(w(J_{k+1})|J_{k+1}|+w(J_{k})(|J_{k}|+|J_{k+1}|)\big)-\big(w(J_{k})|J_{k}|+w(J_{k+1})(|J_{k}|+|J_{k+1}|)\big)\geq 0.\]
Hence, after simplification,
\[w(J_{k})|J_{k+1}|\geq w(J_{k+1})|J_{k}|.\]
Therefore, for the optimal schedule and for any k we obtain the inequality of ratios
\[\frac{|J_{k}|}{w(J_{k})}\leq\frac{|J_{k+1}|}{w(J_{k+1})}.\]
Note that if the ratios are equal, swapping k and k+1 does not change the value of the criterion. Therefore, any schedule in which the fugitives are ordered by non-decreasing ratio \(|J_{k}|/w(J_{k})\) is optimal with respect to \(f_{4}\).
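The ordering rule just derived (sort the fugitives by non-decreasing \(|J|/w(J)\)) is straightforward to apply. The following is a minimal Python sketch of this rule for the criterion \(f_{4}\) with \(\overline{D}(J)=0\); the job data are illustrative and not taken from the text.

```python
def schedule_by_ratio(jobs):
    """jobs: list of (name, capture_time, weight). Returns the order minimizing
    f4 = sum_k w(J_k) * t^k, where t^k is the completion time of the k-th capture."""
    order = sorted(jobs, key=lambda j: j[1] / j[2])  # non-decreasing |J| / w(J)
    t, f4 = 0.0, 0.0
    for _, length, weight in order:
        t += length          # completion time of this capture
        f4 += weight * t     # contribution to the criterion
    return [name for name, _, _ in order], f4

# Illustrative data (not from the text): capture times and weights of three fugitives.
jobs = [("J1", 4.0, 1.0), ("J2", 1.0, 2.0), ("J3", 3.0, 3.0)]
print(schedule_by_ratio(jobs))   # (['J2', 'J3', 'J1'], 22.0)
```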
Consider now the following situation. Suppose an intercepting ship, having n boats with depth bombs on board, at time t detects on the surface of the sea, at various distances from it, the periscopes of n submarines, which at that same moment dive underwater and begin to move in different directions at fixed speeds. It is required to send the boats to intercept the submarines in an optimal way, that is, so that the sum of the guaranteed interception times of the submarines is minimal. To solve the problem we form an efficiency matrix \(A=(a_{ij})\), in which each element is the guaranteed time for boat i to intercept submarine j; it consists of the time needed by the boat to reach the periscope detection point and the total time of passing along the logarithmic interception spiral. Let \(x_{ij}\) be variables that can take only the two values 0 and 1:
\[x_{ij}=\begin{cases}1,&\text{if boat }i\text{ is assigned to submarine }j,\\ 0,&\text{otherwise.}\end{cases}\]
It is necessary to find an assignment plan, i.e. a matrix \(X=\{x_{ij}\}\), i=1...m, j=1...n, which minimizes the total search time, while ensuring that each boat is assigned to search for at most one submarine and each submarine is searched for by at most one boat. The mathematical formulation of the optimal assignment problem is:
\[\min z=\min\sum_{i=1}^{m}\sum_{j=1}^{n}a_{ij}x_{ij}\]
\[\sum_{i=1}^{m}x_{ij}\leq 1,\quad j=1,\ldots,n\]
\[\sum_{j=1}^{n}x_{ij}\leq 1,\quad i=1,\ldots,m\]
\[x_{ij}\geq 0\]
In order for the optimal assignment problem to have an optimal solution, it is necessary and sufficient that the number of boats equals the number of submarines, i.e. n=m. Under this condition the inequality constraints become equality constraints:
\[\min z=\min\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}x_{ij}\]
\[\sum_{i=1}^{n}x_{ij}=1,\quad j=1,\ldots,n\]
\[\sum_{j=1}^{n}x_{ij}=1,\quad i=1,\ldots,n\]
\[x_{ij}\geq 0\]
If \(n\neq m\), the assignment problem is unbalanced; any assignment problem can be balanced by introducing the necessary number of dummy boats or submarines. The dual of the optimal assignment problem is
\[\max\omega=\max\Big(\sum_{i=1}^{n}u_{i}+\sum_{j=1}^{n}v_{j}\Big),\qquad u_{i}+v_{j}\leq a_{ij},\quad i=1,\ldots,n,\ j=1,\ldots,n,\]
where \(u_{i}\) and \(v_{j}\) are the dual variables associated with the boats and the submarines, respectively.

The assignment problem can be solved by the Hungarian method. The essence of the method is as follows:
1. In the original efficiency matrix A, determine the minimum element in each row and subtract it from all other elements of the row.
2. In the matrix obtained in step 1, determine the minimum element in each column and subtract it from all other elements of the column. If a feasible solution is not obtained after steps 1 and 2, perform:
2a. In the last matrix, draw the minimum number of horizontal and vertical lines through rows and columns so as to cross out all zero elements.
2b. Find the minimum non-crossed-out element, subtract it from all other non-crossed-out elements, and add it to all elements at the intersections of the lines drawn in the previous step. If the new distribution of zero elements does not allow a feasible solution to be constructed, repeat step 2a; otherwise proceed to step 3.
3. The optimal assignments correspond to the zero elements obtained in step 2.

Let us consider some numerical examples of the problem of distributing boats for catching several submarines.

**Example 3**. Let an interceptor ship detect 4 submarines. The initial distances to them are 100 km, 200 km, 50 km, and 163 km, respectively. The pursuer has 4 boats for catching the submarines; the maximum speeds of the boats are 74 km/h, 90 km/h, 178 km/h, and 124 km/h, respectively. The first submarine moves along the straight line \(\alpha_{1}=23\) with speed \(v_{1}=23\) km/h, the second with \(\alpha_{2}=137\), \(v_{2}=50\) km/h, the third with \(\alpha_{3}=187\), \(v_{3}=67\) km/h, and the fourth with \(\alpha_{4}=50\), \(v_{4}=70\) km/h. The matrix of the assignment problem is then:
\begin{tabular}{r r r r}
1,18 & 0,98 & 0,52 & 0,73 \\
14,43 & 7,06 & 1,77 & 3,3 \\
373,78 & 12,12 & 0,77 & 2,13 \\
14,43 & 3 & 0,96 & 1,53 \\
\end{tabular}
We solve the problem using the Hungarian method. The value of the objective function is 8.08, and the final table is:
\begin{tabular}{r r r r}
0 & 0 & 2,37 & 1,22 \\
9,63 & 2,46 & 0 & 0,17 \\
369,98 & 8,52 & 0 & 0 \\
11,22 & 0 & 0,79 & 0 \\
\end{tabular}
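For comparison, the optimal assignment in Example 3 can also be obtained with an off-the-shelf solver. The sketch below uses SciPy's `linear_sum_assignment` on the cost matrix of Example 3 (values copied from the table above, with decimal commas read as points); using SciPy here is an illustrative choice, not something prescribed by the text.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Guaranteed interception times from Example 3 (rows: boats, columns: submarines).
cost = np.array([
    [  1.18,  0.98, 0.52, 0.73],
    [ 14.43,  7.06, 1.77, 3.30],
    [373.78, 12.12, 0.77, 2.13],
    [ 14.43,  3.00, 0.96, 1.53],
])

rows, cols = linear_sum_assignment(cost)   # minimizes the total cost
total = cost[rows, cols].sum()
assignment = [(int(r) + 1, int(c) + 1) for r, c in zip(rows, cols)]
print(assignment, round(float(total), 2))
# Expected: boat 1 -> submarine 1, 2 -> 3, 3 -> 4, 4 -> 2, with total 8.08,
# matching the value of the objective function quoted above.
```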
We now transfer this example to search and pursuit between quadrotor UAVs, modifying it slightly. Suppose an intercepting quadcopter detects 4 intruding quadcopters, and the chaser has 4 interceptor UAVs to pursue them. The maximum speed of each interceptor along the XYZ axes is 74 km/h, 90 km/h, 178 km/h, and 124 km/h, respectively. The first intruding quadrotor UAV has a maximum X-axis speed \(v_{1}=23\) m/min, a maximum Y-axis speed \(\alpha_{1}=23\) m/min, and a height of 100 meters. The second intruding quadrotor UAV has a maximum X-axis speed \(v_{2}=50\) m/min, a maximum Y-axis speed \(\alpha_{2}=137\) m/min, and a height of 200 meters. The third intruding quadrotor UAV has a maximum X-axis speed \(v_{3}=67\) m/min, a maximum Y-axis speed \(\alpha_{3}=7\) m/min, and a height of 50 meters. The fourth intruding quadrotor UAV has a maximum X-axis speed \(v_{4}=70\) m/min, a maximum Y-axis speed \(\alpha_{4}=50\) m/min, and a height of 163 meters. The matching matrix is:
\begin{tabular}{r r r r}
0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
\end{tabular}
The value of the objective function is 3.0888.

**Example 4**. Let an interceptor ship detect 4 submarines. The initial distances to them are 30 km, 11 km, 62 km, and 8 km, respectively. The pursuer has 4 boats for catching the submarines; the maximum speeds of the boats are 60 km/h, 65 km/h, 95 km/h, and 105 km/h, respectively. The first submarine moves along the straight line \(\alpha_{1}=7\) with speed \(v_{1}=7\) km/h, the second with \(\alpha_{2}=11\), \(v_{2}=11\) km/h, the third with \(\alpha_{3}=30\), \(v_{3}=30\) km/h, and the fourth with \(\alpha_{4}=44\), \(v_{4}=44\) km/h. The matrix of the assignment problem is then:
\begin{tabular}{r r r r}
0,46 & 0,42 & 0,297 & 0,27 \\
0,16 & 0,15 & 0,11 & 0,097 \\
0,93 & 0,86 & 0,59 & 0,54 \\
0,18 & 0,15 & 0,09 & 0,08 \\
\end{tabular}
We solve the problem using the Hungarian method. The value of the objective function is 1.147, and the final table is:
\begin{tabular}{r r r r}
0,093 & 0,063 & 0 & 0 \\
0 & 0 & 0,02 & 0,034 \\
0,29 & 0,23 & 0,023 & 0 \\
0,02 & 0 & 0 & 0,017 \\
\end{tabular}
We again transfer the example to search and pursuit between quadrotor UAVs, modifying it slightly. Suppose an intercepting quadcopter detects 4 intruding quadcopters, and the chaser has 4 interceptor UAVs to pursue them. The maximum speed of each interceptor along the XYZ axes is 60 m/min, 65 m/min, 95 m/min, and 105 m/min, respectively. The first intruding quadrotor UAV has a maximum X-axis speed \(v_{1}=7\) m/min, a maximum Y-axis speed \(\alpha_{1}=7\) m/min, and a height of 30 meters. The second intruding quadrotor UAV has a maximum X-axis speed \(v_{2}=11\) m/min, a maximum Y-axis speed \(\alpha_{2}=11\) m/min, and a height of 11 meters. The third intruding quadrotor UAV has a maximum X-axis speed \(v_{3}=30\) m/min, a maximum Y-axis speed \(\alpha_{3}=30\) m/min, and a height of 62 meters. The fourth intruding quadrotor UAV has a maximum X-axis speed \(v_{4}=44\) m/min, a maximum Y-axis speed \(\alpha_{4}=44\) m/min, and a height of 44 meters. The matching matrix is:
\begin{tabular}{r r r r}
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
\end{tabular}
The value of the objective function is 0.8390.

## 7 Conclusion

With reasonable parameters of the pursuing UAVs, and taking the success rate and interception efficiency into account, the Hungarian algorithm computes motion parameters under which all escaping quadrotor UAVs can be pursued and successfully intercepted, given the performance parameters (such as the speed of each interceptor) and the pairing between the UAVs of the intercepting side and those of the escaping side.
2309.08432
Topological K-theory of quasi-BPS categories of symmetric quivers with potential
In previous works, we introduced and studied certain categories called quasi-BPS categories associated to symmetric quivers with potential, preprojective algebras, and local surfaces. They have properties reminiscent of BPS invariants/ cohomologies in enumerative geometry, for example they play important roles in categorical wall-crossing formulas. In this paper, we make the connections between quasi-BPS categories and BPS cohomologies more precise via the cycle map for topological K-theory. We show the existence of filtrations on topological K-theory of quasi-BPS categories whose associated graded are isomorphic to the monodromy invariant BPS cohomologies. Along the way, we also compute the topological K-theory of categories of matrix factorizations in terms of the monodromy invariant vanishing cycles (a version of this comparison was already known by work of Blanc-Robalo-To\"en-Vezzosi), prove a Grothendieck-Riemann-Roch theorem for matrix factorizations, and prove the compatibility between the Koszul equivalence in K-theory and dimensional reduction in cohomology. In a separate paper, we use the results from this paper to show that the quasi-BPS categories of K3 surfaces recover the BPS invariants of the corresponding local surface, which are Euler characteristics of Hilbert schemes of points on K3 surfaces.
Tudor Pădurariu, Yukinobu Toda
2023-09-15T14:32:44Z
http://arxiv.org/abs/2309.08432v2
# Topological K-theory of quasi-BPS categories of symmetric quivers with potential ###### Abstract. In previous work, we studied quasi-BPS categories (of symmetric quivers with potential, of preprojective algebras, of surfaces) and showed they have properties analogous to those of BPS invariants/ cohomologies. For example, quasi-BPS categories are used to formulate categorical analogues of the PBW theorem for cohomological Hall algebras (of Davison-Meinhardt) and of the Donaldson-Thomas/BPS wall-crossing for framed quivers (of Meinhardt-Reineke). The purpose of this paper is to make the connections between quasi-BPS categories and BPS cohomologies more precise. We compute the topological K-theory of quasi-BPS categories for a large class of symmetric quivers with potential. In particular, we compute the topological K-theory of quasi-BPS categories for a large class of preprojective algebras, which we use (in a different paper) to compute the topological K-theory of quasi-BPS categories of K3 surfaces. A corollary is that there exist quasi-BPS categories with topological K-theory isomorphic to BPS cohomology. We also compute the topological K-theory of categories of matrix factorizations for smooth affine quotient stacks in terms of the monodromy invariant vanishing cohomology, prove a Grothendieck-Riemann-Roch theorem for matrix factorizations, and check the compatibility between the Koszul equivalence in K-theory and dimensional reduction in cohomology. ## 1. Introduction The BPS invariants are integer virtual counts of semistable (compactly supported) coherent sheaves on a smooth complex Calabi-Yau 3-fold. They are fundamental enumerative invariants which determine many other enumerative invariants of interest for Calabi-Yau 3-folds, such as Gromov-Witten, Donaldson-Thomas (DT), or Pandharipande-Thomas invariants [14, Section 2 and a half]. Let \(X\) be a smooth Calabi-Yau 3-fold, let \(v\in H^{\cdot}(X,\mathbb{Z})\), let \(\sigma\) be a stability condition, let \(M^{\sigma}_{X}(v)\) be the good moduli space of \(\sigma\)-semistable (compactly supported) sheaves on \(X\) of support \(v\), and let \(\Omega^{\sigma}_{X}(v)\) be the corresponding BPS invariant. An important problem in enumerative algebraic geometry is to define a natural BPS cohomology theory for \(M^{\sigma}_{X}(v)\) which recovers \(\Omega^{\sigma}_{X}(v)\) as its Euler characteristic. Even more, one could attempt to construct a natural dg-category \[\mathcal{B}\mathcal{P}\mathcal{S}^{\sigma}_{X}(v) \tag{1.1}\] which recovers a 2-periodic version of BPS cohomology (via periodic cyclic homology or topological K-theory [1]), and thus also the BPS invariant \(\Omega^{\sigma}_{X}(v)\). The BPS cohomology, the BPS category (1.1), and the K-theory of (1.1) (BPS K-theory) are alternatives to their classical counterparts for \(M^{\sigma}_{X}(v)\). By dimensional reduction, one also obtains a BPS cohomology/ category/ K-theory for moduli of semistable sheaves on a surface. 
One may hope that the constructed BPS theories are more tractable than their classical counterparts.
The BPS cohomology recovers \(\Omega^{\sigma}_{X}(v)\) as its Euler characteristic. Davison-Meinhardt [10] defined BPS cohomology for all symmetric quivers with potentials (thus for all local models), and Davison-Hennecart-Schlegel Mejia [12] defined it for \(X=\operatorname{Tot}_{S}K_{S}\), where \(S\) is a Calabi-Yau surface. For a general CY 3-fold, up to the existence of a certain orientation data, the BPS cohomology is defined in [14, Definition 2.11].
By dimensional reduction, we also regard BPS cohomology as a cohomology theory for good moduli spaces of objects in categories of dimension 2, for example of the good moduli spaces \(P(d)\) of the classical truncation of the quasi-smooth stack \(\mathcal{P}(d)\) of dimension \(d\) representations of the preprojective algebra of \(Q^{\circ}\). For a quiver \(Q^{\circ}\), there is a perverse sheaf \[\mathcal{BPS}^{p}_{d}\in\operatorname{Perv}(P(d))\] whose cohomology is the BPS cohomology of the preprojective algebra of \(Q^{\circ}\).

### Categorical DT theory

We are interested in constructing a category (1.1) which recovers (and has analogous properties to) the BPS invariants/ cohomology. If there are no strictly \(\sigma\)-semistable sheaves of support \(v\), such a category will recover the DT invariants by taking the Euler characteristic of its periodic cyclic homology, see [14] for a definition for local surfaces and [HHR] for work in progress addressing the general case. In previous work, we introduced and studied quasi-BPS categories:

* for symmetric quivers with potential (1.2),
* for preprojective algebras, which we denote by \[\mathbb{T}(d)_{v}\subset D^{b}(\mathcal{P}(d)), \tag{1.4}\]
* for points on smooth surfaces [13, PTc], and
* for semistable sheaves on K3 surfaces [PTd].

These categories have analogous properties to BPS cohomology. Indeed, there are semiorthogonal decompositions of the categorical Hall algebras (of symmetric quivers with potential, and thus also of preprojective algebras, or of K3 surfaces) [13, Theorem 1.1] and [14, PTe, PTd], or of Donaldson-Thomas categories (of symmetric quivers with potential) [PTa, PTe] in products of quasi-BPS categories. These semiorthogonal decompositions are analogous to the PBW theorem for cohomological Hall algebras [10, DHSMb], or to the DT/ BPS wall-crossing of Meinhardt-Reineke for framed quivers [15]. For weights \(v\in\mathbb{Z}\) as in Theorem 1.1, we proved categorical versions of the Davison support lemma for BPS sheaves [14, PTe, PTd]. However, we observed in [14] that quasi-BPS categories do not categorify BPS cohomology for every \(v\in\mathbb{Z}\).

### Matrix factorizations and vanishing cycles

Locally, BPS sheaves are vanishing cycles of IC sheaves of coarse spaces of smooth quotient stacks, see (1.9). We thus first study vanishing cycles for a regular function \[f\colon\mathcal{X}\to\mathbb{C},\] where \(\mathcal{X}=X/G\) is a smooth quotient stack with \(G\) a reductive group and \(X\) a smooth affine variety. It is well-known that the category of matrix factorizations \(\operatorname{MF}(\mathcal{X},f)\) is a categorification of the vanishing cohomology \(H^{\cdot}\left(\mathcal{X},\varphi_{f}\mathbb{Q}_{\mathcal{X}}\right)\), see [10, 11]. Let \(\mathrm{T}\) be the monodromy operator and let \(\varphi_{f}^{\mathrm{inv}}\mathbb{Q}_{\mathcal{X}}\) be the cone of the endomorphism \(1-\mathrm{T}\) on \(\varphi_{f}\mathbb{Q}_{\mathcal{X}}\). Inspired by [11, 12, 13], we construct a Chern character map for \(f\) quasi-homogeneous: \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\operatorname{MF}(\mathcal{X},f))\to\bigoplus_{j\in\mathbb{Z}}H^{i+2j}(\mathcal{X},\varphi_{f}^{\operatorname{inv}}\mathbb{Q}_{\mathcal{X}}), \tag{1.5}\] which is an isomorphism if \(\mathcal{X}\) is a variety, see (4.14).
We note that the construction of (1.5) is fairly elementary: by the Koszul equivalence and dimensional reduction, both sides are isomorphic to relative theories; under this identification, (1.5) is the Chern character from relative topological K-theory to relative singular cohomology. The Chern character map (1.5) induces a cycle map on an associated graded of topological K-theory: \[\operatorname{c}\colon\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}}( \operatorname{MF}(\mathcal{X},f))\to H^{2\dim\mathcal{X}(d)-i-2\ell}( \mathcal{X},\varphi_{f}^{\operatorname{inv}}\mathbb{Q}_{\mathcal{X}}[-2]). \tag{1.6}\] In Section 4, we discuss functoriality of (1.5) and (1.6), in particular we prove a Grothendieck-Riemann-Roch theorem, see Theorem 4.7. ### Quasi-BPS categories for symmetric quivers with potential We briefly explain the construction of the quasi-BPS categories (1.2). Consider a symmetric quiver \(Q\) with potential \(W\). For any \(v\in\mathbb{Z}\), Spenko-Van den Bergh [10] constructed twisted non-commutative resolutions \[\mathbb{M}(d)_{v}\subset D^{b}(\mathcal{X}(d))_{v} \tag{1.7}\] of \(X(d)\). The category \(\mathbb{M}(d)_{v}\) is generated by certain vector bundles corresponding to lattice points inside a polytope. Then \[\mathbb{S}(d)_{v}:=\operatorname{MF}(\mathbb{M}(d)_{v},\operatorname{Tr}W) \subset\operatorname{MF}(\mathcal{X}(d),\operatorname{Tr}W)\] is the category of matrix factorizations \((\alpha\colon A\rightleftarrows B\colon\beta)\), where \(A,B\) are direct sums of the generating vector bundles of \(\mathbb{M}(d)_{v}\), and \(\alpha\circ\beta\) and \(\beta\circ\alpha\) are multiplication by \(\operatorname{Tr}W\). For \(d=(d^{i})_{i\in I}\in\mathbb{N}^{I}\) and \(v\in\mathbb{Z}\), we define a set \(S_{v}^{d}\) of partitions of \(d\) from the combinatorics of the polytope used to define (1.7). For each partition \(A\in S_{v}^{d}\), there is a corresponding constructible sheaf \(\mathcal{BPS}_{A}\). Let \[\mathcal{BPS}_{d,v}:=\bigoplus_{A\in S_{v}^{d}}\mathcal{BPS}_{A}. \tag{1.8}\] If \(\gcd{(\underline{d},v)}=1\) and \(Q\) has an even number of edges between any two different vertices and an odd number of loops at every vertex, then \(S_{v}^{d}\) consists only of the one term partition of \(d\) and then \[\mathcal{BPS}_{d,v}=\mathcal{BPS}_{d}:=\begin{cases}\varphi_{\operatorname{ Tr}W}\mathrm{IC}_{X(d)}[-1],\text{ if }R(d)^{\operatorname{st}}\neq\emptyset,\\ 0,\text{ otherwise.}\end{cases} \tag{1.9}\] There is thus a monodromy action on the cohomology of \(\mathcal{BPS}_{d,v}\) induced from the monodromy of vanishing cycles. **Theorem 1.2**.: (Theorems 6.2 and 6.3) _Let \(Q\) be a symmetric quiver, let \(W\) be a quasi-homogeneous potential, let \(d\in\mathbb{N}^{I}\), and let \(v\in\mathbb{Z}\). For any \(i,\ell\in\mathbb{Z}\), there is an injective cycle map induced from (1.6):_ \[\operatorname{c}\colon\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}} \left(\mathbb{S}(d)_{v}\right)\hookrightarrow H^{\dim\mathcal{X}(d)-2\ell-i}( X(d),\mathcal{BPS}_{d,v}^{\operatorname{inv}}[-1]). \tag{1.10}\] _If \(Q\) has an even number of edges between any two different vertices and an odd number of loops at every vertex, then the map (1.10) is an isomorphism._ A first ingredient in the proof of Theorem 1.2 is the explicit computation of the pushforward \(\pi_{*}\mathrm{IC}_{\mathfrak{X}(d)}\) as a sum of shifted perverse sheaves, where \(\pi\colon\mathcal{X}(d)\to X(d)\) is the good moduli space map, due to Meinhardt-Reineke [10] and Davison-Meinhardt [11]. 
The main ingredient in the proof of Theorem 1.2 is the construction of a cycle map from the topological K-theory of the quasi-BPS category to BPS cohomology, see Theorem 6.3. We construct coproduct-like maps on the topological K-theory of the quasi-BPS category \(K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})\) which we use to restrict the image of \(\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})\subset\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathrm{MF}(\mathcal{X}(d),\mathrm{Tr}\,W))\) under (1.6). Finally, to show that (1.10) is an isomorphism under the hypothesis above, we use a categorification of the (Meinhardt-Reineke) \(\mathrm{DT}/\mathrm{BPS}\) wall-crossing, which we prove in [PTe]. The case of zero potential of Theorem 1.2 is related to the categorification of intersection cohomology for good moduli spaces of smooth symmetric stacks pursued in [2]. **Theorem 1.3**.: (Theorem 6.6 and Corollary 6.7) _Assume \(Q\) has an even number of edges between any two different vertices and an odd number of loops at every vertex. Let \(d\in\mathbb{N}^{I}\) and let \(v\in\mathbb{Z}\) such that \(\gcd{(\underline{d},v)}=1\). Then \(K_{1}^{\mathrm{top}}(\mathbb{M}(d)_{v})=0\) and there is an isomorphism of \(\mathbb{Q}\)-vector spaces for all \(\ell\in\mathbb{Z}\):_ \[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{M}(d)_{v})\xrightarrow{\sim}\mathrm{IH}^{\dim\mathcal{X}(d)-2\ell-1}(X(d)).\]

### Quasi-BPS categories for preprojective algebras

Theorem 1.2 can also be used, in conjunction with dimensional reduction, to compute the topological K-theory of quasi-BPS categories for preprojective algebras. For a quiver \(Q^{\circ}\), consider its tripled quiver with potential \((Q,W)\). The subcategory (1.4) is Koszul equivalent [15] to the subcategory of graded matrix factorizations with summands in \(\mathbb{M}(d)_{v}\): \[\mathbb{S}^{\mathrm{gr}}(d)_{v}:=\mathrm{MF}^{\mathrm{gr}}\left(\mathbb{M}(d)_{v},\mathrm{Tr}\,W\right)\subset\mathrm{MF}^{\mathrm{gr}}\left(\mathcal{X}(d),\mathrm{Tr}\,W\right).\] We define constructible sheaves \(\mathcal{BPS}^{p}_{d,v}\) on \(P(d)\) as in (1.8). There is a cycle map \[\mathrm{c}\colon\mathrm{gr}_{\ell}G_{0}^{\mathrm{top}}(\mathcal{P}(d))\to H_{2\ell}^{\mathrm{BM}}(\mathcal{P}(d)) \tag{1.11}\] induced from the Chern character map of \(\mathcal{P}(d)\). **Theorem 1.4**.: (Corollary 7.3 and Theorem 7.6) _Let \(Q^{\circ}\) be a quiver, let \(d\in\mathbb{N}^{I}\), and let \(v,\ell\in\mathbb{Z}\). Then \(K_{1}^{\mathrm{top}}(\mathbb{T}(d)_{v})=0\) and the cycle map (1.11) induces an injective map:_ \[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v})\hookrightarrow H^{-2\ell}(P(d),\mathcal{BPS}^{p}_{d,v}). \tag{1.12}\] _Next, assume that for any two different vertices of \(Q^{\circ}\), there is an even number of unoriented edges between them. Then the map (1.12) is an isomorphism._ The preprojective algebras which locally model the moduli of semistable sheaves on a K3 surface are of quivers \(Q^{\circ}\) with the property that, for any two different vertices, there is an even number of unoriented edges between them. In Section 9, we prove a version of Theorem 1.4 for étale covers of stacks of representations of the preprojective algebras of such quivers, which suffices to compute the topological K-theory of quasi-BPS categories of K3 surfaces [PTd].

### Weight-independence

We revisit the discussion from Subsection 1.4, but the same observations apply in the setting of Subsection 1.5.
Let \(Q=(I,E)\) be a quiver with an even number of edges between any two different vertices and an odd number of loops at every vertex, and let \(W\) be a quasi-homogeneous potential of \(Q\). Note that there are equivalences, where \(k\in\mathbb{Z}\): \[\mathbb{S}(d)_{v}\simeq\mathbb{S}(d)_{v+k\underline{d}},\ \mathbb{S}(d)_{v} \simeq\mathbb{S}(d)_{-v}^{\mathrm{op}}\] given by tensoring with the \(k\)th power of the determinant line bundle and by taking the derived dual, respectively. There are no obvious other relations between \(\mathbb{S}(d)_{v}\) and \(\mathbb{S}(d)_{v^{\prime}}\) for \(v,v^{\prime}\in\mathbb{Z}\). However, by Theorem 1.4, we obtain: **Corollary 1.5**.: _Let \(v,v^{\prime}\in\mathbb{Z}\) be such that \(\gcd(v,\underline{d})=\gcd(v^{\prime},\underline{d})\). Let \(i\in\mathbb{Z}\). Then there is an equality of dimensions:_ \[\dim_{\mathbb{Q}}K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})=\dim_{\mathbb{Q}}K_{ i}^{\mathrm{top}}(\mathbb{S}(d)_{v^{\prime}}).\] Note that the statement is reminiscent to the \(\chi\)-independence phenomenon [10], [11], see especially [11, Corollary 1.5]. We observed an analogous statement for quasi-BPS categories of K3 surfaces in [20]. We do not know whether a stronger categorical statement, or at the level of algebraic K-theory, should hold for quivers with potential, see [20, Conjecture 1.4] for a conjecture in the case of K3 surfaces. It is natural to ask whether one can use a coproduct to define a primitive part \(\mathrm{P}K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})\subset K_{i}^{\mathrm{top}}( \mathbb{S}(d)_{v})\) of dimension equal to the dimension of the (total) monodromy invariant BPS cohomology, and thus independent of \(v\in\mathbb{Z}\). We defined such spaces in the localized equivariant algebraic K-theory for the tripled quiver with potential in [20]. We do not pursue this idea further in this paper. ### Complements In Section 3 we review the Chern character, the cycle map, and the (topological) Grothendieck-Riemann-Roch theorem for quotient stacks. In Section 5, we compare the Koszul equivalence [14] for dg-categories (and its induced isomorphism in K-theory) with the dimensional reduction theorem in cohomology [13]. In particular, we construct a Chern character map from the topological K-theory of a class of graded matrix factorizations to vanishing cohomology. In Section 8, we discuss some explicit computations of the topological K-theory of quasi-BPS categories. We mention two examples. First, let \(g\geqslant 0\). The coarse space of representations of the \(2g+1\) loop quiver is the variety of matrix invariants \(X(d)=\mathfrak{gl}(d)^{2g+1}/\!\!/G(d)\). By Theorem 1.3, we obtain a combinatorial formula for the dimensions of the intersection cohomology \(\mathrm{IH}^{\bullet}(X(d))\), which recovers a formula of Reineke [15]. Second, we compute the topological K-theory for quasi-BPS categories of points in \(\mathbb{C}^{3}\), see Proposition 8.11. It would be interesting to extend the methods in this paper and obtain computations beyond the case of quivers satisfying Assumption 2.1. ### Acknowledgements We thank Andrei Okounkov and Spela Spenko for useful discussions. T. P. is grateful to Columbia University in New York and to Max Planck Institute for Mathematics in Bonn for their hospitality and financial support during the writing of this paper. Y. T. is supported by World Premier International Research Center Initiative (WPI initiative), MEXT, Japan, and Grant-in Aid for Scientific Research grant (No. 19H01779) from MEXT, Japan. 
## 2. Preliminaries For a \(\mathbb{Z}\)-graded space \(V=\bigoplus_{j\in\mathbb{Z}}V^{j}\), let \(\widetilde{V}^{i}:=\prod_{j\in\mathbb{Z}}V^{i+2j}\). For a set \(S\), let \(\#S\) be the cardinal of \(S\). We list the main notation used in the paper in Table (1). Figure 1. Notation introduced in the paper ### Stacks and semiorthogonal decompositions The spaces \(\mathcal{X}=X/G\) considered are quasi-smooth (derived) quotient stacks over \(\mathbb{C}\), where \(G\) is a reductive group. The classical truncation of \(\mathcal{X}\) is denoted by \(\mathcal{X}^{\mathrm{cl}}=X^{\mathrm{cl}}/G\). We assume that \(X^{\mathrm{cl}}\) is quasi-projective. We denote by \(\mathbb{L}_{\mathcal{X}}\) the cotangent complex of \(\mathcal{X}\). For \(G\) a reductive group and \(X\) a dg-scheme with an action of \(G\), denote by \(X/G\) the corresponding quotient stack. When \(X\) is affine, we denote by \(X/\!\!/G\) the quotient dg-scheme with dg-ring of regular functions \(\mathcal{O}_{X}^{G}\). We will consider semiorthogonal decompositions \[D^{b}(\mathcal{X})=\langle\mathbb{A}_{i}\mid i\in I\rangle, \tag{2.1}\] where \(I\) is a partially ordered set. Consider a morphism \(\pi\colon\mathcal{X}\to S\). We say the semiorthogonal decompositions (2.1) is _\(S\)-linear_ if \(\mathbb{A}_{i}\otimes\pi^{*}\mathrm{Perf}(S)\subset\mathbb{A}_{i}\) for all \(i\in I\). Same as in the papers [PTd, PTe], we use the terminology of _good moduli spaces_ of Alper, see [1, Example 8.3]. ### Constructible sheaves For \(\mathcal{X}=X/G\) a quotient stack, denote by \(D^{b}_{\mathrm{con}}(\mathcal{X})\) the category of bounded complexes of constructible sheaves on \(\mathcal{X}\), see [10], and by \(\mathrm{Perv}(\mathcal{X})\subset D^{b}_{\mathrm{con}}(\mathcal{X})\) the abelian category of perverse sheaves on \(\mathcal{X}\), see [10]. We denote by \[{}^{p}\tau^{\leq\bullet}\colon D^{b}_{\mathrm{con}}(\mathcal{X})\to D^{b}_{ \mathrm{con}}(\mathcal{X})\] the truncation functors with respect to the perverse t-structure and by \[{}^{p}\mathcal{H}^{\bullet}\colon D^{b}_{\mathrm{con}}(\mathcal{X})\to \mathrm{Perv}(\mathcal{X})\] the perverse cohomology sheaves. For \(F\in D^{b}_{\mathrm{con}}(\mathcal{X})\), consider its total perverse cohomology: \[{}^{p}\mathcal{H}\!\cdot\!(F):=\bigoplus_{i\in\mathbb{Z}}{}^{p}\mathcal{H}^{i }(F)[-i].\] We say \(F\in D^{b}_{\mathrm{con}}(\mathcal{X})\) is _a shifted perverse sheaf in degree \(\ell\)_ if \(F[\ell]\in\mathrm{Perv}(\mathcal{X})\) and _a shifted perverse sheaf_ if there exists \(\ell\in\mathbb{Z}\) such that \(F[\ell]\in\mathrm{Perv}(\mathcal{X})\). Let \(\mathbb{D}\) denote the Verdier duality functor on a stack \(\mathcal{X}\). Let \(\omega_{\mathcal{X}}:=\mathbb{D}\mathbb{Q}_{\mathcal{X}}\). When \(\mathcal{X}\) is a smooth stack, equidimensional of dimension \(d\), then \(\omega_{\mathcal{X}}=\mathbb{Q}_{\mathcal{X}}[2d]\). For \(\mathcal{X}=X/G\), denote by \(H^{i}(\mathcal{X}):=H^{i}(\mathcal{X},\mathbb{Q})=H^{i}_{G}(X,\mathbb{Q})\) the singular cohomology of \(\mathcal{X}\) and by \(H^{\mathrm{BM}}_{i}(\mathcal{X})=H^{\mathrm{BM}}_{i}(\mathcal{X},\mathbb{Q})= H^{\mathrm{BM}}_{i,G}(X,\mathbb{Q})\) the Borel-Moore homology of \(\mathcal{X}\) with rational coefficients. 
For \(F\in D^{b}_{\mathrm{con}}(\mathcal{X})\), we use the notation \(H^{\bullet}(\mathcal{X},F)\) for individual cohomology spaces (that is, for \(\bullet\) an arbitrary integer) and \(H^{\cdot}(\mathcal{X},F)\) for the total cohomology \(H^{\cdot}(\mathcal{X},F):=\bigoplus_{i\in\mathbb{Z}}H^{i}(\mathcal{X},F)\). ### Nearby and vanishing cycles For \(\mathcal{X}\) a smooth quotient stack and \[f\colon\mathcal{X}\to\mathbb{C} \tag{2.2}\] a regular function, consider the vanishing and nearby cycle functors: \[\varphi_{f},\psi_{f}\colon D^{b}_{\mathrm{con}}(\mathcal{X})\to D^{b}_{ \mathrm{con}}(\mathcal{X}). \tag{2.3}\] In this paper, we consider regular functions (2.2) such that \(0\) is the only critical value, equivalently that \(\mathrm{Crit}(f)\subset\mathcal{X}_{0}:=f^{-1}(0)\). Note that we consider the pushforward along \(\iota\colon\mathcal{X}_{0}:=f^{-1}(0)\hookrightarrow\mathcal{X}\) of the usual vanishing and nearby functors. There is an exact triangle: \[\iota_{*}\iota^{*}\bullet\to\psi_{f}\bullet\to\varphi_{f}\bullet\to\iota_{*} \iota^{*}\bullet[1].\] The functors (2.3) restrict to functors \[\varphi_{f}[-1],\psi_{f}[-1]\colon\operatorname{Perv}(\mathscr{X})\to \operatorname{Perv}(\mathscr{X}).\] Further, \(\varphi_{f}[-1]\) and \(\psi_{f}[-1]\) commute with \(\mathbb{D}\). We will abuse notation and let \(\varphi_{f}:=\varphi_{f}\mathbb{Q}_{\mathscr{X}}\), \(\psi_{f}:=\psi_{f}\mathbb{Q}_{\mathscr{X}}\), \(\varphi_{f}\mathrm{IC}:=\varphi_{f}\mathrm{IC}_{\mathscr{X}}\), \(\psi_{f}\mathrm{IC}:=\psi_{f}\mathrm{IC}_{\mathscr{X}}\). We may drop \(f\) from the notation if there is no danger of confusion. For more details on vanishing cycles on quotient stacks, see [1, Subsection 2.2], [1, Proposition 2.13]. ### Topological K-theory For a dg-category \(\mathscr{D}\), Blanc [1] defined the topological K-theory spectrum \[K^{\mathrm{top}}(\mathscr{D}).\] For \(i\in\mathbb{Z}\), consider its (rational) homotopy groups, which are \(\mathbb{Q}\)-vector spaces (we drop \(\mathbb{Q}\) from the notation): \[K^{\mathrm{top}}_{i}(\mathscr{D}):=K^{\mathrm{top}}_{i}(\mathscr{D})\otimes_ {\mathbb{Z}}\mathbb{Q}:=\pi_{i}(K^{\mathrm{top}}(\mathscr{D}))\otimes_{ \mathbb{Z}}\mathbb{Q}.\] We have that \(K^{\mathrm{top}}_{i}(\mathscr{D})\cong K^{\mathrm{top}}_{i+2}(\mathscr{D})\) for every \(i\in\mathbb{Z}\) by multiplication with a Bott element, see [1, Definition 1.6]. The topological K-theory spectrum sends exact triangles of dg-categories to exact triangles of spectra [1, Theorem 1.1(c)]. We denote the total topological K-theory of \(\mathscr{D}\) by: \[K^{\mathrm{top}}_{\cdot}(\mathscr{D})=K^{\mathrm{top}}_{0}(\mathscr{D})\oplus K ^{\mathrm{top}}_{1}(\mathscr{D}).\] Given a filtration (indexed by integers) on \(K^{\mathrm{top}}_{i}(\mathscr{D})\) for some \(i\in\mathbb{Z}\), we consider the associated graded pieces \(\operatorname{gr}_{\bullet}K^{\mathrm{top}}_{i}(\mathscr{D})\) for \(\bullet\in\mathbb{Z}\) and we let \[\operatorname{gr}_{\cdot}K_{i}(\mathscr{D}):=\bigoplus_{j\in\mathbb{Z}} \operatorname{gr}_{j}K_{i}(\mathscr{D}).\] Consider a quotient stack \(\mathscr{X}=X/G\) such that \(G\) is reductive and \(X^{\mathrm{cl}}\) is quasi-projective. Let \(M\subset G\) be a compact Lie group such that \(G\) is the complexification of \(M\). 
Denote by \(K^{\mathrm{top}}_{\bullet}(\mathscr{X}):=K^{\mathrm{top}}_{\bullet,M}(X)\) the \(M\)-equivariant topological K-theory of \(X\) (defined by Atiyah and Segal [11]) and by \(G^{\mathrm{top}}_{\bullet}(\mathscr{X}):=G^{\mathrm{top}}_{\bullet,M}(X)\) the \(M\)-equivariant K-homology of \(X\) (also referred to as the dual of compactly supported equivariant topological K-theory in the literature, defined by Thomason [13]). We refer to [1] for a brief review of properties of topological K-theory, K-homology of varieties, and Grothendieck-Riemann-Roch theorems for varieties. For references on K-homology, see [1] for the non-equivariant case and [1, Subsection 2.1.2] for the equivariant case. By [1, Theorem C and the remark following it], we have that: \[K^{\mathrm{top}}_{\bullet}(\operatorname{Perf}(\mathscr{X}))\cong K^{ \mathrm{top}}_{\bullet}(\mathscr{X}),\ K^{\mathrm{top}}_{\bullet}(D^{b} \mathrm{Coh}(\mathscr{X}))\cong G^{\mathrm{top}}_{\bullet}(\mathscr{X}). \tag{2.4}\] Note that \(K^{\mathrm{top}}_{\bullet}(\mathscr{X})=G^{\mathrm{top}}_{\bullet}(\mathscr{ X})\) if \(\mathscr{X}\) is smooth. For a quotient stack \(\mathscr{X}\), there are Chern character maps for \(i\in\mathbb{Z}\): \[\operatorname{ch}\colon K^{\mathrm{top}}_{i}(\mathscr{X})\to\prod_{j\in \mathbb{Z}}H^{i+2j}(\mathscr{X}),\ \operatorname{ch}\colon G^{\mathrm{top}}_{i}(\mathscr{X})\to\prod_{j\in \mathbb{Z}}H^{\mathrm{BM}}_{i+2j}(\mathscr{X}).\] If \(\mathscr{X}\) is a scheme, then \(\mathscr{X}\) is a finite CW complex, and the above Chern character maps are isomorphisms. The first one is the usual Atiyah-Hirzebruch theorem. The second one follows as the dual of the analogous isomorphism for compactly supported topological K-theory, see [1, Section 3.6 and Section 6] or [1, Section 11]. The above Chern characters can be also obtained from the Chern character \[K_{i}^{\mathrm{top}}\to\mathrm{HP}_{i}\] from topological K-theory to periodic cyclic homology applied to the dg-categories \(\mathrm{Perf}(\mathcal{X})\) and \(D^{b}\mathrm{Coh}(\mathcal{X})\), respectively, see [1, Section 4.4]. The Chern character maps are not isomorphisms (in general) for \(\mathcal{X}\) a quotient stack. In Section 3.1 we review the approximation of the Chern character of quotient stacks by Chern characters for varieties following Edidin-Graham [1]. Note that both cohomology (with coefficients in a constructible sheaf \(F\)) and topological K-theory depend only on the underlying classical stack. Let \[l\colon\mathcal{X}^{\mathrm{cl}}\to\mathcal{X} \tag{2.5}\] The pushforward functor induces isomorphisms: \[l_{*}\colon H^{\mathrm{BM}}_{\bullet}(\mathcal{X}^{\mathrm{cl}})\xrightarrow{ \sim}H^{\mathrm{BM}}_{\bullet}(\mathcal{X}),\,l_{*}\colon G^{\mathrm{top}}_{ \bullet}(\mathcal{X}^{\mathrm{cl}})\xrightarrow{\sim}G^{\mathrm{top}}_{\bullet }(\mathcal{X}). \tag{2.6}\] The pullback functor induces isomorphisms: \[l^{*}\colon H^{\bullet}(\mathcal{X})\xrightarrow{\sim}H^{\bullet}(\mathcal{X }^{\mathrm{cl}}),\,l^{*}\colon K^{\mathrm{top}}_{\bullet}(\mathcal{X}) \xrightarrow{\sim}K^{\mathrm{top}}_{\bullet}(\mathcal{X}^{\mathrm{cl}}). \tag{2.7}\] ### Approximation of stacks by varieties In the study of cohomology theories for quotient stacks, it is useful to approximate quotient stacks by varieties. We use the method of Totaro [15], Edidin-Graham [1]. We exemplify the method for Borel-Moore homology and singular cohomology, but it can be applied in many other situations (such as equivariant Chow groups, see loc. 
cit., approximation of the algebraic or topological Chern character, see loc. cit. and Subsection 3.1, or vanishing cohomology, see [14, Subsection 2.2]). Let \(\mathcal{X}=X/G\) be a quotient stack with \(G\) an algebraic group and \(X\) quasi-projective scheme with a \(G\)-linearized action. Choose representations \(V_{n}\twoheadrightarrow V_{n-1}\) such that \[\mathrm{codim}(S_{n}\text{ in }V_{n})\geqslant n,\] where \(S_{n}\subset V_{n}\) is the closed set of points with non-trivial stabilizer. Further, we may choose \(V_{n}\) such that, for \(U_{n}:=V_{n}\setminus S_{n}\), the quotient \(U_{n}/G\) is a scheme [1, Lemma 9]. Then the quotient \((X\times U_{n})/G\) is also a scheme because \(X\) is quasi-projective [1, Proposition 23]. For \(\ell\) fixed and for \(n\) large enough, there are isomorphisms induced by pullback maps: \[H^{\mathrm{BM}}_{\ell}(\mathcal{X})\xrightarrow{\sim}H^{\mathrm{ BM}}_{\ell+2\dim V_{n}}((X\times V_{n})/G)\xrightarrow{\sim}H^{\mathrm{BM}}_{\ell+2 \dim V_{n}}\left((X\times U_{n})/G\right),\] \[H^{\ell}(\mathcal{X})\xrightarrow{\sim}H^{\ell}((X\times V_{n} )/G)\xrightarrow{\sim}H^{\ell}\left((X\times U_{n})/G\right).\] ### The Grothendieck-Riemann-Roch theorem We state the (topological) Grothendieck-Riemann-Roch (GRR) theorem for lci morphisms of (classical and possibly singular) varieties of Baum-Fulton-MacPherson [1, 1]. Let \(\mathcal{X}\) be a classical quotient stack. Recall that there is an intersection product \[H^{i}(\mathcal{X})\otimes H^{\mathrm{BM}}_{j}(\mathcal{X})\to H^{ \mathrm{BM}}_{j-i}(\mathcal{X})\] for all \(i,j\in\mathbb{Z}\). Further, if \(\mathcal{X}^{\prime}\hookrightarrow\mathcal{X}\) a closed immersion, consider the topological K-theory and the Borel-Moore homology with closed supports \(G^{\mathrm{top}}_{\bullet,\mathcal{X}^{\prime}}(\mathcal{X})\cong G^{ \mathrm{top}}_{\bullet}(\mathcal{X}^{\prime})\) and \(H^{\mathrm{BM}}_{\bullet,\mathcal{X}^{\prime}}(\mathcal{X})\cong H^{\mathrm{ BM}}_{\bullet}(\mathcal{X}^{\prime})\). **Theorem 2.1**.: _Assume \(\mathscr{X}\) and \(\mathscr{Y}\) are classical quotient stacks and let \(f\colon\mathscr{X}\to\mathscr{Y}\) be an lci morphism. Let \(\mathscr{X}^{\prime}\subset\mathscr{X}\) and \(\mathscr{Y}^{\prime}\subset\mathscr{Y}\) be closed quotient substacks._ _(a) Assume that \(f^{-1}(\mathscr{Y}^{\prime})\subset\mathscr{X}^{\prime}\). Then the following diagram commutes:_ (2.8) _(b) Assume that \(f(\mathscr{X}^{\prime})\subset\mathscr{Y}^{\prime}\). Let \(T_{f}\) be the virtual tangent bundle of \(f\) and consider its Todd class \(\operatorname{td}(T_{f})\in\widetilde{H}^{0}(\mathscr{X}):=\prod_{i\geqslant 0 }H^{2i}(\mathscr{X})\). Assume \(f\) is proper and define \(f^{\prime}_{*}(-):=f_{*}(\operatorname{td}(T_{f})\cdot(-))\). The following diagrams commute:_ (2.9) Proof.: (a) There are such pullback (Gysin) functors for any quasi-smooth morphism for Borel-Moore homology [10, Construction 3.4] and for derived categories [13], and thus for topological K-theory by (2.4). Such maps have also been constructed for lci morphisms of classical schemes in [1, Section 4.4], see also [1, Section 4.2 and 5]. The commutativity of the diagram (2.8) follows from standard properties of the Chern character [1, Section 5]. 
(b) For \(f\) a proper and lci morphism, there are pushforward (Gysin) maps \(f_{*}\colon K_{\bullet}^{\operatorname{top}}(\mathscr{X})\to K_{\bullet}^{ \operatorname{top}}(\mathscr{Y})\) and \(f_{*}\colon H_{\bullet}^{\operatorname{BM}}(\mathscr{X})\to H_{\bullet}^{ \operatorname{BM}}(\mathscr{Y})\), see [1, Section 4.4], [1, Section 5, Remark (2)]. The diagrams commute by the Grothendieck-Riemann-Roch for lci morphisms, see [1, Section 5, Remark (2)] (note that there are typos in the statement of the diagram for \(K_{\bullet}^{\operatorname{top}}\), see [10] for a statement in the algebraic case). These are the topological versions of the usual algebraic GRR theorems, see [1, Section 4.3], [10]. Note that Baum-Fulton-MacPherson state the above theorems only for \(\bullet=0\) because they are interested in stating a result for \(K_{0}^{\operatorname{alg}}\) or \(G_{0}^{\operatorname{alg}}\) obtained by composing with the natural map to \(K_{0}^{\operatorname{top}}\) or \(G_{0}^{\operatorname{top}}\), respectively. However, the same proofs (based on deformation to the normal cone and the excess intersection formula) apply for \(\bullet=1\) as well. Finally, note that Baum-Fulton-MacPherson treat the case when \(\mathscr{X}\) and \(\mathscr{Y}\) are schemes, but the extension to stacks is obtained using the approximation from Subsection 2.5, see also Subsection 3.1. ### Matrix factorizations We refer to [12, Subsection 2.6] for complete definitions and references related to categories of matrix factorizations. Let \(\mathscr{X}=X/G\) be a quotient stack with \(X\) affine smooth and \(G\) reductive. For a regular function \(f\colon\mathscr{X}\to\mathbb{C}\), we denote the corresponding category of matrix factorizations by \[\operatorname{MF}(\mathscr{X},f).\] Its objects are tuples \((\alpha\colon E\rightleftarrows F\colon\beta)\) such that \(E,F\in\operatorname{Coh}(\mathscr{X})\) and \(\alpha\circ\beta\) and \(\beta\circ\alpha\) are multiplication by \(f\). If \(\mathbb{M}\subset D^{b}(\mathscr{X})\) is a subcategory, let \[\operatorname{MF}(\mathbb{M},f)\subset\operatorname{MF}(\mathscr{X},f)\] the subcategory of totalizations of tuples \((E,F,\alpha,\beta)\) with \(E,F\in\mathbb{M}\). The category \(\mathbb{M}\) has a description in terms of categories of singularities [PTa, Subsection 2.6]. In this paper, we consider categories \(\mathbb{M}\) generated by a collection \(\mathcal{C}\) of vector bundles, then \(\operatorname{MF}(\mathbb{M},f)\) is the category of matrix factorizations with summands \(E,F\) which are direct sums of vector bundles in \(\mathcal{C}\), see [PTa, Lemma 2.3]. Assume there exists an extra action of \(\mathbb{C}^{*}\) on \(X\) which commutes with the action of \(G\) on \(X\), and trivial on \(\mathbb{Z}/2\subset\mathbb{C}^{*}\). Assume that \(f\) is weight two with respect to the above \(\mathbb{C}^{*}\)-action. Denote by (1) the twist by the character \(\operatorname{pr}_{2}\colon G\times\mathbb{C}^{*}\to\mathbb{C}^{*}\). Consider the category of graded matrix factorizations \(\operatorname{MF}^{\operatorname{gr}}(\mathcal{X},f)\). It has objects pairs \((P,d_{P})\) with \(P\) an equivariant \(G\times\mathbb{C}^{*}\)-sheaf on \(X\) and \(d_{P}\colon P\to P(1)\) a \(G\times\mathbb{C}^{*}\)-equivariant morphism. Note that as the \(\mathbb{C}^{*}\)-action is trivial on \(\mathbb{Z}/2\), we have the induced action of \(\mathbb{C}^{*}=\mathbb{C}^{*}/(\mathbb{Z}/2)\) on \(X\) and \(f\) is weight one with respect to the above \(\mathbb{C}^{*}\)-action. 
The objects of \(\operatorname{MF}^{\operatorname{gr}}(\mathcal{X},f)\) can be alternatively described as tuples \[(E,F,\alpha\colon E\to F(1)^{\prime},\beta\colon F\to E), \tag{2.10}\] where \(E\) and \(F\) are \(G\times\mathbb{C}^{*}\)-equivariant coherent sheaves on \(X\), \((1)^{\prime}\) is the twist by the character \(G\times\mathbb{C}^{*}\to\mathbb{C}^{*}\), and \(\alpha\) and \(\beta\) are \(\mathbb{C}^{*}\)-equivariant morphisms such that \(\alpha\circ\beta\) and \(\beta\circ\alpha\) are multiplication by \(f\). For a subcategory \(\mathbb{M}\subset D^{b}_{\mathbb{C}^{*}}(\mathcal{X})\), we define \[\operatorname{MF}^{\operatorname{gr}}(\mathbb{M},f)\subset\operatorname{MF}^ {\operatorname{gr}}(\mathcal{X},f)\] the subcategory of totalizations of tuples \((E,F,\alpha,\beta)\) with \(E,F\in\mathbb{M}\). Alternatively, if \(\mathbb{M}\) is generated by a collection of \(\mathbb{C}^{*}\)-equivariant vector bundles \(\mathcal{C}\), then \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{M},f)\) is the category of matrix factorizations with summands \(E,F\) which are direct sums of vector bundles in \(\mathcal{C}\). In this paper, we will consider either ungraded categories of matrix factorizations or graded categories which are Koszul equivalent (see Subsection 2.8) to derived categories of bounded complexes of coherent sheaf on a quasi-smooth stack. When considering the product of two categories of matrix factorizations, which is in the context of the Thom-Sebastiani theorem, we consider the product of dg-categories over \(\mathbb{C}(\!(\beta)\!)\) for \(\beta\) of homological degree \(-2\) in the ungraded case, see [Pre, Theorem 4.1.3], and the product of dg-categories over \(\mathbb{C}\) in the graded case, see [BFK14, Corollary 5.18] (alternatively in the graded case, one can use the Koszul equivalence). ### The Koszul equivalence Let \(\mathcal{X}\) be a smooth quotient stack, let \(\eta\colon\mathcal{E}\to\mathcal{X}\) be a rank \(r\) vector bundle with a section \(s\in\Gamma(\mathcal{X},\mathcal{E})\), let \[j\colon\mathcal{K}:=s^{-1}(0)\hookrightarrow\mathcal{X} \tag{2.11}\] be the derived zero locus of \(s\), and let \[f\colon\mathcal{E}^{\vee}\to\mathbb{C} \tag{2.12}\] be the regular function defined by \(f(x,v_{x})=\langle s(x),v_{x}\rangle\) for \(x\in\mathcal{X}\) and \(v_{x}\in\mathcal{E}^{\vee}|_{x}\). Let \(\mathcal{E}^{\vee}_{0}\) be the derived zero locus of \(f\). We use the following diagram (2.13) Let \(\mathbb{C}^{*}\) act with weight \(2\) on the fibers of \(\mathcal{E}^{\vee}\) and consider the corresponding graded category of matrix factorizations \(\operatorname{MF}^{\operatorname{gr}}(\mathcal{E}^{\vee},f)\). The Koszul equivalence says that [13, 14, 15]: \[\kappa\colon D^{b}(\mathcal{K})\xrightarrow{\sim}\operatorname{MF}^{ \operatorname{gr}}(\mathcal{E}^{\vee},f). \tag{2.14}\] Note that \(\kappa\) restricts to an equivalence: \[\kappa\colon\operatorname{Perf}(\mathcal{K})\xrightarrow{\sim}\operatorname{ MF}^{\operatorname{gr}}_{\mathcal{X}}(\mathcal{E}^{\vee},f).\] Consider the natural closed immersion \(l\colon\mathcal{K}^{\operatorname{cl}}\hookrightarrow\mathcal{K}\). The pushforward map induces a weak equivalence \(l_{*}\colon G^{\operatorname{top}}(\mathcal{K}^{\operatorname{cl}}) \xrightarrow{\sim}G^{\operatorname{top}}(\mathcal{K})\). The functor \(\kappa\) has the following explicit description on complexes from the classical stack, see the formula for \(\kappa\) in [15, Section 2.3.2]. 
**Proposition 2.2**.: _The composition_ \[D^{b}(\mathcal{K}^{\operatorname{cl}})\xrightarrow{l_{*}}D^{b}(\mathcal{K})\xrightarrow{\kappa}\operatorname{MF}^{\operatorname{gr}}(\mathcal{E}^{\vee},f)\] _is isomorphic to the functor \(j^{\prime}_{*}\eta^{\prime*}l_{*}\) and it induces a weak equivalence_ \[j^{\prime}_{*}\eta^{\prime*}l_{*}\colon G^{\operatorname{top}}(\mathcal{K}^{\operatorname{cl}})\xrightarrow{\sim}K^{\operatorname{top}}\left(\operatorname{MF}^{\operatorname{gr}}(\mathcal{E}^{\vee},f)\right).\] _There is thus also a weak equivalence:_ \[j^{\prime}_{*}\eta^{\prime*}\colon G^{\operatorname{top}}(\mathcal{K})\xrightarrow{\sim}K^{\operatorname{top}}\left(\operatorname{MF}^{\operatorname{gr}}(\mathcal{E}^{\vee},f)\right).\] ### Quivers Let \(Q=(I,E)\) be a quiver and let \(d=(d^{i})_{i\in I}\in\mathbb{N}^{I}\) be a dimension vector. Denote by \[\mathcal{X}(d)=R(d)/G(d)\] the stack of representations of \(Q\) of dimension \(d\), equivalently the stack of representations of dimension \(d\) of the path algebra \(\mathbb{C}[Q]\). Here \(R(d)\), \(G(d)\) are given by \[R(d)=\bigoplus_{(i\to j)\in E}\operatorname{Hom}(V^{i},V^{j}),\ G(d)=\prod_{i\in I}GL(V^{i}).\] Consider its good moduli space map: \[\pi_{d}:=\pi_{X,d}\colon\mathcal{X}(d)\to X(d).\] We denote by \(T(d)\) a maximal torus of \(G(d)\), by \(M(d)\) the weight lattice of \(T(d)\), and by \(\mathfrak{g}(d)\) the Lie algebra of \(G(d)\). Let \(M(d)_{\mathbb{R}}=M(d)\otimes_{\mathbb{Z}}\mathbb{R}\). We drop \(d\) from the notation when there is no danger of ambiguity. Let \(\mathfrak{S}_{a}\) be the permutation group on \(a\in\mathbb{N}\) letters. Let \(W_{d}:=\times_{i\in I}\mathfrak{S}_{d^{i}}\) be the Weyl group of \(G(d)\). For \(i\in I\) and \(d^{i}\in\mathbb{N}\), denote by \(\beta_{a}^{i}\) for \(1\leqslant a\leqslant d^{i}\) the weights of the standard representation of \(T(d^{i})\). Let \(M(d)_{\mathbb{R}}^{+}\subset M(d)_{\mathbb{R}}\) be the dominant chamber consisting of weights \[\chi=\sum_{i\in I}\sum_{a=1}^{d^{i}}c^{i}_{a}\beta_{a}^{i}\text{ such that }c^{i}_{a}\geqslant c^{i}_{b}\text{ for all }i\in I,d^{i}\geqslant a\geqslant b\geqslant 1.\] For \(\chi\in M(d)^{+}:=M(d)\cap M(d)_{\mathbb{R}}^{+}\), we denote by \(\Gamma_{G(d)}(\chi)\) the irreducible representation of \(G(d)\) with highest weight \(\chi\). Let \(\rho_{d}\) be half the sum of positive roots of \(\mathfrak{g}(d)\). We denote by \(1_{d}\) the diagonal cocharacter of \(T(d)\) (which acts on \(\beta_{a}^{i}\) by weight one). For \(d=(d^{i})_{i\in I}\), we write \(\underline{d}:=\sum_{i\in I}d^{i}\). Define the weights \[\sigma_{d}:=\sum_{i\in I}\sum_{a=1}^{d^{i}}\beta_{a}^{i}\in M(d),\ \tau_{d}:=\frac{\sigma_{d}}{\underline{d}}\in M(d)_{\mathbb{R}}.\] We denote the cocharacter lattice by \(N(d)\). We denote by \(\langle\,,\,\rangle\colon N(d)\times M(d)\to\mathbb{Z}\) the natural pairing, and we use the same notation for its real version. If \(\lambda\) is a cocharacter of \(T(d)\) and \(V\) is a \(T(d)\)-representation, we abuse notation and write \[\langle\lambda,V\rangle=\langle\lambda,\det(V)\rangle.\] ### Framed quivers Let \(Q=(I,E)\) be a quiver. Define the framed quiver \(Q^{f}=(I^{f},E^{f})\) with vertices \(I^{f}=I\sqcup\{\infty\}\) and edges \(E^{f}=E\sqcup\{e_{i}\mid i\in I\}\), where \(e_{i}\) is an edge from \(\infty\) to \(i\in I\). For \(d=(d^{i})_{i\in I}\in\mathbb{N}^{I}\), let \(V(d)=\bigoplus_{i\in I}V^{i}\), where \(\dim V^{i}=d^{i}\). 
Denote by \[R^{f}(d)=R(d)\oplus V(d)\] the affine space of representations of \(Q^{f}\) of dimension \((1,d)\) and consider the moduli stack of framed representations of \(Q\): \[\mathcal{X}^{f}(d):=R^{f}(d)/G(d).\] We consider the GIT stability on \(Q^{f}\) given by the character \(\sigma_{d}\). It coincides with the King stability condition on \(Q^{f}\) such that the (semi)stable representations of dimension \((1,d)\) are the representations of \(Q^{f}\) with no subrepresentations of dimension \((1,d^{\prime})\) for \(d^{\prime}\) different from \(d\), see [Tod, Lemma 5.1.9]. Consider the GIT quotient, which is a smooth quasi-projective variety: \[\mathcal{X}^{f}(d)^{\rm ss}:=R^{f}(d)^{\rm ss}/G(d).\] ### Double quivers and preprojective algebras Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver. For an edge \(e\) of \(Q^{\circ}\), denote by \(\overline{e}\) the edge with opposite orientation. Consider the multiset \(E^{\circ,d}=\{e,\overline{e}\mid e\in E^{\circ}\}\). Define _the doubled quiver_ \[Q^{\circ,d}=(I,E^{\circ,d}).\] Let \(\mathcal{I}\subset\mathbb{C}[Q^{\circ,d}]\) be the two-sided ideal generated by \(\sum_{e\in E^{\circ}}[e,\overline{e}]\). The preprojective algebra of \(Q^{\circ}\) is \(\Pi_{Q^{\circ}}:=\mathbb{C}[Q^{\circ,d}]/\mathcal{I}\). For \(d\in\mathbb{N}^{I}\), recall the stack of representations of dimension \(d\) of \(Q^{\circ}\): \[\mathcal{X}^{\circ}(d)=R^{\circ}(d)/G(d).\] The stack of representations of \(Q^{\circ,d}\) is: \[\mathcal{Y}(d):=(R^{\circ}(d)\oplus R^{\circ}(d)^{\vee})/G(d).\] The stack of representations of the preprojective algebra \(\Pi_{Q^{\circ}}\) is: \[\mathcal{P}(d):=T^{*}\left(\mathcal{X}^{\circ}(d)\right):=\mu^{-1}(0)/G(d),\] where \[\mu\colon T^{*}R^{\circ}(d)=R^{\circ}(d)\oplus R^{\circ}(d)^{\vee}\to\mathfrak{g}(d)^{\vee}\cong\mathfrak{g}(d)\] is the moment map and \(\mu^{-1}(0)\) is the derived zero locus of \(\mu\). The image of \(\mu\) lies in the traceless Lie subalgebra \(\mathfrak{g}(d)_{0}\subset\mathfrak{g}(d)\), and thus \(\mu\) induces a map \(\mu_{0}\colon T^{*}R^{\circ}(d)\to\mathfrak{g}(d)_{0}\). We define the reduced stack to be \[\mathcal{P}(d)^{\rm red}:=\mu_{0}^{-1}(0)/G(d).\] Note that \(\mathcal{P}(d)^{\rm red,cl}=\mathcal{P}(d)^{\rm cl}\). Consider the good moduli space map: \[\pi_{P,d}\colon\mathcal{P}(d)^{\rm cl}\to P(d).\] ### Tripled quivers with potential Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver and consider its doubled quiver \(Q^{\circ,d}=(I,E^{\circ,d})\). _The tripled quiver with potential_ \[(Q,W)\] is defined as follows. The quiver \(Q=(I,E)\) has set of edges \(E=E^{\circ,d}\sqcup\{\omega_{i}\mid i\in I\}\), where \(\omega_{i}\) is a loop at the vertex \(i\in I\). The potential \(W\) is \[W:=\left(\sum_{i\in I}\omega_{i}\right)\left(\sum_{e\in E^{\circ}}[e,\overline{e}]\right)\in\mathbb{C}[Q].\] We say \((Q,W)\) is a tripled quiver with potential if it is obtained as above for some quiver \(Q^{\circ}\). Consider the stack of representations of \(Q\) of dimension \(d\): \[\mathcal{X}(d)=R(d)/G(d)=\left(T^{*}R^{\circ}(d)\oplus\mathfrak{g}(d)\right)/G(d)=\left(R^{\circ}(d)\oplus R^{\circ}(d)^{\vee}\oplus\mathfrak{g}(d)\right)/G(d).\] The potential \(W\) induces a regular function: \[\operatorname{Tr}W\colon\mathcal{X}(d)\to\mathbb{C}.\] Consider the grading on \(\mathcal{X}(d)\), that is, the \(\mathbb{C}^{*}\)-action, which scales with weight \(2\) the linear maps corresponding to the loops \(\{\omega_{i}\mid i\in I\}\) and fixes the linear maps in \(E^{\circ,d}\). 
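To illustrate the constructions of the last three subsections, consider the following standard example (recorded here for the reader's convenience; it is not quoted from the text above): let \(Q^{\circ}\) be the quiver with one vertex and one loop \(x\). The doubled quiver has two loops \(x,y\), and
\[\Pi_{Q^{\circ}}\cong\mathbb{C}\langle x,y\rangle/([x,y])=\mathbb{C}[x,y],\qquad \mathcal{P}(d)=\mu^{-1}(0)/GL(d),\ \mu(X,Y)=[X,Y],\]
so \(\mathcal{P}(d)\) is a derived enhancement of the stack of commuting pairs of \(d\times d\) matrices. The tripled quiver \(Q\) has three loops \(x,y,\omega\) with potential \(W=\omega[x,y]\), hence
\[\mathcal{X}(d)=\mathfrak{gl}(d)^{\oplus 3}/GL(d),\qquad \operatorname{Tr}W(X,Y,Z)=\operatorname{Tr}\big(Z[X,Y]\big),\]
and the grading above scales \(Z\) with weight \(2\). The critical locus of \(\operatorname{Tr}W\) consists of commuting triples \((X,Y,Z)\), that is, of \(d\)-dimensional representations of \(\mathbb{C}[x,y,z]\).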
The Koszul equivalence (2.14) says that: \[\kappa\colon D^{b}\left(\mathcal{P}(d)\right)\stackrel{{\sim}}{{\to}}\operatorname{MF}^{\operatorname{gr}}\left(\mathcal{X}(d),\operatorname{Tr}W\right). \tag{2.15}\] ### Quasi-BPS categories Consider a symmetric quiver \(Q=(I,E)\). Let \(d=(d^{i})_{i\in I}\in\mathbb{N}^{I}\) be a dimension vector and let \(w\in\mathbb{Z}\) be a weight. Consider the multiset of \(T(d)\)-weights on \(R(d)\): \[\mathcal{A}:=\{\beta_{a}^{i}-\beta_{b}^{j}\mid i,j\in I,(i\to j)\in E,1\leqslant a\leqslant d^{i},1\leqslant b\leqslant d^{j}\}.\] Define the polytope \[\mathbf{W}(d):=\frac{1}{2}\sum_{\beta\in\mathcal{A}}[0,\beta]\subset M(d)_{\mathbb{R}}, \tag{2.16}\] where the sum above is a Minkowski sum in the space of weights \(M(d)_{\mathbb{R}}\). Let \(\lambda\) be an antidominant cocharacter with associated partition \((d_{a})_{a=1}^{k}\) of \(d\in\mathbb{N}^{I}\), meaning that \[\mathcal{X}(d)^{\lambda}=\times_{a=1}^{k}\mathcal{X}(d_{a}).\] The multiplication for the categorical Hall algebra of \(Q\), or of \((Q,W)\) for a potential \(W\) of \(Q\) and a possible grading, is defined as the functor [10]: \[p_{\lambda*}q_{\lambda}^{*} \colon\ \boxtimes_{a=1}^{k}\;D^{b}(\mathcal{X}(d_{a}))\to D^{b}(\mathcal{X}(d)),\] \[p_{\lambda*}q_{\lambda}^{*} \colon\ \boxtimes_{a=1}^{k}\;\mathrm{MF}^{\bullet}(\mathcal{X}(d_{a}),\operatorname{Tr}W)\to\mathrm{MF}^{\bullet}(\mathcal{X}(d),\operatorname{Tr}W), \tag{2.17}\] where \(\bullet\in\{\emptyset,\operatorname{gr}\}\) and \(p_{\lambda}\), \(q_{\lambda}\) are the maps \[\mathcal{X}(d)^{\lambda}=\times_{a=1}^{k}\mathcal{X}(d_{a})\stackrel{{q_{\lambda}}}{{\longleftarrow}}\mathcal{X}(d)^{\lambda\geqslant 0}\stackrel{{p_{\lambda}}}{{\longrightarrow}}\mathcal{X}(d).\] Define the sets of weights \[\mathcal{A}_{\lambda} :=\{\beta\in\mathcal{A}\mid\langle\lambda,\beta\rangle>0\},\] \[\mathfrak{g}_{\lambda} :=\{\beta_{a}^{i}-\beta_{b}^{i}\mid i\in I,1\leqslant a,b\leqslant d^{i},\langle\lambda,\beta_{a}^{i}-\beta_{b}^{i}\rangle>0\}. \tag{2.18}\] For a weight \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\), let \[\mathbb{M}(d;\delta_{d})\subset D^{b}(\mathcal{X}(d)) \tag{2.19}\] be the full subcategory of \(D^{b}(\mathscr{X}(d))\) generated by vector bundles \(\mathcal{O}_{\mathscr{X}(d)}\otimes\Gamma_{G(d)}(\chi)\), where \(\chi\) is a dominant weight of \(G(d)\) such that \[\chi+\rho_{d}-\delta_{d}\in\mathbf{W}(d).\] For \(\lambda\) a cocharacter of \(T(d)\), define \[n_{\lambda}=\big{\langle}\lambda,\det\big{(}\mathbb{L}_{\mathscr{X}(d)}|_{0}^{\lambda>0}\big{)}\big{\rangle}=\big{\langle}\lambda,\det\big{(}(R(d)^{\vee})^{\lambda>0}\big{)}\big{\rangle}-\big{\langle}\lambda,\det\big{(}(\mathfrak{g}(d)^{\vee})^{\lambda>0}\big{)}\big{\rangle}. \tag{2.20}\] Note that any complex \(F\in D^{b}(B\mathbb{C}^{*})\) splits as a direct sum \(F=\bigoplus_{w\in\mathbb{Z}}F_{w}\) such that \(\mathbb{C}^{*}\) acts with weight \(w\) on \(F_{w}\). We say \(w\in\mathbb{Z}\)_is a weight of \(F\)_ if \(F_{w}\neq 0\). The category (2.19) has the following alternative description. **Lemma 2.3**.: ([7, Corollary 3.11]) _The category \(\mathbb{M}(d;\delta_{d})\) is the subcategory of \(D^{b}(\mathscr{X}(d))\) of objects \(F\in D^{b}(\mathscr{X}(d))\) such that, for any \(\nu:B\mathbb{C}^{*}\to\mathscr{X}(d)\), the weights of \(\nu^{*}F\) are contained in the set \(\big[-\frac{1}{2}n_{\lambda}+\langle\lambda,\delta_{d}\rangle,\frac{1}{2}n_{\lambda}+\langle\lambda,\delta_{d}\rangle\big]\). 
Here \(\nu\) corresponds to a point \(x\in R(d)\) and a cocharacter \(\lambda\colon\mathbb{C}^{*}\to T(d)\) which fixes \(x\)._ Given a potential \(W\) for the quiver \(Q\), and possibly a grading as in Subsection 2.7, we define the quasi-BPS categories: \[\mathbb{S}^{\bullet}(d;\delta_{d}):=\operatorname{MF}^{\bullet}\left(\mathbb{ M}(d;\delta_{d}),\operatorname{Tr}W\right)\text{ for }\bullet\in\{\emptyset,\operatorname{gr}\}. \tag{2.21}\] If \(\delta_{d}=v\tau_{d}\), we use the notations: \[\mathbb{M}(d)_{v}:=\mathbb{M}(d;v\tau_{d})\text{ and }\mathbb{S}(d)_{v}:= \mathbb{S}(d;v\tau_{d}).\] In the setting of Subsection 2.11, there is a subcategory \(\mathbb{T}(d;\delta_{d})\subset D^{b}(\mathscr{P}(d))\) such that, under the Koszul equivalence (2.15), we have that: \[\kappa\colon\mathbb{T}(d;\delta_{d})\xrightarrow{\sim}\mathbb{S}^{ \operatorname{gr}}(d;\delta_{d}), \tag{2.22}\] see also [7, Definition 2.14] for an alternative description of \(\mathbb{T}(d;\delta_{d})\). Let \(\mathscr{X}(d)^{\operatorname{red}}:=(T^{*}R^{\circ}(d)\oplus\mathfrak{g}(d)_ {0})/G(d)\). There is also a Koszul equivalence \[\kappa^{\prime}\colon D^{b}(\mathscr{P}(d)^{\operatorname{red}})\xrightarrow {\sim}\operatorname{MF}^{\operatorname{gr}}\big{(}\mathscr{X}(d)^{ \operatorname{red}},\operatorname{Tr}W\big{)}.\] Define \(\mathbb{M}(d;\delta_{d})^{\operatorname{red}}\subset D^{b}\left(\mathscr{X}(d )^{\operatorname{red}}\right)\) as in (2.19), and let \(\mathbb{T}(d;\delta_{d})^{\operatorname{red}}\subset D^{b}(\mathscr{P}(d)^{ \operatorname{red}})\) be the subcategory such that, under the Koszul equivalence \(\kappa^{\prime}\), we have that: \[\kappa^{\prime}\colon\mathbb{T}(d;\delta_{d})^{\operatorname{red}} \xrightarrow{\sim}\mathbb{S}^{\operatorname{gr}}(d;\delta_{d})^{ \operatorname{red}}.\] We next discuss the compatibility between reduced and non-reduced quasi-BPS categories. For an isomorphism \(\mathfrak{g}(d)\cong\mathfrak{g}(d)_{0}\times\mathbb{C}\) of \(G(d)\)-representation, the projection onto the first factor induces a map \(t\colon\mathscr{X}(d)\to\mathscr{X}(d)^{\operatorname{red}}\). We have \(t\circ\operatorname{Tr}W=\operatorname{Tr}W\). Let \(l^{\prime}\colon\mathscr{P}(d)^{\operatorname{red}}\hookrightarrow\mathscr{P}(d)\) be the natural closed immersion. The next proposition follows from [7, Lemma 2.4.4]: **Proposition 2.4**.: _The following diagram commutes:_ _It induces a commutative diagram:_ ### Semiorthogonal decompositions We recall several semiorthogonal decompositions from [PTe], see [PTe, Subsection 3.3] for the ordering of summands in all the semiorthogonal decompositions. Recall the convention about the product of categories from Subsection 2.7. We will consider quivers satisfying the following: **Assumption 2.1**.: The quiver \(Q=(I,E)\) is symmetric and: * for all \(a,b\in I\) different, the number of edges from \(a\) to \(b\) is even, and * for all \(a\in I\), the number of loops at \(a\) is odd. For \(\alpha\in\mathbb{N}\), we define the quiver \[Q^{\alpha f}=(I^{f},E^{\alpha f}),\] which is a generalization of the framed quiver \(Q^{f}\). The set of vertices is \(I^{f}=I\sqcup\{\infty\}\), and the set of edges \(E^{\alpha f}\) is the disjoint union of \(E\) and \(\alpha\) edges from \(\infty\) to any vertex of \(I\). Consider the moduli of semistable representations \(\mathcal{X}^{\alpha f}(d)^{\text{ss}}\) of the quiver \(Q^{\alpha f}\) for the King stability condition \(\sigma_{d}\), which is a smooth quasi-projective variety. 
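The stability condition above can be unpacked as follows (a standard reformulation, recorded here for convenience rather than quoted from [Tod]): a representation of \(Q^{\alpha f}\) of dimension \((1,d)\), given by linear maps \((x_{e})_{e\in E}\) and framing vectors \(v^{i}_{1},\dots,v^{i}_{\alpha}\in V^{i}\) for \(i\in I\), is \(\sigma_{d}\)-(semi)stable if and only if
\[\text{the vectors }\{v^{i}_{j}\mid i\in I,\ 1\leqslant j\leqslant\alpha\}\text{ generate }\bigoplus_{i\in I}V^{i}\text{ under the maps }(x_{e})_{e\in E}.\]
For instance, for the quiver with one vertex, \(g\) loops, and \(\alpha=1\), a (semi)stable framed representation is a tuple \((X_{1},\dots,X_{g},v)\) with \(X_{1},\dots,X_{g}\in\mathfrak{gl}(d)\) and \(v\in\mathbb{C}^{d}\) a cyclic vector, i.e. \(v\) generates \(\mathbb{C}^{d}\) under \(X_{1},\dots,X_{g}\).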
**Theorem 2.5**.: ([PTe, Corollary 4.17]) _Let \(Q\) be a symmetric quiver satisfying Assumption 2.1. Let \(\alpha\in\mathbb{N}\) and \(\mu\in\mathbb{R}\setminus\mathbb{Q}\). There is an \(X(d)\)-linear semiorthogonal decomposition_ \[D^{b}\left(\mathcal{X}^{\alpha f}(d)^{\text{ss}}\right)=\left\langle\bigotimes_{i=1}^{k}\mathbb{M}(d_{i};\theta_{i}+v_{i}\tau_{d_{i}}):\mu\leqslant\frac{v_{1}}{\underline{d}_{1}}<\cdots<\frac{v_{k}}{\underline{d}_{k}}<\alpha+\mu\right\rangle. \tag{2.23}\] _Here \((d_{i})_{i=1}^{k}\) is a partition of \(d\), \((v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\), and \(\theta_{i}\in M(d_{i})^{W_{d_{i}}}\) is defined by_ \[\sum_{i=1}^{k}\theta_{i}=-\frac{1}{2}R(d)^{\lambda>0}+\frac{1}{2}\mathfrak{g}(d)^{\lambda>0} \tag{2.24}\] _where \(\lambda\) is an antidominant cocharacter corresponding to the partition \((d_{i})_{i=1}^{k}\). The functor from a summand on the right hand side to \(D^{b}\left(\mathcal{X}^{\alpha f}(d)^{\text{ss}}\right)\) is the composition of the Hall product with the pullback along the projection map \(\mathcal{X}^{\alpha f}(d)\to\mathcal{X}(d)\)._ **Remark 2.6**.: Note that there are equivalences \[\mathbb{M}(d_{i})_{v_{i}}=\mathbb{M}(d_{i};v_{i}\tau_{d_{i}})\overset{\sim}{\to}\mathbb{M}(d_{i};\theta_{i}+v_{i}\tau_{d_{i}}) \tag{2.25}\] by taking the tensor product with \(\theta_{i}\in M(d_{i})^{W_{d_{i}}}\). Thus the summands in Theorem 2.5 are equivalent to \(\bigotimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}\). We next discuss a semiorthogonal decomposition of the stack of representations of \(Q\). **Theorem 2.7**.: ([PTe, Theorem 4.2]) _Let \(Q\) be a symmetric quiver satisfying Assumption 2.1. There is an \(X(d)\)-linear semiorthogonal decomposition_ \[D^{b}\left(\mathcal{X}(d)\right)=\left\langle\bigotimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}:\frac{v_{1}}{\underline{d}_{1}}<\cdots<\frac{v_{k}}{\underline{d}_{k}}\right\rangle, \tag{2.26}\] _where \((d_{i})_{i=1}^{k}\) is a partition of \(d\) and \((v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\). The functor from a summand on the right hand side to \(D^{b}(\mathcal{X}(d))\) is given by the Hall product composed with tensoring with the line bundle \(\boxtimes_{i=1}^{k}\theta_{i}\), see Remark 2.6._ Using [Pada, Proposition 2.1], [PTa, Proposition 2.5], there are analogous semiorthogonal decompositions in the non-zero potential case constructed from the semiorthogonal decompositions above. The analogue of Theorem 2.5 is the following: **Theorem 2.8**.: ([PTe, Theorem 4.18]) _Let \(Q\) be a symmetric quiver satisfying Assumption 2.1 and let \(\alpha\geqslant 1\). Let \(\mu\in\mathbb{R}\setminus\mathbb{Q}\). There is a semiorthogonal decomposition_ \[\operatorname{MF}\left(\mathcal{X}^{\alpha f}(d)^{ss},\operatorname{Tr}W\right)=\left\langle\bigotimes_{i=1}^{k}\mathbb{S}(d_{i})_{v_{i}}:\mu\leqslant\frac{v_{1}}{\underline{d}_{1}}<\cdots<\frac{v_{k}}{\underline{d}_{k}}<\alpha+\mu\right\rangle,\] _where the right hand side is as in (2.23). 
If \((Q,W)\) is a tripled quiver with potential, there is an analogous semiorthogonal decomposition of \(\operatorname{MF}^{\operatorname{gr}}\left(\mathcal{X}^{\alpha f}(d)^{ss}, \operatorname{Tr}W\right)\) for the grading introduced in Subsection 2.12._ We note the following assumption on a quiver \(Q^{\circ}=(I,E^{\circ})\), which says its tripled quiver \(Q\) satisfies Assumption 2.1 and thus Theorems 2.7 and 2.8 can be applied for its tripled quiver with potential: **Assumption 2.2**.: For all \(a,b\in I\), we have that \[\#(a\to b\text{ in }E^{\circ})-\#(b\to a\text{ in }E^{\circ})\in 2 \mathbb{Z}. \tag{2.27}\] We end with a corollary of [Pada, Theorem 1.1]. We will use it only for quivers \(Q^{\circ}\) satisfying Assumption 2.2, and then the corollary can be also deduced from a version of Theorem 2.7 for an arbitrary \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\) (see [PTe, Theorem 4.2]) using Koszul equivalence and [PTa, Proposition 2.5]. **Theorem 2.9**.: _Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver, let \(d\in\mathbb{N}^{I}\), let \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\) with \(\langle 1_{d},\delta_{d}\rangle=v\). Recall the quasi-BPS categories from Subsection 2.13. The category \(\mathbb{M}(d;\delta_{d})\) is right admissible in \(D^{b}(\mathcal{X}(d))_{v}\), so there is a \(X(d)\)-linear semiorthogonal decomposition:_ \[D^{b}(\mathcal{X}(d))_{v}=\langle\mathbb{B}(d;\delta_{d}),\mathbb{M}(d;\delta _{d})\rangle. \tag{2.28}\] _The category \(\mathbb{M}(d;\delta_{d})^{\operatorname{red}}\) is right admissible in \(D^{b}(\mathcal{X}(d)^{\operatorname{red}})\)._ _Applying matrix factorizations and using the Koszul equivalence, the category \(\mathbb{T}(d;\delta_{d})\) is right admissible in \(D^{b}(\mathcal{P}(d))_{v}\), so there is a semiorthogonal decomposition:_ \[D^{b}(\mathcal{P}(d))_{v}=\langle\mathbb{A}(d;\delta_{d}),\mathbb{T}(d;\delta _{d})\rangle. \tag{2.29}\] _The category \(\mathbb{T}(d;\delta_{d})^{\operatorname{red}}\) is right admissible in \(D^{b}(\mathcal{P}(d)^{\operatorname{red}})_{v}\)._ We note the following: **Corollary 2.10**.: _Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver, let \(d\in\mathbb{N}^{I}\), and let \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\). The closed immersion \(l^{\prime}\colon\mathcal{P}(d)^{\operatorname{red}}\hookrightarrow\mathcal{P} (d)\) induces a weak equivalence of spectra:_ \[l^{\prime}_{*}\colon K^{\operatorname{top}}(\mathbb{T}(d;\delta)^{ \operatorname{red}})\xrightarrow{\sim}K^{\operatorname{top}}(\mathbb{T}(d; \delta)).\] Proof.: There is an equivalence of spectra \(l^{\prime}_{*}\colon G^{\operatorname{top}}(\mathcal{P}(d)^{\operatorname{ red}})\xrightarrow{\sim}G^{\operatorname{top}}(\mathcal{P}(d))\), see the isomorphism (2.6). The claim follows from Proposition 2.4 and Theorem 2.9. ### Base-change and semiorthogonal decompositions In Section 9, we need to construct semiorthogonal decompositions for etale covers of moduli of representations of a quiver, or for moduli of representations of preprojective algebras. It will be convenient to use the following base-change result for semiorthogonal decompositions, see [10] for the case of derived categories of varieties. For a pretriangulated dg-category \(\mathcal{D}\) and a subcategory \(\mathcal{C}\subset\mathcal{D}\), we say that \(\mathcal{D}\)_is classically generated by \(\mathcal{C}\)_ if the smallest pretriangulated subcategory of \(\mathcal{D}\) which contains \(\mathcal{C}\) and is closed under direct summands is \(\mathcal{D}\). 
**Proposition 2.11**.: _Let \(\mathcal{X}\) be a QCA (quasi-compact with affine stabilizers) derived stack with a morphism \(\pi\colon\mathcal{X}\to S\) to a scheme \(S\). Let_ \[D^{b}(\mathcal{X})=\langle\mathcal{C}_{i}\mid i\in I\rangle\] _be an \(S\)-linear semiorthogonal decomposition. Then, for any etale map \(f\colon T\to S\) and \(f_{T}\colon\mathcal{X}_{T}=\mathcal{X}\times_{S}T\to\mathcal{X}\), there is a semiorthogonal decomposition_ \[D^{b}(\mathcal{X}_{T})=\langle\mathcal{C}_{i,T}\mid i\in I\rangle,\] _where \(\mathcal{C}_{i,T}\subset D^{b}(\mathcal{X}_{T})\) is the subcategory classically generated by \(f_{T}^{*}\mathcal{C}_{i}\)._ Proof.: The image of \(f_{T}^{*}\colon\operatorname{Ind}D^{b}(\mathcal{X})\to\operatorname{Ind}D^{b}(\mathcal{X}_{T})\) classically generates \(\operatorname{Ind}D^{b}(\mathcal{X}_{T})\), as any \(A\in\operatorname{Ind}D^{b}(\mathcal{X}_{T})\) is a direct summand of \(f_{T}^{*}f_{T*}A\). Indeed, consider the Cartesian diagram with \(\mathcal{X}^{\prime}:=\mathcal{X}_{T}\times_{\mathcal{X}}\mathcal{X}_{T}\) and with \(g_{T}\colon\mathcal{X}^{\prime}\to\mathcal{X}_{T}\) the projection, which is the base change of \(f_{T}\) along itself. Then \(f_{T}^{*}f_{T*}A=g_{T*}g_{T}^{*}A=A\otimes g_{T*}\mathcal{O}_{\mathcal{X}^{\prime}}\). The map \(g_{T}\) has a section given by the diagonal map \(\Delta\colon\mathcal{X}_{T}\to\mathcal{X}^{\prime}\), thus \(g_{T*}\mathcal{O}_{\mathcal{X}^{\prime}}\) has \(\mathcal{O}_{\mathcal{X}_{T}}\) as a direct summand, and so \(A\) is indeed a direct summand of \(f_{T}^{*}f_{T*}A\). By the QCA assumption, objects in \(D^{b}(\mathcal{X}_{T})\subset\operatorname{Ind}D^{b}(\mathcal{X}_{T})\) are compact, see [11]. Therefore \(D^{b}(\mathcal{X}_{T})\) is classically generated by \(f_{T}^{*}D^{b}(\mathcal{X})\), thus by \(\mathcal{C}_{i,T}\) for \(i\in I\). To show semiorthogonality, consider \(i,j\in I\) such that \(\operatorname{Hom}(A_{i},A_{j})=0\) for all \(A_{i}\in\mathcal{C}_{i}\) and \(A_{j}\in\mathcal{C}_{j}\). We have \[\operatorname{Hom}_{D^{b}(\mathcal{X}_{T})}(f_{T}^{*}A_{i},f_{T}^{*}A_{j}) =\operatorname{Hom}_{\operatorname{Ind}D^{b}(\mathcal{X})}(A_{i},f_{T*}f_{T}^{*}A_{j})\] \[=\operatorname{Hom}_{\operatorname{Ind}D^{b}(\mathcal{X})}(A_{i},A_{j}\otimes_{\mathcal{O}_{S}}f_{*}\mathcal{O}_{T}). \tag{2.30}\] Here \(f_{*}\mathcal{O}_{T}\in D_{\operatorname{qc}}(S)=\operatorname{Ind}\operatorname{Perf}(S)\), and the \(S\)-linearity of \(\mathcal{C}_{j}\) implies \(A_{j}\otimes_{\mathcal{O}_{S}}f_{*}\mathcal{O}_{T}\in\operatorname{Ind}\mathcal{C}_{j}\). Then the vanishing of (2.30) follows from the compactness of \(A_{i}\) (see the end of the proof of [12, Lemma 5.5] for how compactness is used). ## 3. Topological K-theory of quotient stacks In this section, we recall the definition of the Chern character maps for quotient stacks and we prove versions of the Atiyah-Hirzebruch theorem for quotient stacks. The main tool we use is the approximation of cohomology theories of quotient stacks by varieties [14, 15]. We also construct a Chern character map for quasi-smooth quotient stacks and discuss versions of the GRR and Atiyah-Hirzebruch theorems for quasi-smooth morphisms. The results are probably well known to experts, but we do not know a reference for them. ### The Chern character map for a classical quotient stack Consider a quotient stack \[\mathscr{X}=X/G,\] where \(G\) is a connected reductive group and \(X\) is a classical quasi-projective scheme with an action of \(G\). Let \(M\) be a compact Lie group such that \(G\) is the complexification of \(M\). Let \(EM\) be a contractible CW complex with a free action of \(M\). 
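For instance (a standard choice, recorded here for concreteness), for \(G=GL_{n}(\mathbb{C})\) one may take \(M=U(n)\) and
\[EM=V_{n}(\mathbb{C}^{\infty}):=\operatorname{colim}_{k}V_{n}(\mathbb{C}^{k}),\]
the space of \(n\)-frames in \(\mathbb{C}^{\infty}\), which is contractible with a free \(U(n)\)-action; for \(X=\operatorname{pt}\) one obtains \(EM\times_{M}X=BU(n)\simeq\operatorname{Gr}(n,\mathbb{C}^{\infty})\).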
For \(i\in\mathbb{Z}\), consider the Chern character map of the CW complex \(EM\times_{M}X\): \[\operatorname{ch}\colon G_{i}^{\operatorname{top}}(\mathscr{X}) =G_{i}^{\operatorname{top}}(EM\times_{M}X)\] \[\to\widetilde{H}_{i}^{\operatorname{BM}}(EM\times_{M}X)=\widetilde{H}_{i}^{\operatorname{BM}}(\mathscr{X}):=\prod_{j\in\mathbb{Z}}H_{i+2j}^{\operatorname{BM}}(\mathscr{X}). \tag{3.1}\] The above Chern character map is, in general, neither injective nor surjective. It becomes an isomorphism when the K-theory is completed with respect to an augmentation ideal, by theorems of Atiyah-Segal [1] and, in the algebraic case, of Edidin-Graham [1]. The Chern character map for a stack can be approximated by the Chern character map for varieties as follows [1], see also Subsection 2.5. For \(V\) a representation of \(G\), denote by \(S\subset V\) the closed set of points with non-trivial stabilizer. Let \(U:=V\setminus S\). We may choose \(V\) such that \(U/G\) and \((X\times U)/G\) are schemes. Then the following diagram commutes, where the vertical maps are pullback maps and the bottom map is an isomorphism by the Atiyah-Hirzebruch theorem: Choose representations \(V_{n}\twoheadrightarrow V_{n-1}\) and closed subsets \(S_{n}\subset V_{n}\) as in Subsection 2.5. For \(\ell\) fixed and for \(n\) large enough, recall that we have isomorphisms induced by pullbacks: \[H_{\ell}^{\operatorname{BM}}(\mathscr{X})\overset{\sim}{\to}H_{\ell+2\dim V_{n}}^{\operatorname{BM}}((X\times V_{n})/G)\overset{\sim}{\to}H_{\ell+2\dim V_{n}}^{\operatorname{BM}}\left((X\times U_{n})/G\right).\] Then \(\operatorname{ch}(y)\) for \(y\in G_{i}^{\operatorname{top}}(\mathscr{X})\) equals the limit of \(\operatorname{ch}_{V_{n}}(\operatorname{res}_{V_{n}}(y))\). Note that, in the algebraic case, Edidin-Graham show in [1, Proposition 3.1] that the limit of \(\operatorname{ch}_{V_{n}}(\operatorname{res}_{V_{n}}(y))\) is well-defined and use it to define the Chern character. Let \(\mathscr{X}^{\prime}\subset\mathscr{X}\) be a closed quotient stack. There are also Chern character maps with closed supports: \[\operatorname{ch}_{\mathscr{X}^{\prime},\mathscr{X}}\colon G_{i,\mathscr{X}^{\prime}}^{\operatorname{top}}(\mathscr{X})\to H_{i,\mathscr{X}^{\prime}}^{\operatorname{BM}}(\mathscr{X}). \tag{3.2}\] There is also a Chern character map: \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\mathscr{X})\to\widetilde{H}^{i}(\mathscr{X}):=\prod_{j\in\mathbb{Z}}H^{i+2j}(\mathscr{X}), \tag{3.3}\] where \(i\in\mathbb{Z}\). As above, the Chern character map (3.3) can be approximated by Chern character maps of varieties. The Chern character maps (3.1) and (3.3) are compatible as follows, where \(\varepsilon\) and \(\varepsilon^{\prime}\) are the maps induced by intersecting with the fundamental class, see [1, Section 5 and Property 2 from Section 4.1]: (3.4) ### An Atiyah-Hirzebruch type theorem for quotient stacks We assume \(\mathcal{X}\) is a classical quotient stack as in the previous subsection. Let \(i\in\mathbb{Z}\). Consider the increasing filtration \[E_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X}):=\mathrm{ch}^{-1}\left(H_{\leq i+2\ell}^{\mathrm{BM}}(\mathcal{X})\right)\subset G_{i}^{\mathrm{top}}(\mathcal{X}). \tag{3.5}\] Note that \(E_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X})=G_{i}^{\mathrm{top}}(\mathcal{X})\) for \(\ell\) large enough. 
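As a simple illustration of the filtration (3.5) (a computation of ours for a smooth projective variety, where the Chern character is an isomorphism after tensoring with \(\mathbb{Q}\)): let \(\mathcal{X}=\mathbb{P}^{1}\) and \(i=0\). Then
\[E_{-1}G_{0}^{\mathrm{top}}(\mathbb{P}^{1})=0,\qquad E_{0}G_{0}^{\mathrm{top}}(\mathbb{P}^{1})=\mathbb{Q}\cdot[\mathcal{O}_{\mathrm{pt}}],\qquad E_{1}G_{0}^{\mathrm{top}}(\mathbb{P}^{1})=G_{0}^{\mathrm{top}}(\mathbb{P}^{1})\cong\mathbb{Q}^{2},\]
and the successive quotients are identified with \(H_{0}^{\mathrm{BM}}(\mathbb{P}^{1})\) and \(H_{2}^{\mathrm{BM}}(\mathbb{P}^{1})\) by the cycle map introduced below.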
Define \[\mathrm{gr}_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X}):=E_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X})/E_{\ell-1}G_{i}^{\mathrm{top}}(\mathcal{X}).\] The Chern character induces a map, which we call _the cycle map_: \[\mathrm{c}\colon\mathrm{gr}_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X})\to H_{i+2\ell}^{\mathrm{BM}}(\mathcal{X}). \tag{3.6}\] Note that the cycle map is injective by construction. We prove the following version of the Atiyah-Hirzebruch theorem for quotient stacks. **Proposition 3.1**.: _For \(i,\ell\in\mathbb{Z}\), the map (3.6) is an isomorphism._ Proof.: Let \(i,\ell\in\mathbb{Z}\) and let \(a=i+2\ell\). Let \(x\in H_{a}^{\mathrm{BM}}(\mathcal{X})\). Let \(V\) be a representation of \(G\) such that \(S\subset X\times V\), the locus of points with non-trivial stabilizer, satisfies: \[\mathrm{codim}\left(S\text{ in }X\times V\right)>\dim X-\frac{a}{2}+1.\] Let \(b:=\dim V\), \(U:=V\setminus S\) and let \(t\colon(X\times V)/G\to X/G\) be the natural projection. The map \(t\) induces an isomorphism \[t^{*}\colon H_{i+2\ell}^{\mathrm{BM}}(X/G)\overset{\sim}{\to}H_{i+2\ell+2b}^{\mathrm{BM}}((X\times V)/G).\] Next, the restriction map \(\alpha\) below is an isomorphism for \(\delta\geqslant\ell+b\): \[\alpha\colon H_{i+2\delta}^{\mathrm{BM}}((X\times V)/G)\overset{\sim}{\to}H_{i+2\delta}^{\mathrm{BM}}\left((X\times U)/G\right). \tag{3.7}\] It suffices to check that \(H_{i+2\delta-\eta,S/G}^{\mathrm{BM}}((X\times V)/G)\cong H_{i+2\delta-\eta}^{\mathrm{BM}}(S/G)=0\) for \(\eta\in\{0,1\}\). This is true because \(i+2\delta-\eta>2\dim S\). Indeed, it suffices to check that \(\frac{1}{2}a+b>\dim S+1\), alternatively that \(\mathrm{codim}(S\text{ in }X\times V)>\dim X-\frac{1}{2}a+1\), which is true by our assumption on \(V\). There is a commutative diagram whose rows are exact sequences: \[\begin{CD}G_{i}^{\mathrm{top}}((X\times V)/G)@>{}>{}>G_{i}^{\mathrm{top}}\left((X\times U)/G\right)@>{}>{}>G_{i-1,S/G}^{\mathrm{top}}((X\times V)/G)\\ @V{}V{\mathrm{ch}}V@V{}V{\mathrm{ch}}V\\ \widetilde{H}_{i}^{\mathrm{BM}}((X\times V)/G)@>{}>{}>\widetilde{H}_{i}^{\mathrm{BM}}((X\times U)/G)@>{}>{}>\widetilde{H}_{i-1,S/G}^{\mathrm{BM}}((X\times V)/G)\\ @V{}V{\beta}V@V{\beta}V{\beta}V\\ \prod H_{i+2\delta}^{\mathrm{BM}}((X\times V)/G)@>{\alpha}>{}>\prod H_{i+2\delta}^{\mathrm{BM}}\left((X\times U)/G\right)@>{}>{}>0.\end{CD}\] In the above, the products are over \(\delta>\ell+b\) and the maps \(\beta\) are the natural projections. The kernels of the maps \(\beta\circ\mathrm{ch}\) lie in exact sequences for \(\ell\) and \(\ell-1\), and by taking their quotient we obtain a diagram: \[\begin{CD}\mathrm{gr}_{\ell+b}G_{i}^{\mathrm{top}}((X\times V)/G)@>{\sim}>{}>\mathrm{gr}_{\ell+b}G_{i}^{\mathrm{top}}\left((X\times U)/G\right)\\ @V{\mathrm{c}}VV@VV{\mathrm{c}^{\prime}}V\\ H_{i+2\ell+2b}^{\mathrm{BM}}((X\times V)/G)@>{\alpha}>{\sim}>H_{i+2\ell+2b}^{\mathrm{BM}}\left((X\times U)/G\right).\end{CD}\] The map \(c^{\prime}\) is an isomorphism by the Atiyah-Hirzebruch theorem, thus the map \(\mathrm{c}\) is also an isomorphism, and the claim follows. For \(\mathcal{X}\) a quotient stack as above, recall that there is an intersection product \(H_{i}^{\mathrm{BM}}(\mathcal{X})\otimes H^{j}(\mathcal{X})\to H_{i-j}^{\mathrm{BM}}(\mathcal{X})\) for \(i,j\in\mathbb{Z}\). Note the following immediate statement. 
**Proposition 3.2**.: _Let \(\alpha\in\prod_{i\geq 0}H^{i}(\mathcal{X})\) be such that \(\alpha=1+\alpha^{\prime}\) for \(\alpha^{\prime}\in\prod_{i\geq 1}H^{i}(\mathcal{X})\). Define \(\mathrm{ch}^{\prime}(-):=\mathrm{ch}(-)\cdot\alpha\). Then \(\mathrm{ch}^{\prime}\) induces a map on the associated graded pieces \(\mathrm{gr}_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X})\), and this map equals the cycle map (3.6)._ ### The Chern character map for quasi-smooth stacks Assume \(\mathcal{X}=X/G\) is a quotient stack with \(X\) a quasi-smooth scheme. One can define a Chern character map for \(\mathcal{X}\) using the Chern character map for \(\mathcal{X}^{\mathrm{cl}}\) and the isomorphisms (2.6). However, we define a topological Chern character map which takes into account the derived structure. For a closed immersion \(\mathcal{X}\hookrightarrow\mathcal{Y}\), consider the topological K-theory with closed supports \(G_{\bullet,\mathcal{X}}^{\mathrm{top}}(\mathcal{Y})\cong G_{\bullet}^{\mathrm{top}}(\mathcal{X})\) and the Borel-Moore homology with closed supports \(H_{\bullet,\mathcal{X}}^{\mathrm{BM}}(\mathcal{Y})\cong H_{\bullet}^{\mathrm{BM}}(\mathcal{X})\). Let \(\mathcal{X}\) be a quasi-smooth quotient stack. Consider a closed immersion \(i\colon\mathcal{X}\hookrightarrow\mathcal{Y}\), where \(\mathcal{Y}\) is a smooth classical quotient stack. Let \(N\) be the normal bundle of \(i\), which is a vector bundle on \(\mathcal{X}\) and thus has a Todd class \(\mathrm{td}(N)\in\widetilde{H}^{0}(\mathcal{X})\). Consider the local Chern character map \[\mathrm{ch}_{\mathcal{X},\mathcal{Y}}\colon G_{\bullet,\mathcal{X}}^{\mathrm{top}}(\mathcal{Y})\to\widetilde{H}_{\bullet,\mathcal{X}}^{\mathrm{BM}}(\mathcal{Y}).\] Define \[\mathrm{ch}_{\mathcal{X}}:=\mathrm{ch}_{\mathcal{X},\mathcal{Y}}\cdot\mathrm{td}(N)\colon G_{\bullet}^{\mathrm{top}}(\mathcal{X})\to\widetilde{H}_{\bullet}^{\mathrm{BM}}(\mathcal{X}). \tag{3.8}\] **Lemma 3.3**.: _The map \(\mathrm{ch}_{\mathcal{X}}\) is independent of the choice of \(\mathcal{Y}\) as above._ Proof.: Let \(i^{\prime}\colon\mathcal{X}\hookrightarrow\mathcal{Y}^{\prime}\) be a different closed immersion. Choose \(\mathcal{Y}^{\prime\prime}\) and closed immersions \(j\colon\mathcal{Y}\hookrightarrow\mathcal{Y}^{\prime\prime}\) and \(j^{\prime}\colon\mathcal{Y}^{\prime}\hookrightarrow\mathcal{Y}^{\prime\prime}\). Note that the Todd classes for the normal bundles of \(ji\) and \(j^{\prime}i^{\prime}\) are the same. The statement then follows from the GRR theorem for the closed immersions \(j\) and \(j^{\prime}\), see Theorem 2.1. **Remark 3.4**.: If \(\mathcal{X}\) is a classical stack, then the Chern character constructed above coincides with the usual Chern character (3.2), as one can see using the GRR theorem for a closed immersion of \(\mathcal{X}\) in a smooth ambient stack. Similarly, one proves a topological GRR theorem for quasi-smooth morphisms using Theorem 2.1. **Proposition 3.5**.: _(i) Let \(f\colon\mathcal{X}\to\mathcal{Y}\) be a quasi-smooth proper map of quasi-smooth quotient stacks. Let \(T_{f}\) be the virtual tangent bundle and consider the Todd class \(\operatorname{td}(T_{f})\in\widetilde{H}^{0}(\mathcal{X})\). Define \(f^{\prime}_{*}(-):=f_{*}(\operatorname{td}(T_{f})\cdot(-))\). 
Then the following diagram commutes:_ (3.9) _(ii) Further, for any smooth morphism \(f\colon\mathcal{X}\to\mathcal{Y}\) between quasi-smooth quotient stacks, the following diagram commutes:_ (3.10) Define a filtration on \(G^{\operatorname{top}}_{\bullet}(\mathcal{X})\) as in (3.5), the associated graded, and a cycle map as in (3.6), which is also an isomorphism by a relative version of Proposition 3.1 and Proposition 3.2: \[\operatorname{c}\colon\operatorname{gr}_{\ell}G^{\operatorname{top}}_{i}(\mathcal{X})\xrightarrow{\sim}H^{\operatorname{BM}}_{i+2\ell}(\mathcal{X}). \tag{3.11}\] We have that \(\operatorname{td}(T_{f})=1+x\in\widetilde{H}^{0}(\mathcal{X})\) for \(x\in\prod_{i\geqslant 2}H^{i}(\mathcal{X})\). We record the following corollary of the diagrams (3.9) and (3.10), see also Proposition 3.2. **Corollary 3.6**.: _Let \(f\colon\mathcal{X}\to\mathcal{Y}\) be a quasi-smooth morphism of quasi-smooth quotient stacks of relative dimension \(d\). Let \(i,\ell\in\mathbb{Z}\). If \(f\) is smooth, then it induces a pullback map:_ \[f^{*}\colon\operatorname{gr}_{\ell}G^{\operatorname{top}}_{i}(\mathcal{Y})\to\operatorname{gr}_{\ell+d}G^{\operatorname{top}}_{i}(\mathcal{X}).\] _If \(f\) is proper, then it induces a pushforward map:_ \[f_{*}\colon\operatorname{gr}_{\ell}G^{\operatorname{top}}_{i}(\mathcal{X})\to\operatorname{gr}_{\ell}G^{\operatorname{top}}_{i}(\mathcal{Y}).\] ## 4. Topological K-theory of categories of singularities In this section, we compute the topological K-theory of categories of matrix factorizations in terms of the monodromy-invariant cohomology of vanishing cycles. The results and approach are inspired by work of Efimov [1], Blanc-Robalo-Toen-Vezzosi [1], and Brown-Dyckerhoff [1]. Let \(\mathcal{X}\) be a smooth quotient stack, let \(f\colon\mathcal{X}\to\mathbb{C}\) be a regular function with \(0\) the only singular value, let \(\iota\colon\mathcal{X}_{0}\hookrightarrow\mathcal{X}\) be the (derived) fiber over \(0\). Let \(d:=\dim_{\mathbb{C}}\mathcal{X}\). The category of singularities \[D_{\operatorname{sg}}(\mathcal{X}_{0}):=D^{b}\mathrm{Coh}(\mathcal{X}_{0})/\mathrm{Perf}(\mathcal{X}_{0}) \tag{4.1}\] is equivalent to the category of matrix factorizations [10, 1]: \[\operatorname{MF}(\mathcal{X},f)\xrightarrow{\sim}D_{\operatorname{sg}}(\mathcal{X}_{0}). \tag{4.2}\] We denote by \(K^{\operatorname{sg}}_{\bullet}(\mathcal{X}_{0})\) the topological K-theory of \(D_{\operatorname{sg}}(\mathcal{X}_{0})\). From (4.1), there is a long exact sequence of \(\mathbb{Q}\)-vector spaces: \[\ldots\to K^{\operatorname{top}}_{i}(\mathcal{X}_{0})\to G^{\operatorname{top}}_{i}(\mathcal{X}_{0})\to K^{\operatorname{sg}}_{i}(\mathcal{X}_{0})\to K^{\operatorname{top}}_{i-1}(\mathcal{X}_{0})\to G^{\operatorname{top}}_{i-1}(\mathcal{X}_{0})\to\ldots. \tag{4.3}\] We assume throughout the section that \(f\) is _quasi-homogeneous_, that is, that there exists an action of \(\mathbb{C}^{*}\) on \(\mathcal{X}\) contracting \(\mathcal{X}\) onto \(\mathcal{X}_{0}\) such that \(f\) has positive weight with respect to the action of \(\mathbb{C}^{*}\), or \(f=0\). Note that the function (2.12) is quasi-homogeneous of weight \(1\) with respect to the weight \(1\) scaling action on the fibers. Then \(0\) is the only singular value of \(f\). 
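For example (an observation we record for later use; it follows directly from the definitions above), the function \(\operatorname{Tr}W\) on \(\mathcal{X}(d)\) associated with a tripled quiver with potential is quasi-homogeneous: the \(\mathbb{C}^{*}\)-action from Subsection 2.12 scales the loops \(\omega_{i}\) with weight \(2\) and fixes the arrows of \(E^{\circ,d}\), so it contracts
\[\mathcal{X}(d)=\left(T^{*}R^{\circ}(d)\oplus\mathfrak{g}(d)\right)/G(d)\]
onto the zero section \(T^{*}R^{\circ}(d)/G(d)\subset(\operatorname{Tr}W)^{-1}(0)\), and \(W\) is linear in the \(\omega_{i}\), so \(\operatorname{Tr}W\) has weight \(2\) with respect to this action.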
Further, there is a weak equivalence induced by restriction: \[K^{\operatorname{top}}(\mathcal{X})\overset{\sim}{\to}K^{\operatorname{top}}(\mathcal{X}_{0}).\] Note that all the results in this section hold as long as the isomorphism \(K^{\operatorname{top}}(\mathcal{X})\overset{\sim}{\to}K^{\operatorname{top}}(\mathcal{X}_{0})\) holds, and this is used only in the proof of Proposition 4.2. ### Vanishing cycle cohomology We begin by recalling two distinguished triangles relating the vanishing and nearby cycle functors applied to the constant sheaf. A reference is [10, Chapter 3], especially [10, pages 24-28]. The results in loc. cit. are stated for varieties, but they also hold for quotient stacks as in [1, Subsection 2.2]. There is an exact triangle in \(D^{b}_{\operatorname{con}}(\mathcal{X})\): \[\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}[-1]\to\psi_{f}[-1]:=\psi_{f}\mathbb{Q}_{\mathcal{X}}[-1]\overset{\operatorname{can}}{\longrightarrow}\varphi_{f}[-1]:=\varphi_{f}\mathbb{Q}_{\mathcal{X}}[-1]\to\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}. \tag{4.4}\] By taking the dual of the above triangle, we obtain the distinguished triangle: \[\varphi_{f}[-1]\overset{\operatorname{var}}{\longrightarrow}\psi_{f}[-1]\to\iota_{*}\iota^{!}\mathbb{Q}_{\mathcal{X}}[1]\to\varphi_{f}. \tag{4.5}\] We have that \(\operatorname{var}\circ\operatorname{can}=1-\operatorname{T}\), where \(\operatorname{T}\) is the monodromy operator. Consider the map \[\alpha\colon\mathbb{Q}_{\mathcal{X}_{0}}=\iota^{*}\mathbb{Q}_{\mathcal{X}}\to\iota^{!}\mathbb{Q}_{\mathcal{X}}[2]\] given by capping with the fundamental class of the quasi-smooth stack \(\mathcal{X}_{0}\). If \(f\) is not the zero map, then this is the usual construction. If \(f\) is the zero map, then \(\mathcal{X}_{0}\cong\mathcal{X}\times r\), where \(r=\operatorname{Spec}\mathbb{C}[\epsilon]\) for \(\epsilon\) in homological degree \(1\). The map \(\alpha\) is then the zero map. Let \(\varphi_{f}^{\operatorname{inv}}\) be the cone of \(1-\operatorname{T}\): \[\varphi_{f}\overset{1-\operatorname{T}}{\longrightarrow}\varphi_{f}\to\varphi_{f}^{\operatorname{inv}}\to\varphi_{f}[1]. \tag{4.6}\] Consider the diagram, where the rows and the columns are distinguished triangles: (4.7) In the above diagram, the second row is (4.4), the second column is (4.5), and the third column is (4.6). We obtain that the first row is also a distinguished triangle: \[\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}\overset{\alpha}{\to}\iota_{*}\iota^{!}\mathbb{Q}_{\mathcal{X}}[2]\to\varphi_{f}^{\operatorname{inv}}\to\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}[1]. \tag{4.8}\] We will also use later the notations \(\varphi_{f}^{\operatorname{inv}}\,\mathbb{Q}_{\mathcal{X}}\) and \(\varphi_{f}^{\operatorname{inv}}\,\mathrm{IC}_{\mathcal{X}}\) when it is convenient to indicate the ambient space. 
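As a sanity check of the conventions above (a computation of ours, using only the triangles (4.4) and (4.6)), suppose \(f=0\). Then \(\mathcal{X}\setminus\mathcal{X}_{0}=\emptyset\), so \(\psi_{f}=0\), and (4.4) gives \(\varphi_{f}\cong\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}[1]\). The monodromy \(\operatorname{T}\) acts trivially, so (4.6) gives
\[\varphi_{f}^{\operatorname{inv}}\cong\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}[1]\oplus\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}[2],\qquad H^{i}(\mathcal{X},\varphi_{f}^{\operatorname{inv}})\cong H^{i+1}(\mathcal{X})\oplus H^{i+2}(\mathcal{X}),\]
which is consistent with the computation of \(K^{\operatorname{top}}_{\cdot}(\operatorname{MF}(\mathcal{X},0))\) in Proposition 4.13 below.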
Define \[H^{\bullet}(\mathcal{X},\varphi_{f})^{\operatorname{inv}} :=\ker(1-\operatorname{T})\subset H^{\bullet}(\mathcal{X},\varphi_{f}),\] \[H^{\bullet}(\mathcal{X},\varphi_{f})_{\operatorname{inv}} :=H^{\bullet}(\mathcal{X},\varphi_{f})/\mathrm{image}(1-\operatorname{T}).\] There is a long exact sequence: \[\ldots\to H^{2d-i-2}(\mathscr{X}_{0}) \xrightarrow{\alpha}H^{\mathrm{BM}}_{i}(\mathscr{X}_{0})\to H^{2d-i}(\mathscr{X},\varphi^{\mathrm{inv}}_{f}[-2])=H^{2d-i-2}(\mathscr{X},\varphi^{\mathrm{inv}}_{f})\] \[\to H^{2d-i-1}(\mathscr{X}_{0})\to H^{\mathrm{BM}}_{i-1}(\mathscr{X}_{0})\to\ldots \tag{4.9}\] and there are short exact sequences: \[0\to H^{i}(\mathscr{X},\varphi_{f})_{\mathrm{inv}}\to H^{i}(\mathscr{X},\varphi^{\mathrm{inv}}_{f})\to H^{i+1}(\mathscr{X},\varphi_{f})^{\mathrm{inv}}\to 0. \tag{4.10}\] We note the following compatibility between K-theory and cohomology. Let \(\alpha^{\prime}\colon\mathrm{Perf}(\mathscr{X}_{0})\hookrightarrow D^{b}(\mathscr{X}_{0})\) be the inclusion. **Proposition 4.1**.: _The following diagram commutes:_ Proof.: If \(f\) is not the zero map, then the diagram above is the same as the diagram (3.4). If \(f\) is zero, then \(\alpha\) is zero. We show that \(\alpha^{\prime}\) is also the zero map on topological K-theory. Let \(\mathscr{X}_{0}\) be the derived zero locus of \(0\colon\mathscr{X}\to\mathbb{C}\). Let \(r=\mathrm{Spec}\,\mathbb{C}[\epsilon]\) for \(\epsilon\) of homological degree \(1\), then \(\mathscr{X}_{0}\cong\mathscr{X}\times r\). Consider the natural projection \(\pi\colon\mathscr{X}_{0}=\mathscr{X}\times r\to\mathscr{X}\) and let \(l\colon\mathscr{X}_{0}^{\mathrm{cl}}\cong\mathscr{X}\to\mathscr{X}_{0}\) be the natural closed immersion. Then \(\pi^{*}\colon K^{\mathrm{top}}_{\bullet}(\mathscr{X})\xrightarrow{\sim}K^{\mathrm{top}}_{\bullet}(\mathscr{X}_{0})\) and \(l_{*}\colon G^{\mathrm{top}}_{\bullet}(\mathscr{X})\xrightarrow{\sim}G^{\mathrm{top}}_{\bullet}(\mathscr{X}_{0})\) are weak equivalences. For any topological vector bundle \(E\) on \(\mathscr{X}\), there is an isomorphism: \[\pi^{*}(E)\cong l_{*}(E)\oplus l_{*}(E)[1]\in G^{\mathrm{top}}_{0}(\mathscr{X}_{0}),\] so the conclusion for \(i=0\) holds. A similar computation holds for the suspension of \(\mathscr{X}\), so the conclusion also holds for \(i=1\). ### Chern character maps for matrix factorizations Let \(X\) be a smooth affine variety with an action of a reductive group \(G\). Consider the quotient stack \(\mathscr{X}=X/G\). Let \(f\colon\mathscr{X}\to\mathbb{C}\) be a regular function. The main result of this subsection is the construction of a Chern character map: \[\mathrm{ch}\colon K^{\mathrm{top}}_{i}(\mathrm{MF}(\mathscr{X},f))\to\widetilde{H}^{i}(\mathscr{X},\varphi^{\mathrm{inv}}_{f}).\] We may assume that \(\mathrm{Crit}(f)\subset\mathscr{X}_{0}:=f^{-1}(0)\). Further, replacing \(\mathscr{X}\) with an open neighborhood of \(\mathscr{X}_{0}\), we may also assume that the pull-back gives a weak equivalence of spectra \(K^{\mathrm{top}}(\mathscr{X})\xrightarrow{\sim}K^{\mathrm{top}}(\mathscr{X}_{0})\). Consider the regular function \(\widetilde{f}\colon\mathscr{X}\times\mathbb{C}\to\mathbb{C}\) defined by \(\widetilde{f}(x,t)=t\cdot f(x)\) and set \[F_{\widetilde{f}}=(\widetilde{f})^{-1}(1)\subset\mathscr{X}\times\mathbb{C}^{*}.\] For a closed substack \(\mathscr{Y}\subset\mathscr{X}\), we denote by \(K^{\mathrm{top}}(\mathscr{X}/\mathscr{Y})\) the relative topological K-theory spectrum, i.e. the fiber of the map \(K^{\mathrm{top}}(\mathscr{X})\to K^{\mathrm{top}}(\mathscr{Y})\). 
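To illustrate the construction (a minimal example of ours): take \(\mathscr{X}=\mathbb{C}\) and \(f(x)=x\). Then \(\widetilde{f}(x,t)=tx\) and
\[F_{\widetilde{f}}=\{(x,t)\in\mathbb{C}\times\mathbb{C}^{*}\mid tx=1\}\cong\mathbb{C}^{*},\]
and the inclusion \(F_{\widetilde{f}}\subset\mathbb{C}\times\mathbb{C}^{*}\) is a homotopy equivalence, so \(K^{\mathrm{top}}(\mathbb{C}\times\mathbb{C}^{*}/F_{\widetilde{f}})=0\). This matches Proposition 4.2 below, since \(\mathrm{MF}(\mathbb{C},x)\simeq D_{\mathrm{sg}}(\operatorname{Spec}\mathbb{C}[x]/(x))=0\).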
**Proposition 4.2**.: _There is a canonical weak equivalence of spectra:_ \[K^{\mathrm{top}}(\mathrm{MF}(\mathscr{X},f))\xrightarrow{\sim}K^{\mathrm{top}}(\mathscr{X}\times\mathbb{C}^{*}/F_{\widetilde{f}}).\] Proof.: We consider graded categories of matrix factorizations of \(\mathcal{X}\times\mathbb{C}\), where the grading is given by the \(\mathbb{C}^{*}\)-action with weights \((0,2)\). By the Koszul equivalence (2.14) and (4.2), there are equivalences: (4.11) In the above diagram, the horizontal sequences are exact sequences of dg-categories and the vertical arrows are equivalences induced by (2.14). Consider the inclusion \(\iota\colon\mathcal{X}_{0}\hookrightarrow\mathcal{X}\) and the projection \(p\colon\mathcal{X}\times\mathbb{C}\to\mathcal{X}\). Note that \(p|_{F_{\widetilde{f}}}\colon F_{\widetilde{f}}\to\mathcal{X}\setminus\mathcal{X}_{0}\) is an isomorphism. We have the commutative diagram of spectra: The horizontal sequences are exact triangles of spectra, and the vertical arrows are equivalences. Let \(i\colon\mathcal{X}\hookrightarrow\mathcal{X}\times\mathbb{C}\) be the inclusion into \(\mathcal{X}\times\{0\}\). By Lemma 4.3 below together with the isomorphism \(K^{\mathrm{top}}(\mathcal{X})\overset{\sim}{\to}K^{\mathrm{top}}(\mathcal{X}_{0})\) (this is the only place where we use that \(f\) is quasi-homogeneous), we have the equivalence \[i_{*}\colon K^{\mathrm{top}}(\mathcal{X})\overset{\sim}{\to}K^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr}}_{\mathcal{X}\times\{0\}}(\mathcal{X}\times\mathbb{C},\widetilde{f})).\] Therefore, by taking cofibers, we obtain the equivalence \[K^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{X}\times\mathbb{C}^{*},\widetilde{f}))\overset{\sim}{\to}\mathrm{fib}\big{(}K^{\mathrm{top}}(\mathcal{X}\times\mathbb{C}^{*})\to K^{\mathrm{top}}(F_{\widetilde{f}})\big{)}.\] The desired equivalence then follows from the right vertical equivalence in (4.11). We have used the following lemma: **Lemma 4.3**.: _The following diagram commutes_ Proof.: The equivalence \(\kappa\) is given by \((-)\otimes_{\mathcal{O}_{\mathcal{X}_{0}}}\mathcal{K}\) for the Koszul factorization \(\mathcal{K}\), see [Tod, Section 2.3.3]: \[\mathcal{K}=\mathcal{O}_{\mathcal{X}_{0}}\otimes_{\mathcal{O}_{\mathcal{X}}}\mathcal{O}_{\mathcal{X}\times\mathbb{C}}=\mathcal{O}_{\mathcal{X}}[\varepsilon,t]\] where \(\deg\varepsilon=-1\), \(\deg t=2\), with differential \(d_{\mathcal{K}}(\alpha(t)+\beta(t)\varepsilon)=f\beta(t)+t\alpha(t)\varepsilon\). By construction, it commutes with tensor product from \(D^{b}(\mathcal{X})\). Moreover, as an object of \(\mathrm{MF}^{\mathrm{gr}}(\mathcal{X}\times\mathbb{C},\widetilde{f})\), the object \(\mathcal{K}\) is isomorphic to \(i_{*}\mathcal{O}_{\mathcal{X}}[1]\), see [1, Proposition 3.20] or [1, Equation (2.3.6)]. Therefore the lemma holds. We next relate the relative cohomology to the monodromy-invariant cohomology of vanishing cycles: **Proposition 4.4**.: _There are canonical isomorphisms:_ \[H^{\bullet}(\mathcal{X}\times\mathbb{C}^{*}/F_{\widetilde{f}})\cong H^{\bullet}(\mathcal{X},\varphi_{f}^{\mathrm{inv}}[-2]).\] Proof.: Consider the commutative diagram, where the horizontal sequences are exact triangles. By taking the fibers of the vertical maps, we obtain the exact triangle \[\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}\oplus\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}[-1]\to\psi_{f}^{\mathrm{inv}}[-1]\to\varphi_{f}^{\mathrm{inv}}[-1]\to\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}[1]\oplus\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}. 
\tag{4.12}\] Let \(u\colon\mathcal{X}\setminus\mathcal{X}_{0}\hookrightarrow\mathcal{X}\). Then \(\psi_{f}^{\mathrm{inv}}[-1]=\iota_{*}t^{*}u_{*}u^{*}\mathbb{Q}_{\mathcal{X}}\), see [12, Equation (17)]. We then have that: \[\mathbb{Q}_{\mathcal{X}_{0}}\oplus\mathbb{Q}_{\mathcal{X}_{0}}[-1]=\iota^{*}p _{*}\mathbb{Q}_{\mathcal{X}\times\mathbb{C}^{*}},\ \psi_{f}^{\mathrm{inv}}[-1]=\iota_{*}t^{*}u_{*}u^{*}\mathbb{Q}_{\mathcal{X}}= \iota_{*}t^{*}p_{*}\mathbb{Q}_{F_{\widetilde{f}}}.\] The first map in (4.12) is identified with \(\iota_{*}t^{*}p_{*}\) of the natural map \(\mathbb{Q}_{\mathcal{X}\times\mathbb{C}^{*}}\to\mathbb{Q}_{F_{\widetilde{f}}}\). Therefore we obtain the desired isomorphism. Consider the Chern character map of relative K-theories: \[\mathrm{ch}\colon K_{i}^{\mathrm{top}}(\mathcal{X}\times\mathbb{C}^{*}/F_{ \widetilde{f}})\to\widetilde{H}^{i}(\mathcal{X}\times\mathbb{C}^{*}/F_{ \widetilde{f}}). \tag{4.13}\] Define the Chern character map \[\mathrm{ch}\colon K_{i}^{\mathrm{top}}(\mathrm{MF}(\mathcal{X},f))\to\widetilde {H}^{i}(\mathcal{X},\varphi_{f}^{\mathrm{inv}}). \tag{4.14}\] such that the following diagram commutes, where the horizontal maps are isomorphisms by Propositions 4.2 and 4.4: Recall that \(d:=\dim_{\mathbb{C}}\mathcal{X}\). Define the filtration \[E_{\ell}K_{i}^{\mathrm{top}}(\mathrm{MF}(\mathcal{X},f)):=\mathrm{ch}^{-1} \left(H^{\geqslant 2d-i-2\ell}(\mathcal{X},\varphi_{f}^{\mathrm{inv}}[-2]) \right). \tag{4.15}\] We obtain cycle maps on the associated graded pieces: \[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathrm{MF}(\mathcal{X },f))\to H^{2d-i-2\ell}(\mathcal{X},\varphi_{f}^{\mathrm{inv}}[-2]). \tag{4.16}\] **Proposition 4.5**.: _The maps (4.16) are isomorphisms for all \(i,\ell\in\mathbb{Z}\), and the map (4.14) is an isomorphism if \(\mathcal{X}\) is a variety._ Proof.: Define a filtration: \[E_{\ell}K_{i}^{\rm top}(\mathscr{X}\times\mathbb{C}^{*}/F_{\widehat{f}}):={\rm ch }^{-1}\left(H^{\geqslant 2d-i-2\ell}(\mathscr{X}\times\mathbb{C}^{*}/F_{ \widehat{f}})\right)\] and the cycle maps on the associated graded pieces, which are isomorphisms using the long exact sequence for relative K-theory and Proposition 3.1: \[{\rm c}\colon\operatorname{gr}_{\ell}K_{i}^{\rm top}(\mathscr{X}\times \mathbb{C}^{*}/F_{\widehat{f}})\xrightarrow{\sim}H^{2\dim\mathscr{X}-i-2\ell}( \mathscr{X}\times\mathbb{C}^{*}/F_{\widehat{f}}).\] The conclusions then follow. Composing with the inverse of the equivalence (4.2), we also obtain a Chern character: \[{\rm ch}\colon K_{i}^{\rm sg}(\mathscr{X}_{0})\to\widetilde{H}^{i}(\mathscr{X },\varphi_{f}^{\rm inv}). \tag{4.17}\] Note the following compatibility of the Chern character maps. **Proposition 4.6**.: _The following diagram commutes, where the top sequence is (4.3) and the bottom sequence is (4.9):_ (4.18) Proof.: By the construction of the Chern character (4.17) and the GRR theorem, it suffices to show the following diagram commutes, which is indeed the case: ### The Grothendieck-Riemann-Roch theorem for matrix factorizations The Grothendieck-Riemann-Roch theorem for relative topological K-theory and cohomology implies the following. **Theorem 4.7**.: _Let \(h\colon\mathscr{X}\to\mathscr{Y}\) be a morphism of smooth quotient stacks. Consider a regular function \(f\colon\mathscr{Y}\to\mathbb{C}\), let \(g:=f\circ h\), and assume that \(f\) and \(g\) are quasi-homogeneous. Let \(i\in\mathbb{Z}\)._ _(a) The following diagram commutes:_ _(b) Assume \(h\) is proper. 
Let \({\rm td}(T_{h})\in\widetilde{H}^{0}(\mathscr{X}_{0})\) be the Todd class of the virtual tangent bundle \(T_{h}\) of \(h\) and let \(h^{\prime}_{*}(-):=h_{*}({\rm td}(T_{h})\cdot(-))\). Then the following diagram commutes:_ Proof.: We may assume that \(f\) and \(g\) have only \(0\) as a critical value. The equivalence from Proposition 4.2 and the isomorphism from Proposition 4.4 commutes with both \(h_{*}\) and \(h^{*}\). The Chern character (4.13) commutes with \(h^{*}\), so part (a) follows. Finally, the topological Grothendieck-Riemann-Roch theorem implies that the following diagram commutes, so part (b) follows as well: We note the following functoriality of graded topological K-theory of categories of singularities. **Proposition 4.8**.: _Let \(h\colon\mathcal{X}\to\mathcal{Y}\) be a morphism of smooth quotient stacks of relative dimension \(d\), let \(f\colon\mathcal{Y}\to\mathbb{C}\) be a regular function, let \(g:=f\circ h\), and assume that \(f\) and \(g\) are quasi-homogeneous. Let \(\mathcal{X}_{0}\) and \(\mathcal{Y}_{0}\) be the (derived) zero loci of \(g\) and \(f\), respectively. Let \(i,\ell\in\mathbb{Z}\). Then \(h\) induces a pullback map:_ \[h^{*}\colon\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}}(\operatorname{ MF}(\mathcal{Y},f))\to\operatorname{gr}_{\ell+d}K_{i}^{\operatorname{top}}( \operatorname{MF}(\mathcal{X},g)).\] _If \(h\) is proper, then there is a pushforward map:_ \[h_{*}\colon\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}}(\operatorname{ MF}(\mathcal{X},g))\to\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}}( \operatorname{MF}(\mathcal{Y},f)).\] Proof.: The claim follows from Theorem 4.7 and Proposition 3.2. For future reference, we also state explicitly the compatibility of the Chern character maps with Knorrer periodicity, which is a particular case of Theorem 4.7. **Corollary 4.9**.: _Let \(X\) be a smooth affine variety with an action of a reductive group, let \(\mathcal{X}:=X/G\) and consider a regular function \(f\colon\mathcal{X}\to\mathbb{C}\) with only \(0\) as a critical value. Let \(U\) be a finite dimensional representation of \(G\) and consider the natural pairing \(w\colon U\times U^{\vee}\to\mathbb{C}\). Let \(\mathcal{Y}:=(X\times U\times U^{\vee})/G\) and consider the regular function \(f+w\colon\mathcal{Y}\to\mathbb{C}\), where \(f\) and \(w\) are pulled-back from \(X\) and \(U\times U^{\vee}\), respectively. Consider the natural maps:_ \[X\overset{v}{\leftarrow}X\times U\overset{s}{\hookrightarrow}X\times U\times U ^{\vee}\] _where \(v\) is the projection and \(s(x,u)=(x,u,0)\). Let \(\operatorname{ch}^{\prime}:=\operatorname{ch}\cdot\operatorname{td}(T_{s})\), where \(T_{s}\) is the relative tangent complex of \(s\). The following diagram commutes:_ \[\begin{CD}K_{i}^{\operatorname{top}}(\operatorname{MF}(\mathcal{X},f))@>{s _{*}v^{*}}>{}>K_{i}^{\operatorname{top}}(\operatorname{MF}(\mathcal{Y},f+w)) \\ @V{}V{\operatorname{ch}^{\prime}}V@V{}V{\operatorname{ch}}V\\ \widetilde{H}^{i}(\mathcal{X},\varphi_{f}^{\operatorname{inv}})@>{s_{*}v^{*}}> {}>\widetilde{H}^{i}(\mathcal{Y},\varphi_{f+w}^{\operatorname{inv}}).\end{CD}\] Note that the horizontal maps are isomorphisms by the Thom-Sebastiani theorem, see the proofs of Propositions 6.11 and 6.13. The top horizontal map is called Knorrer periodicity [11, 12] ### Complements #### 4.4.1. Injectivity of the cycle map The Chern characters (3.1), (3.3), or (4.14) may not be injective when \(\mathcal{X}\) is a stack. However, they are all isomorphism when \(\mathcal{X}\) is a variety. 
In some cases of interest, we can show that (4.14) is injective for \(\mathcal{X}\) a stack using the following propositions. **Proposition 4.10**.: _Let \(\mathcal{X}\) be a smooth quotient stack and let \(f\colon\mathcal{X}\to\mathbb{C}\) be a regular function. Let \(\mathbb{S}\) be a subcategory of \(\operatorname{MF}(\mathcal{X},f)\). Assume there exists a smooth variety \(Y\) and a morphism \(r\colon Y\to\mathcal{X}\) such that \(r^{*}\colon\mathbb{S}\to\operatorname{MF}(Y,g)\) is (left or right) admissible, where \(g:=f\circ r\). Let \(i\in\mathbb{Z}\). Then the Chern character_ \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\mathbb{S})\to K_{i}^{ \operatorname{top}}(\operatorname{MF}(\mathcal{X},f))\to\widetilde{H}^{i}( \mathcal{X},\varphi_{f}^{\operatorname{inv}})\] _is injective._ Proof.: The pullback map \(r^{*}\colon K_{i}^{\operatorname{top}}(\mathbb{S})\hookrightarrow K_{i}^{ \operatorname{top}}(\operatorname{MF}(Y,g))\) is injective. The claim then follows from the diagram: **Proposition 4.11**.: _Let \(\mathcal{X}\) be a smooth quotient stack and let \(f\colon\mathcal{X}\to\mathbb{C}\) be a regular function. Assume there is a semiorthogonal decomposition \(\operatorname{MF}(\mathcal{X},f)=\langle\mathbb{B}_{i}\mid i\in I\rangle\) and a collection of finite subsets \(I_{n}\subset I\) for \(n\in\mathbb{N}\) with the following two properties:_ * _for any finite subset_ \(S\subset I\)_, there exists_ \(n\in\mathbb{N}\) _such that_ \(S\subset I_{n}\)_,_ * _for all_ \(n\in\mathbb{N}\)_, there exists a smooth variety_ \(Y_{n}\) _and a morphism_ \(r_{n}\colon Y_{n}\to\mathcal{X}\) _such that the category_ \(\mathbb{B}^{n}:=\langle\mathbb{B}_{i}\mid i\in I_{n}\rangle\) _is (left or right) admissible in_ \(\operatorname{MF}(Y_{n},f\circ r_{n})\) _via_ \(r_{n}^{*}\)_._ _Let \(i\in\mathbb{Z}\). Then the Chern character_ \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\operatorname{MF}( \mathcal{X},f))\to\widetilde{H}^{i}(\mathcal{X},\varphi_{f}^{\operatorname{ inv}})\] _is injective._ Proof.: Let \(x\in K_{i}^{\operatorname{top}}(\operatorname{MF}(\mathcal{X},f))=\bigoplus_{j \in I}K_{i}^{\operatorname{top}}(\mathbb{B}_{j})\). Let \(S\subset I\) be a finite set such that \(x\in\bigoplus_{j\in S}K_{i}^{\operatorname{top}}(\mathbb{B}_{j})\). Then there exists \(n\) such that \(x\in K^{\operatorname{top}}(\mathbb{B}^{n})\). The Chern character \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\mathbb{B}^{n})\to \widetilde{H}^{i}(\mathcal{X},\varphi_{f}^{\operatorname{inv}})\] is injective by Proposition 4.10, and the claim follows. #### 4.4.2. Action of exterior algebra on the K-theory of matrix factorizations Denote by \(p:=\operatorname{Spec}\mathbb{C}\). The following computation follows as in Proposition 4.1. **Lemma 4.12**.: _As a \(\mathbb{Z}/2\)-algebra, we have_ \[K_{\cdot}^{\operatorname{top}}(\operatorname{MF}(p,0))=\Lambda:=\mathbb{Q}[\epsilon]\] _where \(\epsilon\) has degree one._ Note that, for any regular function on a smooth stack \(h\colon\mathcal{Y}\to\mathbb{C}\), the category \(\operatorname{MF}(\mathcal{Y},h)\) is a module over \(\operatorname{MF}(p,0)\), so \(K_{\cdot}^{\operatorname{top}}(\operatorname{MF}(\mathcal{Y},h))\) is a \(\mathbb{Z}/2\)-graded \(\Lambda\)-module by Lemma 4.12. **Proposition 4.13**.: _Let \(\mathcal{X}\) be a smooth stack. Then_ \[K_{\cdot}^{\operatorname{top}}(\operatorname{MF}(\mathcal{X},0))\cong K_{\cdot }^{\operatorname{top}}(\mathcal{X})\otimes_{\mathbb{Q}}\Lambda\] _as \(\Lambda\)-modules. 
Then, if \(\mathbb{M}\subset D^{b}(\mathcal{X})\) is an admissible subcategory of \(D^{b}(\mathcal{X})\), there is an isomorphism of \(\Lambda\)-modules:_ \[K_{\cdot}^{\operatorname{top}}(\operatorname{MF}(\mathbb{M},0))\cong K_{\cdot }^{\operatorname{top}}(\mathbb{M})\otimes_{\mathbb{Q}}\Lambda.\] Proof.: It suffices to prove the first isomorphism. Let \(\mathcal{X}_{0}\) be the derived zero locus of \(0\colon\mathcal{X}\to\mathbb{C}\). By the long exact sequence (4.3), it suffices to show that the map \(\alpha^{\prime}\colon\operatorname{Perf}(\mathcal{X}_{0})\to D^{b}(\mathcal{X} _{0})\) induces the zero map: \[\alpha^{\prime}\colon K_{\bullet}^{\operatorname{top}}(\mathcal{X}_{0})\to G _{\bullet}^{\operatorname{top}}(\mathcal{X}_{0}),\] which we showed in the proof of Proposition 4.1. #### 4.4.3. The Chern character for the algebraic K-theory of matrix factorizations Consider the natural transformation \[\gamma\colon K_{0}^{\operatorname{alg}}:=K_{0}\to K_{0}^{\operatorname{top}}\] from algebraic K-theory to topological K-theory [1, Remark 4.14]. For a quotient stack \(\mathcal{X}=X/G\), where \(G\) is a reductive group acting on a smooth affine scheme \(X\), there is a Chern character: \[\operatorname{ch}^{\operatorname{alg}}\colon K_{0}^{\operatorname{alg}}( \operatorname{MF}(\mathcal{X},f))\xrightarrow{\gamma}K_{0}^{\operatorname{ top}}(\operatorname{MF}(\mathcal{X},f))\xrightarrow{\operatorname{ch}}\widetilde{H}^{0}( \mathcal{X},\varphi_{f}^{\operatorname{inv}}).\] We next state an algebraic version of the GRR theorem 4.7. **Theorem 4.14**.: _Let \(h\colon\mathcal{X}\to\mathcal{Y}\) be a morphism of smooth quotient stacks. Consider a regular function \(f\colon\mathcal{Y}\to\mathbb{C}\), let \(g:=f\circ h\), and assume that \(f\) and \(g\) are quasi-homogeneous._ _(a) The following diagram commutes:_ _(b) Assume \(h\) is proper. Let \(\operatorname{td}(T_{h})\in\widetilde{H}^{0}(\mathcal{X}_{0})\) be the Todd class of the virtual tangent bundle \(T_{h}\) of \(h\), and let \(h^{\prime}_{*}(-):=h_{*}(\operatorname{td}(T_{h})\cdot(-))\). Then the following diagram commutes:_ Proof.: Both claims follow from Theorem 4.7 and the commutativity of \(\gamma\) with \(h^{*}\) and \(h_{*}\). #### 4.4.4. Graded and ungraded matrix factorizations One can define graded categories of matrix factorizations in more generality than the one used in Subsection 2.7, see below for one example. It is natural to ask for an analogue of Proposition 4.5 for categories of graded matrix factorizations. We do not know how to answer this question for general graded categories, but we study some examples in Section 5. We mention a theorem of Brown-Dyckerhoff in [1, Theorem 1.3] which computes the topological K-theory for a class of graded matrix factorizations not covered by our methods. Let \(f\colon\mathbb{C}^{n}\to\mathbb{C}\) be a homogeneous polynomial of degree \(d\). Let \(\mathbb{C}^{*}\) act on \(\mathbb{C}^{n}\) with weight \(1\). Consider category \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{C}^{n},f)\) with objects of the form \[(\alpha\colon\mathcal{F}\rightleftarrows\mathcal{G}\colon\beta),\ \alpha\circ\beta=\beta\circ\alpha=\times f,\] where \(\alpha\) is homogeneous of degree \(d\) and \(\beta\) is homogeneous of degree zero. 
For each \(i\in\mathbb{Z}\), there are isomorphisms: \[K^{\operatorname{top}}_{\mu_{d},i}(\mathbb{C}^{n},f^{-1}(1))\overset{\sim}{ \to}K^{\operatorname{top}}_{i}(\operatorname{MF}^{\operatorname{gr}}(\mathbb{ C}^{n},f)),\] where the left hand side is the \(\mu_{d}\)-equivariant relative topological K-theory space, see loc. cit. for more details. Note that, for a homogeneous polynomial, the vanishing cycle cohomology can be computed in terms of relative cohomology [1, Proposition 6.4]. We do not have an alternative proof of [1, Theorem 1.3]. However, we note the following relation between graded and ungraded matrix factorizations, that may be used in conjunction with excision arguments for computation, but which we do not use later in the paper. **Proposition 4.15**.: _Let \(\mathbb{C}^{*}\) act on \(\mathbb{C}^{n+1}\) with weight \(1\), consider the grading with respect to this weight, and by abuse of notation denote by \(f\colon\mathbb{C}^{n}\times\mathbb{C}^{*}\overset{\pi_{1}}{\to}\mathbb{C}^{n} \overset{f}{\to}\mathbb{C}\). There is an equivalence_ \[\operatorname{MF}(\mathbb{C}^{n},f)\overset{\sim}{\to}\operatorname{MF}^{ \operatorname{gr}}(\mathbb{C}^{n}\times\mathbb{C}^{*},f). \tag{4.19}\] Proof.: We have the isomorphism of stacks \[p\colon(\mathbb{C}^{n}\times\mathbb{C}^{*})/\mathbb{C}^{*}\overset{\sim}{\to }\mathbb{C}^{n},\ (x_{1},\dots,x_{n},t)\mapsto(t^{-1}x_{1},\dots,t^{-1}x_{n}).\] For an object \((\alpha\colon\mathcal{F}\rightleftarrows\mathcal{G}\colon\beta)\) in \(\operatorname{MF}(\mathbb{C}^{n},f)\), we associate the object \[(\alpha^{\prime}\colon p^{*}\mathcal{F}\rightleftarrows p^{*}\mathcal{G} \colon\beta^{\prime}),\ \alpha^{\prime}=t^{d}p^{*}\alpha,\ \beta^{\prime}=p^{*}\beta\] in \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{C}^{n}\times\mathbb{C}^{*},f)\). Note that \(\alpha^{\prime}\) is degree \(d\) and \(\beta^{\prime}\) is degree zero. Since \(\alpha\circ\beta=\beta\circ\alpha=\times f\) and \(p^{*}f=t^{-d}f\), we have \(\alpha^{\prime}\circ\beta^{\prime}=\beta^{\prime}\circ\alpha^{\prime}=\times f\), so it determines an object in \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{C}^{n}\times\mathbb{C}^{*},f)\). Conversely, given an object \((\gamma\colon\mathcal{P}\rightleftarrows\mathcal{Q}\colon\delta)\) in \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{C}^{n}\times\mathbb{C}^{*},f)\), we associate the object \[(\gamma^{\prime}\colon\mathcal{P}_{0}\rightleftarrows\mathcal{Q}_{0}\colon \delta^{\prime}),\ \gamma^{\prime}=t^{-d}\gamma_{0},\ \delta^{\prime}=\delta_{0}\] in \(\operatorname{MF}(\mathbb{C}^{n},f)\). In the above, the subscript \(0\) means taking the degree zero part and the morphism \(\gamma^{\prime}\) is \(\mathcal{P}_{0}\stackrel{{\gamma_{0}}}{{\to}}\mathcal{Q}_{d} \stackrel{{ t^{-d}}}{{\to}}\mathcal{Q}_{0}\). It is easy to see that the above correspondences give mutually inverse functors, giving the equivalence (4.19). ## 5. Dimensional reduction In this section, we show that the Koszul equivalence (2.11) and the dimensional reduction in cohomology are compatible via the Chern character map. We will use these compatibilities in Subsection 7 to compute the topological K-theory of preprojective quasi-BPS categories from the topological K-theory of quasi-BPS categories of tripled quivers with potential. ### Dimensional reduction Recall the setting of the Koszul equivalence from Subsection 2.8. We will use the notations of the various maps from the diagram (2.13). 
In this subsection, we review the dimensional reduction theorem in cohomology due to Davison [16, Theorem A.1] (note that, to obtain the maps in loc. cit., one needs to precompose all the following maps by \(l_{*}\), see the isomorphism (2.6)). For \(\bullet\in D^{b}_{\operatorname{con}}(\mathcal{E}^{\vee}_{0})\), there is a natural isomorphism: \[\varphi_{f}[-1]\iota_{*}\bullet\stackrel{{\sim}}{{\to}}\iota_{*}\bullet. \tag{5.1}\] For \(\bullet\in D^{b}_{\operatorname{con}}(\mathcal{X})\), there is a natural transformation \[\eta^{*}\bullet\to\eta^{*}j_{*}j^{*}\bullet. \tag{5.2}\] The natural transformations (5.2) and (5.1) induce a natural transformation for \(\bullet\in D^{b}_{\operatorname{con}}(\mathcal{X})\): \[\varphi_{f}[-1]\eta^{*}\bullet\to\varphi_{f}[-1](\eta^{*}j_{*}j^{*}\bullet)=\eta^{*}j_{*}j^{*}\bullet.\] The dimensional reduction isomorphism in cohomology [16, Theorem A.1] is the following natural isomorphism for \(\bullet\in D^{b}_{\operatorname{con}}(\mathcal{X})\): \[\eta_{!}\varphi_{f}[-1]\eta^{*}\bullet\stackrel{{\sim}}{{\to}}\eta_{!}\eta^{*}j_{*}j^{*}\bullet.\] By taking the Verdier dual of the above natural isomorphism, we obtain: \[\eta_{*}j^{\prime}_{*}j^{\prime!}\eta^{!}\bullet=\eta_{*}\eta^{!}j_{*}j^{!}\bullet\stackrel{{\sim}}{{\to}}\eta_{*}\varphi_{f}[-1]\eta^{!}\bullet, \tag{5.3}\] which alternatively can be described as applying the functor \(\eta_{*}\varphi_{f}[-1]\) to the natural transformation \(j^{\prime}_{*}j^{\prime!}\eta^{!}\bullet\to\eta^{!}\bullet\) for \(\bullet\in D^{b}_{\operatorname{con}}(\mathcal{X})\). By taking the cohomology of the two sides in (5.3), one obtains the _dimensional reduction_ isomorphism: \[j^{\prime}_{*}\eta^{\prime*}\colon H^{\operatorname{BM}}_{i}(\mathcal{X})\stackrel{{\sim}}{{\to}}H^{\operatorname{BM}}_{i+2r}(\mathcal{E}^{\vee}|_{\mathcal{X}})\stackrel{{\sim}}{{\to}}H^{2\dim\mathcal{E}^{\vee}-2r-i}(\mathcal{E}^{\vee},\varphi_{f}\mathbb{Q}[-1]). \tag{5.4}\] Further, the monodromy on the left hand side is trivial. The isomorphism (5.3) factors through: \[\eta_{*}j^{\prime}_{*}j^{\prime!}\eta^{!}\bullet\to\eta_{*}\iota_{*}\iota^{!}\eta^{!}\bullet=\eta_{*}\varphi_{f}[-1]\iota_{*}\iota^{!}\eta^{!}\bullet\to\eta_{*}\varphi_{f}[-1]\eta^{!}\bullet.\] Recall that \(\varphi_{f}^{\operatorname{inv}}\) is the cone of the map \(\alpha\colon\iota_{*}\mathbb{Q}_{\mathcal{E}^{\vee}_{0}}\to\iota_{*}\iota^{!}\mathbb{Q}_{\mathcal{E}^{\vee}}[2]\), see (4.8). From the diagram (4.7), the map \(\eta_{*}\iota_{*}\iota^{!}\mathbb{Q}_{\mathcal{E}^{\vee}}[2]\to\eta_{*}\varphi_{f}\mathbb{Q}_{\mathcal{E}^{\vee}}[1]\) factors through \(\eta_{*}\varphi_{f}^{\operatorname{inv}}\), and thus there are maps whose composition is an isomorphism: \[\beta^{\prime}\colon\eta_{*}j^{\prime}_{*}\omega_{\mathcal{E}^{\vee}|_{\mathcal{X}}}\to\eta_{*}\iota_{*}\iota^{!}\omega_{\mathcal{E}^{\vee}}\to\eta_{*}\varphi_{f}^{\operatorname{inv}}[2\dim\mathcal{E}^{\vee}-2]\stackrel{{\beta}}{{\to}}\eta_{*}\varphi_{f}\omega_{\mathcal{E}^{\vee}}[-1]. \tag{5.5}\] We let \(\beta^{\diamond}\colon\eta_{*}j^{\prime}_{*}\omega_{\mathcal{E}^{\vee}|_{\mathcal{X}}}\to\eta_{*}\iota_{*}\iota^{!}\omega_{\mathcal{E}^{\vee}}\to\eta_{*}\varphi_{f}^{\mathrm{inv}}[2\dim\mathcal{E}^{\vee}-2]\). The map \(\beta^{\diamond}\) provides a splitting of the map \(\beta\), thus the triangle (4.6) becomes the natural isomorphism: \[\eta_{*}\varphi_{f}^{\mathrm{inv}}[-2]\cong\eta_{*}\varphi_{f}[-2]\oplus\eta_{*}\varphi_{f}[-1].
\tag{5.6}\] By taking global sections of this isomorphism, there is a natural injective map: \[\gamma\colon H^{\bullet}(\mathcal{E}^{\vee},\varphi_{f}[-1])\hookrightarrow H^{\bullet}(\mathcal{E}^{\vee},\varphi_{f}^{\mathrm{inv}}[-2]). \tag{5.7}\] We also note that, by taking global sections of the complexes in (5.5), we obtain an isomorphism: \[\beta^{\prime}\colon H^{\mathrm{BM}}_{i}(\mathcal{K})\xrightarrow{\sim}H^{\mathrm{BM}}_{i+2r}(\mathcal{E}^{\vee}|_{\mathcal{X}})\to H^{\mathrm{BM}}_{i+2r}(\mathcal{E}^{\vee}_{0})\to H^{2\dim\mathcal{E}-2r-i}(\mathcal{E}^{\vee},\varphi_{f}^{\mathrm{inv}}[-2])\] \[\to H^{2\dim\mathcal{E}-2r-i}(\mathcal{E}^{\vee},\varphi_{f}\mathbb{Q}[-1])=H^{2\dim\mathcal{X}-i}(\mathcal{E}^{\vee},\varphi_{f}\mathbb{Q}[-1]). \tag{5.8}\] Note that the composition of the maps in the top row of (5.8) is given by \(\beta^{\diamond}\).
### The Chern character for graded matrix factorizations
The purpose of this subsection is to construct a Chern character map: \[\mathrm{ch}\colon K^{\mathrm{top}}_{i}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f))\to\widetilde{H}^{i}(\mathcal{E}^{\vee},\varphi_{f}[-1]) \tag{5.9}\] compatible with the Chern character map (3.8) for \(\mathcal{K}\) and the Chern character map (4.14) for \(\mathrm{MF}(\mathcal{E}^{\vee},f)\), see Proposition 5.1. We begin with a few preliminaries. Recall the forget-the-grading functor \[\Theta\colon\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f)\to\mathrm{MF}(\mathcal{E}^{\vee},f) \tag{5.10}\] and the equivalence between matrix factorizations and categories of singularities [10, 1]: \[\mathrm{MF}(\mathcal{E}^{\vee},f)\xrightarrow{\sim}D_{\mathrm{sg}}(\mathcal{E}^{\vee}_{0}).\] The following diagram commutes: By the isomorphism (2.6), we obtain a commutative diagram, and we define \(\Psi\) as the resulting map: (5.11) Recall the Chern character (4.17) and the splitting (5.8). Let \(N\) be the normal bundle of \(\mathcal{K}\hookrightarrow\mathcal{E}^{\vee}\) and let \(M\) be the normal bundle of \(\mathcal{E}^{\vee}_{0}\hookrightarrow\mathcal{E}^{\vee}\). Let \(\mathrm{ch}^{\prime}:=\mathrm{ch}\cdot\mathrm{td}(N)\) and \(\mathrm{ch}^{\prime\prime}:=\mathrm{ch}\cdot\mathrm{td}(M)\). Then the following diagram commutes by Proposition 3.5: (5.12) **Proposition 5.1**.: _There is an injective Chern character (5.9) such that, in the following commutative diagram, the horizontal maps are injective:_ (5.13) _and such that the following diagram commutes as well for the modified Chern character for the immersions of \(\mathcal{E}^{\vee}|_{\mathcal{X}}\) and \(\mathcal{E}^{\vee}_{0}\) in \(\mathcal{E}^{\vee}\):_ (5.14) Proof.: Define (5.9) such that the diagram (5.14) commutes. We have that \(\gamma\circ j_{*}^{\prime}\eta^{\prime*}=\beta^{\diamond}\) and \(\Theta\circ j_{*}^{\prime}\eta^{\prime*}=\Psi\), so the diagram (5.13) commutes as well. It remains to show that \(\Theta\) is injective. The map \(\beta^{\diamond}\) is injective by (5.8). Then \(\Psi\) is also injective by the commutativity of the diagram (5.12). By the factorization \(\Theta\circ j_{*}^{\prime}\eta^{\prime*}=\Psi\), the map \(\Theta\) is indeed injective.
We define an increasing filtration \(E_{\ell}K_{i}^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f))\subset K_{i}^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f))\) by \[E_{\ell}K_{i}^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f)):=\mathrm{ch}^{-1}\left(H^{\geqslant 2\dim\mathcal{E}^{\vee}-i-2\ell}(\mathcal{E}^{\vee},\varphi_{f}[-1])\right).\] We obtain cycle maps: \[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f))\xrightarrow{\sim}H^{2\dim\mathcal{E}^{\vee}-i-2\ell}(\mathcal{E}^{\vee},\varphi_{f}[-1]).\] The above cycle maps are isomorphisms by the isomorphism (3.11) together with the commutative diagram (5.14). The following is a corollary of Propositions 4.10, 4.5, and 5.1: **Proposition 5.2**.: _The following diagram commutes, where all the cycle maps are isomorphisms:_ \[\begin{CD}\mathrm{gr}_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X})@>{j_{*}^{\prime}\eta^{\prime*}}>{}>\mathrm{gr}_{\ell+r}K_{i}^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f))@>{\Theta}>{}>\mathrm{gr}_{\ell+r}K_{i}^{\mathrm{top}}(\mathrm{MF}(\mathcal{E}^{\vee},f))\\ @V{\mathrm{c}}V{}V@V{\mathrm{c}}V{}V@V{\mathrm{c}}V{}V\\ H_{i+2\ell}^{\mathrm{BM}}(\mathcal{X})@>{j_{*}^{\prime}\eta^{\prime*}}>{}>H^{2\dim\mathcal{X}-i-2\ell}(\mathcal{E}^{\vee},\varphi_{f}[-1])@>{\gamma}>{}>H^{2\dim\mathcal{X}-i-2\ell}(\mathcal{E}^{\vee},\varphi_{f}^{\mathrm{inv}}[-2]).\end{CD}\] Proof.: The modified Chern characters \(\operatorname{ch}^{\prime}\) and \(\operatorname{ch}^{\prime\prime}\) induce the cycle maps \(\operatorname{c}\) on the associated graded, see Proposition 3.2. **Remark 5.3**.: Recall that \(K_{\cdot}^{\operatorname{top}}(\operatorname{MF}(\mathcal{E}^{\vee},f))\) is a \(\Lambda=\mathbb{Q}[\epsilon]\)-module, where \(\epsilon\) has degree \(1\). We include the following computation of a \(\Lambda\)-module structure on the topological K-theory of a category of matrix factorizations, see also Proposition 4.13, but note that we do not use it later in the paper. **Proposition 5.4**.: _The forget-the-grading functor induces an isomorphism_ \[K_{\cdot}^{\operatorname{top}}(\operatorname{MF}^{\operatorname{gr}}(\mathcal{E}^{\vee},f))\otimes_{\mathbb{Q}}\Lambda\xrightarrow{\sim}K_{\cdot}^{\operatorname{top}}(\operatorname{MF}(\mathcal{E}^{\vee},f)) \tag{5.15}\] _of \(\Lambda\)-modules. Thus, if \(\mathbb{M}\) is admissible in \(D^{b}(\mathcal{E}^{\vee})\), there is an isomorphism of \(\Lambda\)-modules:_ \[K_{\cdot}^{\operatorname{top}}(\operatorname{MF}^{\operatorname{gr}}(\mathbb{M},f))\otimes_{\mathbb{Q}}\Lambda\xrightarrow{\sim}K_{\cdot}^{\operatorname{top}}(\operatorname{MF}(\mathbb{M},f)).\] Proof.: It is enough to prove (5.15). Let \(p:=\operatorname{Spec}\mathbb{C}\) and let \(r:=\operatorname{Spec}\Lambda\). Recall the Koszul equivalence (2.14). Using [11, Proposition 3.24] (also see [13, Proposition 3.9] and note that \(\operatorname{MF}(p,0)\simeq D^{b}(r)/\operatorname{Perf}(r)\)), the equivalence (2.14) induces an equivalence: \[\kappa^{\prime}\colon D^{b}(\mathcal{K})\otimes_{D^{b}(p)}\operatorname{MF}(p,0)\xrightarrow{\sim}\operatorname{MF}(\mathcal{E}^{\vee},f).\] Let \(\mathcal{K}_{0}:=\mathcal{K}\times r\) and let \(\pi\colon\mathcal{K}_{0}\to\mathcal{K}\) and \(t\colon r\to p\) be the natural projections. We have that \(\operatorname{MF}(p,0)\cong D^{b}(r)/t^{*}(D^{b}(p))\).
Then \[D^{b}(\mathcal{K})\otimes_{D^{b}(p)}\operatorname{MF}(p,0)\cong D^{b}(\mathcal{K}_{0})/\pi^{*}(D^{b}(\mathcal{K})).\] It suffices to show that the map \[\pi^{*}\colon G_{i}^{\operatorname{top}}(\mathcal{K})\to G_{i}^{\operatorname{top}}(\mathcal{K}_{0})\cong G_{i}^{\operatorname{top}}(\mathcal{K})\] is zero, which follows as in the proof of Proposition 4.1.
## 6. Topological K-theory of quasi-BPS categories for quivers with potential
In this section, we compute the topological K-theory of quasi-BPS categories for symmetric quivers satisfying Assumption 2.1 with a quasi-homogeneous potential in terms of BPS cohomology, see Theorem 6.2. The main step in the proof of Theorem 6.2 is the construction of the cycle map from topological K-theory of quasi-BPS categories to BPS cohomology, see Theorem 6.3 (which holds for all symmetric quivers). The conclusion then follows by comparing the decomposition of DT invariants into BPS invariants of Meinhardt-Reineke (which also holds for all symmetric quivers) and the semiorthogonal decomposition of the variety of framed representations from Theorem 2.8. We note that there is a version of Theorem 2.8 for all symmetric quivers, see [PTe]. However, under Assumption 2.1, all quasi-BPS categories appearing in the semiorthogonal decomposition are of the form \(\mathbb{S}(d)_{v}\), which is used crucially in the computation in Subsection 6.3. The construction of the cycle map from Theorem 6.3 holds for all quasi-BPS categories \(\mathbb{S}(d;\delta)\) for \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\). Theorem 6.3 is proved in Subsection 6.6 and is based on the fact that the weight conditions for complexes in \(\mathbb{S}(d;\delta)\) restrict the possible perverse degree of their image under the cycle map, see Proposition 6.15 and Corollaries 6.18 and 6.19. In view of the assumptions in Section 4, we assume throughout this section that the potential \(W\) of \(Q=(I,E)\) is _quasi-homogeneous_, that is, there exists a weight function \(w\colon E\to\mathbb{Z}_{\geqslant 0}\) such that \(W\) is homogeneous of positive weight with respect to the function \(w\).
### Statement of the main theorem
Before we state Theorem 6.2, we introduce notation related to quasi-BPS categories and BPS sheaves.
#### 6.1.1. Quasi-BPS categories
Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\). Consider the category \(\mathbb{M}(d;\delta)\) defined in (2.19) and recall the definition of quasi-BPS categories \(\mathbb{S}(d;\delta)\) from (2.21). For \(\lambda\) a cocharacter of \(T(d)\), recall the definition of \(n_{\lambda}\) from (2.20). For \(\lambda\) a dominant cocharacter of \(T(d)\), define \[\varepsilon_{\lambda,\delta}=\begin{cases}1,\text{ if }\frac{1}{2}n_{\lambda}+\langle\lambda,\delta\rangle\in\mathbb{Z},\\ 0,\text{ otherwise.}\end{cases} \tag{6.1}\] For a partition \(\mathbf{d}=(d_{i})_{i=1}^{k}\) of \(d\), let \(\varepsilon_{\mathbf{d},\delta}=1\) if \(\varepsilon_{\lambda,\delta}=1\) for all cocharacters \(\lambda\) with associated partition \(\mathbf{d}\) and let \(\varepsilon_{\mathbf{d},\delta}=0\) otherwise.
#### 6.1.2. Sets of partitions
For a dimension vector \(d=(d^{j})_{j\in I}\in\mathbb{N}^{I}\), recall that \(\underline{d}:=\sum_{j\in I}d^{j}\). Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\). Denote by \(S_{\delta}^{d}\) the set of partitions \(\mathbf{d}=(d_{i})_{i=1}^{k}\) of \(d\) such that \(\varepsilon_{\mathbf{d},\delta}=1\), where \(\lambda\) is any antidominant cocharacter with associated partition \((d_{i})_{i=1}^{k}\).
If \(\delta=v\tau_{d}\), we use the notation \(S_{v}^{d}\) instead of \(S_{v\tau_{d}}^{d}\). Consider \((d_{i})_{i=1}^{k}\in S_{\delta}^{d}\) and let \(\lambda\) be an antidominant cocharacter with associated partition \((d_{i})_{i=1}^{k}\). Define \(\theta_{i}\in\frac{1}{2}M(d_{i})\) with \[\sum_{i=1}^{k}\theta_{i}=-\frac{1}{2}R(d)^{\lambda>0}+\frac{1}{2}\mathfrak{g}(d)^{\lambda>0}.\] Let \(\delta_{d_{i}}\in M(d_{i})_{\mathbb{R}}\) such that \(\sum_{i=1}^{k}\delta_{d_{i}}=\delta\). Then the Hall product induces a functor \[m=m_{\lambda}\colon\bigotimes_{i=1}^{k}\mathbb{M}(d_{i};\theta_{i}+\delta_{d_{i}})\to\mathbb{M}(d;\delta)\] and similarly for categories of matrix factorizations, see [10, Propositions 3.5 and 3.6] (in loc. cit., with the notation used there, [10, Proposition 3.6] is stated for \(r>\frac{1}{2}\), but for \(r=\frac{1}{2}\) it is still true that \(\chi-\sigma_{I}\in\frac{1}{2}\mathbb{W}\)). If we assume that \(Q\) satisfies Assumption 2.1, then \(\theta_{i}\in M(d_{i})^{W_{d_{i}}}\), and so there are functors, see Remark 2.6: \[\bigotimes_{i=1}^{k}\mathbb{M}(d_{i};\delta_{d_{i}})\overset{\sim}{\to}\bigotimes_{i=1}^{k}\mathbb{M}(d_{i};\theta_{i}+\delta_{d_{i}})\overset{m}{\to}\mathbb{M}(d;\delta_{d}). \tag{6.2}\] Assume that \(\delta=v\tau_{d}\) and write \(\delta_{i}=v_{i}\tau_{d_{i}}\) for \(1\leqslant i\leqslant k\). Then \[\frac{v}{\underline{d}}=\frac{v_{i}}{\underline{d}_{i}} \tag{6.3}\] for any \(1\leqslant i\leqslant k\). If we assume that \(Q\) satisfies Assumption 2.1, the Hall product then induces functors, see (6.2): \[\bigotimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}\overset{\sim}{\to}\bigotimes_{i=1}^{k}\mathbb{M}(d_{i};\theta_{i}+\delta_{d_{i}})\overset{m}{\to}\mathbb{M}(d)_{v}.\] We end this subsection with the following computation: **Proposition 6.1**.: _Let \(Q=(I,E)\) be a quiver satisfying Assumption 2.1. Let \((d,v)\in\mathbb{N}^{I}\times\mathbb{Z}\). The set \(S_{v}^{d}\) consists of the partitions \(\textbf{d}=(d_{i})_{i=1}^{k}\) of \(d\) such that_ \[\underline{d}\mid v\cdot\gcd(\underline{d}_{1},\ldots,\underline{d}_{k}).\] _In particular, if \(\gcd(v,\underline{d})=1\), then \(S_{v}^{d}\) contains only the one-term partition of \(d\)._ Proof.: Let \(d=(d^{a})_{a\in I}\in\mathbb{N}^{I}\). Note that \(n_{\lambda}=\langle\lambda,\mathbb{L}_{\mathcal{X}(d)}^{\lambda>0}|_{0}\rangle\in 2\mathbb{Z}\) because \(Q\) satisfies Assumption 2.1. Then \(\varepsilon_{\textbf{d},v\tau_{d}}=1\) if and only if \(\langle\lambda,v\tau_{d}\rangle\in\mathbb{Z}\) for all cocharacters \(\lambda\) with associated partition \(\textbf{d}\). Write \(\lambda=(\lambda^{a})_{a\in I}\), where \(\lambda^{a}\colon\mathbb{C}^{*}\to T(d^{a})\) is a cocharacter \[\lambda(t)=(t^{m_{1}},\ldots,t^{m_{1}},t^{m_{2}},\ldots,t^{m_{2}},\ldots,t^{m_{k}}),\] where \(m_{i}\) appears \(d_{i}^{a}\)-times, and \(m_{i}\neq m_{j}\) for \(1\leqslant i\neq j\leqslant k\). Then the condition \(\langle\lambda,v\tau_{d}\rangle\in\mathbb{Z}\) is equivalent to the condition that \[v/\underline{d}\cdot\sum_{i=1}^{k}m_{i}\underline{d}_{i}\in\mathbb{Z}\] for all tuples of pairwise distinct integers \((m_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\), which implies the desired conclusion.
#### 6.1.3. BPS sheaves and cohomologies
Let \(Q=(I,E)\) be a symmetric quiver, let \(W\) be a potential of \(Q\), and let \(d\in\mathbb{N}^{I}\).
Consider the stack of dimension \(d\) representations of \(Q\) and its good moduli space: \[\pi_{d}\colon\mathcal{X}(d):=R(d)/G(d)\to X(d):=R(d)/\!\!/G(d).\] We write \(\mathrm{IC}:=\mathrm{IC}_{\mathcal{X}(d)}=\mathbb{Q}_{\mathcal{X}(d)}[\dim\mathcal{X}(d)]\), and we may drop \(\mathrm{Tr}\,W\) from the notation of the vanishing cycle functor. Recall that \(\varphi:=\varphi_{\mathrm{Tr}\,W}\mathbb{Q}_{\mathcal{X}(d)}\). Following [1], define the BPS sheaf \[\mathcal{BPS}_{d}:=\begin{cases}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{X(d)}[-1],\text{ if }X(d)^{\mathrm{st}}\neq\emptyset,\\ 0,\text{ if }X(d)^{\mathrm{st}}=\emptyset.\end{cases}\] Note that \(\mathcal{BPS}_{d}\in\mathrm{Perv}(X(d))\). Consider a partition \(A=(d_{i})_{i=1}^{k}\) of \(d\). Let \(\ell(A):=k\). Assume the set \(\{d_{1},\ldots,d_{k}\}=\{e_{1},\ldots,e_{s}\}\) has cardinality \(s\) and that, for each \(1\leqslant i\leqslant s\), there are \(m_{i}\) elements in \(\{d_{1},\ldots,d_{k}\}\) equal to \(e_{i}\). Define the addition maps \(\oplus_{i}\colon X(e_{i})^{\times m_{i}}\to X(m_{i}e_{i})\) for \(1\leqslant i\leqslant s\) and \(\oplus^{\prime}\colon\times_{i=1}^{s}X(m_{i}e_{i})\to X(d)\), which are finite. Define the sheaves: \[\mathrm{Sym}^{m_{i}}\big{(}\mathcal{BPS}_{e_{i}}\big{)}:=\oplus_{i,*}\left(\mathcal{BPS}_{e_{i}}^{\boxtimes m_{i}}\right)^{\mathfrak{S}_{m_{i}}}\in\mathrm{Perv}(X(m_{i}e_{i})), \tag{6.4}\] \[\mathcal{BPS}_{A}:=\oplus_{*}^{\prime}\left(\boxtimes_{i=1}^{s}\mathrm{Sym}^{m_{i}}(\mathcal{BPS}_{e_{i}})\right)\in\mathrm{Perv}(X(d)).\] Alternatively, by the Thom-Sebastiani theorem, the sheaf \(\mathcal{BPS}_{A}\) has the following description. Let \(\mathcal{BPS}_{A}^{0}\) be the sheaf defined above for \(W=0\). Then \(\mathcal{BPS}_{A}=\varphi_{\mathrm{Tr}\,W}\mathcal{BPS}_{A}^{0}[-1]\). Define the complexes \[\mathcal{BPS}_{d,\delta}:=\bigoplus_{A\in S_{\delta}^{d}}\mathcal{BPS}_{A}[-\ell(A)]\in D_{\mathrm{con}}^{b}(X(d)),\] \[\mathcal{BPS}_{d,v}:=\mathcal{BPS}_{d,v\tau_{d}}\in D_{\mathrm{con}}^{b}(X(d)). \tag{6.5}\] As we will see in (6.18), the complexes \(\mathcal{BPS}_{A}\) and \(\mathcal{BPS}_{d,\delta}\) are direct summands of \(\pi_{*}\varphi\mathrm{IC}[-1]\) preserved by \(1-\mathrm{T}\). Define \(\mathcal{BPS}^{\mathrm{inv}}_{A},\mathcal{BPS}^{\mathrm{inv}}_{d,\delta}\in D^{b}_{\mathrm{con}}(X(d))\) by the exact triangles: \[\mathcal{BPS}^{\mathrm{inv}}_{A}[-1]\to\mathcal{BPS}_{A}\xrightarrow{1-\mathrm{T}}\mathcal{BPS}_{A}\to\mathcal{BPS}^{\mathrm{inv}}_{A},\] \[\mathcal{BPS}^{\mathrm{inv}}_{d,\delta}[-1]\to\mathcal{BPS}_{d,\delta}\xrightarrow{1-\mathrm{T}}\mathcal{BPS}_{d,\delta}\to\mathcal{BPS}^{\mathrm{inv}}_{d,\delta}.\]
#### 6.1.4. Statement of the main theorem
Let \(Q=(I,E)\) be a symmetric quiver and let \(W\) be a quasi-homogeneous potential. Consider the Chern character map (4.14): \[\mathrm{ch}\colon K^{\mathrm{top}}_{i}(\mathbb{S}(d)_{v})\hookrightarrow K^{\mathrm{top}}_{i}(\mathrm{MF}(\mathcal{X}(d),\mathrm{Tr}\,W))\to\widetilde{H}^{i}(\mathcal{X}(d),\varphi^{\mathrm{inv}}_{\mathrm{Tr}\,W}).
\tag{6.6}\] Recall (4.15) and define the filtration: \[E_{\ell}K^{\mathrm{top}}_{i}(\mathbb{S}(d)_{v}):=K^{\mathrm{top}}_{i}(\mathbb{ S}(d)_{v})\cap E_{\ell}K^{\mathrm{top}}_{i}(\mathrm{MF}(\mathcal{X}(d),\mathrm{ Tr}\,W))\subset K^{\mathrm{top}}_{i}(\mathbb{S}(d)_{v}).\] There is an injective cycle map on the associated graded pieces: \[\mathrm{c}\colon\mathrm{gr}_{\ell}K^{\mathrm{top}}_{i}(\mathbb{S} (d)_{v}) \to H^{2\dim\mathcal{X}(d)-2\ell-i}(\mathcal{X}(d),\varphi^{\mathrm{ inv}}_{\mathrm{Tr}\,W}[-2])\] \[\xrightarrow{\sim}H^{\dim\mathcal{X}(d)-2\ell-i}(\mathcal{X}(d), \varphi^{\mathrm{inv}}_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathcal{X}(d)}[-2]), \tag{6.7}\] where we used \(\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathcal{X}(d)}=\varphi_{\mathrm{Tr}\,W}[ \dim\mathcal{X}(d)]\) for computing the cohomological degree. The following is the main result of this section: **Theorem 6.2**.: _Assume the quiver \(Q\) satisfies Assumption 2.1 and let \(W\) be a quasi-homogeneous potential of \(Q\). Then the cycle map (6.7) induces an isomorphisms for \(i,\ell\in\mathbb{Z}\):_ \[\mathrm{c}\colon\mathrm{gr}_{\ell}K^{\mathrm{top}}_{i}\left(\mathbb{S}(d)_{v} \right)\xrightarrow{\sim}H^{\dim\mathcal{X}(d)-2\ell-i}(X(d),\mathcal{BPS}^{ \mathrm{inv}}_{d,v}[-1]). \tag{6.8}\] The main part of proving Theorem 6.2 is the construction of a cycle map from the topological K-theory of quasi-BPS categories to BPS cohomology, which applies to all symmetric quivers \(Q\). **Theorem 6.3**.: _Let \(Q\) be an arbitrary symmetric quiver and let \(W\) be a quasi-homogeneous potential of \(Q\). Let \(d\in\mathbb{N}^{I}\), \(\delta\in M(d)^{W_{d}}_{\mathbb{R}}\), and \(i,\ell\in\mathbb{Z}\). The cycle map (6.7) induces a map:_ \[\mathrm{c}\colon\mathrm{gr}_{\ell}K^{\mathrm{top}}_{i}\left(\mathbb{S}(d; \delta)\right)\to H^{\dim\mathcal{X}(d)-2\ell-i}\left(X(d),\mathcal{BPS}^{ \mathrm{inv}}_{d,\delta}[-1]\right). \tag{6.9}\] We mention the following numerical corollary of Theorems 6.2 and 6.3. **Corollary 6.4**.: _Let \(Q\) be an arbitrary symmetric quiver and let \(W\) be a quasi-homogeneous potential. Let \((d,v)\in\mathbb{N}^{I}\times\mathbb{Z}\) and let \(i\in\mathbb{Z}\). Then:_ \[\dim_{\mathbb{Q}}K^{\mathrm{top}}_{i}(\mathbb{S}(d)_{v})\leqslant\dim_{ \mathbb{Q}}H^{\cdot}(X(d),\mathcal{BPS}_{d,v})^{\mathrm{inv}}. \tag{6.10}\] _If \(Q\) satisfies Assumption 2.1, then equality holds in (6.10)._ When \(\gcd(\underline{d},v)=1\) and \(Q\) satisfies Assumption 2.1, we regard \(\mathbb{S}(d)_{v}\) as a categorification of the monodromy invariant BPS cohomology of \((Q,W)\). Before we prove Corollary 6.4, note the following: **Proposition 6.5**.: _Let \(Q\) be a quiver satisfying Assumption 2.1. The Chern character map (6.6) is injective._ Proof.: It follows from Proposition 4.10 and Theorem 2.8. Proof of Corollary 6.4 from Theorem 6.3.: Note that there is a (non-canonical) isomorphism \[H^{\bullet}(X(d),\mathcal{BPS}_{d,v})^{\mathrm{inv}}\cong H^{\bullet}(X(d), \mathcal{BPS}_{d,v})_{\mathrm{inv}}. \tag{6.11}\] The cycle map (6.9) is injective because (6.8) is injective. Then, by Theorem 6.3, we have that: \[\dim_{\mathbb{Q}}\operatorname{gr}.K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v}) \leqslant\dim_{\mathbb{Q}}H^{\cdot}(X(d),\mathcal{BPS}_{d,v})^{\mathrm{inv}}. \tag{6.12}\] If \(Q\) satisfies Assumption 2.1, then (6.12) is an equality. 
It suffices to show that \(\dim_{\mathbb{Q}}K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})=\dim_{\mathbb{Q}} \operatorname{gr}.K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})\), equivalently that (6.6) is injective, which is Proposition 6.5. Corollary 1.5 follows easily from Theorem 6.2. Proof of Corollary 1.5.: Note that Proposition 6.1 implies that \(\mathbf{d}=(d_{i})_{i=1}^{k}\in S_{v}^{d}\) if and only if \(\underline{d}/\gcd(\underline{d},v)\) divides \(\underline{d}_{i}\) for \(1\leqslant i\leqslant k\). Then \(S_{v}^{d}=S_{v^{\prime}}^{d}\) for \(v,v^{\prime}\in\mathbb{Z}\) such that \(\gcd(\underline{d},v)=\gcd(\underline{d},v^{\prime})\). The statement then follows from Theorem 6.2. In Section 7, we compute the topological K-theory of quasi-BPS categories of preprojective algebras of quivers satisfying Assumption 2.2 using Theorem 6.2, see Theorem 7.6. In [PTd], we further use Theorem 7.6 to compute the topological K-theory of quasi-BPS categories of K3 surfaces. In particular, we obtain categorifications of the BPS cohomology of a large class of preprojective algebras and of K3 surfaces. We end this subsection by discussing the zero potential case of Theorem 6.2. Then \(\mathcal{BPS}_{d}=\mathrm{IC}_{X(d)}\). Denote by \(\mathrm{IH}^{\bullet}(X(d)):=H^{\bullet}\big{(}X(d),\mathrm{IC}_{X(d)}\big{)}\). Note that \(H^{\mathrm{odd}}(\mathfrak{X}(d))=H^{\mathrm{odd}}(\mathfrak{X}(d),\mathrm{IC }_{\mathfrak{X}(d)})=0\). We then have that \(\mathrm{IH}^{\mathrm{even}}(X(d))=0\) because \(\mathrm{IC}_{X(d)}[-1]\) is a direct summand of \(R\pi_{*}\mathrm{IC}_{\mathfrak{X}(d)}\), see (6.16); alternatively, the vanishing \(\mathrm{IH}^{\mathrm{even}}(X(d))=0\) follows from Kirwan surjectivity. By Theorem 6.2 and Proposition 4.13, we obtain the following: **Theorem 6.6**.: _Let \(Q\) be a quiver satisfying Assumption 2.1, let \(d\in\mathbb{N}^{I}\), and let \(v\in\mathbb{Z}\) such that \(\gcd\,(\underline{d},v)=1\). For \(\ell\in\mathbb{Z}\), the cycle map induces an isomorphism:_ \[\mathrm{c}\colon\operatorname{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{M}(d)_{v })\xrightarrow{\sim}\mathrm{IH}^{\dim\mathfrak{X}(d)-2\ell-1}(X(d)).\] We note a consequence of Corollary 6.4, alternatively a numerical corollary of Theorem 6.6. **Corollary 6.7**.: _Let \(Q\) be a quiver satisfying Assumption 2.1, let \(d\in\mathbb{N}^{I}\), and let \(v\in\mathbb{Z}\) such that \(\gcd\,(\underline{d},v)=1\). Then_ \[\dim_{\mathbb{Q}}K_{0}^{\mathrm{top}}(\mathbb{M}(d)_{v})=\dim_{\mathbb{Q}} \mathrm{IH}^{\cdot}(X(d))\] _for any \(i\in\mathbb{Z}\)._ For \((d,v)\in\mathbb{N}^{I}\times\mathbb{Z}\) and \(Q\) as in the statement of Corollary (6.7), we regard \(\mathbb{M}(d)_{v}\) as a categorification of the intersection cohomology of \(X(d)\). Note that, in general, \(X(d)\) is a singular scheme. ### The decomposition theorem Let \(\alpha\in\mathbb{N}\) and recall the construction of framed quivers \(Q^{\alpha f}\) from Subsection 2.14. We review the explicit computation of summands in the BBDG decomposition theorem [1] for the pushforwards of the constant sheaves along the maps: \[\pi_{\alpha f,d}\colon\mathcal{X}^{\alpha f}(d)^{\mathrm{ss}}\to X(d),\ \pi_{d}\colon\mathcal{X}(d)\to X(d)\] due to Meinhardt-Reineke [11] and Davison-Meinhardt [15]. The maps \(\pi_{\alpha f,d}\) "approximate" the map \(\pi_{d}\), see [15, Subsection 4.1]. The computation of \(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\) is deduced from the computation of \(\pi_{\alpha f,d*}\mathbb{Q}_{\mathcal{X}^{\alpha f}(d)^{\mathrm{ss}}}[\dim \mathcal{X}(d)]\). 
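Before introducing the relevant notation, we record the simplest instance of these decompositions as an illustration (a sanity check only, not used elsewhere; the choice of quiver below is made for concreteness and the verification is a direct computation). Let \(Q\) be the quiver with one vertex and \(g\geqslant 1\) loops, let \(W=0\), and let \(d=1\). Then \(\mathcal{X}(1)=\mathbb{C}^{g}\times B\mathbb{C}^{*}\), \(X(1)=\mathbb{C}^{g}\), and \(\pi_{1}\) is the projection. Using \(H^{\bullet}(B\mathbb{C}^{*},\mathbb{Q})=\mathbb{Q}[u]\) with \(u\) of degree \(2\) and \(\mathrm{IC}_{\mathcal{X}(1)}=\mathbb{Q}_{\mathcal{X}(1)}[g-1]\), we obtain \[\pi_{1*}\mathrm{IC}_{\mathcal{X}(1)}\cong\bigoplus_{a\geqslant 0}\mathrm{IC}_{X(1)}[-2a-1].\] This agrees with the decomposition (6.16) below: in the notation introduced there, the contributing tuples \(A\) have a single entry \(e_{1}=1\) with exactly one nonzero multiplicity \(m_{1,a}=1\), and the corresponding summand is \(\mathrm{P}_{A}=\mathrm{IC}_{X(1)}[-2a-1]\); moreover \(\mathcal{BPS}_{1}=\mathrm{IC}_{X(1)}\) since \(W=0\) and \(X(1)^{\mathrm{st}}\neq\emptyset\).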
We introduce some constructible sheaves on \(X(d)\). Let \(A\) be a tuplet \((e_{i},m_{i,a})\) for \(1\leqslant i\leqslant s\) and for \(a\geqslant 0\), with \((e_{i})_{i=1}^{s}\in\mathbb{Z}_{\geqslant 1}^{s}\) pairwise distinct and \(m_{i,a}\geqslant 0\) such that \(\sum_{i=1}^{s}\sum_{a\geqslant 0}e_{i}m_{i,a}=d\). Let \(\mathcal{P}\) be the set of all such tuplets \(A\) and let \(\mathcal{P}_{\alpha}\subset\mathcal{P}\) be the subset of such tuplets with \(m_{i,a}=0\) for \(a\geqslant\alpha e_{i}\). Note that each \(A\) has a corresponding partition with terms \(e_{i}\) with multiplicity \(\sum_{a\geqslant 0}m_{i,a}\) for \(1\leqslant i\leqslant s\). Consider the addition maps: \[\oplus_{i,a}\colon X(e_{i})^{\times m_{i,a}}\to X(m_{i,a}e_{i}),\ \oplus^{\prime}\colon\ \times_{i=1}^{s}\times_{a\geqslant 0}X(m_{i,a}e_{i}) \to X(d). \tag{6.13}\] Define the constructible complexes: \[\mathrm{Sym}^{m_{i,a}}\left(\mathrm{IC}_{X(e_{i})}[-2a-1]\right) :=\oplus_{i,a}\left(\left(\mathrm{IC}_{X(e_{i})}[-2a-1]\right)^{ \boxtimes m_{i,a}}\right)^{\mathfrak{S}m_{i,a}},\] \[\mathrm{P}_{A} :=\oplus_{*}^{\prime}\left(\boxtimes_{1\leqslant i\leqslant s,a \geqslant 0}\mathrm{Sym}^{m_{i,a}}\left(\mathrm{IC}_{X(e_{i})}[-2a-1]\right) \right).\] Then \(\mathrm{P}_{A}\) is supported on the image of \(\oplus^{\prime}\) and is a shifted perverse sheaf of degree \[p_{A}:=\sum_{i=1}^{k}\sum_{a\geqslant 0}m_{i,a}(2a+1),\] meaning that \(\mathrm{P}_{A}[p_{A}]\in\mathrm{Perv}(X(d))\). Define analogously \[\mathrm{Q}_{A}:=\oplus_{*}^{\prime}\left(\boxtimes_{1\leqslant i\leqslant s,a \geqslant 0}\mathrm{Sym}^{m_{i,a}}\left(\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{X(e _{i})}[-2a-2]\right)\right). \tag{6.14}\] Then one can show, using the Thom-Sebastiani theorem, that \[\mathrm{Q}_{A}=\varphi_{\mathrm{Tr}\,W}\mathrm{P}_{A}[-1].\] Let \(\alpha\) be an even positive natural number. The following explicit form of the BBDG decomposition theorem for \(\pi_{\alpha f,d}\) was determined by Meinhardt-Reineke [11, Proposition 4.3]: \[\pi_{\alpha f,d*}\left(\mathbb{Q}_{\mathcal{X}^{\alpha f}(d)^{\mathrm{ss}}}[ \dim\mathcal{X}(d)]\right)=\bigoplus_{A\in\mathcal{P}_{\alpha}}P_{A}. \tag{6.15}\] The result in loc. cit. is stated as an equality in the Grothendieck group of constructible sheaves, but the above stronger statement holds by the argument in [15, Proof of Theorem 4.10]. Using the above, one can obtain, see [15, Theorem C], the following decomposition: \[\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}=\bigoplus_{A\in\mathcal{P}}\mathrm{P}_{A}. \tag{6.16}\] The proper pushforward commutes with the vanishing cycle functor, so applying the vanishing cycle functor to (6.15) one obtains the following decomposition, which is also called the DT/ BPS wall-crossing: \[\pi_{\alpha f,d*}\varphi_{\mathrm{Tr}\,W}\left(\mathbb{Q}_{\mathcal{X}^{ \alpha f}(d)^{\mathrm{ss}}}[\dim\mathcal{X}(d)-1]\right)=\bigoplus_{A\in \mathcal{P}_{\alpha}}Q_{A}. \tag{6.17}\] The map \(\pi_{d}\) can be approximated by the proper maps \(\pi_{\alpha f,d}\), thus \(\pi_{d*}\) also commutes with the vanishing cycle functor. From (6.16), we obtain: \[\pi_{d*}\varphi_{\operatorname{Tr}W}\mathrm{IC}_{\mathfrak{X}(d)}[-1]=\bigoplus _{A\in\mathcal{P}}Q_{A}. \tag{6.18}\] The summands in all the above decompositions are induced via the Hall product. We now state a corollary of (6.17). **Proposition 6.8**.: _Let \(\alpha\) be an even positive integer and let \(i\in\mathbb{Z}\). 
Then there is an isomorphism of \(\mathbb{N}^{I}\times\mathbb{N}\)-graded vector spaces, where the second grading is the cohomological grading:_ \[\bigoplus_{d\in\mathbb{N}^{I}}H^{\bullet}\left(\mathfrak{X}^{ \alpha f}(d)^{\operatorname{ss}},\varphi\left[\dim\mathfrak{X}(d)-1\right] \right)^{\operatorname{inv}}\cong\\ \left(\operatorname{Sym}\left(\bigoplus_{d\in\mathbb{N}^{I}}H^{ \bullet}\left(X(d),\mathcal{BPS}_{d}[-1]\right)\otimes H^{\bullet}(\mathbb{P} ^{\alpha d-1})\right)\right)^{\operatorname{inv}}. \tag{6.19}\] _By forgetting the cohomological grading, there is an isomorphism of \(\mathbb{N}^{I}\)-graded vector spaces:_ \[\bigoplus_{d\in\mathbb{N}^{I}}H^{\bullet}\left(\mathfrak{X}^{\alpha f}(d)^{ \operatorname{ss}},\varphi\right)^{\operatorname{inv}}\cong\left(\operatorname {Sym}\left(\bigoplus_{d\in\mathbb{N}^{I}}H^{\cdot}\left(X(d),\mathcal{BPS}_{ d}\right)^{\oplus\alpha d}\right)\right)^{\operatorname{inv}}.\] Proof.: By taking global sections of the two sides of (6.17), we obtain an isomorphism: \[\bigoplus_{d\in\mathbb{N}^{I}}H^{\bullet}\left(\mathfrak{X}^{ \alpha f}(d)^{\operatorname{ss}},\varphi\left[\dim\mathfrak{X}(d)-1\right] \right)\cong\\ \operatorname{Sym}\left(\bigoplus_{d\in\mathbb{N}^{I}}H^{\bullet }\left(X(d),\mathcal{BPS}_{d}[-1]\right)\otimes H^{\bullet}(\mathbb{P}^{ \alpha d-1})\right). \tag{6.20}\] The isomorphism (6.19) follows by taking the monodromy invariant parts on the two sides of the isomorphism (6.20). ### Semiorthogonal decompositions and the BBDG decomposition theorem In this section, we prove Theorem 6.2 assuming Theorem 6.3. The proof follows from a comparison of the pieces in the semiorthogonal decomposition from Theorem 2.8 with the summands of the DT/ BPS wall-crossing (6.17). Actually, the proof is based on a comparison of dimensions of certain vector spaces. In the rest of this subsection, we will use certain non-canonical maps, but they suffice for comparing dimensions of vector spaces. We rewrite the Chern character isomorphism (4.17) for \(\mathfrak{X}\) a smooth variety with a regular function \(f\colon\mathfrak{X}\to\mathbb{C}\). Observe that there is a (non-canonical) isomorphism \(H^{i}(\mathfrak{X},\varphi_{f})^{\operatorname{inv}}\cong H^{i}(\mathfrak{X}, \varphi_{f})_{\operatorname{inv}}\) of \(\mathbb{Q}\)-vector spaces. Rewrite (4.17) as the following (non-canonical) isomorphism of \(\mathbb{Q}\)-vector spaces for every \(i\in\mathbb{Z}\): \[\operatorname{ch}\colon K_{i}^{\operatorname{sg}}(\mathfrak{X}_{0})\xrightarrow {\sim}H^{\cdot}(\mathfrak{X},\varphi_{f})^{\operatorname{inv}}. \tag{6.21}\] Recall the notations \(\operatorname{gr}.K_{i}^{\operatorname{top}}:=\bigoplus_{a\in\mathbb{Z}} \operatorname{gr}_{a}K_{i}^{\operatorname{top}}\) and \(H^{\cdot}:=\bigoplus_{a\in\mathbb{Z}}H^{a}.\) Given a vector space \(V\) with a linear map \(T\colon V\to V\), we denote by \(V^{\operatorname{inv}}\) the kernel of \((1-T)\). For a set \(A\) of pairs \((V_{a},T_{a})_{a\in A}\) we denote by \((\otimes_{a\in A}V_{a})^{\operatorname{inv}}\) the kernel of \(1-\otimes_{a\in A}T_{a}\). Note that \(\otimes_{a\in A}V_{a}^{\rm inv}\subset(\otimes_{a\in A}V_{a})^{\rm inv}\). The same notation is also used for symmetric products. We will apply the above notation when \(T_{a}\) are monodromy operators on vanishing cycle cohomologies. Note the following corollary of Theorem 6.3, which follows because the cycle map (6.8) is injective: **Corollary 6.9**.: _Assume Theorem 6.3 holds. 
Then the cycle map (6.9) is injective._ Proof of Theorem 6.2 assuming Theorem 6.3.: Let \(\alpha\) be an even positive integer and fix \(i\in\mathbb{Z}\). By Theorem 2.8, there is a semiorthogonal decomposition: \[\operatorname{MF}\left(\mathcal{X}^{\alpha f}(d)^{\rm ss},\operatorname{Tr}W \right)=\left\langle\bigotimes_{j=1}^{k}\mathbb{S}(d_{j})_{v_{j}}\right\rangle,\] where the right hand side is after all partitions \(\sum_{j=1}^{k}d_{j}=d\) and all weights \(v_{j}\in\mathbb{Z}\) with \(0\leqslant v_{1}/\underline{d}_{1}<\ldots<v_{k}/\underline{d}_{k}<\alpha\). There is thus an isomorphism of \(\mathbb{N}^{I}\)-graded vector spaces: \[\bigoplus_{d\in\mathbb{N}^{I}}\operatorname{gr}.K_{i}^{\rm top}\left( \operatorname{MF}\left(\mathcal{X}^{\alpha f}(d)^{\rm ss},\operatorname{Tr}W \right)\right)\cong\bigoplus_{0\leqslant v_{1}/\underline{d}_{1}<\ldots<v_{k}/ \underline{d}_{k}<\alpha}\operatorname{gr}.K_{i}^{\rm top}\left(\bigotimes_{j= 1}^{k}\mathbb{S}(d_{j})_{v_{j}}\right).\] Recall the isomorphism of \(\mathbb{Q}\)-vector spaces (6.11). By Corollary 6.9, there is an injective (non-canonical) map: \[\operatorname{gr}.K_{i}^{\rm top}(\mathbb{S}(d)_{v})\hookrightarrow H^{\cdot} (X(d),\mathcal{BPS}_{d,v})^{\rm inv}. \tag{6.22}\] Then we have injective maps \[\bigoplus_{0\leqslant v_{1}/\underline{d}_{1}<\ldots<v_{k}/ \underline{d}_{k}<\alpha}\operatorname{gr}.K_{i}^{\rm top}\left(\bigotimes_{j =1}^{k}\mathbb{S}(d_{j})_{v_{j}}\right)\] \[\hookrightarrow\bigoplus_{0\leqslant v_{1}/\underline{d}_{1}< \ldots<v_{k}/\underline{d}_{k}<\alpha}H^{\cdot}\left(\times_{j=1}^{k}X(d_{j}),\boxtimes_{j=1}^{k}\mathcal{BPS}_{d,v}\right)^{\rm inv}\] \[\hookrightarrow\left(\bigotimes_{\begin{subarray}{c}\mu\in \mathbb{Q}\\ 0\leqslant\mu<\alpha\end{subarray}}\left(\operatorname{Sym}\left(\bigoplus_{ \begin{subarray}{c}d\in\mathbb{N}^{I}\\ \exists v\operatorname{s.t.}\mu=v/\underline{d}\end{subarray}}H^{\cdot}(X(d), \mathcal{BPS}_{d})\right)\right)\right)^{\rm inv}\] \[\xrightarrow{\sim}\left(\operatorname{Sym}\left(\bigoplus_{d\in \mathbb{N}^{I}}H^{\cdot}(X(d),\mathcal{BPS}_{d})^{\oplus\underline{d}}\right) \right)^{\rm inv}\xrightarrow{\sim}\bigoplus_{d\in\mathbb{N}^{I}}H^{\cdot} \left(\mathcal{X}^{\alpha f}(d)^{\rm ss},\varphi\right)^{\rm inv}\] where the first inclusion follows from Corollary 6.9 (applied to disjoint union of \(k\)-copies of \(Q\), the \(k=1\) case is (6.22)), the second inclusion follows from the definition of \(\mathcal{BPS}_{d,v}\), Proposition 6.1, and the fact that the Thom-Sebastiani isomorphism is natural with respect to the monodromy actions, the first isomorphism follows from a combinatorial observation, and the second isomorphism follows from Proposition 6.8. We thus obtain an injective map of \(\mathbb{N}^{I}\)-graded vector spaces: \[\bigoplus_{d\in\mathbb{N}^{I}}\operatorname{gr}.K_{i}^{\rm top}\left( \operatorname{MF}^{\rm gr}\left(\mathcal{X}^{\alpha f}(d)^{\rm ss},\operatorname {Tr}W\right)\right)\hookrightarrow\bigoplus_{d\in\mathbb{N}^{I}}H^{\cdot} \left(\mathcal{X}^{\alpha f}(d)^{\rm ss},\varphi\right)^{\rm inv}.\] By the isomorphism (6.21) together with the exact sequence (4.10), the \(\mathbb{N}^{I}\)-graded piece of both sides of the above map has the same (finite) dimension, hence the map above is an isomorphism. The map (6.22) is then also an isomorphism, thus also the maps (6.8) are isomorphisms. It thus remains to prove Theorem 6.3. In Subsection 6.10, we reduce the proof for a general symmetric quiver to that of a quiver with at least two loops at every vertex. 
In Subsection 6.5, we prove a statement restricting the image under the cycle map of objects in a quasi-BPS category. In Subsection 6.6, we combine the above restriction with the decomposition theorems (6.16) to prove Theorem 6.3.
### Reduction to quivers with enough edges
Consider an arbitrary symmetric quiver with potential \((Q,W)\). Let \(Q=(I,E)\). For \(i\in I\), let \(\omega_{i},\omega_{i}^{\prime}\) be two loops at \(i\). Let \(E^{\mathfrak{Z}}:=E\sqcup\{\omega_{i},\omega_{i}^{\prime}\mid i\in I\}\) and consider the quadratic potential \(W^{q}:=\sum_{i\in I}\omega_{i}\omega_{i}^{\prime}\). Define the quiver with potential: \[Q^{\mathfrak{Z}}:=(I,E^{\mathfrak{Z}}),\,W^{\mathfrak{Z}}:=W+W^{q}.\] **Proposition 6.10**.: _Assume Theorem 6.3 holds for \((Q^{\mathfrak{Z}},W^{\mathfrak{Z}})\). Then Theorem 6.3 holds for \((Q,W)\)._ Recall the stack of representations \(\mathscr{X}(d)=R(d)/G(d)\) of \(Q\). For the quiver \(Q^{\mathfrak{Z}}\) and for \(d\in\mathbb{N}^{I}\), we consider the following: the stack of representations with its good moduli space \[\pi_{d}^{\mathfrak{Z}}\colon\mathscr{X}^{\mathfrak{Z}}(d):=\left(R(d)\oplus\mathfrak{g}(d)^{\oplus 2}\right)/G(d)\to X^{\mathfrak{Z}}(d),\] the BPS sheaves \(\mathcal{BPS}^{\mathfrak{Z}}_{d,v}\) as defined in (6.5), the polytope \(\mathbb{W}^{\mathfrak{Z}}(d)\) as in (2.16), the integers \(n_{\lambda}^{\mathfrak{Z}}\) as in (2.20), the quasi-BPS categories \(\mathbb{M}^{\mathfrak{Z}}(d;\delta)\) from (2.19) and \(\mathbb{S}^{\mathfrak{Z}}(d;\delta)\) from (2.21). Let \[\mathscr{S}(d):=(R(d)\oplus\mathfrak{g}(d))/G(d)\] and consider the maps, where \(v,t\) are the natural projections and \(s\) is the natural inclusion: Let \(G:=G(d)\) and \(\mathfrak{g}:=\mathfrak{g}(d)\). We discuss two preliminary propositions. **Proposition 6.11**.: _Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\) and let \(i\in\mathbb{Z}\). There is an isomorphism:_ \[s_{*}v^{*}\colon H^{i}(\mathscr{X}(d),\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathscr{X}(d)}[-1])\xrightarrow{\sim}H^{i}(\mathscr{X}^{\mathfrak{Z}}(d),\varphi_{\mathrm{Tr}\,W^{\mathfrak{Z}}}\mathrm{IC}_{\mathscr{X}^{\mathfrak{Z}}(d)}[-1]),\] \[H^{i}(X(d),\mathcal{BPS}_{d,\delta})\xrightarrow{\sim}H^{i}(X^{\mathfrak{Z}}(d),\mathcal{BPS}^{\mathfrak{Z}}_{d,\delta}).\] Proof.: Consider the diagram: (6.23) We first show there is an isomorphism of sheaves in \(D^{b}_{\mathrm{con}}(X^{\mathfrak{Z}}(d))\): \[s_{*}v^{*}\colon u_{*}\pi_{d*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathscr{X}(d)}[-1]\xrightarrow{\sim}\pi_{d*}^{\mathfrak{Z}}\varphi_{\mathrm{Tr}\,W^{\mathfrak{Z}}}\mathrm{IC}_{\mathscr{X}^{\mathfrak{Z}}(d)}[-1]. \tag{6.24}\] First, there is an isomorphism of sheaves in \(D^{b}_{\mathrm{con}}(X^{\mathfrak{Z}}(d))\): \[s_{*}v^{*}\colon u_{*}\pi_{d*}\mathrm{IC}_{\mathscr{X}(d)}\xrightarrow{\sim}\pi_{d*}^{\mathfrak{Z}}\varphi_{\mathrm{Tr}\,W^{q}}\mathrm{IC}_{\mathscr{X}^{\mathfrak{Z}}(d)}[-1]. \tag{6.25}\] The above map is obtained by base change from the map for \(\mathscr{X}(d)=\operatorname{pt}\), that is, from the map \[s_{0*}v_{0}^{*}\colon\pi_{d,0*}u_{0*}\mathrm{IC}_{BG}\xrightarrow{\sim}\pi_{d,0*}\varphi_{\operatorname{Tr}W^{q}}\mathrm{IC}_{\mathfrak{g}^{\oplus 2}/G}[-1], \tag{6.26}\] where \(s_{0},v_{0},u_{0}\) are the maps as in (6.23) for \(\mathscr{X}(d)\) replaced by \(\operatorname{Spec}\mathbb{C}=\operatorname{pt}\), and where \(\pi_{d,0}\colon\mathfrak{g}^{\oplus 2}/G\to\mathfrak{g}^{\oplus 2}/\!\!/G\).
By a direct computation, we have that \[\varphi_{\operatorname{Tr}W^{q}}\mathrm{IC}_{\mathfrak{g}^{\oplus 2}/G}[-1]= \mathrm{IC}_{BG}\] because \(W^{q}\) is a Morse function with critical locus \(BG\), the origin in \(\mathfrak{g}^{\oplus 2}/G\). Further, (6.26) is an isomorphism for global sections by dimensional reduction, see Subsection 5.1, so (6.26) is an isomorphism. Then (6.25) is also an isomorphism. Abuse notation and write \(\operatorname{Tr}W\colon\mathscr{X}^{\mathsf{I}}(d)\xrightarrow{\operatorname {proj}}\mathscr{X}(d)\xrightarrow{\operatorname{Tr}W}\mathbb{C}.\) Note that \(\pi_{d*}\) commutes with \(\varphi_{\operatorname{Tr}W}\) because \(\pi_{d}\) can be approximated with the proper maps \(\pi_{\alpha f,d}\), see Subsection 6.2. Further, \(\varphi_{\operatorname{Tr}W}\) commutes with proper pushforward and smooth pullback. Apply \(\varphi_{\operatorname{Tr}W}\) to both sides of (6.25) and use the Thom-Sebastiani theorem for vanishing cycles to obtain: \[s_{*}v^{*}\colon u_{*}\pi_{d*}\varphi_{\operatorname{Tr}W}\mathrm{IC}_{ \mathscr{X}(d)}[-1] \xrightarrow{\sim}\pi_{d*}^{\mathsf{I}}\varphi_{\operatorname{Tr}W }\left(\varphi_{\operatorname{Tr}W^{q}}\mathrm{IC}_{\mathscr{X}(d)}[-1]\right)\] \[\cong\pi_{d*}^{\mathsf{I}}\varphi_{\operatorname{Tr}W^{2}} \mathrm{IC}_{\mathscr{X}^{\mathsf{I}}(d)}[-1].\] We now explain that the isomorphism (6.24) induces an isomorphism of sheaves in \(D^{b}_{\operatorname{con}}(X^{\mathsf{I}}(d))\): \[u_{*}\mathcal{BPS}_{d,\delta}\xrightarrow{\sim}\mathcal{BPS}_{d,\delta}^{ \mathsf{I}}.\] First, we explain that \(S_{\delta}^{d}(Q)=S_{\delta}^{d}(Q^{\mathsf{I}})\). Let \(\lambda\) be a cocharacter of \(T(d)\). Let \(n_{\lambda}\) and \(n_{\lambda}^{\mathsf{I}}\) be the integers (2.20) for \(Q\) and \(Q^{\mathsf{I}}\), respectively. Let \(\varepsilon_{\lambda,\delta}\) and \(\varepsilon_{\lambda,\delta}^{\mathsf{I}}\) be the integers (6.1) for \(Q\) and \(Q^{\mathsf{I}}\), respectively. Then \[n_{\lambda}^{\mathsf{I}}-n_{\lambda}=2\langle\lambda,\mathfrak{g}(d)^{\lambda >0}\rangle,\] thus \(\varepsilon_{\lambda,\delta}=\varepsilon_{\lambda,\delta}^{\mathsf{I}}\), so indeed \(S_{\delta}^{d}(Q)=S_{\delta}^{d}(Q^{\mathsf{I}})=:S_{\delta}^{d}\). It suffices to check that (6.24) induces isomorphisms: \[u_{*}\mathcal{BPS}_{A}\xrightarrow{\sim}\mathcal{BPS}_{A}^{\mathsf{I}} \tag{6.27}\] for any \(A\in S_{\delta}^{d}\). The isomorphism (6.24) is obtained by applying the functor \(\varphi_{\operatorname{Tr}W}\) to the isomorphism (6.24) for \(W=0\), that is, from the isomorphism (6.25). Therefore it suffices to check (6.27) when \(W=0\), so we assume that \(W=0\) in the rest of the proof. Assume \(A\) has a corresponding partition \((d_{i})_{i=1}^{k}\) of \(d\). Let \(X_{A}\) be the image of \(\oplus\colon\times_{i=1}^{k}X(d_{i})\to X(d)\). There is an isomorphism: \[u_{*}{}^{p}\mathcal{H}^{k}(\pi_{d*}\mathrm{IC}_{\mathscr{X}(d)})\xrightarrow{ \sim}{}^{p}\mathcal{H}^{k}(\pi_{d*}^{\mathsf{I}}\varphi_{\operatorname{Tr}W^{ q}}\mathrm{IC}_{\mathscr{X}^{\mathsf{I}}(d)}[-1]).\] There are either no summands of support \(X_{A}\) on both sides, case in which both \(u_{*}\mathcal{BPS}_{A}\) and \(\mathcal{BPS}_{A}^{\mathsf{I}}\) are zero, or there are unique summands of support \(X_{A}\) on both sides, namely \(u_{*}\mathcal{BPS}_{A}\) and \(\mathcal{BPS}_{A}^{\mathsf{I}}\), and thus (6.27) follows. We note the following corollary of Proposition 6.11. **Corollary 6.12**.: _Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\) and let \(i\in\mathbb{Z}\). 
There is an isomorphism:_ \[s_{*}v^{*}\colon H^{i}(\mathscr{X}(d),\varphi_{\operatorname{Tr}W}^{\mathrm{ inv}}\mathrm{IC}_{\mathscr{X}(d)}[-1]) \xrightarrow{\sim}H^{i}(\mathscr{X}^{\mathsf{I}}(d),\varphi_{ \operatorname{Tr}W^{\mathsf{I}}}^{\mathrm{inv}}\mathrm{IC}_{\mathscr{X}^{ \mathsf{I}}(d)}[-1]),\] \[H^{i}(X(d),\mathcal{BPS}_{d,\delta}^{\mathrm{inv}}) \xrightarrow{\sim}H^{i}(X^{\mathsf{I}}(d),\mathcal{BPS}_{d, \delta}^{\mathsf{I},\mathrm{inv}}).\] We also relate quasi-BPS categories under Knorrer periodicity: **Proposition 6.13**.: _There is an equivalence:_ \[s_{*}v^{*}\colon\operatorname{MF}(\mathscr{X}(d),\operatorname{Tr}W) \xrightarrow{\sim}\operatorname{MF}(\mathscr{X}^{\underline{1}}(d), \operatorname{Tr}W^{\underline{1}}),\] \[\mathbb{S}(d;\delta) \xrightarrow{\sim}\mathbb{S}^{\underline{1}}(d;\delta).\] Proof.: (cf. [PTe, Proposition 2.14]) Consider the Koszul complex \[\mathscr{K}:=s_{*}v^{*}\mathscr{O}_{\mathscr{X}}\in\operatorname{MF}(\mathscr{ X}^{\underline{1}}(d),\operatorname{Tr}W^{q}),\] where \(s_{*}v^{*}\colon\operatorname{MF}(\mathscr{X}(d),0)\xrightarrow{\sim} \operatorname{MF}(\mathscr{X}^{\underline{1}}(d),\operatorname{Tr}W^{q})\) is an equivalence by Knorrer periodicity. By the Thom-Sebastiani theorem for matrix factorizations [Pre], there is then an equivalence: \[t^{*}(-)\otimes\mathscr{K}\colon\operatorname{MF}(\mathscr{X}(d),\operatorname{ Tr}W)\xrightarrow{\sim}\operatorname{MF}(\mathscr{X}^{\underline{1}}(d), \operatorname{Tr}W^{\underline{1}}).\] Note that \(t^{*}(-)\otimes\mathscr{K}=s_{*}v^{*}(-)\). It remains to show that \[t^{*}(-)\otimes\mathscr{K}\colon\mathbb{S}(d;\delta)\xrightarrow{\sim} \mathbb{S}^{\underline{1}}(d;\delta).\] It suffices to show that, for \(F\in D^{b}(\mathscr{X}(d))\), we have that \(F\in\mathbb{M}(d;\delta)\) if and only if \(t^{*}(F)\otimes\mathscr{K}\in\mathbb{M}^{\underline{1}}(d;\delta)\). We use Lemma 2.3. Let \(\nu\colon B\mathbb{C}^{*}\to\mathscr{X}(d)\), let \(F\in D^{b}(\mathscr{X}(d))\), and let \(A_{F}\subset\mathbb{Z}\) be the set of weights of \(\nu^{*}(F)\). Note that for any \(\nu^{\underline{1}}\colon B\mathbb{C}^{*}\to\mathscr{X}^{\underline{1}}(d)\) such that \(t\circ\nu^{\underline{1}}=\nu\), we have that the weights of \((\nu^{\underline{1}})^{*}(t^{*}(F)\otimes\mathscr{K})\) are the Minkowski sum \(A_{F}+[-\langle\nu,\mathfrak{g}\rangle,\langle\nu,\mathfrak{g}\rangle]\). We have that \[A_{F}\subset\left[-\frac{1}{2}n_{\lambda}+\langle\lambda,\delta_{d}\rangle, \frac{1}{2}n_{\lambda}+\langle\lambda,\delta_{d}\rangle\right]\] if and only if \[A_{F}+[-\langle\nu,\mathfrak{g}\rangle,\langle\nu,\mathfrak{g}\rangle]\subset \left[-\frac{1}{2}n_{\lambda}^{\underline{1}}+\langle\lambda,\delta_{d}\rangle,\frac{1}{2}n_{\lambda}^{\underline{1}}+\langle\lambda,\delta_{d}\rangle\right].\] The conclusion then follows. Proof of Proposition 6.10.: Let \(i,\ell\in\mathbb{Z}\). 
By Corollary 4.9, there is a commutative diagram, where \(b=\dim\mathscr{X}(d)-i-2\ell=\dim\mathscr{X}^{\underline{1}}(d)-i-2(\ell+\dim\mathfrak{g})\): \[\begin{CD}\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}}(\operatorname{MF}(\mathscr{X}(d),\operatorname{Tr}W))@>{s_{*}v^{*}}>{}>\operatorname{gr}_{\ell+\dim\mathfrak{g}}K_{i}^{\operatorname{top}}(\operatorname{MF}(\mathscr{X}^{\underline{1}}(d),\operatorname{Tr}W^{\underline{1}}))\\ @V{\operatorname{c}}V{}V@V{\operatorname{c}}V{}V\\ H^{b}(\mathscr{X}(d),\varphi_{\operatorname{Tr}W}^{\operatorname{inv}}\operatorname{IC}_{\mathscr{X}(d)}[-2])@>{s_{*}v^{*}}>{}>H^{b}(\mathscr{X}^{\underline{1}}(d),\varphi_{\operatorname{Tr}W^{\underline{1}}}^{\operatorname{inv}}\operatorname{IC}_{\mathscr{X}^{\underline{1}}(d)}[-2]).\end{CD}\] The conclusion follows from Corollary 6.12 and Proposition 6.13. It will be convenient to make the following assumption on a quiver: **Assumption 6.1**.: Assume the quiver \(Q\) is symmetric and has at least two loops at any vertex. We introduce some notation. For any cocharacter \(\lambda\) with associated partition \(\mathbf{d}\), define \(c_{\mathbf{d}}:=c_{\lambda}:=\dim\mathscr{X}(d)-\dim\mathscr{X}(d)^{\lambda\geq 0}\). **Lemma 6.14**.: _Let \(Q=(I,E)\) be a quiver which satisfies Assumption 6.1 and let \(d\in\mathbb{N}^{I}\) be non-zero._ _(a) For any cocharacter \(\lambda\) of \(T(d)\), we have \(c_{\lambda}\geqslant 0\), and the inequality is strict if \(\lambda\) has an associated partition with at least two terms. Moreover, we have_ \[\dim\mathscr{X}(d)^{\lambda\geq 0}-\dim\mathscr{X}(d)^{\lambda}=c_{\lambda}.\] _(b) The map \(\pi_{d}\colon\mathscr{X}(d)\to X(d)\) is generically a \(\mathbb{C}^{*}\)-gerbe, in particular there exists a stable representation of dimension \(d\)._ Proof.: We only discuss the first claim of part (a); the second claim and part (b) are similar. Let \(S(d)\) be the affine space of dimension \(d\) representations of the quiver obtained from \(Q\) by deleting one loop at every vertex in \(I\). Then \[\mathscr{X}(d)=\left(S(d)\oplus\mathfrak{g}(d)\right)/G(d).\] We have that \[\dim\mathscr{X}(d)=\dim S(d)\text{ and }\dim\mathscr{X}(d)^{\lambda\geqslant 0}=\dim\left(S(d)\right)^{\lambda\geqslant 0},\] so \(c_{\lambda}=\dim\left(S(d)\right)^{\lambda<0}\), and the first claim follows.
### Coproduct-like maps in K-theory
In this subsection, we assume that \(Q\) satisfies Assumption 6.1. Consider an antidominant cocharacter \(\lambda\) of \(T(d)\) and let \(a_{\lambda}\colon\mathscr{X}(d)^{\lambda}\to\mathscr{X}(d)\) be the natural morphism inducing pullback maps for any \(i,\ell\in\mathbb{Z}\): \[a_{\lambda}^{*}\colon K_{i}^{\text{top}}\left(\operatorname{MF}\left(\mathscr{X}(d),\operatorname{Tr}W\right)\right)\to K_{i}^{\text{top}}\left(\operatorname{MF}\left(\mathscr{X}(d)^{\lambda},\operatorname{Tr}W\right)\right),\] \[a_{\lambda}^{*}\colon\operatorname{gr}_{\ell}K_{i}^{\text{top}}(\operatorname{MF}\left(\mathscr{X}(d),\operatorname{Tr}W\right))\to\operatorname{gr}_{\ell+2d_{\lambda}}K_{i}^{\text{top}}\left(\operatorname{MF}\left(\mathscr{X}(d)^{\lambda},\operatorname{Tr}W\right)\right),\] where \(d_{\lambda}:=\dim\mathscr{X}(d)^{\lambda}-\dim\mathscr{X}(d)\) is the relative dimension of \(a_{\lambda}\). Consider the quotient \(G(d)^{\prime}:=G(d)^{\lambda}/\text{image}(\lambda)\) and the stack \(\mathscr{X}(d)^{\prime\lambda}:=R(d)^{\lambda}/G(d)^{\prime}\).
There is an isomorphism:
\[K_{i}^{\text{top}}\left(\operatorname{MF}\left(\mathscr{X}(d)^{\lambda},\operatorname{Tr}W\right)\right)\cong K_{i}^{\text{top}}\left(\operatorname{MF}\left(\mathscr{X}(d)^{\prime\lambda},\operatorname{Tr}W\right)\right)[q^{\pm 1}].\]
There is an analogous isomorphism for graded K-theory. There are also maps in cohomology:
\[a_{\lambda}^{*}\colon H^{\cdot}\left(\mathscr{X}(d),\varphi_{\operatorname{Tr}W}\right)\to H^{\cdot}\left(\mathscr{X}(d)^{\prime\lambda},\varphi_{\operatorname{Tr}W}\right)[h],\]
\[a_{\lambda}^{*}\colon H^{\cdot}\left(\mathscr{X}(d),\varphi_{\operatorname{Tr}W}^{\text{inv}}\right)\to H^{\cdot}\left(\mathscr{X}(d)^{\prime\lambda},\varphi_{\operatorname{Tr}W}^{\text{inv}}\right)[h].\]
Assume the associated partition of \(\lambda\) is \(\mathbf{d}=\left(d_{i}\right)_{i=1}^{k}\). Recall that \(c_{\lambda}:=c_{\mathbf{d}}:=\dim\mathscr{X}(d)-\dim\mathscr{X}(d)^{\lambda\geqslant 0}\). We define the following integers (which we call _widths_ of magic or quasi-BPS categories in this paper):
\[c_{\lambda,\delta}:=c_{\lambda}+\varepsilon_{\lambda,\delta},\,c_{\mathbf{d},\delta}:=c_{\mathbf{d}}+\varepsilon_{\mathbf{d},\delta}. \tag{6.28}\]

**Proposition 6.15**.: _Let \(\lambda\) be an antidominant cocharacter of \(G(d)\) and let \(i,\ell\in\mathbb{Z}\). Consider the diagram:_

_Then the image of \(\mathrm{c}\,a_{\lambda}^{*}\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}\left(\mathbb{S}(d;\delta)\right)\) lies in the subspace_
\[\bigoplus_{j=0}^{c_{\lambda,\delta}-1}H^{2\dim\mathscr{X}(d)-2\ell-i-2j}\left(\mathscr{X}(d)^{\prime\lambda},\varphi^{\text{inv}}[-2]\right)h^{j}\subset H^{\cdot}\left(\mathscr{X}(d)^{\prime\lambda},\varphi^{\text{inv}}[-2]\right)[h].\]
_Note that \(a_{\lambda}\) only depends on the partition **d**, so we obtain that the image of \(\mathrm{c}\,a_{\lambda}^{*}\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}\left(\mathbb{S}(d;\delta)\right)\) lies in the subspace_
\[\bigoplus_{j=0}^{c_{\mathbf{d},\delta}-1}H^{2\dim\mathscr{X}(d)-2\ell-i-2j}\left(\mathscr{X}(d)^{\prime\lambda},\varphi^{\mathrm{inv}}[-2]\right)h^{j}\subset H^{\cdot}\left(\mathscr{X}(d)^{\prime\lambda},\varphi^{\mathrm{inv}}[-2]\right)[h].\]

Proof.: Consider a complex \(A\) in \(\mathbb{S}(d;\delta)\). Then \(a_{\lambda}^{*}(A)\) is in the subcategory of \(\operatorname{MF}(\mathscr{X}(d)^{\lambda},\operatorname{Tr}W)\) generated by \(\operatorname{MF}(\mathscr{X}(d)^{\prime\lambda},\operatorname{Tr}W)_{v}\) for
\[v\in S_{\lambda,\delta}:=\left[-\frac{1}{2}\langle\lambda,\mathbb{L}_{\mathscr{X}(d)}^{\lambda>0}\rangle+\langle\lambda,\delta\rangle,\frac{1}{2}\langle\lambda,\mathbb{L}_{\mathscr{X}(d)}^{\lambda>0}\rangle+\langle\lambda,\delta\rangle\right]\cap\mathbb{Z}.\]
Thus
\[a_{\lambda}^{*}K_{i}^{\mathrm{top}}\left(\mathbb{S}(d;\delta)\right)\subset K_{i}^{\mathrm{top}}\left(\operatorname{MF}\left(\mathscr{X}(d)^{\prime\lambda},\operatorname{Tr}W\right)\right)\otimes\mathscr{A}, \tag{6.29}\]
where \(\mathscr{A}:=\bigoplus_{j\in S_{\lambda,\delta}}\mathbb{Q}\cdot q^{j}\).
There are filtrations pulled back from cohomology by the Chern character for both \(K_{i}^{\mathrm{top}}\left(\operatorname{MF}(\mathscr{X}(d)^{\prime\lambda}, \operatorname{Tr}W)\right)\) and \(K_{0}^{\mathrm{top}}\left(B\mathbb{C}^{*}\right)\), and there is an isomorphism obtained by the Kunneth formula: \[\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}\left(\operatorname{MF}(\mathscr{X}(d) ^{\lambda},\operatorname{Tr}W)\right)\cong\bigoplus_{a+b=\ell}\mathrm{gr}_{a} K_{i}^{\mathrm{top}}\left(\operatorname{MF}(\mathscr{X}(d)^{\prime\lambda}, \operatorname{Tr}W)\right)\otimes\mathrm{gr}_{b}K_{0}^{\mathrm{top}}\left(B \mathbb{C}^{*}\right). \tag{6.30}\] The filtration \(E_{b}K_{0}^{\mathrm{top}}(B\mathbb{C}^{*})\) on \(K_{0}^{\mathrm{top}}(B\mathbb{C}^{*})\) induces a filtration \[E_{b}\mathscr{A}:=\mathscr{A}\cap E_{b}K_{0}^{\mathrm{top}}(B\mathbb{C}^{*})\] on \(\mathscr{A}\). There are natural inclusions \(\mathrm{gr}_{b}\mathscr{A}\hookrightarrow\mathrm{gr}_{b}K_{0}^{\mathrm{top}}( B\mathbb{C}^{*})\). We obtain a Kunneth formula: \[\mathrm{gr}_{\ell}\left(K_{i}^{\mathrm{top}}\left(\operatorname{MF}(\mathscr{ X}(d)^{\prime\lambda},\operatorname{Tr}W)\right)\otimes\mathscr{A}\right)\cong \bigoplus_{a+b=\ell}\mathrm{gr}_{a}K_{i}^{\mathrm{top}}\left(\operatorname{MF} (\mathscr{X}(d)^{\prime\lambda},\operatorname{Tr}W)\right)\otimes\mathrm{gr }_{b}\mathscr{A}. \tag{6.31}\] By (6.29) and (6.31), we have \[a_{\lambda}^{*}\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}\left(\mathbb{S}(d; \delta)\right)\subset\mathrm{gr}.K_{i}^{\mathrm{top}}\left(\operatorname{MF} \left(\mathscr{X}(d)^{\prime\lambda},\operatorname{Tr}W\right)\right)\otimes \mathrm{gr}.\mathscr{A}.\] It suffices to show that: \[\operatorname{c}\left(\mathrm{gr}.\mathscr{A}\right)\subset\bigoplus_{i=0}^{c_ {\lambda,\delta}-1}\mathbb{Q}\cdot h^{i}. \tag{6.32}\] For any \(1\leqslant i\leqslant k\), let \(F_{i}\) be a stable representation of dimension \(d_{i}\) (which exists by Lemma 6.14, and note that this is the only place where we use that \(Q\) satisfies Assumption 6.1) and let \(F:=\bigoplus_{i=1}^{k}F_{i}\). Let \(V/(\mathbb{C}^{*})^{k}\) be the moduli stack of representations of the Ext quiver of \(F\) and dimension vector \((1,\dots,1)\in\mathbb{N}^{k}\). Note that there is an etale map \[V/(\mathbb{C}^{*})^{k}\to\mathscr{X}(d),\,0\mapsto F.\] We have the equality of sets \[S_{\lambda,\delta}=\Big{[}-\frac{1}{2}\langle\lambda,V^{\lambda>0}\rangle+ \langle\lambda,\delta\rangle,\frac{1}{2}\langle\lambda,V^{\lambda>0}\rangle+ \langle\lambda,\delta\rangle\Big{]}\cap\mathbb{Z}.\] We denote the image of \(\lambda\) in \((\mathbb{C}^{*})^{k}\) by \(\mathbb{C}^{*}\). Consider the maps: \[V^{\lambda}\stackrel{{ q^{\prime}}}{{\longrightarrow}}V^{\lambda \geqslant 0}\stackrel{{ p^{\prime}}}{{\longrightarrow}}V,\] Let \(\ell\) be a generic linearization of \(\mathbb{C}^{*}\). By [12, Theorem 2.10], see also [16, Equation (3)], the subcategory of \(D^{b}(V/\mathbb{C}^{*})\) generated by \(\mathcal{O}_{V}(v)\) for weights \(v\in S_{\lambda,\delta}\) is equivalent to \(D^{b}(V^{\ell\text{-ss}}/\mathbb{C}^{*})\) if \(\varepsilon_{\lambda,\delta}=0\), and has a semiorthogonal decomposition with pieces \(D^{b}(V^{\ell\text{-ss}}/\mathbb{C}^{*})\) and \(p^{\prime}_{*}q^{\prime*}D^{b}(V^{\lambda})\) if \(\varepsilon_{\lambda,\delta}=1\). 
Define the map
\[\text{s}\colon K_{0}^{\text{top}}(B\mathbb{C}^{*})\cong K_{0}^{\text{top}}(V/\mathbb{C}^{*})\to K_{0}^{\text{top}}(V^{\ell\text{-ss}}/\mathbb{C}^{*})\oplus p^{\prime}_{*}q^{\prime*}K_{0}^{\text{top}}(V^{\lambda})^{\oplus\varepsilon_{\lambda,\delta}}\]
as the direct sum of the restriction onto \(V^{\ell\text{-ss}}/\mathbb{C}^{*}\) and the inverse of the inclusion:
\[p^{\prime}_{*}q^{\prime*}\colon K_{0}^{\text{top}}\left(D^{b}(V^{\lambda})_{a}\right)^{\oplus\varepsilon_{\lambda,\delta}}\cong K_{0}^{\text{top}}(V^{\lambda})^{\oplus\varepsilon_{\lambda,\delta}}\to K_{0}^{\text{top}}(V/\mathbb{C}^{*})\]
for a weight \(a=\left\lfloor\frac{1}{2}\langle\lambda,V^{\lambda>0}\rangle+\langle\lambda,\delta\rangle\right\rfloor\in\mathbb{Z}\) of \(\lambda\), constructed by the semiorthogonal decomposition [12]. The following composition is an isomorphism:
\[\mathcal{A}\hookrightarrow K_{0}^{\text{top}}(B\mathbb{C}^{*})\cong K_{0}^{\text{top}}(V/\mathbb{C}^{*})\overset{\text{s}}{\to}K_{0}^{\text{top}}(V^{\ell\text{-ss}}/\mathbb{C}^{*})\oplus p^{\prime}_{*}q^{\prime*}K_{0}^{\text{top}}(V^{\lambda})^{\oplus\varepsilon_{\lambda,\delta}}.\]
Note that the Hall product \(p^{\prime}_{*}q^{\prime*}\colon H^{\cdot}(V^{\lambda})\to H^{\cdot}(V/\mathbb{C}^{*})\) has image \(\mathbb{Q}\cdot h^{c_{\lambda}}\), and thus it has a natural inverse \(H^{\cdot}(V/\mathbb{C}^{*})\to p^{\prime}_{*}q^{\prime*}H^{\cdot}(V^{\lambda})\). Let \(\mathrm{t}\) be the direct sum of this inverse and the restriction map:
\[\mathrm{t}\colon H^{\cdot}(V/\mathbb{C}^{*})\to H^{\cdot}(V^{\ell\text{-ss}}/\mathbb{C}^{*})\oplus p^{\prime}_{*}q^{\prime*}H^{\cdot}(V^{\lambda})^{\oplus\varepsilon_{\lambda,\delta}}.\]
The maps \(\mathrm{s}\) and \(\mathrm{t}\) are intertwined by the cycle maps, and the inclusion (6.32) follows.

Consider the projection map \(t_{\lambda}\colon\mathcal{X}(d)^{\lambda}\to\mathcal{X}(d)^{\prime\lambda}\). The maps introduced fit in the following diagram:

Consider the following perverse truncation
\[\mathrm{S}\colon t_{\lambda*}\mathrm{IC}_{\mathcal{X}(d)^{\lambda}}[c_{\lambda}]\cong\mathrm{IC}_{\mathcal{X}(d)^{\prime\lambda}}[c_{\lambda}-1]\otimes\mathbb{Q}[h]\to{}^{p}\tau^{\geqslant c_{\lambda}+1}\left(\mathrm{IC}_{\mathcal{X}(d)^{\prime\lambda}}[c_{\lambda}-1]\otimes\mathbb{Q}[h]\right)\cong\bigoplus_{j\geqslant 0}\mathrm{IC}_{\mathcal{X}(d)^{\prime\lambda}}[-c_{\lambda}-1-2j]\cong t_{\lambda*}\mathrm{IC}_{\mathcal{X}(d)^{\lambda}}[-c_{\lambda}]. \tag{6.35}\]
Define the map \(\Delta_{\lambda}\) as the composition:
\[\Delta_{\lambda}\colon\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\to\pi_{d*}a_{\lambda*}\mathrm{IC}_{\mathcal{X}(d)^{\lambda}}[2c_{\lambda}]=i_{\lambda*}\pi_{\lambda*}\mathrm{IC}_{\mathcal{X}(d)^{\lambda}}[2c_{\lambda}]\xrightarrow{\mathrm{S}}i_{\lambda*}\pi_{\lambda*}\mathrm{IC}_{\mathcal{X}(d)^{\lambda}}. \tag{6.36}\]
Recall the notations from Subsection 6.2 and the decomposition theorem (6.16).
Consider the total perverse cohomology \[\mathcal{H}\left(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\right):=\bigoplus_{i \in\mathbb{Z}}{}^{p}\mathcal{H}^{i}\left(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)} \right)[-i].\] For \(A\in\mathcal{P}\) as in Subsection 6.2, there are then natural maps \[\mathrm{P}_{A}\to\mathcal{H}\left(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\right) \to\mathrm{P}_{A}.\] **Proposition 6.16**.: _Let \(A,B\in\mathcal{P}\) with corresponding sheaves \(\mathrm{P}_{A}\) and \(\mathrm{P}_{B}\) of different support. Assume that \(p_{B}\leqslant p_{A}\)._ _(a) The map (6.36) induces an isomorphism_ \[\Delta_{\lambda}\colon\mathrm{P}_{A}\xrightarrow{\sim}\mathrm{P}_{A}. \tag{6.37}\] _(b) The map \(\Delta_{\lambda}\colon\mathrm{P}_{B}\to\mathrm{P}_{A}\) is zero._ Proof.: (a) Assume \(\lambda\) has associated partition \((d_{i})_{i=1}^{k}\). Assume further that the set \(\{d_{1},\dots,d_{k}\}=\{e_{1},\dots,e_{s}\}\) has cardinality \(s\) and that each \(e_{i}\) appears \(m_{i}\) times among the \(d_{j}\) for \(1\leqslant j\leqslant k\). Let \(A^{\circ}\in\mathcal{P}\) be the tuplet \((e_{i},m_{i,a})\) with \(m_{i,0}=m_{i}\) and \(m_{i,a}=0\) for \(a\geqslant 1\). For \(d\in\mathbb{N}^{I}\), let \(\hbar_{d}:=c_{1}(\mathcal{O}(\sigma_{d}))\in H^{2}(\mathcal{X}(d))\). By [1, Theorem C], the summand \(\mathrm{P}_{A}\) of \(\mathcal{H}\left(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\right)\) is obtained from \(\boxtimes_{i=1}^{k}(\mathrm{IC}_{X(d_{i})}[-1])\) by multiplication with the equivariant parameters \(\hbar_{d_{i}}\) for \(1\leqslant i\leqslant k\). The map \(\Delta_{\lambda}\colon\mathrm{P}_{A}\to\mathrm{P}_{A}\) is thus obtained by multiplication by equivariant parameters \(\hbar_{d_{i}}\) for \(1\leqslant i\leqslant k\) from the map \[\Delta_{\lambda}m_{\lambda}\colon i_{\lambda*}\boxtimes_{i=1}^{k}\left( \mathrm{IC}_{X(d_{i})}[-1]\right)\to i_{\lambda*}\boxtimes_{i=1}^{k}\left( \mathrm{IC}_{X(d_{i})}[-1]\right).\] By [1, Theorem C], the image of \(m_{\lambda}\left(i_{\lambda*}\boxtimes_{i=1}^{k}\left(\mathrm{IC}_{X(d_{i})}[ -1]\right)\right)\) in \(\mathcal{H}\left(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\right)\) is \(\mathrm{P}_{A^{\circ}}\). Thus the map (6.37) is obtained by multiplication by equivariant parameters from the map \[\Delta_{\lambda}\colon\mathrm{P}_{A^{\circ}}\to\mathrm{P}_{A^{\circ}}.\] We may thus assume that \(A=A^{\circ}\). The Hall product is induced by a map \[m_{\lambda}\colon i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{\mathfrak{X}(d) ^{\lambda}}\to\pi_{d\star}\mathrm{IC}_{\mathfrak{X}(d)}.\] The lowest non-zero piece of the perverse filtration on \(i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{\mathfrak{X}(d)^{\lambda}}\) is given by \[{}^{p}{}_{\tau}{}^{\leqslant k}i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{ \mathfrak{X}(d)^{\lambda}}=i_{\lambda\star}\boxtimes_{i=1}^{s}\left(\mathrm{IC} _{X(e_{i})}[-1]\right)^{\boxtimes m_{i}}.\] The (shifted) perverse sheaf \(i_{\lambda\star}\boxtimes_{i=1}^{s}\left(\mathrm{IC}_{X(e_{i})}[-1]\right)^{ \boxtimes m_{i}}\) splits as a direct sum of simple sheaves, and one such sheaf is \(\mathrm{P}_{A}\). There is thus a natural inclusion \(\mathrm{P}_{A}\subset i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{ \mathfrak{X}(d)^{\lambda}}\). 
The map \[\Delta_{\lambda}m_{\lambda}\colon\mathrm{P}_{A}\to i_{\lambda\star}\pi_{ \lambda\star}\mathrm{IC}_{\mathfrak{X}(d)^{\lambda}} \tag{6.38}\] has image in the lowest non-zero perverse truncation of \(i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{\mathfrak{X}(d)^{\lambda}}\), and thus (6.38) induces a map: \[\Delta_{\lambda}m_{\lambda}\colon\mathrm{P}_{A}\to{}^{p}\tau^{\leqslant s}i_{ \lambda\star}\pi_{\lambda\star}\mathrm{IC}_{\mathfrak{X}(d)^{\lambda}}= \boxtimes_{i=1}^{s}\left(\mathrm{IC}_{X(e_{i})}[-1]\right)^{\boxtimes m_{i}}. \tag{6.39}\] The (shifted) perverse sheaf \(i_{\lambda\star}\boxtimes_{i=1}^{s}\left(\mathrm{IC}_{X(e_{i})}[-1]\right)^{ \boxtimes m_{i}}\) has only one summand isomorphic to \(\mathrm{P}_{A}\), which is a simple (shifted) perverse sheaf. Thus the map (6.39) restricts to a map \[\mathrm{P}_{A}\to\mathrm{P}_{A} \tag{6.40}\] All such maps are given by multiplication by scalars. It is thus an isomorphism if it is not the zero map. It suffices to show that the maps (6.38) or (6.39) are not zero. We will show this after passing to a convenient open set of \(X(d)^{\lambda}\). For any non-zero \(e\in\mathbb{N}^{I}\), by the same argument used to prove that (6.14), there exists a stable point in \(R(e)\), equivalently the map \(\pi_{e}\colon\mathfrak{X}(e)\to X(e)\) is generically a \(\mathbb{C}^{*}\)-gerbe. For \(1\leqslant i\leqslant k\), let \(R_{i}\) be a simple representation of \(Q\) of dimension \(d_{i}\) such that \(R_{i}\) and \(R_{j}\) are not isomorphic for \(1\leqslant i<j\leqslant k\). Let \(R:=\bigoplus_{i=1}^{k}R_{i}\). Note that the stabilizer of \(R\) is \(T=(\mathbb{C}^{*})^{k}\). By the etale slice theorem, there is an analytic smooth open substack \(R\in\mathcal{U}/T\subset\mathfrak{X}(d)\) such that \[\mathcal{U}/\!/T\to X(d)\text{ and }\mathcal{U}^{\lambda}\to X(d)^{\lambda}\] are an analytic neighborhoods of \(\pi_{d}(R)\) and \(\times_{i=1}^{k}\pi_{d_{i}}(R_{i})=\pi_{\lambda}(R)\), respectively. After possibly shrinking \(\mathcal{U}\), we may assume that \(\mathcal{U}\) and \(\mathcal{U}^{\lambda}\) are contractible. The maps are, analytically locally over \(\pi_{d}(R)\in X(d)\), isomorphic to the following: (6.41) Note that the maps \(p_{\lambda}\) and \(a_{\lambda}\) in (6.41) are closed immersions. To show that the map (6.40) is non-zero, it suffices to check that the map \[\Delta_{\lambda}m_{\lambda}|_{\mathcal{U}^{\lambda}}\colon\mathrm{P}_{A}|_{ \mathcal{U}^{\lambda}}\to\mathrm{P}_{A}|_{\mathcal{U}^{\lambda}} \tag{6.42}\] is non-zero. It suffices to check that the map is non-zero after passing to global sections. We drop the restriction to \(\mathcal{U}^{\lambda}\) from the notation from now on. The element \(1\in H^{0}(\mathcal{U}^{\lambda}/T)\) is in \(P^{\leqslant s}H\left(\mathcal{U}^{\lambda}/T\right)\). We check by a direct computation that \[\Delta_{\lambda}m_{\lambda}(1)=1\in H^{0}(\mathcal{U}^{\lambda}/T). \tag{6.43}\] Note that the computation (6.43) shows that the map (6.42) is non-zero, and thus the conclusion follows. It suffices to check the computation in (6.43) for \(\mathcal{U}^{\lambda}/\mathbb{C}^{*}\), where by \(\mathbb{C}^{*}\) we denote the image of \(\lambda\), because \(H^{0}(\mathcal{U}^{\lambda}/\mathbb{C}^{*})\cong H^{0}(\mathcal{U}^{T}/T) \cong\mathbb{Q}\). Observe that \(H^{\cdot}(\mathcal{U}^{\lambda}/\mathbb{C}^{*})\cong\mathbb{Q}[h]\) and that \[m_{\lambda}(1)=p_{\lambda*}q_{\lambda}^{*}(1)=h^{c_{\lambda}}\] because \(p_{\lambda}\) has relative dimension \(-c_{\lambda}\). 
Note that \(\Delta_{\lambda}(h^{c_{\lambda}})=1\) from the construction of \(\Delta_{\lambda}\), and thus the conclusion follows. (b) If \(p_{B}<p_{A}\), the map \(\Delta_{\lambda}\colon\mathrm{P}_{B}\to\mathrm{P}_{A}\) is zero by considering the perverse degree. If \(p_{B}=p_{A}\), then the map is zero because, after a shift, it is a map of simple perverse sheaves with different support. We next prove the analogue of Proposition 6.16 for a non-zero potential. Let \(W\) be an arbitrary potential of \(Q\). Recall the sheaves \(\mathrm{Q}_{A}\) from Subsection 6.2. Let \(\mathcal{H}(\pi_{*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathcal{X}(d)}[-1])\) be the total perverse cohomology of \(\pi_{*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathcal{X}(d)}[-1]\). There are natural maps: \[\mathrm{Q}_{A}\to\mathcal{H}(\pi_{*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{ \mathcal{X}(d)}[-1])\to\mathrm{Q}_{A}.\] Apply the vanishing cycle functor to the maps (6.36) to obtain: \[\Delta_{\lambda}\colon\pi_{d*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC }_{\mathcal{X}(d)}[-1] \to i_{\lambda*}\pi_{\lambda*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{ \mathcal{X}(d)^{\lambda}}[-1]\] \[=i_{\lambda*}\boxtimes_{i=1}^{k}\left(\pi_{d*}\varphi_{\mathrm{Tr }\,W}\mathrm{IC}_{\mathcal{X}(d_{i})}[-1]\right). \tag{6.44}\] Let \(\mathrm{Q}_{A}^{\mathrm{inv}}\) be defined by the exact triangle \[\mathrm{Q}_{A}^{\mathrm{inv}}[-1]\to\mathrm{Q}_{A}\xrightarrow{1-\mathrm{T}} \mathrm{Q}_{A}\to\mathrm{Q}_{A}^{\mathrm{inv}}.\] **Proposition 6.17**.: _Let \(A,B\in\mathcal{P}\) with corresponding sheaves \(\mathrm{Q}_{A}\) and \(\mathrm{Q}_{B}\) of different support. Assume that \(p_{B}\leqslant p_{A}\)._ _(a) The map (6.44) induces isomorphisms_ \[\Delta_{\lambda}\colon\mathrm{Q}_{A}\xrightarrow{\sim}\mathrm{Q}_{A},\,\Delta _{\lambda}\colon\mathrm{Q}_{A}^{\mathrm{inv}}\xrightarrow{\sim}\mathrm{Q}_{A} ^{\mathrm{inv}}. \tag{6.45}\] _(b) The maps \(\Delta_{\lambda}\colon\mathrm{Q}_{B}\to\mathrm{Q}_{A}\) and \(\Delta_{\lambda}\colon\mathrm{Q}_{B}^{\mathrm{inv}}\to\mathrm{Q}_{A}^{\mathrm{ inv}}\) are zero._ Proof.: The maps above are induced from the map (6.36), thus the conclusion follows from Proposition 6.16. We now record corollaries to be used in the proof of Theorem 6.3. Fix a splitting \[H^{\bullet}\left(\mathcal{X}(d),\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}} \mathrm{IC}_{\mathcal{X}(d)}[-1]\right)=\bigoplus_{A\in\mathcal{P}}H^{\bullet}( X(d),\mathrm{Q}_{A}^{\mathrm{inv}}). \tag{6.46}\] Let \(x\in H^{\bullet}\left(\mathcal{X}(d),\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}} \mathrm{IC}_{\mathcal{X}(d)}[-1]\right)\). Use the decomposition above to write \[x=\sum_{A\in\mathcal{P}}x_{A} \tag{6.47}\] with \(x_{A}\in H^{\bullet}(X(d),\mathrm{Q}_{A}^{\mathrm{inv}})\). **Corollary 6.18**.: _Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\). Let \(\lambda\) be an antidominant cocharacter of \(T(d)\) with associated partition **d** such that \(\varepsilon_{\lambda,\delta}=\varepsilon_{\textbf{d},\delta}\). 
Let \(x\in H^{i}(\mathscr{X}(d),\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{IC}_{ \mathscr{X}(d)}[-1])\) and assume that_ \[a_{\lambda}^{*}(x)\in\bigoplus_{j=0}^{c_{\textbf{d},\delta}-1}H^{i-2j}( \mathscr{X}(d)^{\prime\lambda},\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{ IC}_{\mathscr{X}(d)^{\prime\lambda}}[-2])h^{j}.\] _(a) If \(\varepsilon_{\textbf{d},\delta}=1\), then \(\Delta_{\lambda}(x)=0\)._ _(b) If \(\varepsilon_{\textbf{d},\delta}=0\), then \(\Delta_{\lambda}(x)\) is in the image of_ \[H^{i}(\mathscr{X}(d)^{\prime\lambda},\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}} \mathrm{IC}_{\mathscr{X}(d)^{\prime\lambda}}[-2])\hookrightarrow H^{i}( \mathscr{X}(d)^{\lambda},\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{IC}_{ \mathscr{X}(d)^{\lambda}}[-1]).\] Proof.: Recall the definition of S from (6.35). (a) If \(\varepsilon_{\textbf{d},\delta}=1\), then \(\mathrm{Sp}_{\lambda}^{*}(x)=0\), so \(\Delta_{\lambda}(x)=0\). (b) If \(\varepsilon_{\textbf{d},\delta}=0\), then \(\mathrm{Sp}_{\lambda}^{*}(x)\in H^{\cdot}\left(X(d)^{\lambda},\varphi_{ \mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{IC}_{\mathscr{X}(d)^{\prime\lambda}}[-c_{ \lambda}-2]\right)\). The conclusion follows from the definition of \(\Delta_{\lambda}\) in (6.44). **Corollary 6.19**.: _Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\). Let \(\lambda\) be an antidominant cocharacter of \(T(d)\) with associated partition **d** such that \(\varepsilon_{\lambda,\delta}=\varepsilon_{\textbf{d},\delta}\). Let \(x\in H^{i}(\mathscr{X}(d),\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{IC}_{ \mathscr{X}(d)}[-1])\) and assume that_ \[a_{\lambda}^{*}(x)\in\bigoplus_{j=0}^{c_{\textbf{d},\delta}-1}H^{i-2j}( \mathscr{X}(d)^{\prime\lambda},\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{ IC}_{\mathscr{X}(d)^{\prime\lambda}}[-2])h^{j}.\] _Recall the decomposition (6.47)._ _(a) If \(\varepsilon_{\textbf{d},\delta}=1\), then \(x_{A}=0\) for all tuples \(A\in\mathscr{P}\) with corresponding cocharacter \(\lambda\)._ _(b) If \(\varepsilon_{\textbf{d},\delta}=0\), then \(x_{A}=0\) for all tuples \(A\in\mathscr{P}\) with corresponding cocharacter \(\lambda\) and different from \(A^{\circ}\)._ Proof.: Both claims follow from Proposition 6.17 and Corollary 6.18. Proof of Theorem 6.3.: Recall the cycle map in (6.7) \[\mathrm{c}\colon\mathrm{gr}_{a}K_{i}^{\mathrm{top}}(\mathbb{S}(d;\delta)) \to H^{\dim\mathscr{X}(d)-2a-i}(\mathscr{X}(d),\varphi_{\mathrm{Tr}\,W}^{ \mathrm{inv}}\mathrm{IC}_{\mathscr{X}(d)}[-2]).\] By Proposition 6.10, we may assume that \(Q\) has at least two loops at every vertex. Let \(y\) be in the image of the above map. By Proposition 6.15 and Corollary 6.19, we have that \(y_{A}=0\) unless \(A=A^{\circ}\) for some partition \(\textbf{d}=(d_{i},m_{i})_{i=1}^{k}\) of \(d\) with \(m_{i}\geqslant 1\) and \(d_{i}\) pairwise distinct with \(\varepsilon_{\textbf{d},\delta}=0\). The statement thus follows. ## 7. Topological K-theory of quasi-BPS categories for preprojective algebras In this section, we use the results of Sections 5 and 6 to compute the topological K-theory of preprojective algebras of quivers satisfying Assumption 2.1 in terms of BPS cohomology, see Theorem 7.6. ### The preprojective BPS sheaf Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver. 
Recall the moduli stack of dimension \(d\) representations of the tripled quiver \(Q\) of \(Q^{\circ}\) and its good moduli space:
\[\pi_{X,d}:=\pi_{d}\colon\mathscr{X}(d)\to X(d).\]
Recall also the moduli stack of dimension \(d\) representations of the preprojective algebra of \(Q^{\circ}\) and its good moduli space:
\[\pi_{P,d}\colon\mathcal{P}(d)^{\mathrm{cl}}\to P(d).\]
Consider the moduli stack of dimension \(d\) representations of the double quiver of \(Q^{\circ}\) and its good moduli space:
\[\pi_{Y,d}\colon\mathcal{Y}(d):=(R^{\circ}(d)\oplus R^{\circ}(d)^{\vee})/G(d)\to Y(d).\]
Consider the diagram:

Here \(\eta\colon\mathcal{X}(d)\to\mathcal{Y}(d)\) is the projection which forgets the \(\mathfrak{g}(d)\)-component and the bottom horizontal arrows are induced maps on good moduli spaces. Let \(\mathbb{C}\hookrightarrow\mathfrak{g}(d)\) be the diagonal embedding, which induces the closed immersion
\[\gamma\colon X^{\prime}(d):=(R^{\circ}(d)\oplus R^{\circ}(d)^{\vee}\oplus\mathbb{C})/\!\!/G(d)\hookrightarrow X(d).\]
Let \(\eta^{\prime}:=\eta|_{X^{\prime}(d)}\). By [Dava, Theorem/Definition 4.1], there exists a _preprojective BPS sheaf_
\[\mathcal{BPS}^{p}_{d}\in\operatorname{Perv}(P(d))\]
such that the BPS sheaf of the tripled quiver with potential \((Q,W)\) associated to \(Q^{\circ}\) is
\[\mathcal{BPS}_{d}=\gamma_{*}\eta^{\prime*}j_{*}(\mathcal{BPS}^{p}_{d})[1]\in\operatorname{Perv}(X(d)). \tag{7.1}\]
For a partition \(A=(d_{i})_{i=1}^{k}\) of \(d\), define \(\mathcal{BPS}^{p}_{A}\in\operatorname{Perv}(P(d))\) as in (6.4). For \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\), define the following perverse sheaves on \(P(d)\):
\[\mathcal{BPS}^{p}_{\delta}:=\bigoplus_{A\in S^{d}_{\delta}}\mathcal{BPS}^{p}_{A},\,\mathcal{BPS}^{p}_{d,v}:=\mathcal{BPS}^{p}_{v\tau_{d}}, \tag{7.2}\]
where the set of partitions \(S^{d}_{\delta}\) is defined from the tripled quiver \(Q\), see Subsection 6.1.2. Then \(\mathcal{BPS}^{p}_{d,v}\) is a direct summand of \(\pi_{P,d*}\omega_{\mathcal{P}(d)^{\mathrm{cl}}}\), see [Dava, Theorem A], and so \(H^{-a}(P(d),\mathcal{BPS}^{p}_{d,v})\) is a direct summand of
\[H^{\mathrm{BM}}_{a}(\mathcal{P}(d)^{\mathrm{cl}})=H^{-a}\big{(}P(d),\pi_{P,d*}\omega_{\mathcal{P}(d)^{\mathrm{cl}}}\big{)}.\]
Recall the maps
\[\mathcal{P}(d)\xleftarrow{\eta^{\prime}}\eta^{-1}(\mathcal{P}(d))\xrightarrow{j^{\prime}}\mathcal{X}(d).\]
The dimension of \(\mathcal{P}(d)\) as a quasi-smooth stack is \(\dim\mathcal{P}(d):=\dim\mathcal{Y}(d)-\dim\mathfrak{g}(d)\). Recall the dimensional reduction isomorphism from Subsection 5.1:
\[j^{\prime}_{*}\eta^{\prime*}\colon H^{\mathrm{BM}}_{a}(\mathcal{P}(d)^{\mathrm{cl}})\cong H^{\mathrm{BM}}_{a}(\mathcal{P}(d))\xrightarrow{\sim}H^{2\dim\mathcal{Y}(d)-a}(\mathcal{X}(d),\varphi_{\mathrm{Tr}\,W}\mathbb{Q}_{\mathcal{X}(d)}[-1])=H^{\dim\mathcal{Y}(d)-a}(\mathcal{X}(d),\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathcal{X}(d)}[-1]).\]
By the construction of the PBW isomorphism for preprojective Hall algebras [Dava, Equation (31)], the above isomorphism preserves the BPS cohomologies:
\[j^{\prime}_{*}\eta^{\prime*}\colon H^{-a}(P(d),\mathcal{BPS}^{p}_{d,v})\xrightarrow{\sim}H^{\dim\mathcal{Y}(d)-a}(X(d),\mathcal{BPS}_{d,v}). \tag{7.3}\]

### Computations

Recall the categories
\[\mathbb{T}(d)_{v}\subset D^{b}(\mathcal{P}(d))\text{ and }\mathbb{T}(d)_{v}^{\operatorname{red}}\subset D^{b}(\mathcal{P}(d)^{\operatorname{red}})\]
from Subsection 2.13.
Consider the natural closed immersion \(l^{\prime}\colon\mathcal{P}(d)^{\operatorname{red}}\hookrightarrow\mathcal{P} (d)\). The closed immersion \(l\colon\mathcal{P}(d)^{\operatorname{cl}}\hookrightarrow\mathcal{P}(d)\) factors through \(\mathcal{P}(d)^{\operatorname{cl}}\hookrightarrow\mathcal{P}(d)^{ \operatorname{red}}\stackrel{{ l^{\prime}}}{{\hookrightarrow}} \mathcal{P}(d)\). **Proposition 7.1**.: _Let \(Q\) be a symmetric quiver. Then there is a weak equivalence of spectra \(l^{\prime}_{*}\colon K^{\operatorname{top}}(\mathbb{T}(d)_{v}^{ \operatorname{red}})\to K^{\operatorname{top}}(\mathbb{T}(d)_{v})\)._ Proof.: There is a weak equivalence of spectra \(l^{\prime}_{*}\colon G^{\operatorname{top}}(\mathcal{P}(d)^{\operatorname{ red}})\xrightarrow{\sim}G^{\operatorname{top}}(\mathcal{P}(d))\). The claim then follows from Theorem 2.9. For \(i\in\mathbb{Z}\), consider the Chern character map (3.8) for the quasi-smooth stack \(\mathcal{P}(d)\): \[\operatorname{ch}\colon G_{i}^{\operatorname{top}}(\mathcal{P}(d))\to\widetilde {H}_{i}^{\operatorname{BM}}(\mathcal{P}(d)). \tag{7.4}\] It induces a Chern character map \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\mathbb{T}(d)_{v}) \hookrightarrow G_{i}^{\operatorname{top}}(\mathcal{P}(d))\to\widetilde{H}_{i} ^{\operatorname{BM}}(\mathcal{P}(d)). \tag{7.5}\] **Corollary 7.2**.: _The maps (7.4) and (7.5) are injective._ Proof.: It suffices to check that (7.4) is injective. This follows from Proposition 4.10, Theorem 2.8 (applied to a fixed \(\mu\) and all \(\alpha\in\mathbb{Z}_{\geqslant 1}\)), and the Koszul equivalence (2.14). **Corollary 7.3**.: _We have that \(G_{1}^{\operatorname{top}}(\mathcal{P}(d))=0\). Thus also \(K_{1}^{\operatorname{top}}(\mathbb{T}(d)_{v})=0\)._ Proof.: We have that \(H_{\operatorname{odd}}^{\operatorname{BM}}(\mathcal{P}(d)^{\operatorname{cl}} )=0\) by [11]. The conclusion follows by Proposition 7.2. Recall the filtration \(E_{\ell}G_{0}^{\operatorname{top}}(\mathcal{P}(d))\) of \(G_{0}^{\operatorname{top}}(\mathcal{P}(d))\) from Subsection 3.3. Define the filtration: \[E_{\ell}K_{0}^{\operatorname{top}}(\mathbb{T}(d)_{v}):=E_{\ell}G_{0}^{ \operatorname{top}}(\mathcal{P}(d))\cap K_{0}^{\operatorname{top}}(\mathbb{T }(d)_{v})\subset K_{0}^{\operatorname{top}}(\mathbb{T}(d)_{v}).\] We denote by \(\operatorname{gr}_{\ell}K_{0}^{\operatorname{top}}(\mathbb{T}(d)_{v})\) the associated graded piece, and note that it is a direct summand of \(\operatorname{gr}_{\ell}G_{0}^{\operatorname{top}}(\mathcal{P}(d))\) by Theorem 2.9. Define similarly a filtration \(E_{\ell}G_{0}^{\operatorname{top}}(\mathcal{P}(d)^{\operatorname{red}}) \subset G_{0}^{\operatorname{top}}(\mathcal{P}(d)^{\operatorname{red}})\) and a filtration \(E_{\ell}K_{0}^{\operatorname{top}}(\mathbb{T}(d)_{v}^{\operatorname{red}}) \subset K_{0}^{\operatorname{top}}(\mathbb{T}(d)_{v}^{\operatorname{red}})\). **Corollary 7.4**.: _The forget-the-potential functor \(\Theta\) induces an isomorphism:_ \[\operatorname{gr}_{\ell}K_{0}^{\operatorname{top}}\left(\operatorname{MF}^{ \operatorname{gr}}(\mathcal{X}(d),\operatorname{Tr}W)\right)\xrightarrow{\sim} \operatorname{gr}_{\ell}K_{0}^{\operatorname{top}}\left(\operatorname{MF}( \mathcal{X}(d),\operatorname{Tr}W)\right). 
\tag{7.6}\]
_There are also isomorphisms:_
\[\operatorname{gr}_{\ell}K_{0}^{\operatorname{top}}\left(\mathbb{T}(d)_{v}\right)\xrightarrow{\sim}\operatorname{gr}_{\ell+\dim\mathfrak{g}(d)}K_{0}^{\operatorname{top}}\left(\mathbb{S}^{\operatorname{gr}}(d)_{v}\right)\xrightarrow{\sim}\operatorname{gr}_{\ell+\dim\mathfrak{g}(d)}K_{0}^{\operatorname{top}}\left(\mathbb{S}(d)_{v}\right).\]

Proof.: The isomorphism (7.6) follows from Corollaries 5.2 and 7.3. The other isomorphisms follow from the Koszul equivalence, see Proposition 5.2 for an explanation of the degree of the graded pieces.

**Corollary 7.5**.: _There is a commutative diagram, where the vertical maps are cycle maps and the left horizontal maps are the dimensional reduction maps \(i^{\prime}_{*}p^{\prime*}\)._
\[\begin{CD}
\operatorname{gr}_{\bullet}G_{0}^{\operatorname{top}}(\mathcal{P}(d)) @>{\sim}>> \operatorname{gr}_{\bullet}K_{0}^{\operatorname{top}}(\operatorname{MF}^{\operatorname{gr}}(\mathcal{X}(d),\operatorname{Tr}W)) @>{\sim}>> \operatorname{gr}_{\bullet}K_{0}^{\operatorname{top}}(\operatorname{MF}(\mathcal{X}(d),\operatorname{Tr}W))\\
@V{\mathrm{c}}VV @V{\mathrm{c}}VV @V{\mathrm{c}}VV\\
\widetilde{H}_{0}^{\operatorname{BM}}(\mathcal{P}(d)) @>{\sim}>> \widetilde{H}^{0}(\mathcal{X}(d),\varphi_{\operatorname{Tr}W}[-1]) @>{\sim}>> \widetilde{H}^{0}(\mathcal{X}(d),\varphi_{\operatorname{Tr}W}^{\operatorname{inv}}[-2]).
\end{CD}\]
_Here we have suppressed the cohomological degrees to make the diagram simpler._

Proof.: The claim follows from Proposition 5.2 and Corollary 7.4.

**Theorem 7.6**.: _For an arbitrary quiver \(Q^{\circ}\), the cycle map (7.4) for \(\mathcal{P}(d)\) induces a cycle map_
\[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v})\cong\mathrm{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v}^{\mathrm{red}})\to H^{-2\ell}(P(d),\mathcal{BPS}_{d,v}^{p}). \tag{7.7}\]
_If \(Q^{\circ}\) satisfies Assumption 2.2, then (7.7) is an isomorphism._

Proof.: The isomorphism \(\mathrm{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v})\cong\mathrm{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v}^{\mathrm{red}})\) follows from Proposition 7.1. Consider the diagram, whose lower square commutes from Corollary 7.5 and the top horizontal map is an isomorphism by Corollary 7.4:

By Theorem 6.3, the map \(\beta\) has image in
\[H^{\dim\mathcal{P}(d)-2\ell}(\mathscr{X}(d),\mathcal{BPS}_{d,v})\subset H^{2\dim\mathcal{Y}(d)-2\ell}(\mathscr{X}(d),\varphi_{\mathrm{Tr}\,W}[-1]).\]
If \(Q^{\circ}\) satisfies Assumption 2.2, it is an isomorphism onto \(H^{\dim\mathcal{P}(d)-2\ell}(\mathscr{X}(d),\mathcal{BPS}_{d,v})\) by Theorem 6.2. By (7.3), the map \(\alpha\) has image in \(H^{-2\ell}(P(d),\mathcal{BPS}_{d,v}^{p})\), and, if \(Q^{\circ}\) satisfies Assumption 2.2, it is an isomorphism onto \(H^{-2\ell}(P(d),\mathcal{BPS}_{d,v}^{p})\).

**Remark 7.7**.: There are two perverse filtrations on \(H^{\mathrm{BM}}(\mathcal{P}(d))\) for any quiver \(Q^{\circ}\). One of them is induced from the tripled quiver with potential \((Q,W)\) and studied in [13]; the first non-zero piece in the perverse filtration is \({}^{p}\tau^{\leqslant 1}\pi_{d*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathscr{X}(d)}=\mathcal{BPS}_{d}\). Another filtration is induced from the map \(\pi_{P,d}\) and studied in [11], where it is called the "less perverse filtration"; the first non-zero piece in the perverse filtration is \({}^{p}\tau^{\leqslant 0}\pi_{P,d*}\omega_{\mathcal{P}(d)^{\mathrm{cl}}}\).
Note that, for any \(v\in\mathbb{Z}\), \({}^{p}\tau^{\leqslant 1}\pi_{d*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathscr{X}(d)}\) is a direct summand of \(\mathcal{BPS}_{d,v}\), which itself is a direct summand of \({}^{p}\tau^{\leqslant 0}\pi_{P,d*}\omega_{\mathcal{P}(d)^{\mathrm{cl}}}\). Thus the topological K-theory of quasi-BPS categories (for \(Q^{\circ}\) satisfying Assumption 2.2, and for any \(v\in\mathbb{Z}\)) lies between the first non-zero pieces of these two perverse filtrations.

**Remark 7.8**.: Davison-Hennecart-Schlegel Mejia [15, 16] computed the preprojective BPS sheaves in terms of the intersection complexes of the varieties \(P(d)\).

We note the following numerical corollary of Theorem 7.6.

**Corollary 7.9**.: _Let \(Q^{\circ}\) be a quiver satisfying Assumption 2.2 and let \((d,v)\in\mathbb{N}^{I}\times\mathbb{Z}\). Then_
\[\dim_{\mathbb{Q}}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v})=\dim_{\mathbb{Q}}H^{\cdot}(P(d),\mathcal{BPS}^{p}_{d,v}).\]

Proof.: The map (7.5) is injective by Proposition 7.2. The conclusion then follows from Theorem 7.6.

## 8. Examples

In this section, we discuss some explicit examples of computations of the topological K-theory of quasi-BPS categories. All vector spaces considered in this section are \(\mathbb{Q}\)-vector spaces. We first note a preliminary proposition.

**Proposition 8.1**.: _Let \(Q=(I,E)\) be a symmetric quiver, let \(d\in\mathbb{N}^{I}\), and let \(v\in\mathbb{Z}\). Then_
\[\dim K_{0}^{\mathrm{top}}(\mathbb{M}(d)_{v})=\#\left(M(d)^{+}\cap(\mathbf{W}(d)+v\tau_{d}-\rho)\right).\]

Proof.: There is a natural isomorphism
\[K_{0}(\mathcal{X}(d))\xrightarrow{\sim}K_{0}^{\mathrm{top}}(\mathcal{X}(d))\cong K_{0}(BG(d)).\]
The category \(\mathbb{M}(d)_{v}\) is admissible in \(D^{b}(\mathcal{X}(d))\), so the above isomorphism restricts to the isomorphism
\[K_{0}(\mathbb{M}(d)_{v})\xrightarrow{\sim}K_{0}^{\mathrm{top}}(\mathbb{M}(d)_{v}).\]
The generators of \(K_{0}(\mathcal{X}(d))\) are the classes of the vector bundles \(\mathcal{O}_{\mathcal{X}(d)}\otimes\Gamma_{G(d)}(\chi)\), where \(\chi\) is a dominant weight of \(G(d)\) and \(\Gamma_{G(d)}(\chi)\) is the irreducible representation of \(G(d)\) of highest weight \(\chi\). The computation
\[\dim K_{0}(\mathbb{M}(d)_{v})=\#\left(M(d)^{+}\cap(\mathbf{W}(d)+v\tau_{d}-\rho)\right)\]
follows then from the definition of \(\mathbb{M}(d)_{v}\).

**Remark 8.2**.: In view of Proposition 8.1 and Theorem 6.6, the total intersection cohomology of the spaces \(X(d)\) can be determined by counting lattice points inside the polytope \((\mathbf{W}(d)+v\tau_{d}-\rho)\).

### Toric examples

Let \(g\in\mathbb{N}\). Consider the quiver \(Q=(I,E)\), where \(I=\{0,1\}\) and \(E\) has one loop at \(0\), one loop at \(1\), \(2g+1\) edges \(\{e_{1},\ldots,e_{2g+1}\}\) from \(0\) to \(1\) and \(2g+1\) edges \(\{\overline{e}_{1},\ldots,\overline{e}_{2g+1}\}\) from \(1\) to \(0\). The following is a figure for \(g=1\).

Fix \(d=(1,1)\in\mathbb{N}^{I}\). Then
\[\mathcal{X}(d)=\left(\mathbb{C}^{2}\oplus\mathbb{C}^{2(2g+1)}\right)/(\mathbb{C}^{*})^{2}.\]
The diagonal \(\mathbb{C}^{*}\hookrightarrow(\mathbb{C}^{*})^{2}\) acts trivially on \(\mathbb{C}^{2}\oplus\mathbb{C}^{2(2g+1)}\). The factor \(\mathbb{C}^{*}\) corresponding to the vertex \(1\) acts with weight \(0\) on \(\mathbb{C}^{2}\), weight \(1\) on \(\mathbb{C}^{2g+1}\), and weight \(-1\) on \(\mathbb{C}^{2g+1}\).
We consider the stack, which is the \(\mathbb{C}^{*}\)-rigidification of \(\mathcal{X}(d)\): \[\mathcal{X}^{\prime}(d)=\left(\mathbb{C}^{2}_{0}\oplus\mathbb{C}^{2g+1}_{1} \oplus\mathbb{C}^{2g+1}_{-1}\right)/\mathbb{C}^{*}.\] The GIT quotient for any non-trivial stability condition provides a small resolution of singularities: \[\tau\colon Y:=\left(\mathbb{C}^{2}_{0}\oplus\mathbb{C}^{2g+1}_{1}\oplus \mathbb{C}^{2g+1}_{-1}\right)^{\mathrm{ss}}/\mathbb{C}^{*}=\mathbb{C}^{2} \times\operatorname{Tot}_{\mathbb{P}^{2g}}\left(\mathcal{O}(-1)^{2g+1}\right) \to X(d).\] Here, _small_ means that \(\dim Y\times_{X(d)}Y=\dim X(d)\) and \(Y\times_{X(d)}Y\) has a unique irreducible component of maximal dimension. Then, by the BBDG decomposition theorem, we have that \(\tau_{*}\mathrm{IC}_{Y}=\mathrm{IC}_{X(d)}\). We decorate the BPS sheaves with a superscript zero to indicate that the potential is zero. We obtain that: \[\mathcal{BPS}^{0}_{d}=\tau_{*}\mathrm{IC}_{Y}=\mathrm{IC}_{X(d)}\text{ and }\mathcal{BPS}^{0}_{(1,0)}=\mathcal{BPS}^{0}_{(0,1)}=\mathrm{IC}_{\mathbb{C}}. \tag{8.1}\] **Proposition 8.3**.: _If \(v\) is odd, then \(\mathbb{M}(d)_{v}\cong D^{b}(Y)\) and \(\mathcal{BPS}^{0}_{d,v}=\mathcal{BPS}^{0}_{d}\)._ _If \(v\) is even, then \(\mathbb{M}(d)_{v}\) has a semiorthogonal decomposition with summands equivalent to \(D^{b}(Y)\) and \(D^{b}(\mathbb{C}^{2})\), and \(\mathcal{BPS}^{0}_{d,v}=\mathcal{BPS}^{0}_{d}\oplus\mathcal{BPS}^{0}_{(1,0)} \boxtimes\mathcal{BPS}^{0}_{(0,1)}\)._ Proof.: The category \(\mathbb{M}(d)_{v}\) is the subcategory of \(D^{b}(\mathscr{X}(d))\) generated by the line bundles \(\mathcal{O}_{\mathscr{X}(d)}(w\beta_{2}+(v-w)\beta_{1})\) for \(w\in\mathbb{Z}\) such that \[\frac{v}{2}\leqslant w\leqslant 2g+1+\frac{v}{2}. \tag{8.2}\] One can show that \(\mathbb{M}(d)_{v}\) is equivalent to the "window subcategory" (in the sense of [11]) of \(D^{b}(\mathscr{X}^{\prime}(d))\) containing objects \(F\) such that the weights of \(\mathbb{C}^{*}\) on \(F|_{0}\) are in \(\left[\frac{v}{2},\frac{v}{2}+2g+1\right]\cap\mathbb{Z}\). If \(v\) is odd, then \(\mathbb{M}(d)_{v}\cong D^{b}(Y)\) by [11, Theorem 2.10]. The boundary points \(\frac{v}{2}\) and \(\frac{v}{2}+2g+1\) are not integers, so \(\mathcal{BPS}^{0}_{d,v}=\mathcal{BPS}^{0}_{d}\). If \(v\) is even, then \(\mathcal{BPS}^{0}_{d,v}=\mathcal{BPS}^{0}_{d}\oplus\mathcal{BPS}^{0}_{(1,0)} \boxtimes\mathcal{BPS}^{0}_{(0,1)}\). The fixed locus of the unique Kempf-Ness locus in the construction of \(Y\) is \((\mathbb{C}^{2}_{0}\oplus\mathbb{C}^{2g+1}_{1}\oplus\mathbb{C}^{2g+1}_{-1})^{ \mathbb{C}^{*}}=\mathbb{C}^{2}\). As a corollary of [11, Theorem 2.10], see the remark in [12, Equation (3)], the category \(\mathbb{M}(d)_{v}\) has a semiorthogonal decomposition with summands \(D^{b}(Y)\) and \(D^{b}(\mathbb{C}^{2})\). As a corollary of the above proposition and of the computations (8.1), we obtain the following: \[\dim K^{\mathrm{top}}_{0}(\mathbb{M}(d)_{v})\overset{(*)}{=}\dim H^{\cdot}(X( d),\mathcal{BPS}^{0}_{d,v})=\begin{cases}2g+1,\text{ if }v\text{ is odd},\\ 2g+2,\text{ if }v\text{ is even}.\end{cases}\] The equality \((*)\) is also the consequence (6.4) of Theorem 6.2. Note that the dimensions of \(K^{\mathrm{top}}(\mathbb{M}(d)_{v})\) can be computed immediately using Proposition 8.1 and (8.2), and then \((*)\) can be checked without using window categories. However, by Proposition 8.3, the equality \((*)\) is obtained as a corollary of the Atiyah-Hirzebruch theorem for the smooth varieties \(Y\) and \(\mathbb{C}^{2}\). 
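To make the count in (8.2) explicit in a small case, take \(g=1\): the window \(\frac{v}{2}\leqslant w\leqslant 3+\frac{v}{2}\) contains the three integers \(w\in\{1,2,3\}\) for \(v=1\), and the four integers \(w\in\{0,1,2,3\}\) for \(v=0\), in agreement with the dimensions \(2g+1\) and \(2g+2\) computed above.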
Further, Proposition 8.3 is useful when considering a non-zero potential for \(Q\). For example, consider the potential \[W:=\sum_{i=1}^{2g+1}e_{i}\overline{e}_{i}.\] Note that \(W\colon Y\to\mathbb{C}\) is smooth. The computation (8.1) implies that: \[\mathcal{BPS}_{d}=\varphi_{W}\mathrm{IC}_{X(d)}=\tau_{*}\varphi_{W}\mathrm{IC} _{Y}=0\text{ and }\mathcal{BPS}^{0}_{(1,0)}=\mathcal{BPS}^{0}_{(0,1)}=\mathrm{IC}_{\mathbb{C}}. \tag{8.3}\] The BPS sheaves have trivial monodromy. Further, Proposition 8.3 implies that: \[\mathbb{S}(d)_{v} \simeq\mathrm{MF}(Y,W)=0\text{ if }v\text{ is odd},\] \[\mathbb{S}(d)_{v} =\langle\mathrm{MF}(Y,W),\mathrm{MF}(\mathbb{C}^{2},0)\rangle \simeq\mathrm{MF}(\mathbb{C}^{2},0)\text{ if }v\text{ is even}.\] Let \(i\in\mathbb{Z}\). The following equality (which also follows by (6.4)) holds by a direct computation: \[\dim K_{i}^{\text{top}}(\mathbb{S}(d)_{v})=\dim H^{\cdot}(X(d),\mathcal{BPS}_{d,v })=\begin{cases}0,\text{ if }v\text{ is odd},\\ 1,\text{ if }v\text{ is even}.\end{cases}\] **Remark 8.4**.: A similar analysis can be done for any symmetric quiver \(Q=(I,E)\) (not necessarily satisfying Assumption 2.1) and a dimension vector \(d=(d^{i})_{i\in I}\in\mathbb{N}^{I}\) such that \(d^{i}\in\{0,1\}\) for every \(i\in I\). We do not give the details for the proofs. Let \(v\in\mathbb{Z}\) such that \(\gcd(\underline{d},v)=1\). Assume \(W=0\). One can show that, for a generic GIT stability \(\ell\in M(d)_{\mathbb{R}}^{W_{d}}\cong M(d)_{\mathbb{R}}\), the GIT quotient \(Y:=R(d)^{\ell\text{-ss}}/G(d)\cong R(d)^{\ell\text{-ss}}/\!\!/G(d)\) is a small resolution \[\tau\colon Y\to X(d).\] Then \(\mathcal{BPS}_{d}^{0}=\tau_{*}\text{IC}_{Y}.\) By [10], there is an equivalence: \[\mathbb{M}(d)_{v}\cong D^{b}(Y).\] The following equality (which is a corollary of Theorem 6.2) follows then by the Atiyah-Hirzebruch theorem for the smooth variety \(Y\): \[\dim K_{0}^{\text{top}}(\mathbb{M}(d)_{v})=\dim K_{0}^{\text{top}}(Y)=\dim H^{ \cdot}(Y)=\dim H^{\cdot}(X(d),\mathcal{BPS}_{d}^{0}).\] Similar computations can be done also for a general \(v\in\mathbb{Z}\). ### Quivers with one vertex and an odd number of loops Let \(g\in\mathbb{N}\). Consider \(Q\) the quiver with one vertex and \(2g+1\) loops. The following is a picture for \(g=1\). (8.4) For \(d\in\mathbb{N}\), recall the good moduli space map: \[\mathcal{X}(d):=\mathfrak{gl}(d)^{\oplus(2g+1)}/GL(d)\to X(d):=\mathfrak{gl}(d )^{\oplus(2g+1)}/\!\!/GL(d).\] For \(g>0\), the variety \(X(d)\) is singular. For every stability condition \(\ell\in M(d)_{\mathbb{R}}^{W_{d}}\), we have that \(\mathcal{X}(d)^{\ell\text{-ss}}=\mathcal{X}(d)\), so we do not obtain resolutions of singularities of \(X(d)\) as in the previous example. There are no known crepant geometric resolutions (in particular, small resolutions) of \(X(d)\). For \(\gcd(d,v)=1\), Spenko-Van den Bergh [14] proved that \(\mathbb{M}(d)_{v}\) is a twisted noncommutative crepant resolution of \(X(d)\). In view of Theorem 6.6, we regard \(\mathbb{M}(d)_{v}\) as the categorical analogue of a small resolution of \(X(d)\). Reineke [13] and Meinhardt-Reineke [12] provided multiple combinatorial formulas for the dimensions of the individual intersection cohomology vector spaces \(\operatorname{IH}^{\bullet}(X(d))\). As noted in Remark 8.2, Theorem 6.6 also provides combinatorial formulas for the total intersection cohomology of \(X(d)\). We explain that our formula recovers a formula already appearing in the work of Reineke [12, Theorem 7.1]. Fix \(v\in\mathbb{Z}\). 
By Proposition 8.1, we need to determine the number of (integral, dominant) weights \(\chi=\sum_{i=1}^{d}c_{i}\beta_{i}\in M(d)^{+}\) with \(\sum_{i=1}^{d}c_{i}=v\) and \(c_{i}\geqslant c_{i-1}\) for every \(2\leqslant i\leqslant d\), such that
\[\chi+\rho-v\tau_{d}\in\frac{2g+1}{2}\text{sum}[0,\beta_{i}-\beta_{j}], \tag{8.5}\]
where the Minkowski sum is over all \(1\leqslant i,j\leqslant d\). Define \(\widetilde{\chi}\in M(d)\) and \(\widetilde{c}_{i}\in\mathbb{Z}\) for \(1\leqslant i\leqslant d\) as follows:
\[\widetilde{\chi}:=\chi-g\cdot(2\rho)=\sum_{i=1}^{d}\widetilde{c}_{i}\beta_{i}.\]
Note that, for every \(2\leqslant i\leqslant d\), the inequality \(c_{i}\geqslant c_{i-1}\) becomes:
\[\widetilde{c}_{i}-\widetilde{c}_{i-1}+2g\geqslant 0. \tag{8.6}\]
A dominant weight \(\chi\) satisfies (8.5) if and only if, for all dominant cocharacters \(\lambda\) of \(T(d)\subset GL(d)\), we have:
\[\langle\lambda,\chi+\rho-v\tau_{d}\rangle\leqslant\frac{2g+1}{2}\langle\lambda,\mathfrak{g}^{\lambda>0}\rangle=\frac{2g+1}{2}\langle\lambda,\rho\rangle. \tag{8.7}\]

**Proposition 8.5**.: _The inequalities (8.7) hold for all dominant cocharacters \(\lambda\) if and only if they hold for the cocharacters \(\lambda_{k}(z)=(\overbrace{1,\dots,1}^{d-k},\overbrace{z,\dots,z}^{k})\in T(d)\) for \(1\leqslant k\leqslant d\)._

Proof.: In the cocharacter lattice, any dominant cocharacter \(\lambda\) is a linear combination with nonnegative coefficients of \(\lambda_{k}\) for \(1\leqslant k\leqslant d\). Then, if (8.7) holds for all \(\lambda_{k}\), it also holds for all dominant \(\lambda\).

We rewrite the conditions (8.7) for \(\lambda_{k}\) using the weight \(\widetilde{\chi}\):
\[\langle\lambda_{k},\widetilde{\chi}\rangle\leqslant\langle\lambda_{k},v\tau_{d}\rangle.\]
Alternatively, the condition above can be written as:
\[\sum_{i=d-k+1}^{d}\widetilde{c}_{i}\leqslant\frac{vk}{d}. \tag{8.8}\]

**Definition 8.6**.: Let \(\mathcal{H}_{d,v}^{2g+1}\) be the set of tuples of integers \((\widetilde{c}_{i})_{i=1}^{d}\in\mathbb{Z}^{d}\) satisfying the inequality (8.6) for every \(2\leqslant i\leqslant d\), the inequality (8.8) for every \(1\leqslant k\leqslant d\), and such that \(\sum_{i=1}^{d}\widetilde{c}_{i}=v\). Let \(H_{d,v}^{2g+1}:=\#\mathcal{H}_{d,v}^{2g+1}\).

**Remark 8.7**.: The numbers \(H_{d,0}^{2g+1}\) appear in combinatorics as "score sequences of complete tournaments", and in the study of certain \(\mathbb{C}^{*}\)-fixed points in the moduli of \(SL(n)\)-Higgs bundles, see [12, Section 7].

By Proposition 8.1, we have that:
\[\dim K_{0}^{\text{top}}(\mathbb{M}(d)_{v})=H_{d,v}^{2g+1}.\]
By Theorem 6.6, for any \(v\in\mathbb{Z}\) such that \(\gcd(d,v)=1\), we obtain that:
\[\dim\text{IH}^{\cdot}(X(d))=H_{d,v}^{2g+1}. \tag{8.9}\]
The above statement was already proved (by different methods) by Reineke and Meinhardt-Reineke by combining [12, Theorem 7.1] and [19, Theorem 4.6], see also [19, Section 4.3]. Note that we assume that the number of loops is odd in order to apply Theorem 6.2. In loc. cit., Reineke also provided combinatorial formulas for \(m\)-loop quivers for \(m\) even.
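To illustrate Definition 8.6 in the smallest non-trivial case, take the three loop quiver (\(g=1\)), \(d=2\), and \(v=1\). The conditions read \(\widetilde{c}_{1}+\widetilde{c}_{2}=1\), \(\widetilde{c}_{2}-\widetilde{c}_{1}+2\geqslant 0\), and \(\widetilde{c}_{2}\leqslant\frac{1}{2}\), whose only integral solution is \((\widetilde{c}_{1},\widetilde{c}_{2})=(1,0)\). Thus \(H_{2,1}^{3}=1\) and, by (8.9), \(\dim\mathrm{IH}^{\cdot}(X(2))=1\) for the three loop quiver. Similarly, for \(v=0\) one finds the two tuples \((0,0)\) and \((1,-1)\), so \(H_{2,0}^{3}=2\), cf. Remark 8.7.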
For \(\gcd(d,v)=1\) and \(n\in\mathbb{Z}_{\geqslant 1}\), the topological K-theory \(K^{\operatorname{top}}_{i}(\mathbb{M}(nd)_{nv})\) is computed from the intersection cohomology of \(X(e)\) for \(e\in\mathbb{N}\), and the set \(S^{nd}_{nv}\). The following is a corollary of Proposition 6.1: **Corollary 8.9**.: _For \(\gcd(d,v)=1\) and \(n\in\mathbb{Z}_{\geqslant 1}\), the set \(S^{nd}_{nv}\) consists of partitions \((d_{i})^{k}_{i=1}\) of \(nd\) such that \(d_{i}=n_{i}d\) for \((n_{i})^{k}_{i=1}\in\mathbb{N}^{k}\) a partition of \(n\)._ **Example 8.10**.: Suppose that \(g=0\). In this case, the variety \(X(d)\) is smooth: \[X(d)=\mathfrak{gl}(d)/\!\!/GL(d)\xrightarrow{\sim}\operatorname{Sym}^{d}( \mathbb{C})\cong\mathbb{C}^{d}.\] The above isomorphism is given by sending an element of \(\mathfrak{gl}(d)\) to the set of generalized eigenvalues. However \(X(d)^{\operatorname{st}}=\emptyset\) if \(d>1\), thus \(\mathcal{BPS}_{d}=\operatorname{IC}_{\mathbb{C}}\) if \(d=1\), and \(\mathcal{BPS}_{d}=0\) for \(d>1\). Then by Corollary 8.9, we have \(\mathcal{BPS}_{d,v}=0\) unless \(d|v\), case in which \(\mathcal{BPS}_{d,v}=\operatorname{Sym}^{d}(\mathcal{BPS}_{1})=\operatorname{IC }_{X(d)}\). Thus for \(g=0\), we have \[\dim H\left(X(d),\mathcal{BPS}_{d,v}\right)=\begin{cases}1,\text{ if }d|v,\\ 0,\text{ otherwise}.\end{cases} \tag{8.10}\] On the other hand, by [10, Lemma 3.2] we have that \(\mathbb{M}(d)_{v}=0\) unless \(d|v\), case in which it is the subcategory of \(D^{b}(\mathfrak{X}(d))\) generated by \(\mathcal{O}_{\mathfrak{X}(d)}(v\tau_{d})\), and thus equivalent to \(D^{b}(X(d))\), see [10, Lemma 3.3]. Then: \[\dim K^{\operatorname{top}}_{0}(\mathbb{M}(d)_{v})=\begin{cases}1,\text{ if }d|v,\\ 0,\text{ otherwise}.\end{cases} \tag{8.11}\] For \(g=0\), we can thus verify (6.4) by the direct computations (8.10), (8.11). ### The three loop quiver In this subsection, we make explicit the corollary of Theorem 6.2 for the three loop quiver (8.4) with loops \(\{x,y,z\}\) and with potential \(W=x[y,z]\). The quasi-BPS categories \(\mathbb{S}(d)_{v}\) are the quasi-BPS categories of \(\mathbb{C}^{3}\) and are admissible in the DT category \(\mathcal{DT}(d)\) studied in [PTa]. The quasi-BPS categories \(\mathbb{T}(d)_{v}\) are the quasi-BPS categories of \(\mathbb{C}^{2}\) and are admissible in \(D^{b}(\mathcal{C}(d))\), where \(\mathcal{C}(d)\) is the commuting stack of matrices of size \(d\). For \(n\in\mathbb{N}\), we denote by \(p_{2}(n)\) the number of partitions of \(n\). **Proposition 8.11**.: _Let \((d,v)\in\mathbb{N}\times\mathbb{Z}\) be coprime, let \(n\in\mathbb{N}\) and \(i\in\mathbb{Z}\). Then:_ \[\dim K^{\operatorname{top}}_{i}(\mathbb{S}(nd)_{nv})=\dim K^{\operatorname{ top}}_{0}(\mathbb{T}(nd)_{nv})=p_{2}(n).\] Proof.: By a theorem of Davison [10, Theorem 5.1], we have that \[\mathcal{BPS}_{e}=\operatorname{IC}_{\mathbb{C}^{3}}\] for every \(e\in\mathbb{N}\), where \(\mathbb{C}^{3}\hookrightarrow X(e)\) is the subvariety parameterizing three diagonal matrices. Then \(\dim H\dot{\cdot}(X(e),\mathcal{BPS}_{e})=1\), and so \(\operatorname{Sym}^{k}\left(H\dot{\cdot}(X(e),\mathcal{BPS}_{e})\right)=1\) for every positive integers \(e\) and \(k\). Then \(H\left(X(nd),\mathcal{BPS}_{A}\right)\) is also one dimensional for every \(A\in S^{nd}_{nv}\). Note that \(\#S^{nd}_{nv}=p_{2}(n)\) by Corollary 8.9. 
Then
\[H^{\cdot}(X(nd),\mathcal{BPS}_{nd,nv})=\bigoplus_{A\in S^{nd}_{nv}}H^{\cdot}(X(nd),\mathcal{BPS}_{A})=\mathbb{Q}^{\oplus p_{2}(n)}.\]
The monodromy is trivial on \(H^{\cdot}(X(nd),\mathcal{BPS}_{nd,nv})\). By Theorems 6.2 and 7.6, we obtain the desired computations.

**Remark 8.12**.: By Theorem 6.2, the topological K-theory of quasi-BPS categories may be determined whenever one can compute the BPS cohomology and the set \(S^{d}_{v}\). Proposition 8.11 is an example of such a computation. We mention two other computations for the three loop quiver with potentials \(W^{\prime}:=x[y,z]+z^{a}\) (for \(a\geqslant 2\)) and \(W^{\prime\prime}:=x[y,z]+yz^{2}\). Let \(\mathcal{BPS}^{\prime}_{d}\) and \(\mathbb{S}^{\prime}(d)_{v}\) be the BPS sheaves and the quasi-BPS categories of \((Q,W^{\prime})\). Denote similarly the BPS sheaves and the quasi-BPS categories of \((Q,W^{\prime\prime})\). By [10, Theorem 1.5], we have that \(H^{\cdot}(X(d),\mathcal{BPS}^{\prime}_{d})^{\mathrm{inv}}=0\) because \(H^{\cdot}(\mathbb{C},\varphi_{t^{a}})^{\mathrm{inv}}=0\). Then Theorem 6.2 implies that, for every \(i,v\in\mathbb{Z}\) with \(\gcd(d,v)=1\):
\[K^{\mathrm{top}}_{i}(\mathbb{S}^{\prime}(d)_{v})=0.\]
By [10, Corollary 7.2], we have that \(H^{\cdot}(X(d),\mathcal{BPS}^{\prime\prime}_{d})^{\mathrm{inv}}=H^{\cdot}(X(d),\mathcal{BPS}^{\prime\prime}_{d})\) is one dimensional. As in Proposition 8.11, we have that, for every \(i,v\in\mathbb{Z}\):
\[\dim K^{\mathrm{top}}_{i}(\mathbb{S}^{\prime\prime}(d)_{v})=p_{2}(\gcd(d,v)).\]

## 9. Etale covers of preprojective algebras

In this section, we prove an extension of Theorem 7.6 to etale covers of preprojective stacks which we use to compute the topological K-theory of quasi-BPS categories of K3 surfaces in [26]. We first define quasi-BPS categories and BPS cohomology for etale covers of preprojective stacks. BPS sheaves are defined by base-change from the BPS sheaves of preprojective algebras. Quasi-BPS categories are defined via graded matrix factorizations and the Koszul equivalence, see Subsection 9.2. Recall that Theorem 7.6 follows, via dimensional reduction and the Koszul equivalence, from Theorem 6.2. The two main ingredients in the proof of Theorem 6.2 are the semiorthogonal decomposition from Theorem 2.5 and the construction of the cycle map (6.9) from Theorem 6.3. The analogous statements for etale covers are Propositions 9.5 and 9.6, respectively. We will use the notations and constructions from Subsection 7.1. Throughout this section, we fix a quiver \(Q^{\circ}\) satisfying Assumption 2.2 and a dimension vector \(d\in\mathbb{N}^{I}\). We begin by discussing the setting and by stating Theorem 9.2, the main result of this section.

### Preliminaries

Let \(E\) be an affine variety with an action of \(G:=G(d)\) and with a \(G\)-equivariant etale map
\[e\colon E\to R^{\circ}(d)\oplus R^{\circ}(d)^{\vee}.\]
Consider the quotient stacks with good moduli spaces
\[\pi_{F}\colon\mathcal{F}:=(E\oplus\mathfrak{g})/G\to F:=(E\oplus\mathfrak{g})/\!\!/G.\]
Consider the moment map \(\mu\colon E\to\mathfrak{g}^{\vee}\) and the induced function \(f\colon\mathcal{F}\to\mathbb{C}\), where \(f(x,v)=\langle\mu(x),v\rangle\) for \(x\in E\) and \(v\in\mathfrak{g}\).
Consider the quotient stack with good moduli space
\[\pi_{L}\colon\mathcal{L}:=\mu^{-1}(0)/G\to L:=\mu^{-1}(0)/\!\!/G.\]
There are maps:
\[\begin{CD}
\mathcal{L}^{\mathrm{cl}} @>{e}>> \mathcal{P}(d)^{\mathrm{cl}}\\
@V{\pi_{L}}VV @VV{\pi_{P,d}}V\\
L @>{e}>> P(d).
\end{CD}\]
Throughout this section, we assume that both horizontal maps in the above diagram are etale. Note that the moment map \(\mu\) has image in the traceless subalgebra \(\mathfrak{g}_{0}^{\vee}\cong\mathfrak{g}_{0}\subset\mathfrak{g}\). Let \(\mu_{0}\colon E\to\mathfrak{g}_{0}^{\vee}\) and let \(\mathcal{L}^{\mathrm{red}}:=\mu_{0}^{-1}(0)/G\).

**Definition 9.1**.: Let \(\mathcal{BPS}_{d}^{p}\in\mathrm{Perv}(P(d))\) be the preprojective BPS sheaf and let
\[\mathcal{BPS}^{L}:=e^{*}(\mathcal{BPS}_{d}^{p})\in\mathrm{Perv}(L). \tag{9.1}\]
One defines \(\mathcal{BPS}_{\delta}^{L}\) for \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\) and \(\mathcal{BPS}_{d,v}^{L}:=\mathcal{BPS}_{v\tau_{d}}^{L}\) as in (7.2).

By Theorem 2.9, there is a semiorthogonal decomposition:
\[D^{b}\left(\mathcal{P}(d)\right)_{v}=\big{\langle}\mathbb{A}(d)_{v},\mathbb{T}(d)_{v}\big{\rangle}\]
for any \(v\in\mathbb{Z}\). The purpose of this section is to prove the following:

**Theorem 9.2**.: _Let \(v\in\mathbb{Z}\). There are subcategories \(\mathbb{T}=\mathbb{T}(L)_{v}\) and \(\mathbb{A}=\mathbb{A}(L)_{v}\) of \(D^{b}(\mathcal{L})_{v}\) such that:_

1. _there is a semiorthogonal decomposition_ \(D^{b}(\mathcal{L})_{v}=\langle\mathbb{A},\mathbb{T}\rangle\)_,_
2. _if_ \(e\) _is the identity, then_ \(\mathbb{T}=\mathbb{T}(d)_{v}\) _and_ \(\mathbb{A}=\mathbb{A}(d)_{v}\)_,_
3. _if_ \(h\colon E^{\prime}\to E\) _is an etale map inducing_ \(e^{\prime}:=e\circ h\colon E^{\prime}\to R^{\circ}(d)\oplus R^{\circ}(d)^{\vee}\)_, and if we consider_ \(\pi_{L^{\prime}}\colon\mathcal{L}^{\prime}\to L^{\prime}\) _and the categories_ \(\mathbb{A}(L^{\prime}),\mathbb{T}(L^{\prime})\subset D^{b}(\mathcal{L}^{\prime})\) _for_ \(E^{\prime}\)_, then_ \(h\) _induces functors_ \(h^{*}\colon\mathbb{T}(L)_{v}\to\mathbb{T}(L^{\prime})_{v}\) _and_ \(h^{*}\colon\mathbb{A}(L)_{v}\to\mathbb{A}(L^{\prime})_{v}\)_,_
4. _for any_ \(i,\ell\in\mathbb{Z}\)_, the cycle map (3.6) for_ \(\mathcal{L}\) _induces isomorphisms_
\[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathbb{T})\xrightarrow{\sim}H^{-2\ell-i}(L,\mathcal{BPS}_{d,v}^{L}).\]

_Further, one can also define categories \(\mathbb{T}^{\mathrm{red}},\mathbb{A}^{\mathrm{red}}\subset D^{b}(\mathcal{L}^{\mathrm{red}})_{v}\) which satisfy the analogous conditions to (1)-(4) above. In particular, the map \(l^{\prime}\colon\mathcal{L}^{\mathrm{red}}\to\mathcal{L}\) induces an isomorphism_
\[\mathrm{c}\circ l^{\prime}_{*}\colon\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathbb{T}^{\mathrm{red}})\xrightarrow{\sim}\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathbb{T})\xrightarrow{\sim}H^{-2\ell-i}(L,\mathcal{BPS}_{d,v}^{L}). \tag{9.2}\]

We will only explain the constructions for \(\mathcal{L}\), the case of \(\mathcal{L}^{\mathrm{red}}\) is similar. In Subsection 9.2, we define the categories \(\mathbb{T}\) and \(\mathbb{A}\) using graded matrix factorizations and the Koszul equivalence. In Subsection 9.3, we prove the third claim in Theorem 9.2.
There is a Cartesian diagram, where the maps \(e\) are etale maps: \[\begin{CD}\mathcal{F}@>{e}>{}>\mathcal{X}(d)\\ @V{\pi_{F}}V{\pi_{X,d}}V@V{\pi_{X,d}}V{}V\\ F@>{e}>{}>X(d).\end{CD}\] By Theorem 2.9, there is a semiorthogonal decomposition \[D^{b}\big{(}\mathcal{X}(d)\big{)}_{v}=\big{\langle}\mathbb{B}(d)_{v},\mathbb{ M}(d)_{v}\big{\rangle}. \tag{9.3}\] Define subcategories \(\mathbb{B}=\mathbb{B}(F)\), \(\mathbb{M}=\mathbb{M}(F)\) of \(D^{b}(\mathcal{F})_{v}\) to be classically generated (see Subsection 2.15) by \(e^{*}\mathbb{B}(d)_{v}\), \(e^{*}\mathbb{M}(d)_{v}\) respectively. Note that, for \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\), we can define analogously \[\mathbb{M}(\delta)=\mathbb{M}(F;\delta)\subset D^{b}(\mathcal{F}). \tag{9.4}\] Lemma 2.11 implies that: **Corollary 9.3**.: _There is a semiorthogonal decomposition_ \[D^{b}(\mathcal{F})_{v}=\langle\mathbb{B},\mathbb{M}\rangle.\] _If \(e\) is the identity, then \(\mathbb{M}(d)=\mathbb{M}(d)_{v}\) and \(\mathbb{B}(d)=\mathbb{B}(d)_{v}\)._ Consider the category of graded matrix factorizations \(\operatorname{MF}^{\operatorname{gr}}(\mathcal{F},f)\), where the grading is of weight \(2\) for the summand \(\mathfrak{g}\) and is of weight \(0\) on \(E\). By the Koszul equivalence, we have that: \[\kappa_{L}\colon D^{b}(\mathcal{L})\xrightarrow{\sim}\operatorname{MF}^{ \operatorname{gr}}(\mathcal{F},f). \tag{9.5}\] Define the subcategories of \(D^{b}(\mathcal{L})\): \[\mathbb{T}=\mathbb{T}(L):=\kappa_{L}^{-1}\left(\operatorname{MF}^{ \operatorname{gr}}(\mathbb{M},f)\right),\ \mathbb{A}=\mathbb{A}(L):=\kappa_{L}^{-1}\left( \operatorname{MF}^{\operatorname{gr}}(\mathbb{B},f)\right).\] By [PTa, Proposition 2.5], we obtain: **Corollary 9.4**.: _The properties (1), (2), and (3) in the statement of Theorem 9.2 hold for the categories \(\mathbb{A}\) and \(\mathbb{T}\) of \(D^{b}(\mathcal{L})\)._ We also need a version of Theorem 2.8 for etale covers. Consider the forget-the-framing map \(\tau_{\alpha}\colon\mathcal{X}^{\alpha f}(d)^{\operatorname{ss}}\to\mathcal{X }(d)\). Let \(\mathcal{F}^{\alpha f}\) be such that the following diagram is Cartesian: (9.6) Recall the semiorthogonal decomposition of Theorem 2.5: \[D^{b}\left(\mathcal{X}^{\alpha f}(d)^{\operatorname{ss}}\right)=\Big{\langle} \tau_{\alpha}^{*}\left(\otimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}\right) \Big{\rangle}.\] For a partition \(\underline{d}:=(d_{i})_{i=1}^{k}\) of \(d\in\mathbb{N}^{I}\) and for integer weights \(\underline{v}:=(v_{i})_{i=1}^{k}\), define \(\mathbb{M}(\underline{d},\underline{v})\subset D^{b}\left(\mathcal{F}^{ \alpha f}\right)\) to be classically generated by \(e^{*}\tau_{\alpha}^{*}\left(\otimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}\right)\). By Lemma 2.11, we obtain that: **Proposition 9.5**.: _There is a semiorthogonal decomposition_ \[D^{b}\left(\mathcal{F}^{\alpha f}\right)=\Big{\langle}\mathbb{M}(\underline{ d},\underline{v})\Big{\rangle},\] _where the left hand side is as in Theorem 2.5._ The category \(\mathbb{M}(\underline{d},\underline{v})\) can be described via the Hall product, same as in Theorem 2.5. Let \(\lambda\) be an antidominant cocharacter associated to \((d_{i})_{i=1}^{k}\). Consider the diagram: \[\mathcal{F}^{\lambda}\xrightarrow{q_{F}}\mathcal{F}^{\lambda\geqslant 0} \xrightarrow{p_{F}}\mathcal{F}.\] There is an etale map \(e\colon\mathcal{F}^{\lambda}\to\mathcal{X}(d)^{\lambda}\cong\times_{i=1}^{k} \mathcal{X}(d_{i})\). 
Then the Hall product \[*_{F}=p_{F*}q_{F}^{*}\colon D^{b}(\mathcal{F}^{\lambda})\to D^{b}(\mathcal{F})\] is base-change of the categorical Hall product \(D^{b}\big{(}\times_{i=1}^{k}\mathcal{X}(d_{i})\big{)}\cong\otimes_{i=1}^{k}D^{ b}(\mathcal{X}(d_{i}))\to D^{b}(\mathcal{X}(d))\). Let \(\widetilde{\mathbb{M}}(\underline{d},\underline{v})\subset D^{b}\left( \mathcal{F}(\underline{d})\right)\) be the subcategory classically generated by \(e^{*}\left(\otimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}\right)\). There is then an equivalence: \[\tau_{\alpha,F}^{*}\circ*_{F}\colon\widetilde{\mathbb{M}}(\underline{d}, \underline{v})\xrightarrow{\sim}\mathbb{M}(\underline{d},\underline{v}). \tag{9.7}\] ### Comparison with BPS cohomology Recall the notation from Subsection 7.1. Consider the commutative diagram: Recall the sheaf \(\mathcal{BPS}^{L}\in\operatorname{Perv}(L)\) defined in (9.1) and consider the BPS sheaf \(\mathcal{BPS}_{d}\in\operatorname{Perv}(X(d))\) for the tripled quiver with potential \((Q,W)\) associated to \(Q^{\circ}\). Define the BPS sheaf: \[\mathcal{BPS}^{F}=e^{*}(\mathcal{BPS}_{d})\in\operatorname{Perv}(F).\] For \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\), define \[\mathcal{BPS}_{\delta}^{F}\in D_{\operatorname{con}}^{b}(F),\,\mathcal{BPS}_ {d,v}^{F}:=\mathcal{BPS}_{v\tau_{d}}^{F}\] as in (6.5). Note that, by base-change of the decomposition (6.18), we obtain the analogous decomposition for \(\pi_{F}\colon\mathcal{F}\to F\): \[\pi_{F*}\varphi_{f}\mathrm{IC}_{\mathcal{F}}[-1]=\bigoplus_{A\in\mathcal{P}}e ^{*}(\mathrm{Q}_{A}). \tag{9.8}\] By base-change of (6.17), we obtain the following decomposition for the map \(\pi_{\alpha,F}:=\pi_{F}\circ\tau_{\alpha,F}\colon\mathcal{F}^{\alpha f}\to F\): \[\pi_{\alpha,F*}\varphi_{f}\mathbb{Q}_{\mathcal{F}^{\alpha f}}[\dim\mathcal{F} -1]=\bigoplus_{A\in\mathcal{P}_{\alpha}}e^{*}(\mathrm{Q}_{A}). \tag{9.9}\] The monodromy on \(H^{\bullet}(\mathcal{F},\varphi_{f})\) is trivial, so there is a cycle map: \[\mathrm{c}\colon\operatorname{gr}_{a}K_{i}^{\operatorname{top}}\left( \operatorname{MF}(\mathcal{F},f)\right)\xrightarrow{\sim}H^{2\dim\mathcal{F}- i-2a}(\mathcal{F},\varphi_{f}\mathbb{Q}_{\mathcal{F}}[-1])\oplus H^{2\dim \mathcal{F}-i-2a}(\mathcal{F},\varphi_{f}\mathbb{Q}_{\mathcal{F}}[-2]). \tag{9.10}\] We now define a cycle map from topological K-theory of quasi-BPS categories to BPS cohomology, which is the analogue of Theorem 6.2. **Proposition 9.6**.: _Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\) and recall the categories \(\mathbb{M}(\delta)\) from (9.4). The cycle map (9.10) has image in_ \[\mathrm{c}\colon\operatorname{gr}_{a}K_{i}^{\operatorname{top}}\left( \operatorname{MF}(\mathbb{M}(\delta),f)\right)\to H^{\dim\mathcal{F}-i-2a}(F, \mathcal{BPS}_{\delta}^{F})\oplus H^{\dim\mathcal{F}-i-2a}(F,\mathcal{BPS}_{ \delta}^{F}[-1]). \tag{9.11}\] _Thus, for \(\delta=v\tau_{d}\) and \(\mathbb{M}=\mathbb{M}(v\tau_{d})\), the cycle map (9.10) has image in_ \[\mathrm{c}\colon\mathrm{gr}_{a}K_{i}^{\mathrm{top}}\left(\mathrm{MF}(\mathbb{M },f)\right)\to H^{\dim\mathcal{F}-i-2a}(F,\mathcal{BPS}_{d,v}^{F})\oplus H^{ \dim\mathcal{F}-i-2a}(F,\mathcal{BPS}_{d,v}^{F}[-1]). \tag{9.12}\] Proof.: The same argument used in the proof of Theorem 6.3 applies here. The \(\lambda\)-widths (see (6.28)) of the category \(\mathbb{M}(\delta)\) are equal to the \(\lambda\)-widths of the category \(\mathbb{M}(d;\delta)\) for all cocharacters \(\lambda\). The analogue of Proposition 6.15 then holds for \(\mathrm{MF}(\mathbb{M}(\delta),f)\). 
There is an explicit decomposition of \(\pi_{F*}\mathrm{IC}_{\mathcal{F}}\) obtained by base-change from (6.16), and where the summands are in the image of (the base-change of the) Hall product. In Subsection 6.6, we constructed the map (6.36), proved Proposition 6.16, and noted corollaries of Proposition 6.16. There are versions of the map (6.36) and of Proposition 6.16 by \(F\) by base-change, and the results in Subsection 6.6 also apply for \(\pi_{F*}\mathrm{IC}_{\mathcal{F}}\), and thus for \(\pi_{F*}\varphi_{f}\mathrm{IC}_{\mathcal{F}}[-1]\). We next prove the analogue of Theorem 6.2. **Proposition 9.7**.: _The cycle map (9.12) is an isomorphism._ Proof.: The same argument used to prove Theorem 6.2 applies here, see Subsection 6.3, that is, the statement follows from comparing summands in the semiorthogonal decomposition (9.5) with summands in the decomposition (9.9). The cycle map (9.12) is injective by (9.10) and the admissibility of \(\mathrm{MF}^{\mathrm{gr}}(\mathbb{M},f)\) in \(\mathrm{MF}^{\mathrm{gr}}(\mathcal{F},f)\). Consider a partition \(\underline{d}=(d_{i})_{i=1}^{k}\) of \(d\) and weights \(\underline{v}=(v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\). Consider the perverse sheaf \[\mathcal{BPS}_{\underline{d},\underline{v}}:=\oplus_{*}\left(\boxtimes_{i=1}^ {k}\mathcal{BPS}_{d_{i},v_{i}}\right)\in\mathrm{Perv}(X(d)),\] where \(\oplus\colon\times_{i=1}^{k}X(d_{i})\to X(d)\) is the direct sum map. By Proposition 9.6 for the disjoint union of \(k\) copies of \(Q\), dimension vector \((d_{i})_{i=1}^{k}\in\left(\mathbb{N}^{I}\right)^{k}\), and \(\delta=\sum_{i=1}^{k}v_{i}\tau_{d_{i}}\), there is an injective map for any \(i\in\mathbb{Z}\): \[\mathrm{gr}.K_{i}^{\mathrm{top}}\left(\mathbb{M}(\underline{d},\underline{v}) \right)\hookrightarrow H^{\,\left(F,e^{*}\mathcal{BPS}_{\underline{d}, \underline{v}}\right).}\] The claim now follows as in the proof of Theorem 6.2. We prove the analogue of Theorem 7.6. **Proposition 9.8**.: _The cycle map obtained by composing (9.10) with the forget-the-potential map is an isomorphism:_ \[\mathrm{c}\colon\mathrm{gr}_{a}K_{i}^{\mathrm{top}}\left(\mathrm{MF}^{\mathrm{ gr}}(\mathcal{F},f)\right)\xrightarrow{\sim}H^{2\dim\mathcal{F}-i-2a}(\mathcal{F}, \varphi_{f}[-1]). \tag{9.13}\] _Thus there is an isomorphism:_ \[\mathrm{c}\colon\mathrm{gr}_{a}K_{i}^{\mathrm{top}}\left(\mathrm{MF}^{\mathrm{ gr}}(\mathbb{M},f)\right)\to H^{\dim\mathcal{F}-i-2a}(F,\mathcal{BPS}_{d,v}^{F}). \tag{9.14}\] _Further, there is an isomorphism:_ \[\mathrm{gr}_{a}K_{i}^{\mathrm{top}}(\mathbb{T})\xrightarrow{\sim}H^{-2a-i}(L, \mathcal{BPS}_{d,v}^{L})\] Proof.: The isomorphism (9.13) follows from Proposition 5.2. The isomorphism (9.14) follows then from Proposition 9.6. The last isomorphism follows from (9.14) and the compatibility between dimensional reduction and the Koszul equivalence from Proposition 5.2. Proof of Theorem 9.2.: The first three properties hold by Corollary 9.4. The fourth property follows from Proposition 9.8. The statement for reduced stacks follows similarly. The isomorphism (9.2) also follows directly from Proposition 3.5. We also note the following analogue of Corollary 7.2. **Proposition 9.9**.: _The Chern character map_ \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\mathbb{T})\hookrightarrow G _{i}^{\operatorname{top}}(\mathcal{L})\to\bigoplus_{j\in\mathbb{Z}}H_{i+2j}^{ \operatorname{BM}}(\mathcal{L})\] _is injective._ Proof.: The proof is analogous to that of Corollary 7.2. 
The claim follows from Proposition 4.10, a version of Proposition 9.5 involving the potential, and the Koszul equivalence.
2309.13774
Combinatorial summation of Feynman diagrams: Equation of state of the 2D SU(N) Hubbard model
Feynman's diagrammatic series is a common language for a formally exact theoretical description of systems of infinitely-many interacting quantum particles, as well as a foundation for precision computational techniques. Here we introduce a universal framework for efficient summation of connected or skeleton Feynman diagrams for generic quantum many-body systems. It is based on an explicit combinatorial construction of the sum of the integrands by dynamic programming, at a computational cost that can be made only exponential in the diagram order on a classical computer and potentially polynomial on a quantum computer. We illustrate the technique by an unbiased diagrammatic Monte Carlo calculation of the equation of state of the $2D$ $SU(N)$ Hubbard model in an experimentally relevant regime, which has remained challenging for state-of-the-art numerical methods.
Evgeny Kozik
2023-09-24T23:07:56Z
http://arxiv.org/abs/2309.13774v4
# Combinatorial summation of Feynman diagrams: Equation of state of the \(2d\)\(Su(n)\) Hubbard model ###### Abstract Feynman's diagrammatic series is a common language for a formally exact theoretical description of systems of infinitely-many interacting quantum particles, as well as a foundation for precision computational techniques. Here we introduce a universal framework for efficient summation of connected or skeleton Feynman diagrams for generic quantum many-body systems. It is based on an explicit combinatorial construction of the sum of the integrands by dynamic programming, at a computational cost that can be made only exponential in the diagram order on a classical computer and potentially polynomial on a quantum computer. We illustrate the technique by an unbiased diagrammatic Monte Carlo calculation of the equation of state of the \(2D\)\(SU(N)\) Hubbard model in an experimentally relevant regime, which has remained challenging for state-of-the-art numerical methods. ## I Introduction In the diagrammatic Monte Carlo (DiagMC) approach to correlated systems [1; 2; 3; 4], thermodynamic observables or correlation functions are expressed in terms of a sum of all connected Feynman diagrams of the many-body perturbation theory [5], which is then sampled stochastically. Each diagram represents a formula for computing one term in this sum, which in the simplest case consists of a product of one-particle non-interacting Green's functions \(G^{0}\) and a number \(n\) (the diagram order) of interaction vertices \(V\)[66] (see Fig. 1a,b), integrated over all internal variables. The key advantage of this approach is that the diagrams are defined directly in the thermodynamic limit, circumventing the need to extrapolate the result with the system size, which is typically hard in unbiased quantum Monte Carlo methods due to the negative sign problem [6; 7]. However, the control of precision in DiagMC relies on its ability to accurately compute the sum of all diagrams to a sufficiently high order \(n\), which is inhibited by a factorially increasing with \(n\) number of diagrams and hence exploding Monte Carlo variance. Indeed, in correlated regimes all \(\sim n!\) order-\(n\) diagrams are typically of comparable magnitude [8], which largely negates the chief advantage of Monte Carlo - that of importance sampling. This suggests that the summation over diagram topologies and indices on which there is only weak dependence could be done deterministically to a similar effect, or more efficiently if such summation could be performed faster than in \(\mathcal{O}(n!)\) steps. For fermionic systems, where the diagrams have alternating signs, this also helps lower the Monte Carlo variance [9; 10; 11; 12]. Crucially, if the computational cost could be reduced to exponential in \(n\), it was shown in Ref. [11] (with an extension to divergent series [13], if necessary) that the computational time would scale only _polynomially_ with the inverse of the desired error bound. An instructive example is the \(SU(N)\)-symmetric Hubbard model for \(N\) species of fermions. An approximate large-\(N\) (pseudo-)spin symmetry emerges in multi-orbital condensed matter systems due to orbital degeneracy [14; 15]. It is relevant to the description of, e.g., transition-metal oxides and orbital-selective Mott and superconducting transitions [14; 16; 17; 18; 19], graphene and twisted bilayer graphene [15; 19; 20; 21], and is expected to harbour exotic phases of matter, such as topologically non-trivial spin liquids [22]. 
However, it poses a serious challenge for precision numerical methods owing to the additional exponential scaling of the Hilbert space with \(N\), aggravating the sign problem [23]. Existing DiagMC algorithms based on determinantal summation of connected diagrams [9; 10], which are very efficient in the \(SU(2)\) case, are limited by the rigid structure of the determinant: the \(\sim N^{2}/2\) choices for each of the \(n\) interaction lines increase the computational cost of summing all diagrams of order \(n\) by a factor \(\sim(N^{2}/2)^{n}\). The recent studies by Ibarra-Garcia-Padilla _et al._[24] using the determinantal quantum Monte Carlo (DQMC) [25; 26] and numerical linked-cluster expansion (NLCE) [27; 28] methods at finite temperature, and by Feng _et al._[29] using the auxiliary-field quantum Monte Carlo (AFQMC) method [30] with improvable constraints [31] at zero temperature, revealed a rich phase diagram of the \(SU(N)\) Hubbard model at \(N=3\) and density \(\langle n\rangle=1\). At large \(N\), however, unbiased numerical methods are currently outperformed by experimental realisations of the system with ultracold alkaline-earth-like atoms in optical lattices [32; 33; 34; 35; 36; 37; 38]--analogue quantum simulators [39; 40]--in accessing the regimes of low temperatures and strong correlations [37; 38]. Here we develop a framework for efficient evaluation of Feynman's diagrammatic series of arbitrary structure by deterministic summation of all diagram integrands. The approach is based on an explicit combinatorial construction of each term in the sum, one Green's function at a time, whereby at each step the result is maximally factorised into sums of single Green's functions by dynamic programming. Specifically, the result takes the form of a directed graph (Fig. 1d), with each node being a sum of contributions from its incoming edges, and each edge conveying the value of the previous node multiplied by a Green's function. In this approach, the \(SU(N)\) symmetry is accounted for by merely an additional multiplication of certain edges by a constant factor, while all connected diagrams of order \(n\) can be summed in at most \(\mathcal{O}(n^{3}4^{n})\) steps independently of \(N\). This is reduced for the special case of \(N=1\) (spinless fermions or bosons) and \(SU(2)\) Hubbard model to \(\mathcal{O}(n^{3}3^{n})\). The factorisation of the sum, which serves to minimise the number of repeated uses of each Green's function value, is the essence of the speed-up. As a byproduct, the result is also symmetrised over all \(n!\) permutations of interaction lines, helping to further reduce the variance of the DiagMC evaluation of the series. Following Ref. [11] (and [13] in the case of a divergent series), the exponential computational cost of this approach implies polynomial scaling of the calculation time with the required accuracy. The approach admits a vector formulation, which is potentially suitable for a realisation on a quantum computer with a further exponential speed-up. We apply the combinatorial summation (CoS) technique to a calculation of the equation of state (EoS) of the \(2D\)\(SU(N)\) Hubbard model in the case of \(N=6\), which is relevant for experiments, but hard for numerical methods. 
We first address the low-temperature regime studied very recently by Pasqualetti _et al._[38], where the system was realised using the 6 nuclear spin states of \({}^{173}\)Yb atoms loaded in an optical lattice, and the experimentally obtained EoS was cross-benchmarked against unbiased DQMC calculations. The range of the CoS technique is then explored by extending the calculations to lower temperatures and greater interaction strengths, where the sign problem is known to rapidly intensify [23] and experimental data for \(N=6\) cannot be reliably captured by numerical methods [37]. At the low-temperature/strong-coupling boundary of the studied regime, traits of a developing (pseudo-)gapped state are observed. ## II Combinatorial summation For simplicity, let us confine ourselves to the fermionic \(SU(N)\) Hubbard model from the start, which is defined by the Hamiltonian \[\hat{H}=-t\sum_{\langle i,j\rangle,\sigma}\left(\hat{c}^{\dagger}_{i\sigma}\hat{c}_{j\sigma}+H.c.\right)+\frac{U}{2}\sum_{i,\sigma_{1}\neq\sigma_{2}}\hat{n}_{i\sigma_{1}}\hat{n}_{i\sigma_{2}}-\mu\sum_{i,\sigma}\hat{n}_{i\sigma}. \tag{1}\] Here the operators \(\hat{c}^{\dagger}_{i\sigma}\) and \(\hat{c}_{i\sigma}\) create and annihilate a fermion on site \(i\) with the spin \(\sigma=1,\ldots,N\), respectively, \(\hat{n}_{i\sigma}=\hat{c}^{\dagger}_{i\sigma}\hat{c}_{i\sigma}\) is the number operator, \(t\) the hopping amplitude, \(U\) the on-site interaction, \(\mu\) the chemical potential, and \(\langle i,j\rangle\) implies that the summation is over nearest-neighbour lattice sites. A thermodynamic observable, such as, e.g., the average potential energy, is expressed diagrammatically (Fig. 1a,b) as the sum of all connected closed-loop diagrams obtained by linking vertices \(\alpha\), representing a point on the lattice \(i_{\alpha}\) and in imaginary time \(\tau_{\alpha}\), by the interaction lines \(V_{\sigma_{\alpha}\sigma_{\alpha}^{\prime}\sigma_{\beta}\sigma_{\beta}^{\prime}}(\alpha,\beta)=\frac{U}{2}\delta_{\sigma_{\alpha},\sigma_{\alpha}^{\prime}}\delta_{\sigma_{\beta},\sigma_{\beta}^{\prime}}(1-\delta_{\sigma_{\alpha},\sigma_{\beta}})\delta_{i_{\alpha},i_{\beta}}\delta(\tau_{\alpha}-\tau_{\beta})\) and non-interacting propagators (Green's functions) \(G^{0}_{\sigma}(\alpha,\beta)=-\langle\mathcal{T}\hat{c}_{i_{\beta}\sigma}(\tau_{\beta})\hat{c}^{\dagger}_{i_{\alpha}\sigma}(\tau_{\alpha})\rangle_{0}\), where \(\mathcal{T}\) is the time ordering operator and the statistical average \(\langle\ldots\rangle_{0}\) is taken with the Hamiltonian at \(U=0\), and summing or integrating the result over all its \(\sigma_{\alpha},i_{\alpha},\tau_{\alpha}\) variables. It is well known--and used in finite-size determinant diagrammatic Monte Carlo methods [41; 42]--that the sum of all combinations of \(n\) interactions with the propagators is generated by the determinant of a \(2n\times 2n\) matrix \(g_{\alpha\beta}=G^{0}(\alpha,\beta)\), \(\alpha,\beta=1,\ldots,2n\) (the spin indices are omitted for clarity), multiplied by the corresponding values of \(V(\alpha,\beta)\). This way the \((2n)!\) terms can be produced extremely efficiently in \(\mathcal{O}(n^{3})\) operations, but having to eliminate the unwanted disconnected diagrams from the determinant afterwards requires at least an exponential number of steps [10]. Our strategy, in contrast, will be to not generate the disconnected diagrams from the start.
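For illustration, the non-interacting propagator that enters every integrand above can be tabulated in a few lines of Python. The sketch below is not the production code of this work; it assumes the standard square-lattice dispersion \(\epsilon_{\mathbf{k}}=-2t(\cos k_{x}+\cos k_{y})\) and uses a finite \(L\times L\) periodic lattice as a stand-in for the thermodynamic limit, with placeholder parameter values.

```python
import numpy as np

def g0_tau(L, t, mu, beta, taus):
    """Tabulate G^0(r, tau) = -<T c_r(tau) c_0^+(0)>_0 on an L x L periodic lattice
    for 0 < tau < beta, using G^0(k, tau) = -exp(-xi_k tau) (1 - n_F(xi_k)) with
    xi_k = eps_k - mu; the branch -beta < tau < 0 follows from antiperiodicity,
    G^0(r, tau - beta) = -G^0(r, tau)."""
    k = 2.0 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(k, k, indexing="ij")
    xi = -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu           # square-lattice dispersion minus mu
    nf = 1.0 / (np.exp(beta * xi) + 1.0)                     # Fermi occupation
    gk = -np.exp(-np.multiply.outer(taus, xi)) * (1.0 - nf)  # G^0(k, tau), shape (ntau, L, L)
    return np.fft.ifft2(gk, axes=(1, 2)).real                # (1/L^2) sum_k e^{i k.r} G^0(k, tau)

# non-interacting density per spin component from the equal-time limit,
# n_sigma = -G^0(r=0, tau -> beta^-)
beta, t, mu, L = 1.0 / 0.3, 1.0, 0.575, 32                   # placeholder parameters
g = g0_tau(L, t, mu, beta, np.array([beta - 1e-9]))
print("n_sigma =", -g[0, 0, 0])
```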
### Evaluation of the determinant A good starting point is the algorithm for division-free calculation of the determinant [43] based on its permutation cycle decomposition. In terms of the cycle covers \(\mathcal{C}=(\alpha_{1},\alpha_{2},\ldots\alpha_{c_{1}})\ldots(\alpha_{c_{m-1 }+1},\alpha_{c_{m-1}+2},\ldots,\alpha_{2n})\), representing an ordered sequence of matrix indices (called elements) grouped into \(m\) cycles by the parenthesis, the determinant becomes \[\det\{g_{\alpha\beta}\}=\sum_{\mathcal{C}}\operatorname{sign}\mathcal{C} \cdot\operatorname{weight}\mathcal{C}, \tag{2}\] where \(\operatorname{sign}\mathcal{C}=(-1)^{2n+m}\) and \(\operatorname{weight}\mathcal{C}=(g_{\alpha_{1}\alpha_{2}}\ldots g_{\alpha_{ c_{1}}\alpha_{1}})\ldots(g_{\alpha_{m-1}+1\alpha_{c_{m-1}+2}}\ldots g_{ \alpha_{2n}\alpha_{c_{m-1}+1}})\). For instance, the cycle cover \(\mathcal{C}=(1\,2\,5\,3)(4\,8\,7)(6)\) has \(\operatorname{sign}\mathcal{C}=(-1)^{3}\) and \(\operatorname{weight}\mathcal{C}=(g_{12}g_{25}g_{3}g_{31})(g_{48}g_{87}g_{74})( g_{66})\). In this form, one easily recognises Feynman's rules for constructing the diagrams [5], with the cycles corresponding to fermionic loops. It is useful to view building each \(\mathcal{C}\), one element at a time, by an ordered walk of \(2n\) steps, where at each step \(l\) the current element is \(e\) and the new element \(e^{\prime}\) is selected according to some rules, while the current weight \(\mathcal{C}\) is multiplied by \(g_{ee^{\prime}}\), as well as by an additional \(-1\) when the cycle is closed. An expression like Eq. (2) is then evaluated as a sum over all such walks. The central observation [43] is that, when different walks are executed in parallel, there will be many for which the step \(l\) is identical. Thus, before step \(l\) the weights of all such walks constructed up to this point can be combined, and the multiplication by \(g_{ee^{\prime}}\) applied to the sum. This suggests linking all walks in a graph, such as that in Fig. 1c, where the result of the summation before each step is stored in the nodes and the steps are the edges. An optimal structure of the graph minimises the number of times the multiplication by \(g_{ee^{\prime}}\) needs to be performed, and finding it is the task of dynamic programming. In the case of the determinant, the total number of edges can be made only polynomial in \(n\). A unique element \(e\) must appear in \(\mathcal{C}\) only once, which in general makes step \(l\) dependent on all the steps before it. However, it was demonstrated in Ref. [43] that all terms with repeated elements will cancel out due to the sign structure, provided the lowest element in each cycle within \(\mathcal{C}\), called the cycle head \(h\), is present in \(\mathcal{C}\) only once. Then, for each \(\mathcal{C}\) with a repeated element, there will be exactly one such \(\mathcal{C}^{\prime}\) that \(\operatorname{weight}\mathcal{C}^{\prime}=\operatorname{weight}\mathcal{C}\), but the number of its cycles differs by one, i.e. \(\operatorname{sign}\mathcal{C}^{\prime}=-\operatorname{sign}\mathcal{C}\). This is straightforward to ensure if, at each step \(l\), the head of the current cycle \(h\) is stored alongside the current element \(e\), and the next step is either to any other element \(e^{\prime}>h\) within the cycle, or starts a new cycle with \(h^{\prime}>h\) and \(e^{\prime}=h^{\prime}\). Therefore, each unique node must carry the three numbers \([l,h,e]\), \(l=0,\ldots 2n\); \(h,e=1,\ldots 2n+1\). 
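Concretely, the walk over cycle covers just described can be written as a compact dynamic programme. The following Python sketch (a dictionary over the pairs \((h,e)\) rather than the explicit node-and-edge graph used in practice) evaluates Eq. (2), and hence the determinant, division-free in \(\mathcal{O}(N^{4})\) operations for an \(N\times N\) matrix.

```python
import numpy as np

def det_division_free(g):
    """Determinant as the signed sum over cycle covers, Eq. (2): a walk opens a cycle
    at its head h (the smallest element, visited once), may step to any element e2 > h,
    and closing a cycle contributes a factor -g[e][h]; covers with repeated non-head
    elements cancel in pairs, as discussed in the text."""
    N = len(g)
    dp = {(h, h): 1.0 for h in range(N)}                  # length-1 walks: open a cycle at h
    for _ in range(N - 1):
        new = {}
        for (h, e), w in dp.items():
            for e2 in range(h + 1, N):                    # continue the current cycle
                new[(h, e2)] = new.get((h, e2), 0.0) + w * g[e][e2]
            for h2 in range(h + 1, N):                    # close it and open a new cycle at h2
                new[(h2, h2)] = new.get((h2, h2), 0.0) - w * g[e][h]
        dp = new
    total = sum(-w * g[e][h] for (h, e), w in dp.items()) # close the final cycle
    return total * (-1) ** N                              # each term then carries (-1)^(N+m)

A = np.random.rand(6, 6)
assert np.isclose(det_division_free(A), np.linalg.det(A))
```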
The resulting graph, computing the determinant in \(\mathcal{O}(n^{4})\) floating-point operations, is illustrated in Fig. 1c. ### Approach 1: modification of the determinant Since there is a one-to-one correspondence between a particular path in the graph and the diagram it generates, the task of omitting the disconnected diagrams from the determinant can be formulated as that of identifying the corresponding paths and eliminating them selectively. Preserving all other paths is in principle accomplished by duplicating certain nodes along the unwanted paths and re-routing the paths to be kept through the copies, as in the example in Fig. 1d. This suggests that the information \([l,h,e]\), which uniquely identifies the nodes in the determinant, is incomplete for a diagrammatic series obeying more general rules, and the node label must be extended by some additional record \(\mathcal{R}\). If what constitutes \(\mathcal{R}\) is identified, the right graph can be constructed from the start by the same one-step-at-a-time algorithm, only now the two nodes \([l_{1},h_{1},e_{1}]\otimes\mathcal{R}_{1}\) and \([l_{2},h_{2},e_{2}]\otimes\mathcal{R}_{2}\) are considered identical, and are merged, only if \(\mathcal{R}_{1}=\mathcal{R}_{2}\), in addition to \(l_{1}=l_{2}\), \(h_{1}=h_{2}\), \(e_{1}=e_{2}\). In principle, the information in \(\mathcal{R}\) should be kept minimal to prevent spawning redundant nodes, but a sub-optimal graph can always be pruned in the end, without changing its value, by merging all nodes with equal \([l,h,e]\) that connect to the same nodes at the next level. A disconnected diagram is produced when not all of its cycles (fermionic loops) end up linked by the interaction lines. Thus, an obvious choice for \(\mathcal{R}\) is the list of vertices visited until the current step and grouped together according to their cycles, with the groups merged at each step if the corresponding cycles become linked by an interaction. Denoting each group by \(\{\ldots\}\) and listing the current unfinished group last, the highlighted path in Fig. 1c would become \([0,1,1]\otimes\{1\}\rightarrow[1,2,2]\otimes\{1\}\{2\}\rightarrow[2,3,3] \otimes\{2\}\{13\}\rightarrow[3,4,4]\otimes\{1\,3\}\{2\,4\}\rightarrow\) result, and it is now obvious that it produces a disconnected diagram because the two groups in \(\mathcal{R}=\{1\,3\}\{2\,4\}\) cannot be linked. Note that, for this choice of \(\mathcal{R}\), the cancellation between terms with repeated elements, relied on in the calculation of the determinant, is in general between a connected and disconnected term. Thus it is generally necessary to also prohibit sequences \(\mathcal{C}\) with repeated elements. The cancellation can still be usefully employed in certain cases, as explained below. For the \(SU(N)\) Hubbard interaction in the form (1), where the same-spin coupling is excluded, the sum over different combinations of spin indices implies that each diagram comes with the spin- and topology-dependent multiplicity factor \(M=\sum_{\sigma_{1},\ldots,\sigma_{n}}\prod_{\text{interactions}}(1-\delta_{ \sigma_{i},\sigma_{j}})/2\), where \(m\) is the number of loops and each interaction in the product connects a loop with spin \(\sigma_{i}\) to that with spin \(\sigma_{j}\), as in the example in Fig. 1b. A strength of our approach is that an arbitrary factor can be accounted for by merely (i) grouping the diagrams with the same \(M\) together and (ii) multiplication of the value of each node at the penultimate level \(l=2n-1\) by \(M\). 
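Since the multiplicity factor \(M\) depends only on how the interaction lines connect the fermionic loops, it can be cross-checked by brute force. The small routine below simply follows the formula quoted above, summing over the spins of the \(m\) loops; the topology at the end is an arbitrary illustration, not a specific diagram of this work.

```python
from itertools import product

def multiplicity(m_loops, interactions, N):
    """Spin multiplicity M of a diagram: sum over the N^m assignments of spins to the
    m fermionic loops of prod_(i,j) (1 - delta_{sigma_i, sigma_j}) / 2, with one factor
    per interaction line connecting loop i to loop j (0-based loop indices)."""
    M = 0.0
    for spins in product(range(N), repeat=m_loops):
        w = 1.0
        for i, j in interactions:
            w *= 0.0 if spins[i] == spins[j] else 0.5
        M += w
    return M

# example: two loops joined by two interaction lines gives M = N (N - 1) / 4
print(multiplicity(2, [(0, 1), (0, 1)], N=6))   # 7.5
```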
To account for the factor \(M\) in this way, we also store in \(\mathcal{R}\) a matrix of connections between the cycles, which does not need to be optimal, and prune the final graph to minimise its size. Despite the combinatorial structure of \(\mathcal{R}\), this algorithm is already efficient at diagram orders typically accessible in calculations. Indeed, since the Monte Carlo variance of integration over the vertex positions and times scales exponentially with diagram order \(n\)[11], in correlated regimes only contributions from \(n\lesssim 10\) can typically be evaluated with \(<10\%\) statistical error required for precision reconstruction of observables [44]. Fig. 2 shows the actual number of floating point operations required to sum all connected diagrams of order \(n\), with this approach labelled there as CoS-1. For the \(SU(2)\) Hubbard model, where each diagram has multiplicity \(M=1\) and an efficient algorithm (CDet [10]) exists, CoS-1 already exhibits competitive performance. In the \(SU(N)\) case, the computational cost of CoS-1 is independent of \(N\) (for \(N>4\)) and appears exponential for orders \(n\lesssim 6\), although it is expected to eventually rise combinatorially. Nonetheless, at large \(n\), CoS-1 is superseded by an approach of exponential complexity described below. Figure 2: Number of floating-point operations (FLOP) required to evaluate the sum of the integrands of all Feynman diagrams of order \(n\): **(a)** for connected diagrams of the \(SU(2)\) Hubbard model summed by the algorithm of Sec. II.2 (CoS-1) and that of Sec. II.3 (CoS-2), for which the curve closely follows \(\approx n^{3}3^{n}/8\); the reference dotted line \((2n)^{3}2^{n}+3^{n}\) indicates the theoretical scaling of the CDet algorithm of Ref. [10]; **(b)** for connected diagrams in the \(SU(N)\) case, with the curve for CoS-2 following \(\approx n^{3}4^{n}/7\). Shown as CoS-GW is the computational cost of summing the skeleton (bold-line) series in terms of the renormalised Green's function \(G\) and screened interaction \(W\). ### Approach 2: constructing connected diagrams from the start There is much freedom in how the graph summing a particular series is designed, and the following general principles can aid its efficiency: (i) allowing unwanted or unphysical sequences \(\mathcal{C}\) might be useful if they cancel in the final sum, and (ii) walks can traverse the cycle covers \(\mathcal{C}\) and be grouped in arbitrary order, provided all the required sequences are generated in the end. Principle (i) was key for computing the determinant, but it has another use here: we can formally allow on-site interactions between same-spin fermions in the Hamiltonian (1) since the resulting diagrams cancel. Instead of the topology-dependent factor \(M\), the diagrammatic rules [5] for fully spin-symmetric interactions prescribe that each fermionic loop is multiplied by the number of spins, implying a multiplication of each node that closes a cycle merely by \(N\). Although having to construct diagrams that cancel is a hindrance at lower orders, the simpler diagrammatic rules allow for a more efficient scaling at \(n\gtrsim 5-6\). Our recipe for organising the walks that constitute the graph has so far been borrowed from the determinant, forcing us to keep track in \(\mathcal{R}\) of how different cycles are connected. This is not necessary if we reorganise the walks to generate only connected diagrams from the start. Since for generic Hamiltonians we cannot rely on the cancellation of terms with repeated elements, we at
least must keep track of the elements visited up to the current step \(l\), \(\mathcal{R}=\{e_{1},e_{2},\ldots e_{l}\}\), and ban adding \(e\) to \(\mathcal{C}\) if \(e\in\mathcal{R}\). Demoting the role of \(h\) in the node label \([l,h,e]\otimes\{e_{1},e_{2},\ldots e_{l-1},e\}\) to being merely the first element in the current cycle, we can generate only connected diagrams if each new cycle starts with the element that is paired by an interaction to one of the already visited ones \(e_{i}\in\{e_{1},e_{2},\ldots e_{l-1},e\}\), e.g. the smallest in \(\mathcal{R}\) that is not already paired, for uniqueness. It is easy to see that the number of floating point operations in this graph is only exponential, \(\mathcal{O}(n^{3})4^{n}\), and that the information about visited elements carried in \(\mathcal{R}\) is minimal for this order of traversing the sequences \(\mathcal{C}\), i.e. the graph cannot be pruned any further. The computational cost of this algorithm, labelled CoS-2, is shown for the \(SU(N)\) case in Fig. 2b. In our calculations, we employ the CoS-1 approach for \(n<5\) and CoS-2 for \(n\geq 5\). In systems where there is no non-trivial factor associated with each fermionic loop, as, e.g., in the \(SU(2)\) Hubbard model, or for \(N=1\), cancellations between cycle covers with repeated elements can still be utilised to reduce the cost further to \(\mathcal{O}(n^{3})3^{n}\). To this end, \(\mathcal{R}\) only needs to store the list of interactions that a visited element belongs to, and whether only one vertex of the interaction or both have been visited, i.e. 3 possibilities for each interaction. Since there is no record of which of the two vertices of an interaction has been visited, both options for the element that starts a new cycle need to be allowed, with the cycle cover that ends up repeating the vertex cancelling out, as in the case of the determinant. The complexity of this algorithm is plotted for the \(SU(2)\) case in Fig. 2a. Finally, sums of skeleton (bold-line) diagrams in arbitrary channels can be straightforwardly generated in our approach. For instance, the computational cost of producing an expansion in terms of the full (interacting) Green's function \(G\) and screened interaction \(W\)[45] by a simple extension of the CoS-2 algorithm is plotted in Fig. 2b as CoS-GW. The challenge of restricting the series to irreducible diagrams in both channels is met here by supplementing the nodes in the CoS-2 graph with a record \(\mathcal{R}\) that keeps track of connectivity when a propagator or interaction line is cut, similarly to the CoS-1 approach of Sec. II.2. Curiously, there is no notable cost increase relative to the CoS-1 for connected diagrams. The versatility of the CoS platform could enable more efficient algorithms for skeleton series in the future. ### Vector variant and quantum speed-up The CoS algorithm can be cast in a vector form, in which the graph remains of a polynomial in \(n\) size with the nodes uniquely identified by \([l,h,e]\) (as in Fig. 1c), but operates on a _vector_ of values \(|\psi\rangle=\sum_{\mathcal{R}}v_{\mathcal{R}}|\mathcal{R}\rangle\), with the floating-point numbers \(v_{\mathcal{R}}\) used to construct its value and the vectors \(|\mathcal{R}\rangle\) of an orthonormal set \(\{|\mathcal{R}\rangle\}\) being responsible for filtering valid diagram configurations. For the algorithm of Sec. 
II.3 (CoS-2), \(|\mathcal{R}\rangle\) is a direct product of \(2n\) orthonormal states \(|0\rangle\) or \(|1\rangle\), indicating whether an element \(e\) has been visited (\(|1\rangle_{e}\)) or not (\(|0\rangle_{e}\)), so that \(\mathcal{R}=\{e_{1},e_{2},\ldots e_{l}\}\) corresponds to \(|\mathcal{R}\rangle=|1\rangle_{e_{1}}|1\rangle_{e_{2}}\ldots|1\rangle_{e_{l}}|0 \rangle_{e_{l+1}}\ldots|0\rangle_{e_{2n}}\). The subspace of \(\{|\mathcal{R}\rangle\}\) to be passed on by each edge is selected using the projection operators \(\hat{P}^{0}_{e}=|0\rangle_{e}\langle 0|_{e}\), \(\hat{P}^{1}_{e}=|1\rangle_{e}\langle 1|_{e}\), \(\hat{P}^{0}_{e}=|1\rangle_{e}\langle 0|_{e}\) and \(\hat{P}^{1}_{e}=|0\rangle_{e}\langle 1|_{e}\). Specifically, an edge adding a new element within a cycle, \([l,h,e_{1}]\rightarrow[l+1,h,e_{2}]\), must project out all contributions in which the element \(e_{2}\) has already been visited before multiplying the result by \(g_{e_{1}e_{2}}\) and adding it to the next node, \[[l,h,e_{1}]\rightarrow[l+1,h,e_{2}]:\\ |\psi_{2}\rangle:=|\psi_{2}\rangle+g_{e_{1}e_{2}}\hat{P}^{0}_{e _{2}}|\psi_{1}\rangle, \tag{3}\] where \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) are the vectors stored in the nodes \([l,h,e_{1}]\) and \([l+1,h,e_{2}]\), respectively. Edges that start a new cycle with an element \(h_{2}\) act on the subspace in which \(h_{2}\) is paired by an interaction to the lowest unpaired visited vertex in \(|\mathcal{R}\rangle\), \[[l,h_{1},e_{1}]\rightarrow[l+1,h_{2},h_{2}]:\\ |\psi_{2}\rangle:=|\psi_{2}\rangle-g_{e_{1}h_{1}}\hat{P}^{0}_{ h_{2}}\hat{P}^{1}_{h_{2}}\prod_{e<h_{2}}\big{[}\hat{P}^{1}_{\bar{e}}\hat{P}^{1}_{e }+\hat{P}^{0}_{e}\big{]}|\psi_{1}\rangle, \tag{4}\] where \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) are the vectors stored in the nodes \([l,h_{1},e_{1}]\) and \([l+1,h_{2},h_{2}]\), respectively, and \(\bar{e}\) is the vertex paired to \(e\) by an interaction. Following this recipe, at the result node we obtain a pure state \(|\psi_{\text{result}}\rangle=v|1\rangle_{1}|1\rangle_{2}\ldots|1\rangle_{2n}\) with \(v\) being the value of the graph. On a classical computer, the elementary base vectors have to be represented by two components, \(|0\rangle=(1,1)^{T}/\sqrt{2}\), \(|1\rangle=(1,-1)^{T}/\sqrt{2}\), implying that \(|\psi\rangle\) is a \(2^{2n}\)-component vector, and the edges (3), (4) take \(\mathcal{O}(4^{n})\) floating-point operations to evaluate. Given that the number of edges scales as \(\mathcal{O}(n^{4})\), the computational cost of this approach, \(\mathcal{O}(n^{4}4^{n})\), is a factor \(\propto n\) higher than that of the CoS-2 algorithm of Sec. II.3. Nonetheless, an efficient processing of vector operations, e.g. by GPUs, could make the vector implementation faster in practice. The ability to efficiently operate with vector superpositions makes the quantum computer a promising platform for this approach. To this end, the graph defines a quantum circuit processing the state \(|\psi\rangle=\sum_{\mathcal{R}}|v_{\mathcal{R}}\rangle|\mathcal{R}\rangle\), where \(|v_{\mathcal{R}}\rangle\) encodes the value and \(|\mathcal{R}\rangle\) is represented by \(2n\) qubits. Projections can be generally performed by unitary quantum gates, while the multiplication by the matrix elements of \(g_{\alpha\beta}\) could be implemented, e.g., by quantum floating-point arithmetic [46; 47]. 
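As a purely classical aside, the two-component representation of \(|0\rangle\), \(|1\rangle\) and the visited/unvisited bookkeeping behind Eqs. (3) and (4) can be mimicked with elementary numpy operations. The helper names and the toy step below are illustrative only and are not part of the actual implementation.

```python
import numpy as np

ket0 = np.array([1.0, 1.0]) / np.sqrt(2)    # element not yet visited
ket1 = np.array([1.0, -1.0]) / np.sqrt(2)   # element visited
P_keep = np.outer(ket0, ket0)               # |0><0|: project on "not yet visited"
P_mark = np.outer(ket1, ket0)               # |1><0|: visit the element

def apply_site(op, psi, site, n_sites):
    """Apply a 2x2 operator to one tensor factor of a 2**n_sites state vector."""
    psi = psi.reshape((2,) * n_sites)
    psi = np.moveaxis(np.tensordot(op, psi, axes=([1], [site])), 0, site)
    return psi.reshape(-1)

def visit(psi, e2, g_val, n_sites):
    """One walk step in the spirit of Eq. (3): multiply by the propagator value,
    project out contributions in which e2 was already visited, and mark it visited."""
    return g_val * apply_site(P_mark, psi, e2, n_sites)

n_sites = 4                                           # 2n elements for n = 2
psi = np.ones(2 ** n_sites) / np.sqrt(2) ** n_sites   # |0>^{2n}: nothing visited yet
psi = visit(psi, 0, 0.7, n_sites)                     # visit element 0 with weight 0.7
print(np.allclose(visit(psi, 0, 1.0, n_sites), 0.0))  # True: revisiting is filtered out
```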
Provided a practical quantum implementation incurs at most polynomial computational overheads, the \(\mathcal{O}(n^{4})\) graph could be evaluated in a polynomial number of operations on a quantum processor. The result could then be used in the Monte Carlo integration over vertex coordinates on a classical processor, similarly to the quantum-classical approach [48]. An interesting possibility to explore is making the Metropolis sampling quantum as well [49; 50], e.g., through a mapping of the graph value to the quantum eigenvalue/eigenvector problem [51], which could enable a further speed-up. ## III Results We compute the average particle number per lattice site \(\langle n\rangle\) as a function of the chemical potential \(\mu\), expressed in our approach as an expansion in the powers of the Hubbard coupling \(U\), \(\langle n\rangle(T,\mu,U)=\sum_{m=0}^{\infty}a_{m}(T,\mu)U^{m}\), in the thermodynamic limit. The series coefficients \(a_{m}\) are obtained by the Monte Carlo integration of all connected diagrams of order \(m\) over the positions of the \(2m\) vertices in space-imaginary-time. Thus, \(a_{m}\) are known numerically exactly with statistical error bars, while the only source of systematic error is the truncation of the series at order \(n\). Although the series turns out to be divergent in all regimes of interest, being able to evaluate \(a_{m}\) up to \(n=8\) with \(<5\%\) statistical error (and fractions of percent at lower orders) enables an accurate reconstruction of the result with controlled precision. The recent study by Pasqualeti _et al._[38] has revealed a perfect agreement between the DQMC calculations and experimental measurements of the EoS of the \(2D\)\(SU(N)\) Hubbard model down to \(T/t=0.3\) and a coupling value up to \(U/t=2.3\) for \(N=6\). Fig. 3a shows the partial sum for the density at these parameters and \(\mu/t=0.575\) as a function of the truncation order \(n\). The series is seen to wildly diverge, but its analytic structure is rather simple, with the dominating singularity at \(U_{c}/t\approx-0.9(1)\), which allows us to reconstruct the result following the approach developed in Ref. [44]. Specifically, we employ the Dlog-Pade [52] technique, taking into account the statistical error bars of \(a_{m}\) and making sure that the systematic error of the evaluation--detected by the variation of the answer with free parameters of Dlog-Pade--is negligible. The result for the series in Fig. 3 is \(\langle n\rangle=1.335(5)\). As a benchmark, we proceed to obtain the \(\langle n\rangle(\mu)\) curve at \(T/t=0.3\), \(U/t=2.3\), plotted in Fig. 4, and find it to be in perfect agreement [67] with that computed and measured in Ref. [38]. The singularity at a real \(U\) is indicative of a phase transition exhibited by the attractive \(SU(N)\) Hubbard model, which is likely to a superfluid state. When the series for the relevant susceptibility is considered, the divergence at \(U_{c}\) is an accurate tool for characterising the critical point [53]. Leaving the calculation of susceptibilities for a more focused study, we plot in Fig. 3b a crude estimate of \(U_{c}\) from the divergence of density at \(T/t=0.3\). [68] Ibarra-Garcia-Padilla _et al._[23] demonstrate that the sign problem in DQMC rapidly intensifies with lowering \(T\) and increasing \(U\) and \(N\) at the considered densities, as long as the system remains compressible. 
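The resummation step mentioned above can be illustrated with a bare-bones Dlog-Padé routine: build the series of \(\mathrm{d}\ln f/\mathrm{d}U\) from the coefficients \(a_{m}\), replace it by an \([L/M]\) Padé approximant, and exponentiate its integral. The sketch below omits the propagation of the statistical errors of \(a_{m}\), which the actual analysis requires, and uses placeholder coefficients of a test function with poles on the negative-\(U\) axis as a self-test.

```python
import numpy as np
from scipy.integrate import quad

def dlog_pade(a, L, M, U):
    """Bare-bones Dlog-Pade evaluation of f(U) = sum_m a_m U^m beyond its radius of
    convergence: f(U) ~ a_0 * exp( int_0^U [L/M] Pade approximant of f'/f )."""
    a = np.asarray(a, dtype=float)
    K = len(a) - 1
    g = np.zeros(K)                                  # series of f'(U)/f(U) from f * g = f'
    for k in range(K):
        g[k] = ((k + 1) * a[k + 1] - sum(a[j] * g[k - j] for j in range(1, k + 1))) / a[0]
    assert L + M <= K - 1, "not enough coefficients for an [L/M] approximant"
    A = np.array([[g[k - j] if k >= j else 0.0 for j in range(1, M + 1)]
                  for k in range(L + 1, L + M + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -g[L + 1:L + M + 1])))
    p = np.array([sum(q[j] * g[i - j] for j in range(min(i, M) + 1)) for i in range(L + 1)])
    approx = lambda u: np.polyval(p[::-1], u) / np.polyval(q[::-1], u)
    return a[0] * np.exp(quad(approx, 0.0, U)[0])

# self-test: a_m of f(U) = 1/((1+U)(1+2U)); the partial sums diverge for U > 1/2,
# yet the [1/2] Dlog-Pade recovers f(2) = 1/15
a = [2.0 * (-2.0) ** m - (-1.0) ** m for m in range(9)]   # stand-in for measured coefficients
print(dlog_pade(a, L=1, M=2, U=2.0))                      # ~ 0.0667
```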
To explore the more challenging regime, the EoS was obtained by DiagMC at a lower temperature \(T/t=0.15\) (see Figure 4), for which the \(\langle n\rangle(\mu)\) curve is below that for \(T/t=0.3\), indicating that the system is in the metallic state at \(U/t=2.3\) and in this range of \(\mu\)[54]. We further evaluate the series at larger values of \(U\) up to \(U=8\), where a faint shoulder around \(\langle n\rangle=1\) is seen to emerge. This is consistent with the development of a (pseudo-)gapped state. At these couplings, the systematic error of resummation beyond the convergence radius becomes comparable to the propagated statistical error, and the combined error bar (a sum of the two errors) shown in Fig. 4 grows substantially. Nonetheless, the analytic structure of the series appears free from singularities near a positive real \(U\), such as those in the \(SU(2)\) Hubbard model at similar parameters [55]. There, the growth of the antiferromagnetic (AFM) correlation length beyond \(\sim 10\) lattice sites was shown to be responsible for a near-critical behaviour of the diagrammatic expansions at these temperatures and \(\langle n\rangle=1\) already at \(U/t\sim 3\). Also in contrast to the \(SU(3)\) case, where an insulating AFM ground state at \(\langle n\rangle=1\) emerges already at \(U/t\approx 5.5\)[29] and strong AFM correlations (with a transformation upon heating) are observed up to \(T/t\sim 0.5\) at \(U/t=8\)[24], for \(N=6\) AFM correlations appear weak down to \(T/t=0.15\) at this coupling. Thus, there is no fundamental difficulty to reduce the error bars and access larger \(U\) values at the expense of a polynomially-longer calculation. Figure 3: **(a)** Partial sum of the diagrammatic series for density \(\langle n\rangle\) as a function of the truncation order \(n\) for \(N=6\) and \(T/t=0.3\), \(\mu/t=0.575\), \(U/t=2.3\). The horizontal line is the result of a reconstruction of the value from the series, \(\langle n\rangle=1.335(5)\). **(b)** The location of the singularity \(U_{c}\) responsible for the series divergence at \(N=6\) and \(T/t=0.3\) as a function of the chemical potential \(\mu\) (corresponding to \(\langle n\rangle(\mu)\sim 1\)–\(2.5\) in this range and at \(U=U_{c}\)). Figure 4: Equation of state for the \(2D\)\(SU(N)\) Hubbard model with \(N=6\) for \(T/t=0.3,0.15\) and \(U/t=2.3,4,8\). ## IV Discussion The introduced approach represents a versatile platform for evaluating Feynman's diagrammatic series: It is naturally applicable to fermionic as well as bosonic systems, to expansions in bare coupling and renormalised or skeleton series [5], to expansions derived from the homotopic action [13], in and out of equilibrium with the extension by the Keldysh formalism [56; 57], and may find use in other advanced approaches based on the diagrammatic theory [58; 59; 60; 61]. Being intrinsically division-free, the technique is compatible with diagrammatic methods based on algorithmic integration over Matsubara frequency [62] or imaginary time [63], in which dynamic correlation functions are computed directly without the need for numerical analytic continuation, and an efficient way of summing the diagrams would be crucial for accessing strongly correlated regimes. The vector formulation of the algorithm is a promising foundation for realising DiagMC on a quantum computer by mapping the polynomial-size graph to a quantum circuit, with the Quantum DiagMC offering an exponential speed-up over the classical counterpart.
On a classical computer, the exponential scaling of the number of operations needed to evaluate all terms of a given order places [11] the CoS approach in the class of numerical methods with polynomial computational complexity. The rigid graph structure lends itself to efficient hardware acceleration and parallelisation, while the partial summation and subtraction at intermediate levels of the graph reduces the bit complexity, making the algorithm robust against rounding errors. The example application to the EoS of the \(2D\)\(SU(6)\) Hubbard model provides controlled benchmarks for ongoing theoretical and experimental studies, aimed at accessing lower temperatures and novel quantum many-body states. As a byproduct of a diagrammatic calculation, the analytic structure of the series offers additional insights in the physics of the system. The results suggest a phase transition in the attractive \(SU(N)\) Hubbard model at a coupling strength as low as \(U_{c}/t\sim-1\) up to temperatures \(T/t\lesssim 0.5\), and absence of strong AFM correlations in the repulsive case at the considered temperatures and interaction strengths, at which the \(SU(2)\)[55; 64] and \(SU(3)\)[24; 29] Hubbard models are already in the (quasi-)AFM state. In the \(SU(2)\) case, the formulation in the thermodynamic limit enabled DiagMC to attain controlled accuracy in the regime where correlations are intrinsically long-range and are difficult to capture reliably by finite-size methods even in the absence of the sign problem [55; 64]. Such regimes of the \(SU(N)\) model is where the developed technique can prove particularly useful. The possibility of a direct calculation of entropy in the DiagMC approach [54] could be instrumental for thermometry in experiments with ultracold atoms that are currently testing the limits of state-of-the-art theoretical methods. ###### Acknowledgements. The author is grateful to Kaden Hazzard for illuminating and stimulating discussions, to Eduardo Ibarra-Garcia-Padilla, Sohail Dasgupta, Kaden Hazzard, and Richard Scalettar for sharing their DQMC data, and to Sohail Dasgupta, Simon Folling, Kaden Hazzard, Eduardo Ibarra-Garcia-Padilla, Giulio Pasqualetti, and Richard Scalettar for a fruitful exchange of results and ideas. This work was supported by EPSRC through Grant No. EP/X01245X/1.
2309.04208
Prediction of even and odd sunspot cycles
Here we study the prediction of even and odd numbered sunspot cycles separately, thereby taking into account the Hale cyclicity of solar magnetism. We first show that the temporal evolution and shape of all sunspot cycles are extremely well described by a simple parameterized mathematical expression. We find that the parameters describing even sunspot cycles can be predicted quite accurately using the sunspot number 41 months prior to sunspot minimum as a precursor. We find that the parameters of the odd cycles can be best predicted with maximum geomagnetic aa index close to fall equinox within a 3-year window preceding the sunspot minimum. We use the found precursors to predict all previous sunspot cycles and evaluate the performance with a cross-validation methodology, which indicates that each past cycle is very accurately predicted. For the coming sunspot cycle 25 we predict an amplitude of 171 +/- 23 and the end of the cycle in September 2029 +/- 1.9 years. We are also able to make a rough prediction for cycle 26 based on the predicted cycle 25. While the uncertainty for the cycle amplitude is large we estimate that the cycle 26 will most likely be stronger than cycle 25. These results suggest an increasing trend in solar activity for the next decades.
Timo Asikainen, Jani Mantere
2023-09-08T08:39:07Z
http://arxiv.org/abs/2309.04208v1
# Prediction of even and odd sunspot cycles ###### Abstract Here we study the prediction of even and odd numbered sunspot cycles separately, thereby taking into account the Hale cyclicity of solar magnetism. We first show that the temporal evolution and shape of all sunspot cycles are extremely well described by a simple parameterized mathematical expression. We find that the parameters describing even sunspot cycles can be predicted quite accurately using the sunspot number 41 months prior to sunspot minimum as a precursor. We find that the parameters of the odd cycles can be best predicted with maximum geomagnetic _aa_ index close to fall equinox within a 3-year window preceding the sunspot minimum. We use the found precursors to predict all previous sunspot cycles and evaluate the performance with a cross-validation methodology, which indicates that each past cycle is very accurately predicted. For the coming sunspot cycle 25 we predict an amplitude of \(171\pm 23\) and the end of the cycle in September 2029 \(\pm\) 1.9 years. We are also able to make a rough prediction for cycle 26 based on the predicted cycle 25. While the uncertainty for the cycle amplitude is large we estimate that the cycle 26 will most likely be stronger than cycle 25. These results suggest an increasing trend in solar activity for the next decades. ## 1 Introduction Prediction of the sunspot number has been an everlasting interest in the space science community since the discovery of the sunspot cycle by Schwabe (1844). Sunspot number is an indirect indicator of many different solar phenomena, e.g., total and spectral solar radiation (e.g. Krivova et al., 2011; Frohlich, 2012), coronal mass ejections (e.g. Richardson and Cane, 2012), solar flares and magnetic active regions (e.g. Toriumi et al., 2017). Its cyclic variation can even be used as a pacemaker to time different aspects of solar activity, solar wind and resulting geomagnetic variations (Chapman et al., 2021; Leamon et al., 2022). Therefore, there is considerable practical interest in predicting the evolution of future sunspot cycle(s). This is especially true in today's technological society where space hazards pose a significant threat, e.g., to satellites, communications and electric grids on ground (e.g. Lanzerotti, 2001). Another interest for predicting sunspots arises from the relatively recently recognized influences of variable solar radiation and solar wind activity on Earth's climate system (Gray et al., 2010; Ward et al., 2021; Salminen et al., 2020). Over the period of about last 100 years a vast array of different methods ranging from statistical methods to intensive physical simulations have been developed for predicting sunspots. As an unbiased introduction to all of them would be a futile effort here, the interested reader is referred to several excellent reviews on the subject by Hathaway (2009), Pesnell (2012) and Petrovay (2020). According to the classic solar dynamo theory poloidal solar magnetic field in the solar minimum gets stretched to the toroidal magnetic field of the next cycle, which then produces sunspots and magnetic active regions that ultimately make up the poloidal field in the next solar minimum (Parker, 1955; Babcock, 1961; Leighton, 1969; Charbonneau, 2020). Physically motivated solar cycle predictions are based on numerical modeling of the solar dynamo process and the transport of magnetic flux on the solar surface (Charbonneau, 2020; Nandy, 2021; Karak, 2023). 
However, some of the most successful, yet much simpler prediction methods are based on precursors that serve as indicators for the strength of the coming solar cycle. The so called polar field precursor methods have found a good correlation between the magnetic field observed at the solar polar region up to a few years before the sunspot minimum and the amplitude of the next sunspot cycle (Schatten et al., 1978; Petrovay, 2020; Kumar et al., 2021, 2022). As the polar field reflects the strength of the poloidal phase of the solar magnetic field the precursor methods are deeply rooted in the core idea of the dynamo theory. Because the polar field has been systematically measured only since 1970s some longer running proxy measures for the polar field have also been used. The most successful polar field proxies are based on geomagnetic activity measures close to the solar minimum (e.g. Ohl, 1966; Du et al., 2012). It has been shown that certain geomagnetic activity indices, e.g., the _aa_ index correlate quite well with the open solar magnetic flux carried by the solar wind (e.g. Lockwood et al., 1999, 2014). Close to the solar minimum the geomagnetic activity is dominantly driven by high speed solar wind streams (e.g. Richardson and Cane, 2012), which emanate from coronal holes that eventually form the solar polar field in the declining phase of the cycle (Krieger et al., 1973; Bame et al., 1976). During these times the geomagnetic activity is most directly connected to the polar field and, thereby acts as a good proxy for the amplitude of the next sunspot cycle. In accordance with the solar dynamo theory the sunspot cycle prediction methods often consider only the predictability of the cycle based on the previous cycle. Most of these methods do not typically take into account the 22-year Hale cycle of solar magnetism, which is well known in the solar cycle (e.g. Gnevyshev and Ohl, 1948; Takalo and Mursula, 2018; Leamon et al., 2022) and geomagnetic phenomena (e.g. Chernosky, 1966; Takalo, 2021; Chapman et al., 2021). However, some recent studies have considered the even/odd cycle parity in the context of sunspot cycle prediction (e.g. Du, 2020; Kakad and Kakad, 2021; Penza et al., 2021; Du, 2022b; Nagovitsyn and Ivanov, 2023). Here we study the prediction of sunspot cycles accounting for the 22-year Hale cycle by considering the differences in the even and odd numbered sunspot cycles. In Section 2 we first present our data and then in Section 3 proceed to show that the time evolution and shape of all sunspot cycles are extremely well described by a simple parameterized mathematical expression. In Section 4 we discuss the mutual dependencies of the parameters describing the sunspot cycles and in Section 5 we show that the parameters can be predicted using precursors found partly from past sunspot number values and partly from geomagnetic activity, which is often used as a precursor proxy for solar polar magnetic field in the sunspot minimum. Most importantly, though, we find that even and odd sunspot cycles obey different statistics therefore implying Hale cyclicity in their predictability. Separately these statistical relations are stronger and more significant than those based on combin ing even and odd cycles together. Using these statistics we construct in Section 6 a new method to separately predict even and odd sunspot cycles and apply it to the coming solar cycle 25. 
We also find an interesting connection between consecutive odd-even sunspot cycle pairs, which allows us to make rough early predictions for cycle 26 as well. In Section 7 we discuss the results and give our conclusions. ## 2 Data In this work we use monthly values of the version 2 Sunspot Number (SSN) obtained from the SILSO World Data Center (1749-2023). At the time of writing this paper the SSN v2 series covers time from 1749 to October 2022 therefore covering full sunspot cycles 1-24 and about two years from the start of sunspot cycle 25. The monthly values of SSN were first smoothed with a 13-month window, where the first and last month are given a weight of 0.5 and all other months a weight of 1. Using this smoothed SSN curve we identified the times of sunspot minima and maxima. In addition to the sunspot data we use in this work the geomagnetic \(aa\) index. Recently Lockwood et al. (2018a,b) presented a homogenized version of the \(aa\) index, which inter-calibrates the observations of the different stations used to compose the \(aa\) index. They also correct the index for the secular drift in the location of the auroral oval (due to secular changes in Earth's magnetic field) in relation to the observing stations. The homogenized \(aa\) index was obtained from the supplementary data of Lockwood et al. (2018b), which offers the 3-hourly values for years 1868 to 2017. We first extended this series forward in time from Jan 2018 to Dec 2021 by calculating the monthly means of the homogenized \(aa\) index and by calibrating the raw \(aa\) index (obtained from ISGI: [http://isgi.unistra.fr/](http://isgi.unistra.fr/)) against the corresponding homogenized values. The calibration was found by fitting a regression line to the logarithmic monthly averaged homogenized \(aa\) (\(aa_{H}\)) and raw (\(aa\)) values using the data between 1980 and 2017 when the \(aa\) index is based on the latest pair of observatories (Hartland in England and Canberra in Australia). The best fit line was \[\log(aa_{H})=1.049(\pm 0.004)\times\log(aa)-0.257(\pm 0.011). \tag{1}\] The uncertainties of the fit parameters have been indicated in parentheses. Note that this scaling uses logarithmic values, since in logarithmic scale the residuals of the fit are closely homoscedastic (constant variance) while in linear scale they display large heteroscedasticity thereby compromising the basic assumptions of the least-squares fit. The 3-hourly raw \(aa\) values since Jan 2018 were then scaled with Eq. 1 and the resulting data was appended to the homogenized \(aa\) index time series. We also extended the \(aa\) index backward in time using the daily magnetic declination based \(Ak(D)\) indices recorded at the Helsinki geomagnetic observatory since 1844 (Nevanlinna, 2004) and obtained from [https://space.fmi.fi/MAGN/K-index/](https://space.fmi.fi/MAGN/K-index/). These values have previously been used successfully to extend the \(aa\) index time series backward from 1868 to 1844 (e.g. Lockwood et al., 2014). Unlike Lockwood et al. (2014), who used annual averages, we calibrated here the daily \(Ak(D)\) values against the simultaneous homogenized \(aa\) index values by Lockwood et al. (2018b) for the overlapping time period from 1.1.1868 to 31.12.1879, where the \(Ak(D)\) data is rather continuous. Starting from 1880 the \(Ak(D)\) data series has large data gaps. 
We found that the daily \(Ak(D)\) values can be scaled to the corresponding daily homogenized \(aa\) index values by the following equation \[\log(aa_{H})=1.199(\pm 0.012)\times\log(Ak(D)+5\text{ nT})-0.97(\pm 0.04). \tag{2}\] Also here the logarithmic scale ensures homoscedasticity of the fit residuals. The correlation between the scaled \(Ak(D)\) values and the homogenized \(aa\) index values is 0.84, indicating a rather reliable scaling. Figure 1 shows the annual averages of the homogenized \(aa\) index (blue), the scaled raw \(aa\) index (red, shown since year 1980) and the scaled Helsinki \(Ak(D)\) values (shown for years 1844-1879). The times when the different curves overlap indicate a very good correspondence between the different datasets. The extended \(aa\) index time series is formed by the scaled \(Ak(D)\) data from 1844-1867, by the homogenized \(aa\) index from 1868-2017 and by the scaled raw \(aa\) index since 2018. For the purposes of this study we will use monthly averages of the extended \(aa\) index composite.

Figure 1: Annual averages of homogenized \(aa\) index (blue), scaled raw \(aa\) index (red) (shown since year 1980) and scaled Helsinki \(Ak(D)\) values (shown for years 1844-1879).

## 3 Parameterization of the sunspot cycle

Sunspot cycles are often fitted with a parameterized curve (e.g. Stewart and Panofsky, 1938; Hathaway et al., 1994; Volobuev, 2009). Stewart and Panofsky (1938) showed that the sunspot cycles could be described roughly by curves of the form \(c(t-t_{0})^{a}e^{-b(t-t_{0})}\), where \(a,b,c\) and \(t_{0}\) are free parameters. Hathaway et al. (1994) used a slightly modified version \[f(t)=\frac{a(t-t_{0})^{3}}{\exp((t-t_{0})^{2}/b^{2})-c} \tag{3}\] and fitted this model curve to sunspot cycles 1-21. They also showed that many of the parameters correlated rather well with each other, thereby offering a possibility to reduce the effective number of parameters in the curve down to one free parameter. Many studies have since used similar parameterizations. However, to be useful in predicting the sunspot cycles, the parameters of the future cycle should be predicted by some means. Several studies have found relatively good correlations between different precursors and the amplitude of the sunspot cycle or the maximum SSN during the cycle. The parameterizations described above (e.g., Eq. 3) may not be optimal in light of these correlations, because for those the amplitude (maximum) of the sunspot cycle depends on a combination of several parameters. Therefore, we formulated a new parameterization for the sunspot curve, where the parameters are perhaps better interpreted in terms of well known solar cycle properties (amplitude, rise time, asymmetry etc.). We use here an asymmetric Gaussian curve of the form \[f(t)=A\exp\left(-\frac{(t-B)^{2}}{(C\times g(t,B))^{2}}\right), \tag{4}\] where \(A\) is the sunspot cycle maximum, \(B\) is the time of the sunspot maximum measured from the sunspot minimum beginning the cycle (i.e., \(B\) is the cycle rise time), \(C\) is the time scale of the rising phase (cf. standard deviation of a Gaussian) and the function \(g(t,B)\) is defined as \[g(t,B)=\left\{\begin{array}{ll}1,&\mbox{if $t\leq B$,}\\ D,&\mbox{if $t>B$.}\end{array}\right. \tag{5}\] The parameter \(D\) appearing in \(g(t,B)\) therefore describes the asymmetry of the time scales in the declining and rising phases: the larger \(D\) is, the longer the declining phase is compared to the rising phase. We then fitted Eq. 4 to the 13-month smoothed SSN of each sunspot cycle separately.
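As a concrete illustration of this parameterization and fit, the following minimal sketch (in Python rather than the Matlab noted below; the array names, initial guesses and bounds are illustrative assumptions, not taken from the paper) implements the asymmetric Gaussian of Eqs. 4-5 and a bounded least-squares fit to the smoothed SSN of a single cycle:

```python
import numpy as np
from scipy.optimize import curve_fit

def cycle_curve(t, A, B, C, D):
    """Asymmetric Gaussian of Eqs. 4-5: t is time in years since the sunspot
    minimum starting the cycle; the width is C in the rising phase (t <= B)
    and C*D in the declining phase (t > B)."""
    width = np.where(t <= B, C, C * D)
    return A * np.exp(-((t - B) ** 2) / width ** 2)

def fit_cycle(t_cycle, ssn_cycle):
    """Fit A, B, C, D to one cycle of 13-month smoothed SSN (illustrative
    input arrays: times in years and the corresponding smoothed SSN)."""
    p0 = [ssn_cycle.max(), 4.0, 2.0, 1.5]                  # rough initial guess for A, B, C, D
    bounds = ([0.0, 0.0, 0.1, 0.1], [400.0, 12.0, 10.0, 10.0])
    popt, _ = curve_fit(cycle_curve, t_cycle, ssn_cycle, p0=p0, bounds=bounds)
    return popt
```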
Each cycle was defined from the time of the sunspot minimum to the next minimum. The 4-parameter fit was done with the non-linear Levenberg-Marquardt optimization implemented in Matlab software. Although consecutive sunspot cycles are known to be overlapping (so that a cycle already starts before the minimum SSN time), the fitting of our parametric model depends largely on the whole cycle and the results are not strongly sensitive to the data around the sunspot minima. This was tested either by leaving out up to 1 year of data from the beginning of the cycles or by extending the previous fitted cycle and subtracting it from the next cycle. In either case the fitted parameter values remained practically the same.

Figure 2 shows the time series of the 13-month smoothed SSN in black, the 4-parameter model fits for cycles 1-24 in red and the Hathaway et al. (1994) fit of Eq. 3 in blue. One can see that the fitted curves describe all the individual cycles reasonably well, although some of the detailed structure in the SSN cycles cannot be described by a smooth asymmetric Gaussian. Such structures are, for example, the very sharp-peaked cycles like cycles 1-4 and 8-11 and the often seen double peaks (see, e.g., Karak et al. (2018)), which are quite prominent, e.g., in cycles 22-24. However, the rising and declining phases of each cycle are well captured by the fit and the cycle amplitudes are quite close to the real cycle amplitudes. The average \(R^{2}\)-value of the 4-parameter fit over all cycles is 0.973, indicating that over 97% of the variability in the SSN cycles is captured by the model curves. The average root-mean-squared error for all the cycles is 8.7 and there is no statistically significant difference in this between even and odd numbered sunspot cycles. We also note that for most cycles the asymmetric Gaussian function used here is very close to the function (Eq. 3) used by Hathaway et al. (1994). However, in some cycles (3, 4, 8, 10) there is a clear difference, with the asymmetric Gaussian providing a somewhat better fit.

Figure 2: Time series of 13-month smoothed sunspot number (black), parameterized fits for each sunspot cycle (red curves) and the fit by Hathaway et al. (1994) for comparison (blue curves). The sunspot cycle numbers are indicated on the plot as well. The top panel shows cycles 1-8, middle panel cycles 9-16 and bottom panel cycles 17-24.

## 4 Relationships between cycle parameters

Let us next consider the relationships between the fitted values of the four parameters over all cycles. Figure 3 shows as a scatter plot the relationship between the parameters \(C\) and \(D^{-1/2}\). In the figure the odd numbered cycles are indicated with blue dots and the even numbered cycles with red squares. One can see that all the cycles depict quite a strong correlation between \(C\) and \(D^{-1/2}\) and that there is no significant difference between the odd and even cycles. The correlation coefficient between \(C\) and \(D^{-1/2}\) over all the cycles is 0.94 and it is highly significant (p-value is \(10^{-11}\)). This relationship indicates that cycles with a steep rising phase (i.e., small \(C\)) have a relatively more gradual (i.e., large \(D\) and small \(D^{-1/2}\)) declining phase and vice versa.
The correlation in Figure 3 is therefore a manifestation of the well-known property of sunspot cycles that small-amplitude cycles tend to be more symmetric about the sunspot maximum time than large-amplitude cycles (Waldmeier, 1968). For our purposes the tight relationship between \(C\) and \(D\) allows us to eliminate the parameter \(D\) from the sunspot model curve and thereby reduce the number of free parameters to 3. The parameter \(D\) is therefore replaced in Eqs. 4 and 5 by the expression \[D=(0.25(\pm 0.05)+0.226(\pm 0.02)\times C)^{-2}\,, \tag{6}\] which corresponds to the linear fit (yellow line) in Figure 3. After replacing \(D\) with Eq. 6 we repeated the fitting procedure for each sunspot cycle, but now using the remaining 3-parameter model. The correlation between \(C\) and \(D^{-1/2}\) is so high that the 3-parameter model is roughly as good as the 4-parameter model in describing each sunspot cycle. The average \(R^{2}\)-value for the 3-parameter model is 0.959 and therefore not much smaller than the 0.973 of the 4-parameter model. Note also that the elimination of \(D\) from the model does not significantly alter the values of the remaining parameters \(A\), \(B\) and \(C\). The correlations of the corresponding parameters of the 4-parameter and 3-parameter fits exceed 0.97.

Figure 4 shows the relationship between the parameter \(B\) (time of SSN maximum counted from SSN minimum in years) and the cycle amplitude \(A\). One can see that there is a general anti-correlation between \(A\) and \(B\). This anti-correlation between the cycle rise time and amplitude has long been known as the Waldmeier effect (Waldmeier, 1935). Here one can see that this anti-correlation is somewhat stronger (cc = -0.88, p-value = \(10^{-4}\)) in even cycles than in odd cycles (cc = -0.69, p-value = 0.014). The correlation between \(A\) and \(B\) found here is clearly higher than, e.g., the correlation between SSN maximum and cycle rise time found by Karak and Choudhuri (2011) (cc = -0.5). It therefore appears that the parameters of the fitted SSN model curve indicate the Waldmeier effect more robustly than the exact SSN data. This is likely because of the sensitivity of the Waldmeier effect to the timing and height of the sunspot maximum, which can be difficult to determine if the cycle has multiple peaks (Karak and Choudhuri, 2011). The linear fits to even and odd cycles, depicted by the red and blue lines in Figure 4, are given by the equations \[B_{\mbox{even}} = 5.9(\pm 0.4)-0.013(\pm 0.003)\times A_{\mbox{even}}, \tag{7}\] \[B_{\mbox{odd}} = 7.2(\pm 1.0)-0.016(\pm 0.006)\times A_{\mbox{odd}}. \tag{8}\] The correlation coefficients and the slopes/intercepts of the best-fit regression lines are different in the even and odd cycles only with a weak statistical significance (the p-values for the differences exceed 0.12). However, the mean squared errors of the linear fits to the even and odd cycles are highly significantly different (0.14 for even cycles and 1.01 for odd cycles, and the p-value for the difference is 0.003). This result indicates that the Waldmeier effect is more strongly valid for even cycles than for odd cycles.

Figure 5 shows the relationship between the final parameter \(C\) (time scale of rising phase) and the cycle amplitude \(A\). Here one can also see a general anti-correlation, which is yet another manifestation of the Waldmeier effect.
Actually, there is also a quite good correlation (cc = 0.91, p-value = \(8\times 10^{-10}\)) between the \(B\) and \(C\) parameters, which explains the two manifestations of the Waldmeier effect. The linear fits to even and odd cycles, depicted by the red and blue lines in Figure 5, are given by the equations \[C_{\mbox{\small even}} = 3.5(\pm 0.4)-8(\pm 2)\times 10^{-3}\times A_{\mbox{\small even}}, \tag{9}\] \[C_{\mbox{\small odd}} = 4.7(\pm 0.7)-1.3(\pm 0.4)\times 10^{-2}\times A_{\mbox{\small odd}}. \tag{10}\] The conclusions about the differences between even and odd cycles are the same as for the \(B\) vs. \(A\) relation in Figure 4. That is, the linear relationships or the correlations are only weakly statistically significantly different, but the difference in the mean squared errors is highly significant (p-value is 0.038).

Figure 3: Relationship between the \(C\) and \(D^{-1/2}\) parameters of the 4-parameter fits. Blue dots (red squares) indicate odd (even) numbered cycles. The cycle numbers have been further indicated beside all points. The yellow line depicts the linear fit to all the cycles, having a correlation coefficient of 0.94 (p-value is \(10^{-11}\)).

Figure 4: Relationship between the \(A\) and \(B\) parameters of the 3-parameter fits. Blue dots (red squares) indicate odd (even) numbered cycles. The cycle numbers have been further indicated beside all points. The blue and red lines depict the linear fits to odd and even cycles respectively.

Overall the results in Figures 4-5 indicate that the cycle amplitude \(A\) could further be used to reduce the number of parameters in the sunspot cycle fit. Moreover, there are indications that the accuracies of these fits are significantly different in even and odd cycles.

Figure 5: Relationship between the \(A\) and \(C\) parameters of the 3-parameter fits. Blue dots (red squares) indicate odd (even) numbered cycles. The cycle numbers have been further indicated beside all points. The blue and red lines depict the linear fits to odd and even cycles respectively.

## 5 Precursors for cycle parameters

Based on the above relations we could further reduce the SSN model curve to a one-parameter model. However, each simplification of the model makes it less flexible and decreases the model accuracy. Therefore it is useful to first consider whether the cycle parameters \(A\), \(B\) and \(C\) can be directly predicted by some suitable precursor. Cameron and Schussler (2007) showed that the solar activity level three years before the sunspot minimum starting the cycle is a relatively good predictor for the amplitude of the cycle. This result for sunspot numbers was shown by Petrovay (2020) (his Figure 6), who found a correlation of 0.8 between the maximum sunspot number of the cycle and the sunspot number taken 3 years before the sunspot minimum that starts the cycle. A correlation of 0.8 implies that only about 62% of the variability in cycle amplitudes could be explained by the past sunspot number, thereby offering a rather mediocre accuracy in predicting the coming cycle amplitudes, as also Petrovay (2020) mentions. However, still motivated by this result, we calculated in Figure 6 the correlation coefficient between the cycle amplitude \(A\) and the lagged sunspot number as a function of the time lag in months before the SSN minimum.
Unlike in past studies we did the calculation here separately for even and odd cycles and using the 2nd power of SSN (SSN\({}^{2}\)), which we found to produce slightly larger correlations than SSN (the difference, however, is rather small and not statistically significant). One can see that for the even cycles the correlation is systematically higher than for the odd cycles. The difference between even and odd cycles is quite large, but because of the rather low number of data points (only 12 even and odd cycles) the 95% confidence limits remain rather large even when the correlation coefficient is high. However, there is a location around the lag of 41 months (about 3.4 years) before sunspot minimum where the correlations do differ from each other statistically significantly. This lag of optimal correlation is quite close to the 3 years found by Cameron and Schussler (2007). Note, however, that the good correlation is not specific to exactly the optimum of 41 months but is seen over a broad range of lags from 34 to 44 months. We chose the lag of 41 months for a closer inspection in Figure 7, which displays all the three parameters \(A\), \(B\) and \(C^{1/2}\) as a function of the sunspot number taken 41 months before the sunspot minimum (SSN(41)). One can see that the SSN(41)\({}^{2}\) correlates quite well not only with cycle amplitude \(A\), but with all the three parameters in the even cycles. In the odd cycles there is evident correlation too, but it is clearly much lower than for the even cycles due to the larger scatter. In particular the SSN(41)\({}^{2}\) vs. cycle amplitude \(A\) resembles the plot in Figure 6 of Petrovay (2020), but now reveals a large difference between even and odd cycles. Recently Du (2020) also found that the sunspot number 39 months before the sunspot minimum is a precursor for the maximum SSN of the next cycle and this relation is stronger for even cycles. A similar finding was done by Nagovitsyn and Ivanov (2023). We shall later discuss the reasons for the better correlation in even cycles but for now we concentrate on the fact that SSN(41) can be used as a quite accurate predictor of the three cycle parameters for the even cycles. For cycle amplitude the correlation coefficient is 0.984 (p-value = \(8\times 10^{-9}\)) indicating that about 96.9% of the variation in cycle amplitude may be predicted with SSN(41). For cycle rise time \(B\) the correlation is slightly lower -0.86 (p-value = 0.0003) and for \(C^{1/2}\) (time scale of rising phase) the correlation is -0.89 (p-value = \(10^{-4}\)). The linear fits to the even cycles indicated by the yellow lines in Figure 7 are given by equations \[A_{\mathrm{even}}\ =\ 88(\pm 6)+0.0103(\pm 0.0006)\times\mathrm{SSN}(41)^{2}, \tag{11}\] \[B_{\text{even}} = 4.8(\pm 0.3)-1.3(\pm 0.3)\times 10^{-4}\times\text{SSN}(41)^{2}, \tag{12}\] \[C_{\text{even}} = \left(1.70(\pm 0.05)-3.1(\pm 0.5)\times 10^{-5}\times\text{SSN}(41)^{2} \right)^{2}. \tag{13}\] We also studied the correlation between cycle amplitude and geomagnetic activity, which is known to be a good precursor for the sunspot cycle amplitude. We tested various different quantities calculated from the extended homogenized geomagnetic \(aa\) index. We found that the best predictor for the cycle amplitude for odd sunspot cycles is given by the maximum average September-October \(aa^{3}\) value within the 3-year window extending backward from the sunspot minimum. 
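As an illustration of how this precursor could be computed from the monthly extended \(aa\) composite, a minimal Python/pandas sketch follows; the function and series names, and the grouping by calendar year inside the window, are assumptions for illustration rather than the paper's exact recipe:

```python
import pandas as pd

def sep_oct_aa3_precursor(aa_monthly: pd.Series, minimum_time: pd.Timestamp) -> float:
    """Maximum September-October mean of aa**3 within the 3-year window
    ending at the sunspot minimum; aa_monthly is assumed to be a monthly-mean
    aa series with a DatetimeIndex."""
    window = aa_monthly.loc[minimum_time - pd.DateOffset(years=3):minimum_time]
    sep_oct = window[window.index.month.isin([9, 10])] ** 3
    yearly = sep_oct.groupby(sep_oct.index.year).mean()   # Sep-Oct mean of aa^3 per year
    return yearly.max()
```

The odd-cycle amplitude estimate then follows from the linear relation given in Eq. 14 below.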
Figure 6: Correlation coefficient between cycle amplitude \(A\) and lagged 13-month smoothed sunspot number (SSN\({}^{2}\)) as a function of the time lag in months before the SSN minimum. Blue (red) curve indicates the odd (even) cycles and the correspondingly colored regions indicate the 95% confidence limits for the correlation coefficients. The green dot indicates the maximum correlation at 41 months before the SSN minimum for the even cycles.

Figure 8 shows the relationship between the cycle amplitude and this geomagnetic activity measure separately for the odd cycles (blue) and even cycles (red). The yellow line indicates the best linear fit of \[A=130(\pm 7)+0.0036(\pm 0.0004)\times\max_{[0,3\mathrm{y}]}\left(aa_{\mbox{Sep-Oct}}^ {3}\right). \tag{14}\] The linear correlation coefficient between the odd cycle amplitude and maximum \(aa_{\mbox{Sep-Oct}}^{3}\) is 0.981 (p \(<9\times 10^{-5}\)). One can also see from Figure 8 that the corresponding relation for the even cycles is much worse, but still statistically significant (correlation 0.76, p = 0.027). It should be noted, though, that by trying different quantities calculated from the \(aa\) index one can obtain higher correlations for the even cycles as well, but it seems that none of these correlations exceeds the extremely good correlation between even cycle amplitude and SSN(41) found above.

Figure 7: Dependence of \(A\), \(B\) and \(C\) parameters on 13-month smoothed sunspot number 41 months before the sunspot minimum (SSN(41)\({}^{2}\)). Blue dots (red squares) indicate odd (even) numbered cycles. The cycle numbers have been further indicated beside all points. The red lines depict the linear fits to even numbered cycles.

We also evaluated the correlation between the cycle amplitude and the geomagnetic precursor defined above as a function of the time lag counted from the sunspot minimum preceding the cycle. That is, instead of taking the maximum \(aa^{3}_{\mbox{Sep-Oct}}\) in a 3-year window ending at the cycle minimum, the window was shifted backward in time by varying lags. The resulting correlations are shown in Figure 9 separately for odd (blue) and even (red) cycles. One can see that the highest correlations for odd cycles are indeed found up to 16 months before the cycle minimum. There is a drop in correlation between 16-55 months before minimum, but the correlation then rises again and remains rather high (about 0.8) between 55-100 months before the SSN minimum. Note, however, that this does not imply a longer lead time for prediction, since the timing of the SSN minimum is still needed to determine the precursor. Figure 9 also shows that the correlation for even cycles is clearly much lower than for the odd cycles at practically all lags. However, because of the somewhat limited number of data points, the statistical 95% confidence limits for the correlations are rather large.

Figure 8: Relationship between cycle amplitude \(A\) and maximum September-October \(aa^{3}\) from a 3-year window before SSN minimum. The blue points indicate odd cycles and red squares the even cycles. The blue line indicates the linear fit to the odd cycles. The green circle indicates the estimated amplitude for cycle 25.

Figure 9: Correlation coefficient between cycle amplitude \(A\) and lagged maximum monthly September-October \(aa^{3}\) within a 3-year window. Blue (red) curve indicates the odd (even) cycles and the correspondingly colored regions indicate the 95% confidence limits for the correlation coefficients.
Even though there are indications of a systematically higher correlation for the odd cycles, there is no firm evidence that the correlations are statistically significantly different for the even and odd cycles.

The specific recipe for calculating the geomagnetic-activity-based precursor for odd cycles was found by trial and error, manually testing a few tens of different (rather random) possibilities. This raises the question of whether the found precursor is statistically significantly better than some other, perhaps more common, choice like the mere average \(aa\) index in the year of sunspot minimum. We tested the possibility of randomly obtaining a geomagnetic-activity-based precursor as good as the one found above. This was done by generating \(10^{4}\) random geomagnetic precursor time series by varying the calendar months from which the \(aa\) index is taken (2 random calendar months), the length of the time window (randomly selected between 1 and 5 years) and the exponent assigned to the \(aa\) index (randomly selected between 1 and 3). In addition we randomly varied whether we take the maximum of the yearly values within the time window (as we did for the chosen precursor) or the average. For each such randomly generated precursor we calculated the correlation between the precursor and the following cycle amplitude as a function of lag up to 11 years and found the maximum correlation. This procedure simulates the act of randomly choosing a recipe for determining the precursor and finding the maximal correlation over all these lags. Finally, we randomly grouped the remaining correlations into groups of 10 values and determined the maximum correlation in each group. This simulated the act of testing 10 random precursors and choosing the one that gives the maximal correlation at some lag. Comparing the correlation coefficient (0.981) for the precursor used above in Figures 8 and 9 to the maximal correlations of these randomly generated precursors shows that there is only a probability of less than 4.5% to randomly obtain a correlation higher than 0.981. This indicates that the recipe for the geomagnetic precursor we have chosen for the odd cycles is indeed statistically significant at the 95% significance level. We also compared the geomagnetic precursor found above to a more commonly used geomagnetic precursor, the annual average of the \(aa\) index at the solar minimum, for which the correlation with the following cycle amplitude is 0.834. The correlation for our precursor was 0.981 and its difference from 0.834 is statistically highly significant (the p-value for the difference is about 0.0007).

The length of the sunspot cycle is also an interesting and important quantity. However, it is difficult to estimate from the predicted sunspot cycle curve because typically the SSN does not drop to zero at the end of the cycle. We studied the association of the cycle length with the four cycle parameters, but found no strong relationships that would allow one to estimate the length of the cycle based on the parameters of the same cycle. However, we found that the length of the cycle seems to significantly correlate with the ratio of the \(D\) and \(C\) parameters (i.e., \(D/C\)) of the _previous cycle_. This relationship is shown in Figure 10. The correlation coefficient between \(D/C\) of the previous cycle and the length of the current cycle is 0.66 (p = 5.7 \(\times\)\(10^{-4}\)) and is statistically very significant. There are three cycles (6, 13 and 23) which are clear outliers.
Excluding these cycles yields an even higher and more significant correlation of 0.88 (p = 2.6 \(\times\)\(10^{-7}\)). We note that these correlations are both significantly higher than, e.g., the correlation between the cycle length and the amplitude of the same cycle (about -0.5) (Wolf, 1861; Petrovay, 2020). The yellow line in Figure 10 shows the linear fit excluding the cycles 6, 13 and 23. The equation for this fit is \[L_{i}=9.2(\pm 0.4)+1.5(\pm 0.3)\frac{D_{i-1}}{C_{i-1}}, \tag{15}\] where \(L_{i}\) is the length of cycle \(i\) in years and \(D_{i-1}\) and \(C_{i-1}\) refer to the parameters of the previous cycle \(i-1\).

Figure 10: Relationship between the cycle length and the \(D/C\) ratio of the previous cycle. The numbers of each data point refer to the current cycle. The blue points (red squares) correspond to odd (even) cycles and the green point to the estimated length of cycle 25. The yellow line indicates the linear fit excluding the three cycles 6, 13 and 23.

## 6 Prediction of sunspot cycles

### Cross-validated predictions for past cycles and prediction for cycle 25

Based on the discussion above we can predict the sunspot cycle once the time of the sunspot minimum starting the new cycle is known. We use the 3-parameter description (\(A\), \(B\) and \(C\)) for the sunspot cycle of Eqs. 4-5. Parameter \(D\) has been eliminated using Eq. 6. For the even sunspot cycles we directly estimate the three \(A\), \(B\) and \(C\) parameters with \(\mathrm{SSN}(41)^{2}\) as shown in Figure 7. For the odd cycles we first estimate the cycle amplitude \(A\) from the geomagnetic precursor as shown in Figure 8 and then estimate \(B\) and \(C\) using the linear relationships for the odd cycles depicted in Figures 4 and 5, respectively.

Using this approach we predicted the past even sunspot cycles starting from cycle 2 and each odd sunspot cycle starting from cycle 11. We predicted each cycle separately with the so-called leave-one-out cross-validation method. This means that when predicting the \(i\):th cycle, all the fits between the different parameters and precursor values discussed above were determined using data from all other cycles except the \(i\):th cycle. Therefore, the fitted relationships between the parameters and precursors (Eqs. 8, 10, 11, 12, 13, 14) change from one predicted cycle to the next. This variability in the model parameters, together with the residual variability, incorporates the total prediction uncertainty of the model. It is important to note that the numerical values in Eqs. 11, 12 and 13 for \(A\), \(B\) and \(C\) of even cycles and in Eqs. 14, 8 and 10 for \(A\), \(B\) and \(C\) of odd cycles, as well as their standard errors, correspond to the fitted values when all available sunspot cycles are used in the fit. These values are therefore appropriate for the prediction of sunspot cycles from cycle 25 onward.

An important step in applying the cross-validation method is to estimate the prediction error of the model. Therefore, when proceeding through all the past cycles 1-24 (excluding the odd cycles 1, 3, 5, 7 and 9, for which no geomagnetic precursor could be determined), we obtained the residuals between the 13-month smoothed \(\mathrm{SSN}^{3/4}\) values and the corresponding predicted values, each time neglecting the cycle to be predicted, as \[r=\mathrm{SSN}^{3/4}-\mathrm{SSN}^{3/4}_{\mbox{pred}}. \tag{16}\]
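As a rough sketch of this error estimation and of the bootstrap band construction described next, the following Python fragment uses the 3/4-power scaling of Eq. 16 and the back-transform of Eq. 17 introduced just below; the array names, the clipping of negative values and the percentile band are illustrative assumptions rather than the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def collect_residuals(obs_cycles, pred_cycles):
    """Pooled residuals of Eq. 16 in SSN^(3/4) scale; obs_cycles and
    pred_cycles are lists of monthly observed/predicted SSN arrays,
    one pair per cross-validated cycle."""
    return np.concatenate([obs ** 0.75 - pred ** 0.75
                           for obs, pred in zip(obs_cycles, pred_cycles)])

def bootstrap_band(pred_cycle, residuals, n_boot=10**5):
    """Ensemble of monthly SSN predictions (cf. Eq. 17): resample residuals
    with replacement, add them in the 3/4-power scale and transform back to
    linear scale; negative values are clipped to zero before the power."""
    r = rng.choice(residuals, size=(n_boot, pred_cycle.size), replace=True)
    ensemble = np.clip(pred_cycle ** 0.75 + r, 0.0, None) ** (4.0 / 3.0)
    return np.percentile(ensemble, [2.5, 50.0, 97.5], axis=0)   # median and ~95% band
```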
The exponent of \(3/4\) in Eq. 16 was used because the residuals calculated this way were more homoscedastic, i.e., the residual variance was more uniform over different values of \(\mathrm{SSN}\), compared to regular residuals in linear scale (exponent of 1 in Eq. 16). We then used the collection of residuals from Eq. 16 to generate a spread of possible sunspot number predictions for the \(i\):th predicted cycle. This was done by bootstrapping \(10^{5}\) values for the residuals (i.e., randomly resampling with replacement from the collection of residuals) for each predicted monthly sunspot number value in the cycle. The residuals were added to the \(\mathrm{SSN}^{3/4}_{\mbox{pred}}\) values and the result was then converted back to linear scale by the transformation \[\mathrm{SSN}_{k}=\left(\mathrm{SSN}^{3/4}_{\mbox{pred}}+r_{k}\right)^{4/3}, \tag{17}\] where \(k=1,2,...,10^{5}\), \(r_{k}\) indicates the \(k\):th bootstrapped residual and \(\mathrm{SSN}_{k}\) indicates the \(k\):th predicted SSN value for a particular month. These \(10^{5}\) values form an ensemble of \(\mathrm{SSN}\) predictions for each monthly \(\mathrm{SSN}\) value in the predicted cycle.

Figure 11 shows the predicted past cycles (green curves) and their uncertainty ranges (blue shading) together with the 13-month smoothed sunspot number (black curve) and the optimal 4-parameter model curves (red curves) as in Figure 2. The figure also shows the predicted curve for cycle 25 (magenta curve) and, as a comparison, the cycle 25 prediction offered by the Space Weather Prediction Center ([https://www.swpc.noaa.gov/products/solar-cycle-progression](https://www.swpc.noaa.gov/products/solar-cycle-progression)). One can see that the predicted past cycles are rather close to both the optimal 4-parameter model curves and the 13-month smoothed sunspot number. Particularly noteworthy is the fact that the amplitude of all of the predicted past cycles quite accurately matches the real amplitude of the sunspot cycles. This is true for the very small cycles 6, 12, 14 and 24 and also for the largest cycles, e.g., cycle 19. Furthermore, by construction about 95% of the monthly values of the 13-month smoothed sunspot number are within the 2-standard-deviation uncertainty range of the predicted curve. This extremely good performance of the past predictions gives strong reasons to believe that the prediction of sunspot cycle 25 is reliable as well.

Figure 12 shows a closeup of the cycle 25 prediction in the same format as in Figure 11, but with the monthly unsmoothed values of SSN also included as the thin black curve. Our predicted curve peaks in May 2024 and the peak value of the cycle is \(171\pm 23\) (1 standard deviation uncertainty). Using Eq. 15 and the \(D\) and \(C\) parameters of cycle 24, the predicted length of cycle 25 is about \(9.7\pm 1.9\) years (2 standard deviation confidence interval). This indicates that the end of cycle 25 is likely attained in September 2029, with the 2-standard-deviation uncertainty range extending from October 2027 to July 2031. At the time of writing this paper cycle 25 has already begun and the recorded SSN is already significantly higher than the prediction offered by the SWPC. On the other hand, the recorded SSN follows quite closely our predicted curve, which indicates that cycle 25 will be considerably stronger than cycle 24 and roughly the same size as cycle 23. There have also been a range of other predictions for cycle 25.
For example, Upton and Hathaway (2018) used advective flux transport modeling to predict that cycle 25 amplitude will be about 95% of cycle 24 amplitude which would make cycle 25 the weakest cycle in 100 years. Bhowmik and Nandy (2018), on the other hand, used solar dynamo modeling to predict that cycle 25 would be slightly stronger (about 14%) than cycle 24. Pesnell and Schatten (2018) used the SODA index (based on a combination of solar polar magnetic field and solar F10.7 index) to predict an amplitude of 135\(\pm\)25 for cycle 25, i.e., slightly larger than cycle 24. Kumar et al. (2021) used various polar field precursors extracted, e.g., from solar magnetograms, to predict a cycle amplitude of 126\(\pm\)3 for cycle 25. Kumar et al. (2022) used the correlation between the rise rate of polar field and the amplitude of the next cycle to predict an amplitude of 137\(\pm\)23 for cycle 25. Sarp et al. (2018) used non-linear time series modeling approach to predict a clearly stronger cycle 25 with an amplitude of 154\(\pm\)12 peaking in early 2023. Li et al. (2018) used statistical modeling to reach a similar prediction with a predicted amplitude of 168.5\(\pm\)16.3 and peak of the cycle in October 2024. Both Sarp et al. (2018) and Li et al. (2018) predictions are fairly close, but slightly lower than our prediction. A quite different prediction was given by McIntosh et al. (2020), who used the timing of termination of toroidal bands of solar activity to predict a rather strong cycle 25 with amplitude of 233\(\pm\)21. However, this prediction was recently revised to an amplitude of 184\(\pm\)17 (McIntosh et al., 2022), which is in agreement with our prediction when considering the ranges of uncertainty. Du (2022) used the correlation between the cycle rising rate and cycle amplitude to predict the cycle 25 based on the 2 years of data from the beginning of the cycle. They predicted the cycle amplitude to be 135.5\(\pm\)33.2, which is somewhat lower than our prediction, although not in complete disagreement given the uncertainty range. Penza et al. (2021) found a correlation between the parameters describing the sunspot cycle shape of even and subsequent odd cycles. Based on this they predicted the cycle 25 to be similar or slightly larger than cycle 24. While the studies discussed above are only a subset of all predictions made for cycle 25 it seems that our prediction is clearly above the average in predicted cycle amplitude and also clearly above the SWPC prediction issued by the Solar Cycle Prediction Panel. ### Attempt at predicting the cycle 26 Above we found that for even numbered cycles the 13-month smoothed sunspot number evaluated 41 months before the start of cycle provides an extremely good estimate for the amplitude and other parameters of the cycle. This leads to an interesting question: how accurately could one use the predicted cycle 25 SSN curve to provide a prediction of cycle 26? Evidently the uncertainty of such a prediction would be rather large but we shall here attempt to make one also for cycle 26. The first step is to evaluate how well the predicted SSN 41 months before the sunspot minima actually correspond to the true values, which were used as a precursor for even cycles. Figure 13 shows the relationship between the real and predicted 13-month smoothed SSN evaluated 41 months before the minima that end each sunspot cycle. Both odd and even cycles seem to adhere to the same overall linear relationship, which has a high correlation of 0.832 (p = 10\({}^{-5}\)). 
The linear relationship is given by the equation \[\mathrm{SSN}(41)_{\mbox{real}}=20(\pm 10)+0.72(\pm 0.12)\times\mathrm{SSN}(41)_{ \mbox{pred}}. \tag{18}\] Ideally there would be a one-to-one relationship between the real and predicted values, but the modeled SSN curves seem to systematically slightly underestimate (overestimate) small (large) SSN(41) values. We can use this fit and its 95% prediction error limits (see Figure 13) to predict the SSN 41 months before the sunspot minimum that starts cycle 26. The timing of this minimum is determined by the timing of the minimum that starts cycle 25 and the cycle 25 length evaluated above from Eq. 15. Once the SSN(41) value to be used as a precursor for cycle 26 is known, we can use Eq. 11 to estimate the cycle amplitude. Because of the uncertainties associated with the length of cycle 25 and the scaling of the predicted SSN(41) according to Eq. 18, we evaluated the spread of possible cycle 26 amplitudes using a Monte Carlo simulation with 10\({}^{4}\) rounds. In each round we randomly generated a value for the length of cycle 25 within the range of its uncertainty. This was then used to calculate the timing of 41 months before the end of cycle 25 and the SSN value at that time using the predicted SSN curve for cycle 25 (Figure 12). This value was then used to calculate a prediction for the cycle amplitude using Eq. 11. The histogram of the Monte Carlo simulation results is shown in Figure 14. As expected, the results indicate quite a large uncertainty range, covering practically all past sunspot cycles. However, despite the large range of uncertainty some interesting and non-trivial aspects are seen. The median of the results implies that cycle 26 would be even slightly stronger than cycle 25. In fact, based on these results the probability that cycle 26 will be weaker than cycle 25 is only about 19%. The results also imply an even clearer difference to cycle 24, which was the weakest cycle of the last 100 years. The probability that cycle 26 would be weaker than cycle 24 is only 0.8%.

## 7 Discussion and conclusions

We found that the statistical relations between the cycle parameters were significantly stronger in even cycles compared to the odd ones. For example, the well-known Waldmeier effect, that cycle amplitude and its rise time are inversely correlated, was more strictly valid for even cycles. A similar finding for the Waldmeier effect was also reported, e.g., by Du (2022a) and Dikpati et al. (2008). This result implies that for some reason the even numbered cycles possess a smaller dimensionality than odd numbered cycles. Such a systematic difference between even and odd cycles may imply a connection to the more fundamental 22-year magnetic Hale cycle of the Sun. Du (2022a) also found significant differences in the Waldmeier effect between the two hemispheres, with the southern hemisphere displaying the normal Waldmeier effect, which is stronger in even cycles, while the northern hemisphere displayed an inverse-Waldmeier effect, which is stronger in odd cycles. This implies that there may also be a connection between the Waldmeier effect and the hemispheric asymmetry in the sunspot number.

Our approach to the prediction of sunspot cycles is based on statistical precursors for the cycle parameters. Following Cameron and Schussler (2007) we found that the 13-month smoothed sunspot number evaluated 41 months before the sunspot minimum that begins a cycle, SSN(41), is on average a fair predictor for the next cycle when considering all sunspot cycles.
However, we found that this relation is much stronger for even cycles than for odd cycles, for which SSN(41) could not be used very accurately as a predictor. The advantage of the SSN(41) precursor is that, for even cycles, we can predict all the cycle parameters (which correlate well with the amplitude) quite accurately. Cameron and Schussler (2007) explained the tendency of the past SSN to correlate with the next cycle amplitude as a result of overlapping sunspot cycles. While the past cycle is still declining, the new cycle begins. Because of the Waldmeier effect, the stronger the new cycle will be, the faster it will rise to its maximum and the earlier the intersection time of the old and the new cycle is attained. The earlier the intersection time, the higher the SSN 41 months earlier in the declining phase of the previous cycle. The fact that this connection is here found to be more strictly valid for even cycles arises because the Waldmeier effect is also tighter in even cycles. The question of why the Waldmeier effect is different in even and odd cycles is still open and should be investigated in future studies.

For odd cycles we found that the maximum Sep-Oct geomagnetic \(aa\) index within 3 years preceding the sunspot minimum that begins the cycle is an extremely good precursor for the cycle amplitude. It has long been recognized that geomagnetic activity close to solar minimum reflects the strength of the open solar flux, which is at these times connected to the poloidal magnetic flux extending from the Sun's polar coronal holes. It is curious, however, that the best precursor for odd cycles was found to be connected to geomagnetic activity close to the fall equinox. Some other past studies have used geomagnetic activity averaged in some other ways over different periods of time and found good correlations to the next cycle amplitude. While our Sep-Oct \(aa\) precursor is probably not statistically significantly better than some of these other measures, there might be a physical reason for the preference for the Sep-Oct season. It is known that geomagnetic activity has a strong seasonal variation, which is largely due to the Russell-McPherron (RMP) effect (Russell and McPherron, 1973; Lockwood et al., 2020). The RMP effect describes the fact that the interplanetary magnetic field (IMF), which is oriented along the solar equatorial plane, projects onto the Z-axis of the GSM coordinate system close to the fall and spring equinoxes and thereby may enhance the energy input from the solar wind into the magnetosphere/ionosphere system. The RMP effect is also dependent on the polarity of the IMF, so that during the fall (spring) equinox the IMF pointing away from (towards) the Sun enhances the solar wind energy input and therefore leads to larger geomagnetic activity as well. Often both polarities of the IMF are seen within a solar rotation, but especially close to solar minima the heliospheric current sheet is often flatter, which allows Earth to be more exposed to the magnetic polarity connected to the Sun's northern (southern) pole in fall (spring) due to the Earth's changing heliographic latitude over the course of the year (Rosenberg and Coleman Jr., 1969). According to the 22-year Hale cycle, the northern solar pole has a positive magnetic polarity close to the sunspot minima preceding odd cycles, which leads to a dominance of the away sector close to fall equinoxes (Hiltula and Mursula, 2007; Vokhmyanin and Ponyavin, 2012).
Furthermore, it has been shown that the heliospheric current sheet is systematically tilted southward in the declining phase (Hiltula and Mursula, 2006), which further enhances the dominance of the away sector in fall and decreases the dominance of the toward sector in spring prior to odd cycles. Therefore, it is expected that also the geomagnetic activity portrayed by the _aa_ index is most sensitively proportional to the strength of the IMF (i.e., to open solar magnetic flux connected to solar polar field) at these times. A detailed confirmation of this interpretation is warranted, but out of the scope of this study. In addition to cycle amplitude and other parameters we found a curious statistical relation between the length of the sunspot cycle and the ratio of \(D\) (cycle asymmetry) and \(C\) (time scale of rising phase) of the preceding cycle. The fact that such properties of the preceding cycle somehow are connected to the length of the next cycle again highlights the fundamental 22-year Hale cyclicity of solar magnetism. While there is no physical explanation for this relationship at the moment it can be statistically used to estimate the cycle length perhaps a bit more accurately than previous metrics (Petrovay, 2020). Using the found precursors we used cross-validation to test their prediction accuracy by predicting the past solar cycles. For all past cycles the predictions were very close to the real sunspot cycles thereby giving strong confidence that the prediction of future cycles would be equally successful. We proceeded to predict the odd cycle 25 and found that its amplitude will be \(171\pm 23\), thus about 1.6 times stronger than cycle 24. There are already clear indications that our prediction closely follows the progression of the cycle 25 which has already started at the time of writing this paper. It is also noteworthy that the prediction issued by the Solar Cycle 25 Prediction Panel at the SWPC suggests cycle 25 to be similar to cycle 24, which is already now clearly below the current sunspot levels. Using the predicted cycle 25 and the fact that SSN(41) could be used as a predictor for the even cycles we provided a rough prediction also for cycle 26. As expected the uncertainty range of the prediction was rather large, but based on the results it seems rather likely that the cycle 26 will be stronger than both cycles 24 and 25. Therefore, we find no evidence for an imminent drop of solar activity to a grand solar minimum as suggested by several past studies (e.g. Abreu et al., 2008; Owens et al., 2011; Lockwood et al., 2011). Overall these results display the capability to predict even and odd cycles using different precursors with rather high accuracy. The results also clearly indicate a connection between odd-even pairs of sunspot cycles and highlight the 22-year Hale cyclicity. Accordingly, the Hale cyclicity should be considered more carefully also by more physically motivated dynamo and flux transport prediction models of solar activity. ###### Acknowledgements. We acknowledge the financial support by the Academy of Finland to the PROSPECT (project no. 321440). The sunspot number data was obtained from World Data Center SILSO, Royal Observatory of Belgium, Brussels ([https://www.sidc.be/SILSO/home](https://www.sidc.be/SILSO/home)).
2309.05670
Towards Supporting Sustainable Grocery Shopping through Joyful Technology: Annotated Portfolio of Speculative Ideas
A third of greenhouse gas emissions are attributable to the food sector. A shift in dietary habits could reduce these by half. Engaging and empowering consumers is vital to this critical shift; yet, if we get the framing wrong, we might cause distress or eco-anxiety, impeding initial engagement as well as longer-term diet change. Evoking joy is a powerful yet under-explored motivator to overcome psychological barriers and support pro-environmental attitudes. This pictorial presents the outcomes of a one-day workshop as a series of speculative ideas in the form of an annotated portfolio, highlighting design qualities and interaction mechanisms that afford joy and sustainability in food choices. Our contribution will inspire HCI researchers and designers to reposition joy as a fundamental value to sustainability communication
Gözel Shakeri, Frederike Jung, Ferran Altarriba Bertran, Daniel Fernandez Galeote, Adrian Friday
2023-09-08T12:38:26Z
http://arxiv.org/abs/2309.05670v1
Towards Supporting Sustainable Grocery Shopping through Joyful Technology: Annotated Portfolio of Speculative Ideas

###### Abstract

A third of greenhouse gas emissions are attributable to the food sector. A shift in dietary habits could reduce these by half. Engaging and empowering consumers is vital to this critical shift; yet, if we get the framing wrong, we might cause distress or eco-anxiety, impeding initial engagement as well as longer-term diet change. Evoking joy is a powerful yet under-explored motivator to overcome psychological barriers and support pro-environmental attitudes. This pictorial presents the outcomes of a one-day workshop as a series of speculative ideas in the form of an annotated portfolio, highlighting design qualities and interaction mechanisms that afford joy and sustainability in food choices. Our contribution will inspire HCI researchers and designers to reposition joy as a fundamental value in sustainability communication.

Behavior change, speculative design, persuasive technology, joy, sustainable HCI, sustainability communication, sustainable interaction design

## 1 Introduction

A third of environmental degradation is attributable to the food sector [40]. Dietary change could reduce this by half [53]. Thus, eating more sustainably (i.e., consuming more plant-based, local, and seasonal foods) is of growing importance. Accordingly, nearly three-quarters of North-West Europeans [1, 2] think it is important for them to buy food that has a low environmental impact, yet only 7% regularly do [11, 5]. While there are clearly many system barriers beyond the individual consumer, engaging consumers in this change is more important than ever. Much research focuses on the provision of information during sustainable grocery shopping, in-store and online. Solutions range from sustainable recommender systems [57] and user-preference-based systems [26] to incentivisation strategies [19, 37] and behavioural nudges [32, 37, 51]. However, there are two major issues with these approaches.

First, existing solutions side-line positive emotional involvement. A growing body of literature argues that the widely-held view of a "lack of information" about climate change is not the main obstacle to engagement [31]; rather, a lack of positive emotional involvement is. When exposed to environmental degradation [58], the primary emotional reactions we may feel are a sense of diminished control and powerlessness [2, 13, 14]. Mechanisms aimed at relieving us from these negative feelings are denial, rational distancing, apathy, and delegation [25]. Thereby, emotions function as _antecedent_ to engagement [47], obstructing sustainable behaviours. Additionally, emotions function as _consequence_ of engagement. Individuals who are particularly invested in sustainability are further impacted by negative feelings such as anxiety, distress, dread, guilt, shame, and frustration [34, 59, 60], reinforced by the notion that engagement is intrinsically utilitarian and denies self-indulgence. Ultimately, negative feelings before and after pro-environmental behaviours, and the shift of responsibility onto the individual and away from system drivers including affordability and access, cause disengagement with climate action altogether [41]. Interventions that only target factors such as knowledge, instead of attempting to overcome negative emotions, are unlikely to cause significant changes in socially and culturally embedded high-impact eating habits [5, 39].
We posit that _joy1_ may help consumers to find and maintain holistically meaningful ways to take action, thus promoting and sustaining pro-environmental efforts [25, 47, 56, 61]. Tools that elicit positive emotions, such as feeling amazed, cheerful, proud, and hopeful, have been shown to be significant predictors of purchase intentions, and can increase the probability of making a purchase [17, 48]. Overlooking the importance of joy when designing systems supporting sustainable food purchases may hinder their success [17, 49, 4].

Second, existing solutions utilise primarily (front-of-package) labels as a language to communicate a food's sustainability, focusing again on quantitative and symbolic representation rather than sparking joy. While eco-labels could elicit positive emotions such as pride over purchase behaviour [17, 36], eco-labels can lead to consumer confusion through information overload [63] as well as lack of information [20], decreased consumer trust and label credibility [21, 33], and ultimately to feelings of helplessness [34, 59, 60] once sustainable purchase intentions are abandoned [3, 16, 54]. More generally, the intention and impact of eco-labels and existing digital solutions do not extend beyond technology as guidance, as mediators, or as tools for reflection [29].

Despite potentially playing an important role in shaping people's climate-conscious decision-making, joy is clearly not sufficient, nor is there a one-size-fits-all approach [47]. The expression of joy depends on situational circumstances and individual variables, including one's personal understanding of joy [47]. Therefore, in this work, we explore what might be 'joyful elements' for sustainability signalling through a research-through-design workshop. The ubiquity of digital tools can cater to a multitude of factors by providing versatile media (e.g. apps, web-browser add-ons, Augmented Reality (AR) glasses) in versatile ways (e.g. in-store, online). Acknowledging the possibly pivotal role of digital technologies, we invited HCI researchers and designers to a workshop where we co-designed technology-mediated eco-labels which support consumers in pro-environmental decision-making. The overarching theme of the workshop was to rethink sustainability communication and to incorporate joy into it.

This paper presents the outcomes of conversations that took place during the workshop, aimed at imagining the future of digital sustainability support. In particular, our contribution is twofold: first, we contribute an annotated portfolio of five speculative ideas for eco-labels that utilise emerging technologies in ways that may be both _effective_ and _joyful_. Second, we provide a close reading of these designs [9] to reflect on common qualities and themes for sustainability signalling. Aiming to 'reposition joy' as a fundamental aspect of sustainability signalling, this work presents researchers and designers in this field with tangible ideas on how to incorporate joy in sustainable grocery shopping.

## Background

Joy is key in pro-environmental behaviour and must be considered for productive engagement with climate change. For example, the way we communicate about animal-derived products often highlights positive emotions, triggering re-experiences of eating and enjoying foods, often in social circumstances [39] (e.g., a family barbecue), thereby guiding consumption processes such as habits and perceived pleasure [15, 42, 50].
In contrast, communication about sustainable foods often lacks enjoyment, focusing instead on informational and utilitarian factors such as the sustainability of the ingredients or the greater good for society and the planet [39]. Consequently, sustainability becomes a moral obligation towards others rather than an intrinsic value. This, once framed as an extrinsic source of motivation, can tie the behaviour to contingent and external dynamics of reward and punishment [44]. Then, associated negative emotions may function as a hindrance to future purchase behaviours. Evoking joy instead may be key to promoting productive engagement with climate change.

The single most used technology to support sustainable grocery shopping is the food label. The basic idea of food label technology is to provide information, enable comparison, and give feedback [10, 46], using either abstract or concrete information, presented visually or through text, giving complex information about a product's characteristics, such as animal welfare standards, environmental impacts, and ethical wage labour, in a simplified form to make it easier for consumers to make informed decisions [4, 8, 21, 28]. HCI examples of eco-labels on food are EoFormed [6], Social Recipes [62], Envirofy [51], Nu-Food [38], Econundrum [46], and Food Qualiculator [7]. These works investigated labels as a means of providing information tailored to users' own context and choices, without a direct focus on enabling joy.

Beyond being primarily informational, labels are traditionally static. They are either printed onto food packaging, visualised in sculptures [46], or displayed with a product during technology-mediated shopping [52]. Static solutions operate from a one-size-fits-all mentality; pro-environmental consciousness [25], however, is a complex construct made up of environmental knowledge, values and attitudes, and emotional involvement. A single solution cannot cater to the multitude of knowledges and aspects of joy. In HCI, attempts to create dynamic labels exist (e.g. [26, 51]), but they do not consider joy, ignoring a key aspect necessary for successful sustainability support when grocery shopping.

Finally, labels may elicit emotions, but often negative ones [17]. On the one hand, labels empower consumers to adhere to their sustainability goals, instilling a sense of pride through accomplishment [17]. On the other hand, labels do not address the distress experienced with regard to climate change engagement, such as the sacrifice of personal pleasure over global interests, etc. Invoking people's emotional responses more intentionally and actively may overcome these psychological (and societal) challenges.

In summary, there exists a gap in the literature investigating _effective_ and _joyful_ sustainability support. Digital technological solutions, to a larger extent than printed packaging labels, have the ability to display a multitude of personalised or emotionally appealing content and allow the shopper to interact with sustainability information. HCI researchers play a crucial role in designing, creating, and evaluating effective visualisations that encourage sustainable food choices, positive grocery shopping experiences, and ultimately reduce greenhouse gas emissions. This pictorial presents a series of five speculative ideas in the form of an annotated portfolio, highlighting design qualities that afford joyful and sustainable food choices.

**Workshop**

This pictorial is based on the results of a workshop held at [omitted].
It was based on the concept of "choosing (with) joy", in which participants explored the idea of joy in eco-labels, sustainable food communication, and technologies. After piloting the workshop with six participants in a prior setting, we then conducted the workshop in one day with 11 participants, including 5 of the co-authors. All participants worked in the field of HCI, as researchers or designers. Overall, the workshop consisted of three parts: examining the state-of-the-art in sustainability communication; designing speculative, digital tools for sustainability communication; and delving into the ideas and their underlying qualities. Together with the participants, we discussed the surfaced overall themes of joyful, technology-mediated sustainability communication. After the workshop, the authors iterated on this analysis. We revisited the common qualities and themes of sustainability signalling through a close reading of the speculative designs [9], and linked them back to existing literature, in an attempt to challenge and solidify the insights from the workshop. The result is a set of joyful speculative ideas, their design qualities, and themes that inspire designers and HCI researchers to reclaim joy as a fundamental value for sustainable grocery shopping. ## Be(c)hind the Story: multiple points of view _Be(c)hind the Story_ imagines a pot of honey bearing a label that can be scanned using a phone. The label is not only a symbol, but the drawing of a bee which asks directly: "Have you ever wondered how honey gets from me to you?". When the label is scanned, an AR environment appears on the screen, inviting the user to select one of the points of view involved in the production and distribution of the honey. The unique stories included involve humans, such as the beekeepers, but also other animal species such as bees. In addition, the user will be able to focus on the honey's point of view itself, including its distribution process. These are presented as videos sprinkled with moments of choice where users can decide whether they want more information on different issues, from animal welfare to carbon emissions. Overall, the system departs from statistical information to weave unique stories, as well as an overarching one, which are then tailored to the consumer according to their own preferences and other circumstances (e.g., their country, representing an end point of the distribution process that may be very different from others). In this way, _Be(c)hind the Story_ prioritises the consumer's autonomy, as well as their relatedness to various multi-species actors to whom they have, indeed, a deep connection: they are about to eat the fruit of their labour. By being able to choose the various stories of how the honey got to the consumer's particular city or shop, _Be(c)hind the Story_ aims to provoke reflection and a deeper engagement with what is behind the scenes. ## ECO Bloom: Personalized adaptation _Eco Bloom_ provides a layered overview of how a product's qualities align with a consumer's personal pro-environmental attitudes. Upon scanning a product, a colourful closed clover appears on the consumer's smart watch or phone. The closed clover shows an overall score (1-10). Depending on the value, the label is coloured in a traffic-light metaphor. Each folded leaf, surrounding the overall score, shows a symbol of a different eco-category that is important to the consumer, e.g., animal welfare, water consumption, etc.
When tapping on one leaf, it folds open, revealing a detailed score for the individual category and follow-up information. Every score the consumer sees is geared towards their personal attitudes, i.e. what they consider as important and (un-)acceptable. As pro-environmental attitudes are highly individual, _Eco Bloom_ caters to the specific needs of users, providing personalised product feedback. Thereby, _Eco Bloom_ adjusts to any phase of an individual's journey towards sustainability. Moreover, a key quality of this label is that it does not overwhelm the consumer with information unnecessary to their interests or with too much information at once. The overall folded label fits onto the size of a smart watch display. As it is layered rather than "one-size-fits-all" information, this type of label encourages individual perspectives and recommendations for sustainable shopping. ## A label of our own: community-driven eco-labeling _A label of our own_ is a smartphone app for crowdsourcing eco-labeling among consumers. It uses an AR system to enable consumers to target any product's packaging and augment it with meta-data such as: text-based annotations, drawings to hack or otherwise modify the visual appearance of the product, and ratings based on consumers' perception of sustainability. By adding content to emergent AR-based eco-labels, users contribute to a collectively-owned, multi-faceted rating of products - thereby claiming shared responsibility over the visualisation of those products' socio-ecological impact. Because the system is not owned by anyone in particular (the government, the retailer, the producer, the consumer), the content of the collectively crafted eco-labels can hardly be censored in a top-down manner. Rather, these eco-labels are the result of an emergent and evolving process of negotiation between consumers. As such, this design idea tackles an important issue with eco-labels: credibility. By displacing their creation _from_ producers and retailers to consumers, it reduces the chances of greenwashing. From an experiential perspective, _A label of our own_ appeals to people's desire to share knowledge and opinions and, conversely, to their will to learn from others'. It also taps into the joy that can derive from engaging in creative, subversive activity with an intention of making a societal impact. It builds on existing traditions of activism such as guerrilla art, which use creative practice as a form of societal transformation through subversion of the status quo. Borrowing from that principle, _A label of our own_ reclaims consumers' voice when it comes to presenting products - if those products are labelled by producers in a non-truthful way, they will likely be called out by the community.
## Loud&Bold Loud&Bold encourages joyful discovery through layers of information. Olfactory, auditory, and haptic feedback provide high-level sustainability information. Upon active search, however (i.e. turning the product around), consumers receive detailed information. The detailed text on the back provides radical transparency, increasing consumer trust, delighting them, and making them confident in their choice and themselves. _Loud&Bold_ is a technology which invades a consumer's physical, emotional, and rational space - loudly and boldly. Simultaneously, it asks consumers to be loud and bold about their choices, be it for good or ill, as the choice is made in public. Loud&Bold assesses the sustainability of its content independent of food category and can thereby be used for any packaged food, provided the embedded technology within _Loud&Bold_ is sourced sustainably, with long-term usage in mind. Through its multimodal design, _Loud&Bold_ encourages consumer sustainability, but also raises the question of social acceptability, especially if an individual is not loud and bold, yet desires the "unsustainable" choice. Finally, the multimodal layers allow disabled people to engage in everyday sustainability (which is shockingly under-explored). Providing access to participation in sustainability beyond the "perfectly sighted" population might elicit a sense of control, confidence in choice, equality, and ultimately joy among disabled communities, and therefore greater sustainability, environmentally and socially. ## UNPACKAGED: Disruptive Packaging _Unpackaged_ represents a retail system in which packaging and labels are intertwined. In it, producers are only allowed to design and use aesthetically pleasing packaging designs if their product meets all the necessary eco-certifications. The _Unpackaged_ eco-labels are graphical symbols imprinted on the package, but they can also be scanned using a digital device such as a mobile phone to show, via augmented reality, the story behind the product. The other side of this system comprises the products that do not comply with the certifications required to have an attractive presentation. In this case, products must come in bland, homogeneous packaging. In addition, the absence of the necessary eco-labels is also highlighted on the packaging, showing an empty space where they should be. Since this is a status-quo-disrupting system, it would likely need strong support and implementation at the policy level, similarly to tobacco and alcohol marketing. _Unpackaged_ aims to provoke curiosity through its bland packaging: Why do all these products look the same? After this first impression, the consumer could choose to engage in a closer exploration, in which they would easily find that eco-labels are missing. Once again, this absence aims to stimulate further learning about the reason for this absence.
2309.11605
Thermodynamic driving forces in contact electrification between polymeric materials
Contact electrification, or contact charging, refers to the process of static charge accumulation after rubbing, or even simple touching, of two materials. Despite its relevance in static electricity, various natural phenomena, and numerous technologies, contact charging remains poorly understood. For insulating materials, even the species of charge carrier may be unknown, and the direction of charge-transfer lacks firm molecular-level explanation. We use all-atom molecular dynamics simulations to investigate whether thermodynamics can explain contact charging between insulating polymers. Building on prior work implicating water-ions (e.g., hydronium and hydroxide) as potential charge carriers, we predict preferred directions of charge-transfer between polymer surfaces according to the free energy of water-ions within water droplets on such surfaces. Broad agreement between our predictions and experimental triboelectric series indicate that thermodynamically driven ion-transfer likely influences contact charging of polymers. Importantly, simulation analyses reveal how specific interactions of water and water-ions proximate to the polymer-water interface explains observed trends. This study establishes relevance of thermodynamic driving forces in contact charging of insulators with new evidence informed by molecular-level interactions. These insights have direct implications for future mechanistic studies and applications of contact charging involving polymeric materials.
Hang Zhang, Sankaran Sundaresan, Michael A. Webb
2023-09-20T19:40:48Z
http://arxiv.org/abs/2309.11605v2
# Evidence of thermodynamic driving forces in the contact charging of insulating polymers ###### Abstract Contact electrification, or contact charging, refers to the process of static charge accumulation after rubbing, or even simple touching, of two materials. Despite its relevance in static electricity, various natural phenomena, and numerous technologies, contact charging remains poorly understood. For insulating materials, even the species of charge carrier may be unknown, and the direction of charge-transfer lacks firm molecular-level explanation. We use all-atom molecular dynamics simulations to investigate whether thermodynamics can explain contact charging between insulating polymers. Building on prior work implicating water-ions (e.g., hydronium and hydroxide) as potential charge carriers, we predict preferred directions of charge-transfer between polymer surfaces according to the free energy of water-ions within water droplets on such surfaces. Broad agreement between our predictions and experimental triboelectric series indicate that thermodynamically driven ion-transfer likely influences contact charging of polymers. Importantly, simulation analyses reveal how specific interactions of water and water-ions proximate to the polymer-water interface explains observed trends. This study establishes relevance of thermodynamic driving forces in contact charging of insulators with new evidence informed by molecular-level interactions. These insights have direct implications for future mechanistic studies and applications of contact charging involving polymeric materials. ## Introduction Contact electrification, or contact charging, is a widely observed phenomenon that results in static charges present on materials based on their touching [1, 2, 3, 4, 5, 6, 7]. In nature, such charging manifests in dust storms, which generate substantial charge via collisions of sand particles [8, 9], and in ash plumes of volcanic eruptions, which accumulate and release charge in the form of volcanic lightning [10]. In modern technology, contact charging enables xerographic printing [11, 12] and energy generation in wearable devices [13, 14]. Undesirable charging also underlies issues in several industrial applications [15, 16], such as wall-sheeting in reactors [17], disruption of particle mixing [18] and hazardous electrostatic discharge [19]. Despite this prevalence, precisely how and why contact charging occurs in many scenarios remains ambiguous. Therefore, understanding contact charging is of interest to advance fundamental science and to enhance technological processes [20, 21, 22]. The mechanism of contact charging strongly depends on the nature of the charge carriers, the materials, and the environment. Three modes of charging include electron transfer [23, 24, 6, 25] wherein surface work functions direct charge transfer, ion transfer [3, 26] wherein intrinsic or acquired mobile ions transfer between materials, and material transfer [27] wherein charged pieces of material physically move between surfaces. While electron transfer dominates charging of metals [2] and semicondutors with small band-gaps, the presence of insulating layers atop materials can obfuscate understanding predicated solely on work functions [7]. Moreover, contact charging of insulating materials themselves, such as polymers [28, 29], likely requires other charge-carrier species. 
One compelling hypothesis is that unequal transfer of cations and anions between materials results in sustained, asymmetric charge accumulation on surfaces [3]. This mode requires that materials must either natively possess or otherwise acquire mobile ions, raising questions as to what ions are present. Water-ions-hydronium (H\({}_{3}\)O\({}^{+}\)) and hydroxide (OH\({}^{-}\))-are viewed as potential charge-carriers underlying contact charging of insulating materials [3, 30]. Water is almost ubiquitously present, in real-world and experimental systems alike, having been detected across diverse chemical surfaces and a broad range of conditions [31, 32, 33, 34, 35, 36, 37]. Moosaic patterns of charge on polymer surfaces following contact have been attributed to the presence of water patches [38]. Effects of relative humidity on electrostatic charging also highlight a potential role of water and its ions [30, 37, 28, 39]. Furthermore, there are existing correlations between water-related properties and contact charging of polymers, such as acid/base dissociation constants [40], Lewis acidity or basicity of polymers [41], and zeta potentials of non-ionic polymers [42, 3]. While such work establishes a potential role of water and associated ions in many circumstances, why water-ions should concentrate on a certain material after contact with another is unclear. Various theoretical and conceptual frameworks have been constructed to explain water-ion transfer as a mechanism for contact charging of polymers. For example, a lattice Figure 1: Overview of hypothesis and systems. (A) Schematic depicting how the free energy of water-ions (H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\)) may vary between two polymer surfaces. Differences in free energy result in a thermodynamic driving force for preferential partitioning of ions between surfaces. (B) A thermodynamic framework to predict the direction of contact charging. The free-energy difference \(\Delta F_{AB}^{+-}\) determines whether a charge-separated pair is more stable in State II with free-energy \(F_{AB}^{+-}\) (H\({}_{3}\)O\({}^{+}\) near surface \(A\) and OH\({}^{-}\) near surface \(B\)) or State III with free energy \(F_{AB}^{-+}\) (OH\({}^{-}\) near surface \(A\) and H\({}_{3}\)O\({}^{+}\) near surface \(B\)). The free energies of each state can be formulated as \(\Delta F_{AB}^{+-}=F_{A}^{+}+F_{B}^{-}\) and \(\Delta F_{AB}^{-+}=F_{A}^{-}+F_{B}^{+}\) where each of \(F_{A}^{+}\), \(F_{A}^{-}\), \(F_{B}^{+}\), and \(F_{B}^{-}\) can be computed from molecular simulation of water droplets containing a water-ion atop isolated polymer slabs. (C) Summary of specific systems studied. The chemical structure of the constitutional repeat unit, internal reference name, and BigSMILES string of the six polymers studied are shown at the top. In addition to three amorphous slabs per polymer, additional crystalline slabs of N66, PE, and PVC are studied as well as (3) amorphous PVA slabs comprising isotactic chains; these are respectively denoted as N66\({}^{*}\), PE\({}^{*}\), PVC\({}^{*}\), and PVA\({}^{\dagger}\) (middle). For each polymer, simulations are run using water droplets comprised of \(N_{\rm w}=2000\), 1000, 500, 250, or 125 water molecules (bottom). This results in \((6\times 3+3+1\times 3)\times 5=120\) systems studied. (D) Triboelectric matrices generated from triboelectric series from the literature. From top to bottom, these are referenced as M1 (_3_), M2 (_43_), and M3 (_44_). 
By convention, for a given material pairing, surface \(A\) is indicated by the row-label and surface \(B\) by the column-label. Results are color-coded such that ‘blue’ indicates positive charge on \(A\) and negative on \(B\), while ‘red’ indicates negative charge on \(A\) and positive on \(B\). Pairings not found in the reference triboelectric series are indicated by a cross ‘\(\times\)’. Molecular renderings in panel B are produced using OVITO (_45_). The elements are colored such that carbon is gray, fluorine is blue, chlorine is green, oxygen is red, and hydrogen is white. The color-coding associated with polymer names in panels C and D are used throughout the text. model introduced by Grosjean et al. [46] quantitatively accounts for mesoscale spatial correlations that might explain contact charging between polymer surfaces of the same chemistry. Jaeger and coworkers examined the role of surface hydrophilicity on charging, finding consistency with models premised on OH\({}^{-}\) diffusion between adsorbed water patches with asymmetric coverage on the contacting surfaces [47, 33]. Nevertheless, these models generally lack nanoscopic attributions to specific molecular-level underpinnings. Although molecular simulation techniques, such as density-functional theory and _ab initio_ molecular dynamics, have been deployed to unravel complex nanoscale phenomena of contact charging in systems comprised of crystalline minerals, MXenes, oligomers, and water [48, 49, 26, 50, 51], studies involving polymers are nascent. In this study, we employ molecular dynamics (MD) simulations to investigate whether thermodynamic driving forces for water-ion transfer can feasibly impact contact charging of insulating polymers. We hypothesize that polymer surfaces present distinct nanoenvironments for water molecules and water-ions that result in chemical-potential differences, which govern asymmetric transfer of ions between surfaces upon contact. To test this hypothesis, we utilize thermodynamic integration [52] to extract relative free energies of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) on polymers of varying hydrophilicity [53]. These free energies, which are sensitive to polymer chemistry and underlying molecular interactions, provide a basis to predict the direction of ion-transfer between polymer surfaces. Such predictions enable construction of a triboelectric series based entirely on thermodynamic driving forces, which intriguingly illustrates good agreement with experimental triboelectric series. Further simulations that directly probe ion partitioning between two surfaces illustrate similar trends. This consistency establishes the viability of thermodynamically driven water-ion transfer in contact charging of polymers. Furthermore, the methodology highlights molecular-level nuances that may hold other implications for contact charging and general understanding of water-polymer interfacial interactions. ## Results and Discussion ### Hypothesis of thermodynamically driven water-ion transfer The possibility of contact charging as a process driven by the relative ion-surface affinities has been considered since at least the 1950s [54], although molecular evidence is scarce. Here, we consider whether the free energies of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) within droplets on different polymer surfaces (Fig. 1a) are predictive of contact charging (Fig. 1b). 
The posited mechanism of charging is that _(i)_ water droplets on surfaces contain H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) with chemical potentials that depend principally on surface chemistry but also other factors (e.g., preexisting ion concentration, humidity, electric fields, etc.), _(ii)_ water-ions can diffuse between surfaces when they are sufficiently close, and _(iii)_ the relative abundance of water-ions on two surfaces following diffusion events is biased by the relative chemical potentials. Fig. 1b illustrates contrasting scenarios of water droplets present on surfaces, \(A\) (blue) and \(B\) (red), that guide our calculations. In reference State I, droplets are neutral on both surfaces. In State II, contact yields a charge-separated pair where H\({}_{3}\)O\({}^{+}\) resides on \(A\) and OH\({}^{-}\) resides on \(B\); the free energy of State II relative to State I is \(F_{AB}^{+-}\). In State III, contact yields a charge-separated pair, which is the reverse of State II; the free energy of State III relative to State I is \(F_{AB}^{-+}\). These free energies are obtained as \(F_{AB}^{+-}=F_{A}^{+}+F_{B}^{-}\) and \(F_{AB}^{-+}=F_{A}^{-}+F_{B}^{+}\) where \(F_{S}^{\alpha}\) indicates the free energy of adding an ion of type \(\alpha\in[+,-]\) to surface \(S\in[A,B]\) (Fig. 1b, bottom). The difference \(\Delta F_{AB}^{+-}\equiv F_{AB}^{+-}-F_{AB}^{-+}\) reflects a thermodynamic driving force for contact charging. In particular, \(\Delta F_{AB}^{+-}<0\) indicates greater likelihood for surface \(A\) to become positively charged and surface \(B\) negative compared to the opposite, while \(\Delta F_{AB}^{+-}>0\) indicates greater likelihood for surface \(A\) to become negatively charged and surface \(B\) positive. Consequently, we suppose \(\Delta F_{AB}^{+-}\) predicts the direction of charge-transfer between contacting surfaces if the charge-carrier species are H\({}_{3}\)O\({}^{+}\) and/or OH\({}^{-}\) and populations are thermodynamically controlled. Note that \(\Delta F_{AB}^{+-}=(F_{A}^{+}+F_{B}^{-})\) - \((F_{A}^{-}+F_{B}^{+})\) relates to the exchange \(A^{-}+B^{+}\to A^{+}+B^{-}\), but also, \(\Delta F_{AB}^{+-}=(F_{A}^{+}-F_{B}^{+})\) - \((F_{A}^{-}-F_{B}^{-})\) reflects a difference in relative partitioning between surfaces of the ions. As such, contact charging can arise even if both ions favor the same surface given disparity in transfer free energies. To test this hypothesis, we consider six commodity polymers (Fig. 1c): polytetrafluoroethylene (PTFE), polyethylene (PE), polyvinyl chloride (PVC), poly(methyl methacrylate) (PMMA), Nylon 66 (N66), and polyvinyl alcohol (PVA). These polymers are relevant to prior contact charging experiments [55, 28, 44, 56, 43, 33, 57], and our recent work illustrates distinct wetting behavior arising from chemically and morphologically specific water-polymer surface interactions [53]. As in Ref. [53], we consider amorphous surfaces (for all six polymers), crystalline surfaces (denoted N66\({}^{*}\), PE\({}^{*}\), and PVC\({}^{*}\)), and surfaces featuring different tacticity (PVA\({}^{\dagger}\) denoting isotactic PVA); calculations are performed for various droplet sizes. The combination of surface chemistry, morphology, and droplet sizes is expected to yield many distinct nanoenvironments that influence water-ion free-energies. Ultimately, \(\Delta F_{AB}^{+-}\) is computed for all pairwise combinations to predict thermodynamic preference for water-ion transfer (see Materials and Methods). 
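To make the bookkeeping behind this pairwise comparison concrete, the short sketch below combines hypothetical per-surface ion free energies into the driving force \(\Delta F_{AB}^{+-}\) and a predicted charging direction for every material pair. The numerical values are placeholders rather than results from this work; in practice, \(F_{S}^{+}\) and \(F_{S}^{-}\) would come from the thermodynamic integration calculations described in Materials and Methods.

```python
import itertools

# Hypothetical free energies (kcal/mol) of adding one ion to a droplet on
# surface S: entries are (F_S^+ for H3O+, F_S^- for OH-). Placeholder values.
F = {
    "PTFE": (-95.0, -88.0),
    "PE":   (-95.2, -88.1),
    "PVC":  (-96.0, -89.5),
    "PMMA": (-98.5, -89.0),
    "N66":  (-99.0, -90.5),
    "PVA":  (-97.5, -91.5),
}

def driving_force(A, B):
    """Return dF_AB^{+-} = (F_A^+ + F_B^-) - (F_A^- + F_B^+)."""
    FA_plus, FA_minus = F[A]
    FB_plus, FB_minus = F[B]
    return (FA_plus + FB_minus) - (FA_minus + FB_plus)

# dF < 0 favors H3O+ on A and OH- on B, i.e., A charges positive and B negative.
for A, B in itertools.combinations(F, 2):
    dF = driving_force(A, B)
    verdict = f"{A} (+) / {B} (-)" if dF < 0 else f"{B} (+) / {A} (-)"
    print(f"{A:>4s}-{B:<4s}  dF = {dF:+6.2f} kcal/mol  ->  {verdict}")
```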
The hypothesis is evaluated by comparison to triboelectric series, which organize materials according to their relative propensity to acquire charges during contact charging [3]. Conventionally, triboelectric series are represented in a one-dimensional progression based on relative propensity to acquire positive/negative charge, although results do not always neatly and consistently organize in this manner. Here, we represent the outcomes of contact-charging experiments between pairwise combinations as a "triboelectric matrix." Fig. 1d illustrates three triboelectric matrices converted from previously reported triboelectric series that feature the polymers in this study; these are labeled 'M1' [3], 'M2' [43], and 'M3'. [44]. The matrix elements are color-coded to indicate the result of a contact-charging experiment between two materials; as convention, we assign \(A\) as row and \(B\) as column. In particular, 'blue' indicates that surface \(A\) becomes relatively more positive than surface \(B\), and'red' indicates the opposite. For example, all three series indicate that contacting N66 with PVC results in N66 accumulating positive charge and PVC negative. These three matrices provide relatively consistent expectations, although there are select pairs that differ (i.e., N66-PMMA, PVC- PTFE). Less complete triboelectric series can be formulated from other triboelectric series and display overall similar trends (see SI Appendix Fig. S1). ### Consistency of free-energy trends and contact charging Fig. 2a depicts a triboelectric matrix derived from \(\Delta F_{AB}^{+-}\) values obtained from MD simulations. To first order, the matrix is organized by material (\(6\times 6\) matrix), and results are further resolved for each \(A\)-\(B\) into a \(5\times 5\) sub-matrix based on water-droplet size; color intensity reflects the magnitude of thermodynamic driving force. Compared to Fig. 1d, the simulation results broadly align with the direction of charging observed in M1, M2, and M3. In comparison to M1, simulation predictions agree with nine of fifteen material combinations, while three pairs yield inconclusive results or depend on droplet size, and three pairs exhibit opposite trends. However, when compared to M2 and M3 (which lack data for PVA), the agreement improves, as simulations predict PVC acquires negative charge over PTFE (as in M2) and N66 acquires negative charge over PMMA (as in M3). Thus, the thermodynamically informed predictions capture general trends in contact charging between polymers of different chemistry. The few disparities between simulation predictions and empirical charging results arise in material pairings that also demonstrate experimental variability. For PVC-PTFE, M1 and M3 (and other series, see SI Appendix Fig. S1) suggest that PTFE exhibits a strong tendency to acquire negative charge. However, our previous study on polymer hydrophobicity [53] indicates that water structuring and dynamics are relatively more similar between PTFE and PE than with PVC. These prior observations align with our current free-energy results, showing a vanishing \(\Delta F_{AB}^{+-}\) for PE-PTFE and consistent behavior between PE-PVC and PTFE-PVC, and the experimental outcome reported via M2. Therefore, results involving PTFE may be sensitive to experimental conditions, potentially related to mechanisms not captured by simulations, such as bond breaking [26], or minor inaccuracies in molecular models. 
For N66-PMMA, M1 and M3 differ, with the latter aligning with the thermodynamic predictions. Lastly, several inconsistent or inconclusive combinations involve PVA; the aqueous solubility of PVA poses an experimental challenge and is also a notable factor in our previous study [53]. Considering the substantial agreement for many material pairings and the technical challenges encountered with others, we conclude that thermodynamically driven water-ion transfer can plausibly influence polymer-polymer contact charging. ### Role of water-surface interactions Analysis of the polymer-water interface provides nanoscale insights into the trends of water-ion free energies. Fig. 3a compares how water, H\({}_{3}\)O\({}^{+}\), and OH\({}^{-}\) distribute in the vicinity of chemically distinct, amorphous polymer surfaces. Figure 2: Results of free-energy calculations for amorphous polymers. (A) The thermodynamic driving force for water-ion transfer between surfaces \(A\) and \(B\) presented as a triboelectric matrix. The matrix is resolved \(6\times 6\) by material; each pair of materials is further resolved \(5\times 5\) accounting for differing droplet sizes. Droplet sizes (\(N_{\rm w}=\) 125, 250, 500, 1000, and 2000) increase left-to-right and top-to-bottom. An approximate linear triboelectric series generated from the matrix simulation is shown for reference below the matrix results. (B) Results of thermodynamic integration calculations to extract \(F_{S}^{\alpha}\), the free energy of adding an ion of species \(\alpha\) to surface \(S\). Results for adding H\({}_{3}\)O\({}^{+}\) are shown at the left and OH\({}^{-}\) are at the right. Error bars reflect statistical uncertainties reported as the standard error of the mean calculated from independent thermodynamic integration trajectories. Relative to OH\({}^{-}\), H\({}_{3}\)O\({}^{+}\) tends to reside closer to the polymer-water interface, orienting its oxygen atom to maximize hydrogen-bond donation to water (SI Appendix, Fig. S2). Surfaces lacking hydrogen bonds, such as PE, PTFE, and PVC, allow easy access for H\({}_{3}\)O\({}^{+}\) to the interfacial layers, explaining the similar free energy values (\(F_{S}^{+}\)) observed in Fig. 2b. However, H\({}_{3}\)O\({}^{+}\) is relatively more stable (lower \(F_{S}^{+}\)) in proximity to hydrogen-bonding polymers (PMMA, N66, and PVA). The stronger interfacial interactions with PMMA, N66, and PVA also explain the apparent insensitivity of \(F_{S}^{+}\) to droplet size (Fig. 2b), as the preferred nanoenvironment of H\({}_{3}\)O\({}^{+}\) remains relatively consistent as droplet size increases. Notably, H\({}_{3}\)O\({}^{+}\) is predominantly excluded from the interfacial layer of PVA, the most hydrophilic polymer, aligning with its higher \(F_{S}^{+}\) compared to PMMA and N66. This highlights an intriguing interplay between ion-polymer interactions and competing water interactions, such that ion chemical potential is not a monotonic function Figure 3: Structural analysis of free-energy trends for H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) across polymers. (A) Comparison of spatial distribution of water molecules, H\({}_{3}\)O\({}^{+}\), and OH\({}^{-}\) in proximity to the polymer-water interface. (B) Simulation snapshots comparing OH\({}^{-}\) interactions in proximity to PMMA and PVC surfaces. Hydrogen-bonding interactions are indicated by light-blue lines. Some surrounding molecules are omitted for clarity. (C) Comparison of free energies for ion addition based on morphological changes to polymer slabs. 
Comparisons are made between amorphous-to-crystalline (denoted ‘*’) PE, PVC, and N66 and amorphous atactic-to-isotactic (denoted ‘+’) PVA. Results on the left are for surfaces with \(N_{\mathrm{w}}\) = 2000 water molecules with bars grouped by material, data for H\({}_{3}\)O\({}^{+}\) to the left of that for OH\({}^{-}\). Results on the right are for all droplet sizes; results between PE and PE\({}^{*}\) are statistically indistinguishable and not shown for clarity. Error bars reflect statistical uncertainties reported as the standard error of the mean calculated from independent thermodynamic integration trajectories. of hydrophilicity. Although OH\({}^{-}\) predominantly situates in secondary interfacial layers or the bulk of water droplets, its trends also correlate with hydrophobicity and hydrogen-bonding behavior. The nearly equivalent \(F_{S}^{-}\) between PE and PTFE reflects consistency in OH\({}^{-}\) distribution, which derives from their similarity in hydrophobicity and contact angles [53]. Free-energy trends among N66, PVA, and PMMA align with hydrogen-bonding behavior. While N66 and PVA offer stabilizing interactions that lower \(F_{S}^{-}\), PMMA only functions as a hydrogen-bond acceptor, effectively excluding OH\({}^{-}\) from the interfacial layer of water, resulting in higher \(F_{S}^{-}\)[53]. In contrast to PMMA, water in proximity to PVC orients its oxygen atoms towards the surface because of the strong attraction of chlorine atoms [53], which allows water molecules to readily form hydrogen bonds with OH\({}^{-}\) in the second water layer (Fig. 3b). Thus, distinct nanoenvironments for H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) arise from the hydrophobicity and hydrogen-bonding behavior of the polymer surfaces, largely explaining trends in \(F_{S}^{+}\) and \(F_{S}^{-}\). To further explore the sensitivity of \(F_{S}^{+}\) and \(F_{S}^{-}\) to interfacial interactions, we assess the role of nanoscale polymer surface morphology, which can influence hydrophobicity and hydrogen-bonding behaviors. Fig. 3c shows the difference in \(F_{S}^{+}\) and \(F_{S}^{-}\) between amorphous and crystalline surfaces (for PE, PVC, and N66) and between atactic and isotactic amorphous surfaces (for PVA). Overall, the simulations capture some sensitivity of \(F_{S}^{+}\) and \(F_{S}^{-}\) to surface morphology, but the extent depends on polymer chemistry. The transition from PE to PE\({}^{*}\) has no notable effect, as water structuring near PE\({}^{*}\) remains similar to that of PE, resulting in nearly equivalent nanoenvironments for H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) and correspondingly indistinguishable free energies. However, for PVA, PVC, and N66, \(F_{S}^{+}\) or \(F_{S}^{-}\) can shift on scales relevant for charging predictions in Fig. 2a. Increased intra-chain hydrogen bonding and reduced hydrogen bonding with water for PVA\({}^{\dagger}\)[53] permits more favorable water-structuring around OH\({}^{-}\), thereby increasing its stability. In N66\({}^{*}\), the crystalline structure similarly reduces hydrogen bonding with water and results in a more hydrophobic surface, creating a less favorable nanoenvironment for H\({}_{3}\)O\({}^{+}\) within the interfacial layer. In PVC\({}^{*}\), enhanced chain interactions diminish interfacial water structuring, subsequently weakening interactions with OH\({}^{-}\) in secondary water layers. 
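The free energies \(F_{S}^{+}\) and \(F_{S}^{-}\) compared above are obtained by thermodynamic integration; as a hedged illustration of the quadrature described later in Materials and Methods, the snippet below integrates a set of hypothetical \(\langle dU/d\lambda\rangle\) averages with 12-point Gauss-Legendre nodes mapped to \([0,1]\). Only the node positions follow from the quadrature quoted in the paper; the ensemble averages are invented placeholders.

```python
import numpy as np

# 12-point Gauss-Legendre nodes/weights on [-1, 1], mapped to lambda in [0, 1].
# The mapped nodes match the lambda values quoted for the TI calculations.
nodes, weights = np.polynomial.legendre.leggauss(12)
lam = 0.5 * (nodes + 1.0)
w = 0.5 * weights

# Hypothetical ensemble averages <dU/dlambda> (kcal/mol) at each node, e.g.
# estimated from the last 5 ns of each window; placeholders, not real data.
dU_dlam = np.array([-3.1, -9.8, -22.5, -41.0, -63.2, -85.7,
                    -104.9, -118.6, -126.4, -129.8, -131.0, -131.3])

# F_S^alpha = int_0^1 <dU/dlambda> dlambda ~ sum_i w_i <dU/dlambda>_i
F_ion = float(np.sum(w * dU_dlam))
print("lambda nodes:", np.round(lam, 5))
print(f"free energy of ion insertion: {F_ion:.2f} kcal/mol")
```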
These findings underscore the importance of polymer-water interactions in water-ion free energies and indicate how surface heterogeneities and semicrystallinity may subtly influence water-ion transfer and contact charging. ### Connections to other charging phenomena Although thermodynamic driving forces for ion transfer are most significant when considering different surfaces, Fig. 2b shows that the free energy of water-ions is also influenced by droplet size, and Fig. 3c illustrates sensitivity to surface heterogeneities. The former effect is evident in the internal color variation within the diagonal material squares in Fig. 2a. Notably, for more hydrophilic polymers (PMMA, N66, and PVA), the thermodynamic driving forces are comparable to those for chemically distinct surfaces (off-diagonal squares of Fig. 2a); Fig. 3c also conveys non-trivial differences that exceed 5 kcal/mol. These findings may have implications for contact charging of chemically identical materials [58]. If water exists on polymer surfaces as droplets of varying sizes [37] or the surfaces vary in crystallinity/patterning, these results suggest that those variabilities could create additional thermodynamic driving forces for ion redistribution and subsequent contact charging. Considering that relative humidity likely influences the distribution of droplet sizes on a surface, resulting differences in water-ion chemical potentials might account for certain humidity effects on contact charging. It is notable that the free energy of H\({}_{3}\)O\({}^{+}\) appears less sensitive to droplet size compared to OH\({}^{-}\), particularly for hydrophilic polymers. Although the present work does not thoroughly analyze the implications of droplet or surface heterogeneities in the context of the aforementioned phenomena, such factors could be considered in future work. ### Validation by two-surface simulations In the preceding analysis, calculating \(\Delta F_{AB}^{+-}\) involved simulating a water droplet containing a single ion above isolated polymer surfaces. As a more stringent test of these predictions, we conduct simulations with both H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) present between distinct polymer surfaces and assess preferential partitioning. Fig. 4a illustrates the simulation setup wherein a water bridge (\(N_{w}=4000\)) containing a H\({}_{3}\)O\({}^{+}\)/OH\({}^{-}\) pair forms between surfaces \(A\) (top) and \(B\) (bottom) separated by distance \(d\). The propensity for surfaces to acquire specific charges is measured via the free energy \(F_{AB}(p_{z})\), where the collective variable \(p_{z}=z_{\text{H}_{3}\text{O}^{+}}-z_{\text{OH}^{-}}\) is the dipole of the ionic pair in the \(z\)-direction. As a collective variable, \(p_{z}\) reports the relative positioning of water ions with respect to the two surfaces: more positive \(p_{z}\) indicates H\({}_{3}\)O\({}^{+}\) is closer to surface \(A\) and OH\({}^{-}\) is closer to \(B\), more negative \(p_{z}\) indicates the opposite, and small \(p_{z}\) suggests little to no asymmetric affinity. Similar to \(\Delta F_{AB}^{+-}\), we examine the change in free energy when the dipole is flipped: \(\Delta F_{AB}(p_{z})\equiv F_{AB}(p_{z})-F_{AB}(-p_{z})=-\frac{1}{\beta}\ln K_{AB}^{+-}(p_{z})\), where \(K_{AB}^{+-}\) represents a pseudo-equilibrium constant for the exchange process \(A^{-}+B^{+}\rightleftharpoons A^{+}+B^{-}\). Expected scenarios for \(K_{AB}^{+-}(p_{z})\) are depicted in Fig. 4b.
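As a minimal sketch of how this pseudo-equilibrium constant can be evaluated in practice, the snippet below converts a free-energy profile \(F_{AB}(p_{z})\) into \(K_{AB}^{+-}(p_{z})=\exp[-\beta\,(F_{AB}(p_{z})-F_{AB}(-p_{z}))]\). The profile here is a toy, slightly asymmetric placeholder; in the paper, \(F_{AB}(p_{z})\) is obtained from umbrella sampling and the weighted histogram analysis method (see Materials and Methods).

```python
import numpy as np

k_B = 0.0019872041          # Boltzmann constant, kcal/(mol K)
T = 300.0                   # simulation temperature, K
beta = 1.0 / (k_B * T)

# Hypothetical F_AB(p_z) in kcal/mol on a symmetric grid of the ionic dipole
# p_z (angstrom); a stand-in for a WHAM-reconstructed profile.
p_z = np.linspace(-35.0, 35.0, 71)
F = 0.002 * p_z**2 - 0.06 * p_z          # toy profile with a mild asymmetry

# K_AB^{+-}(p_z) = exp(-beta * [F(p_z) - F(-p_z)]), evaluated for p_z > 0;
# F[::-1] gives F(-p_z) on this symmetric grid.
pos = p_z > 0
dF = F[pos] - F[::-1][pos]
K = np.exp(-beta * dF)
for x, k in zip(p_z[pos][::17], K[::17]):
    print(f"p_z = {x:5.1f} A   K_AB = {k:10.2f}")
```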
For example, if \(K_{AB}(p_{z})>1\), H\({}_{3}\)O\({}^{+}\) should preferentially partition towards \(A\), with the expectation that \(A\) becomes relatively positive and \(B\) negative. The free energy \(F_{AB}(p_{z})\) is computed using umbrella sampling and the weighted histogram analysis method [59]; further details regarding the calculation and formulation of \(K_{AB}(p_{z})\) are in 'Materials and Methods.' Results of the two-surface simulations align well with the expectations from \(\Delta F_{AB}^{+-}\) (Fig. 2a) and the structural analysis (Fig. 3). Fig. 4b displays \(K_{AB}(p_{z})\) for different pairs of materials, with row labels corresponding to surface \(A\) and column labels corresponding to surface \(B\). For PE-PTFE, \(K_{AB}(p_{z})\sim 1\), which is consistent with prior discussion on the similarity of water/ion nanoenvironments. In PVA-PTFE and PVA-PE, for which results from single-surface calculations (Fig. 2b) were mixed and dependent on droplet size, \(K_{AB}(p_{z})<1\) indicating that OH\({}^{-}\) prefers PVA over the more hydrophobic PTFE and PE. This preference arises mainly from the recruitment of water towards the more hydrophilic surface (SI Appendix, Fig. S4) rather than surface-specific interactions. The remaining pairs yield \(K_{AB}(p_{z})>1\), indicating enhanced thermodynamic stability of H\({}_{3}\)O\({}^{+}\) closer to surface \(A\) (row) and for OH\({}^{-}\) to be closer to \(B\) (column) than the reverse situation. Thus, the two-surface simulations provide valuable validation for the overall thermodynamic framework and offer more direct support of thermodynamically driven water-ion transfer as a mechanism of contact charging. ## Conclusions Molecular dynamics simulations were used to investigate thermodynamically driven water-ion transfer as a mechanism of contact charging between insulating polymers. The ubiquity of water, correlations with hydrophobicity, and importance of humidity inform a specific hypothesis: distinct nanoenvironments for water proximate to polymer surfaces generate chemical-potential gradients that govern asymmetric transfer of water-ions upon contact (Fig. 1a,b). To investigate this hypothesis, we calculated free energies of water-ions in water droplets on chemically and structurally distinct polymer surfaces; these were subsequently used to predict the thermodynamically preferred direction of contact charging between various commodity polymers (Figs. 2a and 3c). Remarkably, these predictions align well with many results of experimental triboelectric series (Fig. 1d), and subsequent simulations that directly examine partitioning of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) between two surfaces offer further support (Fig. 4). The molecular resolution afforded by the simulations importantly reveals key interactions and properties, such as surface hydrophobicity and hydrogen-bonding capabilities, that underlie relative affinities of ions to specific surfaces (Figs. 3a,b) While other contact-charging mechanisms should not be disregarded, these results emphasize the plausibility of thermodynamic driving forces with well-defined molecular underpinnings in contact charging between insulating materials, such as polymers. The findings offer valuable insights into the complex phenomenon of contact electrification and highlight opportunities to explore further implications across scientific and technological domains. 
Coupling molecular simulation with free-energy calculations can be extended to explore other aspects of contact charging, including the role of humidity [36, 39, 60, 26], temperature [33], electric field [36], and local geometry [57, 39]. Additionally, there are potential implications for contact charging between chemically identical materials, particularly regarding variations in free-energy due to differences in droplet sizes and surface morphology, though further investigation is required to ascertain their precise relevance. Moreover, future study could explore kinetic factors like asymmetric ion diffusion [47] and their interplay with thermodynamic considerations, such as ion distribution within a droplet or free-energy barriers formed during material contact. Lastly, molecular simulations of the kind used here can provide chemically specific parameters for macroscopic models of contact charging, enabling quantitative comparisons with experiments and enhanced understanding. Figure 4: Explicit partitioning of a H\({}_{3}\)O\({}^{+}\)/OH\({}^{-}\) pair between two polymer surfaces. (A) Simulation snapshot illustrating the general system setup for calculations. A water bridge of \(N_{\rm w}=4000\) water molecules forms between two polymer slabs positioned a distance \(d\) away, allowing for diffusion of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) between two surfaces, \(A\) and \(B\). The relative positioning of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) with respect to the polymer surfaces can be monitored using \(p_{z}\), the ionic dipole in the \(z\)-direction. The specific system shown corresponds to PMMA (top) and PVC (bottom) positioned at \(d=25\) Å. (B) Interpretation of the exchange constant \(K_{AB}^{+-}(p_{z})\). If \(K_{AB}^{+-}(p_{z})>1\), H\({}_{3}\)O\({}^{+}\) exhibits more preference for \(A\) than OH\({}^{-}\) (\(A^{+}B^{-}\)); if \(K_{AB}^{+-}(p_{z})<1\), H\({}_{3}\)O\({}^{+}\) exhibits more preference for \(B\) than OH\({}^{-}\) (\(A^{-}B^{+}\)); and if \(K_{AB}^{+-}(p_{z})\sim 1\), there is no clear preference (\(A^{\circ}B^{\circ}\)). (C) Results of \(K_{AB}^{+-}(p_{z})\) for different pairs of materials. Results are for \(d=25\) Å except for pairs annotated with ‘**’, which use \(d=40\) Å to better characterize thermodynamic preference (see SI Appendix, Fig. S3). The shaded regions reflect statistical uncertainties reported as standard error of the mean calculated from bootstrap resampling. ## Materials and Methods ### Molecular Dynamics Simulations All MD simulations were conducted using the LAMMPS simulation package (version 3, Mar 2020) [61]. Polymers were described by parameters from the all-atom Optimized Potentials for Liquid Simulations (OPLS-AA) force field [62, 63], while water was described using the extended simple point charge model (SPC/E) [64, 65]. The water ions were modeled using a non-polarizable force-field designed to be used in conjunction with the SPC/E water model and parameterized to reproduce experimental solvation free energies and activities of H\({}_{3}\)O\({}^{+}\)-Cl\({}^{-}\) and Na\({}^{+}\)-OH\({}^{-}\) salt solutions [66]. Preparation of polymer-water systems followed the methodology of our previous work [53], with the addition of either H\({}_{3}\)O\({}^{+}\) or OH\({}^{-}\) at the center-of-mass of the water droplet as required.
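As a small, hypothetical illustration of that last preparation step, the snippet below computes the mass-weighted center of mass of a water droplet from placeholder coordinates and uses it as the initial position for the inserted ion; it is a sketch of the stated procedure, not the authors' preparation scripts.

```python
import numpy as np

# Placeholder droplet configuration: N_w = 2000 waters, atoms ordered O, H, H.
rng = np.random.default_rng(1)
n_water = 2000
coords = 20.0 + 10.0 * rng.standard_normal((3 * n_water, 3))   # angstrom
masses = np.tile([15.9994, 1.008, 1.008], n_water)              # amu

# Mass-weighted center of mass of the droplet.
com = (masses[:, None] * coords).sum(axis=0) / masses.sum()

# The inserted ion (oxygen of H3O+ or OH-) starts at the droplet center of
# mass; its hydrogens are built around it before re-equilibration.
ion_position = com.copy()
print("droplet center of mass (A):", np.round(ion_position, 2))
```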
Simulation cells were periodic in the \(x\) and \(y\) directions but non-periodic in \(z\); Ewald summation was accomplished via the approach of Yeh and Berkowitz [67] with extension to non-neutral systems by Ballenegger et al. [68]. After initial preparation, systems were simulated for 20 ns to generate initial configurations. Subsequently, trajectories of 40 ns were run to analyze the distribution of ions and water near polymer interfaces. More detailed information regarding simulation procedures and calculations are provided in the SI. ### Single-surface Free-energy Calculations The free energy associated with adding an ion of type \(\alpha\) to a water droplet on surface \(S\), \(F_{S}^{\alpha}\), was calculated using thermodynamic integration (TI). TI was practically implemented using 12-point Gauss-Legendre quadrature for each ion, following the approach of Ref. [53], which calculates the excess chemical potential of water. Simulations at each quadrature node started from the final configuration of the 20-ns equilibration trajectory. Each simulation was run for 6 ns, of which the last 5 ns were used to estimate ensemble averages. ### Two-surface Free-energy Calculations The free energy as a function of ionic dipole within a water bridge between surfaces \(A\) and \(B\), \(F_{AB}(p_{z})\), was calculated using umbrella sampling with statistical re-weighting via the weighted histogram analysis method [59]. Two-surface systems were prepared by combining two previously equilibrated polymer-water systems, mirroring the coordinates of one system across the \(xy\)-plane and shifting it vertically by a specified distance \(d\), which was set as the average distance between polymer interfaces. Data was collected across 36 windows that each employ a harmonic biasing potential on \(p_{z}\). The biasing potentials utilized spring constants of 47.8011 kcal/mol and equilibrium positions at -35 to 35 A in 2 A increments. To prevent pairing of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) at small \(p_{z}\), the force-field interaction between oxygen atoms on H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) was adjusted to ensure that the two ions would not bind (SI Appendix, Fig. S5). This modification focused analysis on ion affinity for surfaces without conflation from ionic attraction, which was not the primary focus here, and also outside the realm of application of the force-field, which does not describe recombination into neutral water species. Consequently, \(F_{AB}(p_{z})\) is conditional on the ions remaining separate species. For all calculations, simulations are first run for 10 ns to equilibrate the surface-water-surface geometry. Biasing potentials were subsequently imposed for each window, and trajectories were run for 15 ns. Trajectories for windows with \(|p_{z}|<10\) A were extended for an additional 7.5 ns to enhance convergence. Initially, calculations were performed at \(d=25\) A for all surfaces. However, for some pairings (N66-PE, N66-PTFE, N66-PVC, PETFE, PMMA-PTFE, and PVC-PTFE), the resulting \(F_{AB}(p_{z})\) was relatively flat because ions could readily interact with both surfaces. For these surfaces, additional calculations were conducted at \(d=40\) A to better distinguish surface affinities between H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\); calculations at greater separations yielded similar results (see SI Appendix, Fig. S3). ## References * [1] P. Shaw, _Proceedings of the Royal Society of London. 
Series A, Containing Papers of a Mathematical and Physical Character_**94**, 16 (1917). * [2] J. Lowell, A. Rose-Innes, _Advances in Physics_**29**, 947 (1980). * [3] L. McCarty, G. Whitesides, _Angewandte Chemie International Edition_**47**, 2188 (2008). * [4] P. Iversen, D. J. Lacks, _Journal of Electrostatics_**70**, 309 (2012). * [5] F. Galembeck, _et al._, _RSC Adv._**4**, 64280 (2014). * [6] Z. L. Wang, A. C. Wang, _Materials Today_**30**, 34 (2019). * [7] D. J. Lacks, T. Shinbrot, _Nature Reviews Chemistry_**3**, 465 (2019). * [8] A. K. Kamra, _Nature_**240**, 143 (1972). * [9] N. O. Renno, J. F. Kok, _Space Science Reviews_**137**, 419 (2008). * [10] C. Cimarelli, K. Genareau, _Journal of Volcanology and Geothermal Research_**422**, 107449 (2022). * [11] D. M. Pai, B. E. Springett, _Reviews of Modern Physics_**65**, 163 (1993). * [12] J. H. Anderson, _Journal of Imaging Science and Technology_**44**, 534 (2000). * [13] Y. Lu, _et al._, _Nature Communications_**13**, 1401 (2022). * [14] X. Cao, _et al._, _Nano-Micro Letters_**15**, 14 (2022). * [15] S. Matsusaka, H. Maruyama, T. Matsuyama, M. Ghadiri, _Chemical Engineering Science_**65**, 5781 (2010). * [16] X. Liu, S. Sundaresan, _Powder Technology_**401**, 117272 (2022). * [17] D. Song, P. Mehrani, _Powder Technology_**316**, 166 (2017). * [18] Y. Cheng, L. Lee, W. Zhang, C.-H. Wang, _Industrial & Engineering Chemistry Research_**53**, 14166 (2014). * [19] M. Nifuku, H. Katoh, _Powder Technology_**135-136**, 234 (2003). * [20] S.-H. Shin, _et al._, _ACS Nano_**9**, 4621 (2015). * [21] Z. Liu, _et al._, _Nano Letters_**22**, 4074 (2022). * [22] X. Tao, _et al._, _Small Methods_**7**, 2201593 (2023). * [23] J. H. Clint, T. S. Dunstan, _Europhysics Letters (EPL)_**54**, 320 (2001). * [24] C. Liu, A. J. Bard, _Nature Materials_**7**, 505 (2008). * [25] Y. Nan, J. Shao, M. Willatzen, Z. L. Wang, _Research_**2022**, 9861463 (2022). * [26] P. S. Gil, D. J. Lacks, _Physical Chemistry Chemical Physics_**21**, 13821 (2019). * [27] R. K. Pandey, H. Kakehashi, H. Nakanishi, S. Soh, _The Journal of Physical Chemistry C_**122**, 16154 (2018). * [28] S. Hersh, D. Montgomery, _Textile Research Journal_**25**, 279 (1955). * [29] R. Elsdon, F. R. G. Mitchell, _Journal of Physics D: Applied Physics_**9**, 1445 (1976). * [30] R. F. Gouveia, F. Galembeck, _Journal of the American Chemical Society_**131**, 11381 (2009). * [31] A. L. Sumner, _et al._, _Physical Chemistry Chemical Physics_**6**, 604 (2004). * [32] Y. Awakuni, J. H. Calderwood, _Journal of Physics D: Applied Physics_**5**, 1038 (1972). * [33] I. A. Harris, M. X. Lim, H. M. Jaeger, _Physical Review Materials_**3**, 085603 (2019). * [34] M. L. Gee, T. W. Healy, L. R. White, _Journal of Colloid And Interface Science_**140**, 450 (1990). * [35] W. Camacho, A. Valles-Lluch, A. Ribes-Greus, S. Karlsson, _Journal of Applied Polymer Science_**87**, 2165 (2003). * [36] Y. Zhang, _et al._, _Physical Review X_**5**, 1 (2015). * [37] I. O. Ucar, H. Y. Erbil, _Applied Surface Science_**259**, 515 (2012). * [38] H. T. Baytekin, _et al._, _Science_**333**, 308 (2011). * [39] R. D. Cruise, K. Hadler, S. O. Starr, J. J. Cilliers, _Journal of Physics D: Applied Physics_**55**, 185306 (2022). * [40] A. Diaz, R. Felix-Navarro, _Journal of Electrostatics_**62**, 277 (2004). * [41] X. Zhang, L. Chen, Y. Jiang, W. Lim, S. Soh, _Chemistry of Materials_**31**, 1473 (2019). * [42] L. S. McCarty, A. Winkleman, G. M. Whitesides, _Journal of the American Chemical Society_**129**, 4075 (2007). * [43] H. 
Zou, _et al._, _Nature Communications_**10**, 1427 (2019). * [44] J. Henniker, _Nature_**196**, 474 (1962). * [45] A. Stukowski, _Modelling and Simulation in Materials Science and Engineering_**18**, 015012 (2009). * [46] G. Grosjean, S. Wald, J. C. Sobarzo, S. Waitukaitis, _Physical Review Materials_**4**, 082602 (2020). * [47] V. Lee, N. M. James, S. R. Waitukaitis, H. M. Jaeger, _Physical Review Materials_**2**, 035602 (2018). * [48] X. Shen, A. E. Wang, R. M. Sankaran, D. J. Lacks, _Journal of Electrostatics_**82**, 11 (2016). * [49] R. Fu, X. Shen, D. J. Lacks, _The Journal of Physical Chemistry C_**121**, 12345 (2017). * [50] J. Wu, X. Wang, H. Li, F. Wang, Y. Hu, _Nano Energy_**63**, 103864 (2019). * [51] H. Gao, _et al._, _Advanced Functional Materials_**33**, 2213410 (2023). * [52] D. Frenkel, B. Smit, M. A. Ratner, _Understanding Molecular Simulation: From Algorithms to Applications_, vol. 50 (AIP Publishing, 1997). * [53] H. Zhang, S. Sundaresan, M. A. Webb, _The Journal of Physical Chemistry B_**127**, 5115 (2023). * [54] P. S. H. Henry, _British Journal of Applied Physics_**4**, S6 (1953). * [55] A. Coehn, _Annalen der Physik_**300**, 217 (1898). * [56] A. E. Wang, _et al._, _Physical Review Materials_**1**, 035605 (2017). * [57] X. Liu, J. Kolehmainen, I. Nwogbaga, A. Ozel, S. Sundaresan, _Powder Technology_**375**, 199 (2020). * [58] G. Grosjean, S. Waitukaitis, _Physical Review Letters_**130**, 098202 (2023). * [59] S. Kumar, J. M. Rosenberg, D. Bouzida, R. H. Swendsen, P. A. Kollman, _Journal of Computational Chemistry_**13**, 1011 (1992). * [60] O. Tilmatine, T. Zeghloul, K. Medles, L. Dascalescu, A. Fatu, _Journal of Electrostatics_**115**, 103651 (2022). * [61] A. P. Thompson, _et al._, _Computer Physics Communications_**271**, 108171 (2022). * [62] W. L. Jorgensen, D. S. Maxwell, J. Tirado-Rives, _Journal of the American Chemical Society_**118**, 11225 (1996). * [63] S. W. Siu, K. Pluhackova, R. A. Bockmann, _Journal of Chemical Theory and Computation_**8**, 1459 (2012). * [64] H. Berendsen, J. Grigera, T. Straatsma, _Journal of Physical Chemistry_**91**, 6269 (1987). * [65] S. Chatterjee, P. G. Debenedetti, F. H. Stillinger, R. M. Lynden-Bell, _The Journal of Chemical Physics_**128**, 124511 (2008). * [66] D. J. Bonthuis, S. I. Mamatkulov, R. R. Netz, _The Journal of Chemical Physics_**144**, 104503 (2016). * [67] I. C. Yeh, M. L. Berkowitz, _Journal of Chemical Physics_**111**, 3155 (1999). * [68] V. Ballenegger, A. Arnold, J. J. Cerda, _The Journal of Chemical Physics_**131**, 094107 (2009). ### Acknowledgments The authors declare no competing interests. H.Z. and M.A.W. acknowledge support from a Princeton Innovation Grant via the Project X Fund. We also acknowledge support from the "Chem- istry in Solution and at Interfaces" (CSI) Center funded by the U.S. Department of Energy through Award No. DE-SC0019394. Simulations and analyses were performed using resources from Princeton Research Computing at Princeton University, which is a consortium led by the Princeton Institute for Computational Science and Engineering (PICSciE) and Office of Information Technology's Research Computing. M.A.W., H.Z., and S.S. designed research; H.Z. performed research; H.Z. and M.A.W. analyzed data; all authors discussed results; H.Z. and M.A.W. wrote the paper; M.A.W, H.Z., and S.S. edited the paper. 
### Description of SI Appendix Comparison to Triboelectric Matrices Sourced from Literature Data; Distance Analysis of Two-surface Free-energy Calculations; Comparison of Ion Distributions in Proximity to PVA; Additional Simulation Details; Additional Description of Single-surface Free-energy Calculations; Additional Description of Two-surface Free-energy Calculations. #### SI Datasets free_energy_one_surface.csv; free_energy_two_surface.csv # Supporting Information (SI) Appendix for Evidence of thermodynamic driving forces in the contact charging of insulating polymers Hang Zhang, Sankaran Sundaresan, Michael A. Webb Corresponding author: Michael A. Webb, [email protected] **This PDF file includes:** Supporting Information Text: * Comparison to Triboelectric Matrices Sourced from Literature Data * Comparison of Ion Distributions in Proximity to PVA * Distance Analysis of Two-surface Free-energy Calculations * Additional Simulation Details * Additional Description of Single-surface Free-energy Calculations * Additional Description of Two-surface Free-energy Calculations Supporting Information Figures: * Figs. S1 to S5 Supporting Information Dataset Descriptions: * SI Dataset S1 * SI Dataset S2 ### Comparison to Triboelectric Matrices Sourced from Literature Data Triboelectric series were collected from ten publications from 1898 to 2022 that featured at least a subset of the six polymers investigated in this work. The reported triboelectric series, which are a common representation of contact charging experiments, were then formulated as a matrix as introduced in the main text. Fig. S1 summarizes all the data. The figure overall conveys some level of inconsistency in experimental settings even for the same reported materials; however, there are also some broadly conserved trends. For example, most of the top-right corner and bottom-left appear consistently colored for the experimental data, and these trends are further reflected in the predictions made by the free-energy calculations. ### Comparison of Ion Distributions in Proximity to PVA In Fig. 3A, the distributions of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) appear similar along the single dimension relative to the polymer interface. However, Fig. S2 illustrates that the two ions do exhibit differences when resolving positioning relative to both the polymer interfaces and water interfaces. In Fig. S2, positioning along the diagonal but away from the origin indicates that the ion resides toward the center of the droplet, as it is simultaneously distanced from both interfaces. Meanwhile, positioning in a lower horizontal band towards \(y=0\) suggests the ion is close to a water interface; however, it need not be close to a polymer interface (moving right), implying that the ion is closer to a water-vapor interface. Therefore, the simulations indicate that H\({}_{3}\)O\({}^{+}\) displays some preference to reside near the water-vapor interface, while OH\({}^{-}\) is always found in the interior of the water droplet. This behavior is generally preserved across all surfaces. ### Distance Analysis of Two-surface Free-energy Calculations To guide selection of distances for the two-surface free-energy calculations, a set of preliminary simulations at varying distances (\(d=15,25,40,55\) Å) was performed for PMMA-PVC pairings.
PMMA and PVC were selected because they were predicted by the single-surface thermodynamic integration calculations to acquire the most positive/negative charge; the span of distances allowed for the formation of a water bridge between the two surfaces without any direct contact of PMMA and PVC atoms. Fig. S3A provides the corresponding free-energy profiles as a function of ionic dipole. At the small separation of \(d=15\) Å, preference of either ion for specific surfaces is not clearly evident. At larger separations (\(d=25,40,55\) Å), relative affinities become statistically discernible, with all separations yielding qualitatively similar interpretations. Consequently, a first group of simulations amongst all surface pairs was performed with \(d=25\) Å (Fig. S3). For a subset of pairs (N66-PE, N66-PTFE, N66-PVC, PE-PTFE, PMMA-PTFE, and PVC-PTFE), relative surface affinities were not obvious at \(d=25\) Å, and additional simulations were run at \(d=40\) Å. For PVA-PE and PVA-PTFE, simulations at larger \(d\) were not feasible due to the hydrophobicities of PE and PTFE relative to PVA (Fig. S4), which resulted in all water residing on PVA and no stable water bridge. For all polymer pairs, a 10 ns equilibrium simulation was performed. The preliminary simulations used 7.5 ns of simulation in each biasing window. The first group of simulations was run for 15 ns in each biasing window, and an additional 7.5 ns of simulation was performed for \(|p_{z}|<10\) Å to obtain better sampling of the small-\(|p_{z}|\) region. For pairs that exhibited flatter free-energy profiles, an additional 7.5 ns of simulation were run to assess convergence. The second group of simulations was performed for 15 ns in each biasing window.

### Additional Simulation Details

All MD simulations were conducted using the 3 Mar 2020 version of the LAMMPS simulation package [12]. The polymer-water systems were prepared using a methodology similar to our previous work [13], except that water ions were also embedded at the center of mass of the water droplets as needed. Periodic boundary conditions were applied in the \(x\) and \(y\) dimensions, while the \(z\) dimension was extended with fixed boundaries featuring repulsive walls that generated forces in a direction perpendicular to the wall. Polymers were described with parameters obtained from the all-atom Optimized Potentials for Liquid Simulations (OPLS-AA) force field [14, 15], while water was described using the extended simple point charge model (SPC/E) [16, 17]. The water ions were represented using a nonpolarizable force field that was parameterized to reproduce thermodynamic properties, such as solvation free energies in water [18]. Real-space non-bonded interactions were truncated at 10 Å. Long-range electrostatics were handled using the particle-particle-particle-mesh Ewald summation method [19] with a convergence accuracy of \(10^{-5}\); this method was modified to accommodate the slab geometry with a non-periodic \(z\) dimension [20]. Equations of motion were evolved using a velocity-Verlet integration scheme with a 1 fs timestep. A rigid geometry was maintained for all water and ion molecules using the SHAKE algorithm [21]. Unless otherwise specified, temperature was controlled at 300 K using a Nose-Hoover thermostat [22] with a damping constant of 100 fs. Following system preparation, 20 ns equilibrium simulations were conducted.
Subsequently, 20 ns production simulations were performed for all systems, and an additional 20 ns of simulations were conducted for \(N_{\text{w}}=2000\) for structural analysis. Interfaces were identified according to the approach of Willard and Chandler [23]; these calculations were facilitated by the Pytim package [24] using the same settings as in our previous work [13].

Figure S3: Dependence of relative ion-surface affinities on distance between surfaces. (A) Comparison of relative ion partitioning between surfaces as a function of ionic dipole, \(K_{AB}^{+-}(p_{z})\), as surfaces are separated at distances of \(d=15,25,40,\) and \(55\) Å. (B) Summary of all \(K_{AB}^{+-}(p_{z})\) for all pairs of surfaces separated by \(d=25\) Å. The surface pairs denoted with "**" are further simulated at \(d=40\) Å due to their relatively weak dependence on \(p_{z}\); these results are shown in the main text. The shaded regions indicate statistical uncertainties that reflect the standard error of the mean as estimated from bootstrap resampling.

Figure S4: Effect of surface hydrophobicity on two-surface free-energy calculations. (A) Configurational snapshot of a water bridge forming between PTFE and PVA separated by \(d=25\) Å. (B) Configurational snapshot of a water bridge forming between PE and PVA. Both PTFE and PE are substantially more hydrophobic than PVA. Consequently, the majority of water molecules in the system are recruited towards the PVA surface, which affects the accessible volume of the ions. Both snapshots are rendered using OVITO [11]. The atoms are colored such that carbon is gray, fluorine is green, hydrogen is white, oxygen in water and ions is red, and oxygen in PVA is pink.

### Additional Description of Single-surface Free-energy Calculations

Thermodynamic integration (TI) was used to compute the free energy of adding one ion to the water droplet. Previously equilibrated configurations were used as the initial configurations for the TI simulations:

\[F_{+/-}(N_{\rm w})=\int_{0}^{1}\left\langle\frac{dU(\lambda,\vec{q})}{d\lambda}\right\rangle_{\lambda}d\lambda=\sum_{i=1}^{12}w_{i}\left\langle\frac{dU(\lambda,\vec{q})}{d\lambda}\right\rangle_{\lambda_{i}}. \tag{1}\]

In Eq. (1), \(\langle\cdot\rangle_{\lambda}\) denotes an ensemble average obtained at \(\lambda\), which is the thermodynamic path variable such that \(\lambda=0\) corresponds to a state with only the water droplet and the polymer surface and \(\lambda=1\) corresponds to the state with a water droplet, a water ion, and a polymer surface. As shown, the integral is numerically approximated using 12-point Gauss-Legendre quadrature with \(\lambda\in\{\)0.00922, 0.04794, 0.11505, 0.20634, 0.31608, 0.43738, 0.56262, 0.68392, 0.79366, 0.88495, 0.95206, 0.99078\(\}\). For the configurational potential energy at a given \(\lambda\), \(U(\lambda,\vec{q})\), we utilized a soft-core potential [25] for pairwise Coulombic and Lennard-Jones potential energy contributions involving the ion molecule:

\[U_{\rm LJ}(r_{ij},\sigma_{ij},\varepsilon_{ij},\lambda)=4\lambda\varepsilon_{ij}\Bigg\{\frac{1}{\left[0.5(1-\lambda)^{2}+\left(\frac{r_{ij}}{\sigma_{ij}}\right)^{6}\right]^{2}}-\frac{1}{0.5(1-\lambda)^{2}+\left(\frac{r_{ij}}{\sigma_{ij}}\right)^{6}}\Bigg\} \tag{2}\]

and

\[U_{\rm coul}(r_{ij},q_{i},q_{j},\lambda)=\lambda\frac{q_{i}q_{j}}{\left[10(1-\lambda)^{2}+r_{ij}^{2}\right]^{1/2}}. \tag{3}\]

The utilization of soft-core potentials allows for Eq. (2) and Eq.
(3) to possess the same \(\lambda\), as opposed to performing TI in two stages (e.g., first handling \(U_{\rm LJ}\) terms and then \(U_{\rm coul}\) terms). Because the simulation box is heterogeneous, we find this preferable for sampling efficiency, as utilizing only Lennard-Jones interactions in the absence of electrostatics causes the ion to predominantly explore the vast "vapor" phase. We note that the free energy depends explicitly on \(N_{\rm w}\) and that there also exists a subtle finite-size effect based on our heterogeneous system construction. A detailed discussion of this finite-size effect can be found in our previous work [13]; however, the effect is inconsequential in the construction of our free-energy differences.

### Additional Description of Two-surface Free-energy Calculations

Two-surface systems were prepared by flipping and adding one equilibrated polymer-water system with one ion type to another with the opposite ion. For each pair of polymers, one of three amorphous systems for one polymer was randomly chosen and paired with a randomly chosen system for the other polymer. The distance between the two surfaces, \(d\), was then set based on the average surface-interface position. \(F_{AB}(p_{z})\) was calculated using umbrella sampling with statistical re-weighting via the weighted histogram analysis method [26]. Data were collected across 36 windows, each employing a harmonic biasing potential on \(p_{z}\). The biasing potentials utilize spring constants of 47.8011 kcal/mol and equilibrium positions from \(-35\) to \(35\) Å in 2 Å increments. Sampling was facilitated using version 2.8.1 of PLUMED [27]. We note that the classical force field for H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) was not parameterized [18] to handle possible recombination of the ionic species into neutral water molecules. To focus sampling on configurations for which the ions are separate charged species and within the realm of applicability of the force field, we modified the non-bonded interaction between the oxygen atoms on H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) to be repulsive at distances less than approximately 4 Å. This is practically achieved by increasing \(\varepsilon\) in the Lennard-Jones potential to \(1.0\) kcal/mol (Fig. S5). The net effect of this modification is that the ions do not form unphysical hydrogen bonds, which would otherwise arise using the original parameters. Consequently, \(F_{AB}(p_{z})\) conditionally depends on the ions being separate charged species that are separated by approximately 3 Å. Formally, \(F_{AB}(p_{z})\) is biased by this modified potential, but Fig. S5 shows that the bias is effectively negligible beyond 4 Å, and so re-weighting was not performed with respect to it.

### Dataset S1 (free_energy_one_surface.xlsx)

Summary of results from thermodynamic integration of ion addition to water droplets atop polymer surfaces.

### Dataset S2 (free_energy_two_surface.xlsx)

Summary of results from umbrella sampling on the ionic dipole within water channels formed between two polymer surfaces.
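To make the single-surface workflow above easier to follow, here is a minimal, self-contained Python sketch (not part of the original SI) of the two key pieces: the soft-core pair energies of Eqs. (2)-(3) and the 12-point Gauss-Legendre estimate of Eq. (1). The array of per-window averages is a placeholder; in practice those values would come from the TI simulations described above.

```python
import numpy as np

def softcore_lj(r, sigma, eps, lam):
    """Soft-core Lennard-Jones energy, Eq. (2), at coupling parameter lam."""
    s6 = 0.5 * (1.0 - lam) ** 2 + (r / sigma) ** 6
    return 4.0 * lam * eps * (1.0 / s6 ** 2 - 1.0 / s6)

def softcore_coul(r, qi, qj, lam):
    """Soft-core Coulomb energy, Eq. (3), at coupling parameter lam
    (the unit-dependent Coulomb prefactor is omitted)."""
    return lam * qi * qj / np.sqrt(10.0 * (1.0 - lam) ** 2 + r ** 2)

# 12-point Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, 1];
# the mapped nodes reproduce the lambda values quoted after Eq. (1).
x, w = np.polynomial.legendre.leggauss(12)
lam_nodes = 0.5 * (x + 1.0)
weights = 0.5 * w

# Placeholder ensemble averages <dU/dlambda>, one per lambda window;
# replace with the averages extracted from the TI simulation output.
dUdlam_avg = np.zeros(12)

# Eq. (1): free energy of inserting the ion into the droplet
F_ion = np.sum(weights * dUdlam_avg)
print(lam_nodes.round(5), F_ion)
```

At \(\lambda=1\) both soft-core expressions reduce to the standard Lennard-Jones and Coulomb forms, i.e., the fully coupled end state entering Eq. (1).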
2305.00436
Sustainability Competencies and Skills in Software Engineering: An Industry Perspective
Achieving the UN Sustainable Development Goals (SDGs) demands adequate levels of awareness and actions to address sustainability challenges. Software systems will play an important role in moving towards these targets. Sustainability skills are necessary to support the development of software systems and to provide sustainable IT-supported services for citizens. While there is a growing number of academic bodies, including sustainability education in engineering and computer science curricula, there is not yet comprehensive research on the competencies and skills required by IT professionals to develop such systems. This study aims to identify the industrial sustainability needs for education and training from software engineers' perspective. We conducted interviews and focus groups with experts from twenty-eight organisations with an IT division from nine countries to understand their interests, goals and achievements related to sustainability, and the skills and competencies needed to achieve their goals. Our findings show that organisations are interested in sustainability, both idealistically and increasingly for core business reasons. They seek to improve the sustainability of processes and products but encounter difficulties, like the trade-off between short-term financial profitability and long-term sustainability goals. To fill the gaps, they have promoted in-house training courses, collaborated with universities, and sent employees to external training. The acquired competencies make sustainability an integral part of software development. We conclude that educational programs should include knowledge and skills on core sustainability concepts, system thinking, soft skills, technical sustainability, sustainability impact and measurements, values and ethics, standards and legal aspects, and advocacy and lobbying.
Rogardt Heldal, Ngoc-Thanh Nguyen, Ana Moreira, Patricia Lago, Leticia Duboc, Stefanie Betz, Vlad C. Coroama, Birgit Penzenstadler, Jari Porras, Rafael Capilla, Ian Brooks, Shola Oyedeji, Colin C. Venters
2023-04-30T09:34:07Z
http://arxiv.org/abs/2305.00436v2
# Sustainability Competencies and Skills in Software Engineering: An Industry Perspective ###### Abstract Context: Achieving the UN Sustainable Development Goals (SDGs) demands a shift by industry, governments, society, and individuals to reach adequate levels of awareness and actions to address sustainability challenges. Software systems will play an important role in moving towards these targets. Sustainability skills are necessary to support the development of software systems and to provide sustainable IT-supported services for citizens. **Gap:** While there is a growing number of academic bodies, including sustainability education in engineering and computer science curricula, there is not yet comprehensive research on the competencies and skills required by IT professionals to develop such systems. **Research goal:** This study aims to identify the industrial sustainability needs for education and training from software engineers' perspective. For this we answer the following questions: (1) what are the interests of organisations with an IT division with respect to sustainability? (2) what do organisations want to achieve with respect to sustainability, and how? and (3) what are the sustainability-related competencies and skills that organisations need to achieve their sustainability goals? **Methodology:** We conducted a qualitative study with interviews and focus groups with experts from twenty-eight organisations with an IT division from nine countries to understand their interests, goals and achievements related to sustainability, and the skills and competencies needed to achieve their goals. **Results:** Our findings show that organisations are interested in sustainability, both idealistically and increasingly for core business reasons. They seek to improve the sustainability of processes and products but encounter difficulties, like the trade-off between short-term financial profitability and long-term sustainability goals or an unclear understanding of sustainability concepts. To fill these gaps, they have promoted in-house training courses, collaborated with universities, and sent employees to external training. The acquired competencies should support translating environmental and social benefits into economic ones and make sustainability an integral part of software development. We conclude that educational programs should include knowledge and skills on core sustainability concepts, system thinking, soft skills, technical sustainability, building the business case for sustainability, sustainability impact and measurements, values and ethics, standards and legal aspects, and advocacy and lobbying. sustainability, software engineering, software sustainability, sustainable software, education, software competencies, sustainable development goals, skills. ## 1 Introduction Digitalisation is pervasive and can either help or hinder the United Nations Sustainable Development Goals (SDGs)1[1, 2]. Organisations understand that but struggle in implementing sustainability in their service portfolio and their business practices [3, 4]. Consequently, there is a need to understand which competencies and skills industry requires, and how they can be integrated into their practices. These new competencies and skills must be acquired through adequate learning programmes and courses addressing the different sustainability dimensions, i.e. environmental, economic, social, technical, and individual [5]. 
For software engineers2, this ranges from the more technical aspects supporting Green IT and software sustainability to more social and individual ones facilitating software-driven processes in society. Footnote 1: [https://sdgs.un.org/goals](https://sdgs.un.org/goals)

Academia has made efforts to introduce sustainability in regular computer science programmes, as well as suggesting the skills and competencies needed by their students [6, 7, 8]. According to these studies, future software engineers need to develop a sustainability mindset and acquire sustainability competencies able to produce sustainable IT-based systems or systems that both support more sustainable processes and monitor the achieved sustainability goals [9]. However, industry is still unclear on which sustainability skills different sectors require to achieve their sustainability goals.

Recent non-academic literature highlights the role and importance of skills for sustainability. For instance, even the British Plastic Federation [10] mentions that the sustainability skills of employees are key for any strategy oriented to achieving a sustainable business. Similarly, Terrafinitii [11], a sustainability consultancy, highlights that effective sustainability performance demands sustainability skills and competencies -- not only from sustainability professionals but also in other roles within the organisation. Hence, we not only need to identify which skills are more relevant in delivering sustainability in a particular organisational unit, but related units must also recognise sustainability as a goal of the company's core business.

What prompts this study is that we, the authors, are under the impression that across industry there is (1) only a partial understanding of sustainability and there is (2) a limited understanding of how to address the lack of related competencies. Additionally, across academia, there is (3) a lack of understanding of the needs of industry related to sustainability and (4) a need for a concrete teaching curriculum that could lead to the high-quality sustainability education which software engineers require. This work aims to investigate the industrial sustainability needs for education and training from a software engineering perspective. To achieve this, we addressed the following three research questions: RQ1: _What are the interests of organisations with an IT division with respect to sustainability?_; RQ2: _What do organisations want to achieve with respect to sustainability, and how?_; and RQ3: _What are the sustainability-related competencies and skills that organisations need to achieve the established sustainability goals?_ To this end, we interviewed sustainability and IT experts from twenty-eight (28) organisations in nine (9) different countries. Our main contributions are:

* A far-reaching overview of the organisations' perspective on sustainability, including (i) their general interest in sustainability; (ii) the sustainability goals they want to achieve; (iii) their achievements towards these goals and the difficulties faced in achieving them; (iv) the sustainability skills and competencies they already possess in-house and those that are missing; and (v) solutions to acquire the missing ones.
* Initial insights on the gaps in current academic and non-academic training programmes for software engineers, and our recommendations to address those gaps for those who design the new programmes to enable future software engineers to achieve sustainability skills and competencies.
The rest of the paper is structured as follows: Section 2 provides a comprehensive background of the concept of sustainability. Section 3 elaborates on the employed research method. Section 4 presents the results regarding competencies and skills. Section 5 interprets the study's findings. Section 6 offers recommendations for training programs to address identified gaps in competencies and skills. Section 7 provides an analysis of the threats to validity. Section 8 offers a review of related work. Lastly, Section 9 concludes the study and highlights potential future research directions. ## 2 Background This section starts with some background on the general notion of sustainability and follows with specific overviews of sustainability in IT and then Software Engineering. Although the principles of sustainability have been known to numerous human cultures throughout history, their first scientific usage was most likely in H.C. von Carlowitz's principles of sustainable forestry from 1713 [12] (summarised in [13]). As Hilty and Aebischer [14] comment, as the understanding at the time was that forests have one purpose, to produce wood, Carlowitz's basic principle is quite straightforward: _"do not cut more wood than will grow in the same period of time"_. Of course, we know today that a forest accomplishes many further functions (such as producing oxygen, filtering air and water, preserving biodiversity, recreational and aesthetic values, and many more), which makes the sustainability perspective much more complex. The paradigm, however, is unchanged: As Venters et al. [15] discuss, the verb "to sustain" and the noun "sustainability" come from the Latin "sustenere", which was used for both "to endure" and "to uphold" something. Hence, "sustainability" refers to the capacity of a system to endure for a certain amount of time [15]. Within the conceptualisation of sustainability put forward by the Brundtland Commission in 1987 [16], the system in question is Earth itself and the period of time, while not exactly specified, includes many generations into the future. The Brundtland definition thus encompasses two aspects: distributive justice (_"the essential needs of the world's poor, to which overriding priority must be given"_[16]), but also intergenerational justice, for which the preservation of the biosphere is a prerequisite. The relationship between the IT sector (or digitalisation in general) and sustainability has been conceptualised in various ways and under different names. Early concerns with the environmental footprint of the IT sector itself are usually referred to as "Green IT", while the purposeful deployment of IT to reduce the environmental footprint in other economic or societal sectors is often called "Green by IT" [17]. Other terms used to describe the latter are, for example, ICT4EE (ICT for energy efficiency), "Energy Informatics" [14] or "I(C)T enabling effect" [18]. Numerous further names describe the relationship between digitalisation and sustainability in general, which includes both the concepts of "Green IT" and "Green by IT", but also the further dimensions of sustainability, in particular, the social one. Such names include "Digital Sustainability", "Sustainable Computing" or "ICT for Sustainability (ICT4S)" [14]. 
For the "software and sustainability" domain, there are also two views, which are quite similar to those of the broader "IT and sustainability" field [19]: one looking at the sustainability of software itself (foremost, thus, a technical notion of sustainable software), the other at deploying software engineering for sustainability (SE4S) beyond the software systems themselves [15]. Acknowledging both views, the "Karlskrona Manifesto for Sustainability Design" extends the well-known three dimensions of sustainability by another two: technical (to account for the desired long-term use of software) and individual (addressing personal freedom, dignity and fulfillment), for a total of five dimensions [5]. While the individual dimension is not always represented, most literature in the field accounts for both the technical as well as the three established dimensions (environmental, economic, and social) [20]. As is the case in general with sustainability, the dimensions are not entirely independent and there are often trade-offs among them [21]. And while current software engineering practice gives high value to the technical and economic dimensions, the social and environmental ones (and thus the crucial components of the sustainability concept as understood by the Brundtland commission) are often ignored [20].

## 3 Study Design and Research Method

To answer our research questions, we used a mixed-methods approach [22], combining individual interviews and focus group interviews in a semi-structured format. For the sake of brevity, both individual interviews and focus group interviews are referred to as interviews hereafter. Our study process is illustrated in Figure 1. In summary, we extensively discussed our research goals and steps (study design and planning), creating a set of PowerPoint slides to guide our conversations in all interviews and focus groups (data collection). Additionally, we did a pilot study that provided a baseline structure for the subsequent interviews. All were recorded and transcribed, and the relevant information was retrieved with the support of a code book (data extraction). Finally, the coded data was analysed and the results are presented.

### _Goals and Research Questions_

To address our eventual goal (i.e., design education programmes that teach the required sustainability competencies and skills for future software engineers), we first need to understand the needs of the field, i.e., of industry3. Accordingly, we formulate the following overarching research question (RQ): _"What are the industrial sustainability needs for education and training from software engineers' perspective?"_. Footnote 3: In general, with the term "industry" we mean practice from both the private and public sectors.

We break down RQ into three research sub-questions which guide our data collection: RQ1: _What are the interests of organisations with an IT division with respect to sustainability?_ The sustainability focus depends on the specific business domain and priorities. In this respect, the sustainability perspective depends on their specific interests and stakes. RQ1 helps us define the possible scope of future education programmes. RQ2: _What do organisations want to achieve with respect to sustainability, and how?_ Sustainability can add significant value to both private and public organisations.
However, to achieve this aim, sustainability must be tailored and embedded in the DNA of the organisation itself, for example, its business goals, values, and vision of the future market. Accordingly, this research question investigates the target achievements (what the organisations aim to achieve with respect to sustainability), the influence of software/ICT on these achievements, as well as the difficulties they face and expect. RQ2 helps us define and prioritise the various foci of future education programmes (e.g., creation of innovation, acquisition of new markets, compliance with regulation). RQ3: _What are the sustainability-related competencies and skills that organisations need to achieve the established sustainability goals?_ To different extents, organisations are becoming aware of the sustainability-related competencies and skills that they already have in-house or that they miss in order to achieve their goals. This research question investigates the gaps in the IT workforce and, if applicable, the strategy organisations have in place or envisage to acquire the missing competencies and skills. RQ3 helps us define future education programmes' types and contents (e.g., mono- versus interdisciplinary, higher education versus professional training). Figure 2 shows the relationship between the RQs and the themes derived from the interviews. In Section 4, we will report in detail the findings related to each theme.

### _Data collection and analysis_

#### 3.2.1 Data collection

To collect data, we conducted nine individual interviews and seven focus group interviews in a semi-structured format with industry practitioners. We contacted and recruited the participants by using our professional contacts. Individual interviews were employed where researchers and interviewees were already familiar with each other through their professional networks. We conducted several focus group interviews to catalyse discussions among interviewees. Our selected ICT organisations have supported or participated in sustainability initiatives or have an ICT department involved in sustainability actions as part of the strategy of the company. We selected organisations from different countries and domains, as listed in Table I, to diversify the perspectives regarding sustainability. The organisations are anonymised to maintain confidentiality. In total, we interviewed 28 experienced IT/sustainability practitioners from 28 distinct organisations in different industrial domains belonging to 9 countries. We followed the statistical classification of economic activities in the European Community [23] to classify the business sector of the organisations. To classify the organisation sizes, we followed the OECD scheme proposed in [24]. Our participating organisations cover a wide spectrum of areas from software to telecommunications and resource supply. While most of them are private, nearly a third of the organisations (9/28) are from the public sector. Our participants have significant industry experience and have different roles and positions in their organisations, as shown in Table II. The second column shows the business model with respect to sustainability of their organisations, which is elaborated in more detail in Section 4.1. The majority of the participants are seniors with more than ten years of professional experience, and many have a computer science background or degree. We used online teleconferencing tools (e.g. Microsoft Teams, Zoom, Skype) to interview the participants.
At the beginning of the interview, we took around five minutes to explain the goals of the interview. The prepared interview questions4 were then asked one by one. The interviews were conducted from March to September 2021 and recorded with the consent of the interviewees. Individual interviews lasted between 30 minutes and 2 hours, while focus group interviews took somewhat longer, as more discussion arose. The recorded interviews were transcribed manually or automatically (using, for example, the transcription tool in Microsoft Office 365), depending on the researchers' preference. For automatic transcriptions, the responsible researchers spent time manually correcting transcription mistakes in the tool to ensure the quality of the research.

#### 3.2.2 Data extraction and analysis

To analyse the interviews, we employed the thematic data analysis approach proposed in [25]. To facilitate the data analysis, we utilised Saturate App5, which is a web-based platform that supports collaborative qualitative analysis. It allows many researchers to simultaneously perform activities related to coding, labelling, and categorisation. The data analysis process was carried out as follows. Firstly, the transcripts were imported to Saturate App. Then, one researcher created the first version of a codebook in an Excel spreadsheet based on the interview questions. During the _data extraction pilot_ stage, the researchers performed initial coding of the data collected from their own interviews. They also validated and extended the codebook as needed until it was deemed stable by all coders. Finally, during the _data extraction_ stage, the ten researchers involved in this study were divided into three sub-groups, each having at least three members. Each sub-group analysed one research question defined in Section 3.1. The groups conducted several workshops to validate and refine the coding done in the first stage so that the original coding for all interviews was verified and agreed on by several researchers.
TABLE I: Organisations (anonymised) interviewed per country

| ID | Country | Sector | Type | Size |
| --- | --- | --- | --- | --- |
| 1 | Colombia | Technology consultancy | Private | <50 |
| 2 | Finland | Software consultancy | Private | <250 |
| 3 | Finland | Software | Private | <50 |
| 4 | Finland | Software consultancy | Private | <250 |
| 5 | Germany | Technology | Public | <50 |
| 6 | Germany | Technology | Private | <50 |
| 7 | Germany | Technology | Private | <50 |
| 8 | Netherlands | Software consultancy | Private | <250 |
| 9 | Netherlands | Public administration and … | Public | 250+ |
| 10 | Netherlands | Software consultancy | Private | 250+ |
| 11 | Norway | ICT industry representative | Public | 250+ |
| 12 | Norway | Energy provider | Public | 250+ |
| 13 | Norway | Mobility provider | Public | 250+ |
| 14 | Norway | Software consultancy | Private | 250+ |
| 15 | Norway | Software consultancy | Private | <50 |
| 16 | Norway | Waste management | Public | 250+ |
| 17 | Norway | Technology | Public | 250+ |
| 18 | Portugal | Software | Public | <250 |
| 19 | Portugal | Software and Technology | Private | <50 |
| 20 | Portugal | Software and Technology | Private | <50 |
| 21 | Portugal | Software consultancy | Private | <50 |
| 22 | Spain | Water supplier | Public | 250+ |
| 23 | Spain | Marketing and Advertising | Private | <250 |
| 24 | Spain | Mobility provider | Private | 250+ |
| 25 | Sweden | Networking and Telecommunications | Private | 250+ |
| 26 | Sweden | Telecommunication | Private | 250+ |
| 27 | UK | Technology | Private | 250+ |
| 28 | UK | Technology | Private | <50 |

Fig. 1: Study design and execution

Fig. 2: Themes with respect to research questions

The codebook has two purposes. Firstly, it formalises what the researchers have analysed from the data during the _data extraction pilot_ stage. Secondly, it is used as a guideline in the _data extraction_ stage, guiding the researchers who validate and correct the results initiated in the previous step. In the codebook, at the highest level, the coded data were organised according to our three research questions. The main topics are interests, target achievements, and competencies. The codes belonging to these main topics were further organised into three levels depending on their abstraction. The deeper the level a code sits at, the more detail it represents. The codebook6 is shared as supplemental material to help readers better understand the findings that we report in Section 4, where the codes are highlighted in **bold**. The codebook is organised as follows. Horizontally, it is divided in accordance with the main topics mentioned above (i.e., interests, target achievements, and competencies). The codes belonging to each main topic are vertically distributed into three levels. Each code is accompanied by a definition, a description of when the code is applicable, and a coded text example with the respective source. Figure 3 illustrates the overall structure of the codebook and Table III shows sample final results of our data analysis phase, which can also be found in the supplied codebook. The examples contain quotes taken from the interviews and how they are coded during data analysis.
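To make the hierarchical organisation of the codebook more tangible, here is a small illustrative Python sketch (not taken from the authors' materials; the code path and quote follow the examples cited in this section, while the field names, definition, and applicability texts are hypothetical) of how one coded interview fragment could be represented programmatically.

```python
from dataclasses import dataclass

@dataclass
class CodebookEntry:
    """One coded quote: a three-level code path plus its supporting evidence."""
    organisation: int        # anonymised organisation ID, e.g. 3
    research_question: str   # "RQ1", "RQ2", or "RQ3"
    code_path: tuple         # three abstraction levels, most general first
    definition: str          # what the code means (illustrative text)
    applicability: str       # when the code should be applied (illustrative text)
    quote: str               # coded text example from the transcript

entry = CodebookEntry(
    organisation=3,
    research_question="RQ1",
    code_path=("for-business", "economical", "attract-talents"),
    definition="Sustainability as a means of attracting and retaining employees.",
    applicability="Use when an interviewee links sustainability to recruitment.",
    quote="Young people want to join us because they want to work for "
          "technologies which are basically improving sustainability.",
)

# Entries can then be grouped per research question for reporting
by_rq = {}
by_rq.setdefault(entry.research_question, []).append(entry.code_path)
print(by_rq)  # {'RQ1': [('for-business', 'economical', 'attract-talents')]}
```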
The first set of two examples is for RQ1, which helps us identify why sustainability becomes an interest for our interviewed organisations in terms of economical aspects. The second set shows the organisations' goals in relation to sustainability (RQ2). The last set for RQ3 indicates skills that employees of our interviewed organisations possess in order to help them achieve the established sustainability goals. Footnote 6: Codebook access link: [https://bit.ly/3wgt0dp](https://bit.ly/3wgt0dp) ## 4 Results This section describes the demographics of participants and the findings with regard to our research questions. ### _Demographics: role with respect to sustainability_ In this section, we report the demographics concerning the business visions of our interviewed organisations and their perceptions of sustainability. #### 4.1.1 Organisations' role with respect to sustainability We classified our interviewed organisations as producers or consumers (or both) of sustainability solutions in IT. The _producers_ are the organisations who produce tools or software solutions to support sustainability initiatives. The organisations that use these products are classified as _consumers_. Some organisations may play both roles. The "Business model with respect to sustainability" column of Table II shows the classifications of the organisations who use one of these models or both. While the majority of our interviewed organisations, 16 out of 28 organisations (55.6%), from nine different countries are solely producers, only three organisations (11.1%) from three countries are solely consumers. Nine organisations (33.3%) from three countries play both roles. Finally, 9 out of 28 organisations operate in the public sector, while the rest are private organisations. In our sample, many organisations develop sustainability solutions in-house rather than rely on other organisations. In many cases, the sector to which a company belongs does not impact the role adopted. Furthermore, the producer role is more common in software and technology organisations. Therefore, we can see that digitalisation plays a key role in providing consumers with sustainable solutions. Finally, we observe that organisations adopting both roles belong to the public sector (e.g., energy, water management) acting as end-users that demand sustainable solutions but also develop sustainability solutions in their IT department that are used by the environmental department. #### 4.1.2 Organisations' perception of sustainability We did not explicitly ask for the organisations' perception of sustainability during the individual interviews and focus group interviews as we did not want to have a confirmation bias, especially in the focus groups. However, we did extract the organisations' perceptions of sustainability that emerged when analysing the qualitative data. Overall, the main focus of organisations when discussing sustainability is on the environmental dimension. For eleven organisations, their statements can be interpreted that they perceived sustainability as environmental issues such as carbon emissions, climate change, and energy consumption. Eight organisations did also mention the economic dimension as part of sustainability. From these, three organisations referred to the financial impact of their products on their businesses (_"[...] whenever we are making the systems, especially architectural or technological decisions, we consider the economical impact a lot"_ - Org. 
2) and also talked about economic sustainability in the sense of circular economy. The social and technical dimensions have been considered as part of sustainability by seven organisations. For the social dimension, the focus is on their own workforce (e.g., providing yoga classes), the customer (e.g., improving customer satisfaction) and society (e.g., fighting against poverty) as a whole. For the technical dimension, the interviewees' statements suggest that sustainability is related to a quality attribute of IT products and services, such as reusability, robustness, security, etc. Finally, 12 organisations explicitly consider sustainability as related to more than one dimension and, surprisingly, only four organisations mentioned the SDGs as relevant to them. In conclusion, we observe that a prominent focus of organisations when discussing sustainability is on the environment. Economic, technical, and social issues are also popular topics.

TABLE III: Example results of data analysis

| Interviewed organisation | Quote | RQ | Code |
| --- | --- | --- | --- |
| 3 | _"Young people want to join us because they want to work for technologies which are basically improving sustainability."_ | RQ1 | for-business / economical / attract-talents |
| 27 | _"If we have skills to optimise our algorithms to reduce our cloud costs, it is a financial benefit for us."_ | RQ1 | for-business / economical / economic-return |
| 1 | _"When you see the flow of value and start identifying where waste goes, you can deliver a better and cleaner system."_ | RQ2 | goals / for-own-process / understand-to-help-customers |
| 18 | _"We are seeing IT more as a means to get insights on how to create sustainability, to be able to steer the effects and decisions you make."_ | RQ2 | goals / for-own-process / add-value |
| 2 | _"We have talents with long backgrounds in software development and good knowledge of building top-quality software."_ | RQ3 | have-in-house / technical / quality software |
| 15 | _"We have a lot of employees who are chatted in IT. They are also business advisors."_ | RQ3 | have-in-house / technical / business |

### _RQ1: What are the interests of organisations with an IT division with respect to sustainability?_

For this RQ we asked the organisations about their interests in sustainability from four perspectives: their business, their customers, their shareholders, and their stakeholders.

#### 4.2.1 Interests with respect to business

When discussing the reason why sustainability is interesting for their **business**, it was not surprising to observe that economic reasons play an important role and are followed by moral and social matters. With regard to **economic** reasons, sustainability helped our interviewed organisations open new business opportunities, increase their competitiveness, give them the license to operate, reduce costs, and acquire and retain talent more easily. In particular, sixteen organisations affirmed an interest in sustainability because it creates new **business opportunities** or helps mitigate potential threats. Overall, our interviewed organisations have one of the following three profiles: sustainability is their main business (e.g.
they offer solutions for circular economy or sustainability reporting), they are in an industry that is being highly impacted by sustainability demands (e.g. mobility), or their customers are demanding it. Nine organisations viewed sustainability as a matter of **competitiveness and survival**. While some use sustainability to differentiate themselves from competitors, others are aware that other organisations are investing in sustainability-related initiatives and do not want to be left behind. For example, Org. 6 stated that _"[sustainability] is another point of differentiation"_ and Org. 23 mentioned that _"all the organisations that are emerging and that are effectively working, are those that have that sustainable consistency at an environmental and social level"_. Finally, for some organisations, it is a matter of making sure that they will continue to **utilise the resources** they need to function; for example, Org. 22 is fighting climate change because _"we are going to stop having the main resource of our factory, which is the water"_. Three of the organisations explicitly stated that implementing sustainable practices can bring them **economic advances**. Org. 27, for example, has a predictive algorithm for product demand that helps its clients minimize food waste and seeks to optimise its algorithms to reduce cloud and energy costs. For six of the organisations, sustainable practices were adopted to comply (or help others to comply) with **regulations**. Org. 22, for example, shared that _"everything to do with climate- and environmental policies have become structural"_. Finally, three organisations saw that sustainability is vital to **attracting talent**. These organisations feel that highly skilled professionals want to work for firms where they can share their values and put time and effort into something meaningful. Sustainability was also related to **moral** concerns. Eight organisations invested in sustainability because they truly believe in it, sometimes being directly related to **aligning to the company's values**. Two illustrations of this belief are _"our goal as a company has been focused on providing something to society and not just doing profit"_ (Org. 18) and _"our mission or reason for existence is that our business is coming from sustainability"_ (Org. 3). Unsurprisingly, when talking about their interest in sustainability, **environmental** concerns such as reduction of waste, water and energy use, and carbon emissions were the most present (mentioned by fifteen organisations). These concerns were related to both the purpose and the operation of the business. Finally, four organisations explicitly stated that sustainability (or specific dimensions of it) was **not of concern** to them. For example, ecological sustainability was _"almost the last perspective for us in day-to-day life"_ (Org. 2). #### 4.2.2 Interests with respect to customers Differently from the business perspective (mainly focused on the economics), the **customers** of our interviewed organisations are reported to be most attracted to sustainability by moral values. In particular, they align their businesses to sustainability due to ecological and societal concerns. Economics, e.g., business opportunities returns, is the second most popular reason for investments. Here it is worth highlighting that several of the interviewed organisations adopted a business-to-business model, therefore, their customers were other organisations rather than individuals. 
Among the **moral** reasons driving our interviewed organisations' customers to sustainability, sixteen organisations shared that their customers wanted to **protect the environment** by reducing carbon emissions and electricity use. At the same time, **social matters** were of concern to nine organisations. Especially, COVID-19 was the most frequently mentioned issue as the pandemic forced organisations to adapt their businesses for survival. For example, Org. 23 shared that most products requested by its customers in the last two years were related to the pandemic. In addition, four organisations viewed **value-alignment** as another reason for customers' interest in sustainability because this is an important concept in society. Regarding **economic aspects, investment returns** and **business opportunities** are the two most popular reasons. Three organisations mentioned that sustainability is a core business value of their customers: _"Sustainability and circular economy are [our customer]'s core business."_ (Org. 3). At the same time, the focus on sustainability has **evolved**, so our interviewed organisations and their customers had to proactively adapt their businesses to the new trends. Interestingly, seven organisations mentioned that despite having an interest in sustainability and in products addressing sustainability-related aspects, they still struggle to win customers due to **no interest**. Org. 21, for example, admitted that _"I do love to design more services for sustainability, but I don't get any requests, and I really struggle to sell it."_ #### 4.2.3 Interests with respect to shareholders As compared to the interest with respect to business and customers, the interest with respect to **shareholders** seems less important as it has been mentioned by only thirteen organisations. In particular, economic interests have been mentioned by nine organisations followed by societal concerns (four organisations). Three organisations mentioned that the interest of their shareholders in sustainability had changed over time. The **economic** interest is what organisations see as important for their shareholders. They mainly argue that their shareholders consider sustainability as a **business opportunity** to increase their financial performance and their market share. See, e.g., Org. 24: _"If we don't adapt the business and create new KPIs and processes related to sustainability, the risk is high for the shareholders because we can lose some parts of the market."_ Four organisations do consider **societal** concerns as an important aspect for their shareholders. Org. 18 put it as follows: _"[shareholders] have a vision not just of making a profit but as a vision of contributing to a better society (...)."_ Additionally, the interviewees state that the shareholders' interest in sustainability **evolves** due to compliance constraints (e.g., EU taxonomy for sustainable activities7) and societal concerns (e.g., social responsibility). Footnote 7: [https://bit.ly/3xYpBAF](https://bit.ly/3xYpBAF) #### 4.2.4 Interests with respect to stakeholders The responses from the organisations show that sustainability interest from **stakeholders** is highly influenced by media news on sustainability, especially about **environmental concerns** like reducing emissions and fighting climate change: "_(...) if you take into account the ecological point of view, you'll have less \(\mathrm{CO_{2}}\)"_ (Org. 7). 
There are also several drivers for the **evolution** of stakeholders' interests in sustainability, such as the United Nations Framework Convention on Climate Change (UNFCCC), which launched the Race to Zero campaign and influenced several organisations working towards sustainability. Three organisations are working on building an ecosystem with partners who share the same sustainability values because sustainability **value-alignment** is an important element for their stakeholders. In addition, **employees** are key stakeholders who drive sustainability interest within six organisations because they want to feel a sense of contributing positively towards sustainability: "_They are aware of the sustainability issues, and they count to have a positive impact on these issues."_ (Org. 18). Also, organisations want to create employee satisfaction through different aspects of sustainability to attract talent based on the company's identity and activities towards sustainability. In addition, for eight organisations, stakeholders' interests revolve around **societal concerns**, such as human rights, as well as individuals taking action and being accountable for their activities towards sustainability.

**RQ1 summary**

1. Interests with respect to business: The economic benefit brought by sustainability was the main driving force for the organisations' interest. Concerns about environmental impacts were also very present.
2. Interests with respect to customers: Our interviewed organisations felt that their customers had environmental and social concerns that they had to respond to.
3. Interests with respect to shareholders: Economic benefits are what the organisations see as most important for their shareholders.
4. Interests with respect to stakeholders: External drivers such as the media and the development of international frameworks highly influence the interest of stakeholders in the sustainability of the organisations. Another main driver is employees' personal interest.

### _RQ2: What do organisations want to achieve with respect to sustainability, and how?_

To answer this RQ, we asked the organisations about what they want to achieve with sustainability in their business, how their ICT products/services support achieving these goals, and what difficulties they faced, or expect to face, in achieving these goals.

#### 4.3.1 Established sustainability goals

The sustainability **goals** of the interviewed organisations focus primarily on their processes, followed by their need to create or improve their products, and finally on the external factors impacting their goals. Interestingly, social goals (e.g., human rights and inclusion) are not their top priority at the moment. RQ1 findings show that the organisations and their customers have shown a strong interest in social issues when referring to sustainability. Particularly, social matters are the second highest interest of our organisations and of their customers. However, these aspects are not taken into account when organisations define sustainability goals. It indicates that although social matters are good reasons to draw organisations' attention to sustainability, they still do not create viable opportunities for them to set related business goals. In relation to the internal working **process**, fourteen organisations highlighted how **process improvement** had contributed to addressing their sustainability goals, including automation and optimization. For example, Org.
1 highlighted the importance of values and mindset: "_shift [our business partners]'s mind to the value mindset from the project mindset._" Five organisations stressed the importance of **collaboration and leading by example** to inspire, influence, and motivate others to follow their lead. Six organisations reflected on their personal and professional decisions motivated by internal organizational and personal values to identify opportunities and take responsibility in relation to what motivates and positively influences their **decision making process**. Org. 18 stressed that as a company, they "_have always had the goal of having a positive contribution to society as a whole_". Four organisations commented that the organisational environment, belief, awareness, and communication linked to values were critical to **changing culture**. However, this needs to "_be accompanied by a sustainable pace_" (Org. 1). Three organisations had introduced **new processes** to address sustainability internally but wonder how they could optimise their processes. Concerning the organisations' planned **products**, seven organisations highlighted opportunities to develop **new products** to create new markets. For example, the core business of Org. 15 is reviewing their clients' products in order to suggest new sustainable business strategies: "_We try to take a position in the market not as a regular IT supplier but more as a partner. We [and our customers] aim not only to make financial benefits but also benefit the environment and the social side._" Five organisations highlighted how they are improving product quality for both their clients and their organization by demonstrating how sustainability can be integrated as a core element. When clients lacked the required knowledge or expertise, these organisations were able to provide models and examples of good practice. Finally, regarding the **external** factors affecting the company goals, three organisations discussed the importance of their customers making the **right decision** for the larger community and global sustainability. Org. 2 even took this as far as challenging the need for a customer's development proposal, resulting in business loss when the potential customer decided to cancel the project. Also, three organisations highlighted the importance of looking beyond the boundary of the company to engage with sustainability in their **supply chains**. Org. 23 emphasised that "_we are going to review our supplier code of ethics. We want our suppliers to sign it and take responsibility because if we do it well, we have to go hand in hand with people who do it well."_ However, they ponder how to achieve this, and need tools to help them in making the right decisions. #### 4.3.2 Plan and experience for achieving the goals This section presents findings on executing sustainability goals and reported experiences. 3.2.1 Plan: Among the **steps** to achieve the established sustainability goals, organisations mention actions related to external factors with an impact on their goals, as well as the changes required to their internal process and product. With regard to **external** factors, the most cited concern was seeking **collaboration**. Seven organisations prized collaborations with external entities (e.g., clients, municipalities, NGOs, and universities) and international alliances to push their limits, making them more ambitious in their goals. 
The UN Global Compact8 was acknowledged as a good way to create synergies, gain strength, and be inspired by the work of others. The interviewed organisations also held thorough discussions on internal **process** transformation, focusing first on process design, second on tools, certification, and measurement, and third on implementation, evaluation, and internal collaboration. Footnote 8: [https://unglobalcompact.org](https://unglobalcompact.org) The **design** of sustainability processes was cited by five organisations, that use agile and incremental improvements, the ABCD process9, and the SSD framework. Org. 1, advocating for agile approaches, highlighted the importance of seeing a system as a matrix connecting its different parts to be aware of the effects of a given action on the system's value chain. Four organisations highlighted the need for **tools** for sustainable systems. The tools mentioned are Care to Create, a flourishing business canvas10 to capture economic and social value, SSD framework for strategic sustainable development, and software for environmental accounting. Four other organisations discussed **certification**, sharing their uncertainties and concerns on measurement processes to achieve sustainability, particularly related to the lack of consistent methodologies to implement it. While two organisations referred to BRECAM and B Corp, one mentioned the importance of all types of certifications they adopted to prevent anti-corruption and comply with data protection. Another company reported having their Environmental programme approved and a Social Responsibility programme already draughted. Also, crucial to four organisations are the **measurement** processes to achieve sustainability, and related to this is the lack of consistent methodologies to implement such measurements, as stated by (Org. 10) _"there's not a formal training program on how to develop sustainable solutions (...). [there's no] consistent methodology that guidelines how to implement it. (...) it will come to be certain because we're in a highly regulated company."_ Org. 17 uses BRECAM and other certifications to measure and track the achievement of their goals. Footnote 9: ABCD is part of the Framework for Strategic Sustainable Development (SSD). Source: [https://www.naturalstep.ca/abcd](https://www.naturalstep.ca/abcd) Three organisations mentioned the importance of process **evaluation**, either by setting clear objectives (e.g., becoming \(\mathrm{CO}_{2}\) neutral), implementing and testing them or by defining end-to-end sustainable propositions to make contributions to the community. Org. 1 also complained that when a company is in financial trouble, the quickest decision is firing people instead of analysing the system and thinking of a way to add value to it. They called for a change of mentality where employees do not simply follow instructions but are able to raise their concerns. Lastly, three other organisations proposed different ways to work in **collaboration** with their clients and partners to promote sustainability, from organising workshops to joining Global Compact and offering their clients solutions with extra sustainability features. 3.2.2 Experience: While achieving sustainability goals, the organisations collaborated with external business partners and reformed the internal organisation. Among the adopted actions, reducing energy consumption and cutting carbon emissions are the two most mentioned. 
Regarding **external** factors, three organisations reported that they sought **collaboration** with business partners to bridge the sustainability gaps within their organisations. Asking for training to enrich the workforce's knowledge about sustainability and raise clients' awareness of sustainability is one of the most popular choices: _"We have organised some workshops with an environmental expert and also invited clients to speak precisely about the importance of sustainability."_ (Org. 23). Another solution is to purchase the services the organisations need to achieve their sustainability goals. For example, Org. 11 shared that they were working with a software consultancy company to produce a reporting toolbox tracking the amount of carbon emitted by its systems. Regarding **internal** reform, organisations have adopted different solutions, such as **reducing carbon footprint** (seven organisations) and employing software technology to **automate** the working process (three organisations). To reduce carbon emissions, Org. 23 was particularly proud of its involvement in reforestation efforts to offset its carbon footprint. Org. 4 has recently provided bicycles for staff and encouraged cycling to work; the initiative has received high appreciation from its employees and media. Regarding automation, Org. 28 states: _"Test automation helps build sustainable software because you have confidence that a particular unit operates in a certain way."_ Five organisations have **designed** their internal working processes in several ways to align themselves with the established sustainability goals. See the recommended procedure of Org. 1: _"So the first part is to identify value, to take care of value. Once you see that, you start changing things with little experiments."_ Three organisations also reported their internal **culture** coincidentally changed on the journey to accomplish the goals. Org. 25 stated that aiming for sustainability goals initially created more work and caused objections, but over time, the passion has been growing among the staff in several departments. Finally, for those who sell **products** related to sustainability, such as power-saving utilities and carbon emission reporting software, three organisations mentioned that they invested in new infrastructure to **reduce energy consumption**. In particular, Org. 24 stated: _"We sell different products and services that help reduce emissions."_ #### 4.3.3 Encountered difficulties The difficulties in achieving the established sustainability goals are categorised into external and internal. More internal difficulties were reported. 4.3.3.1 External difficulties: Economic barriers were the most frequently mentioned, followed by policy issues. Regarding **economic** barriers, four organisations emphasized the difficulty in finding **customers** willing to pay for more sustainable products or services. Org. 19 specifically mentioned that customers' procurement teams were _"only focused on price"_ and they had to sell the ideas to other decision-makers outside of procurement. Org. 16 aimed to be an enabler for the **circular economy** but was hindered by a lack of consistent models from other organisations. The economic barriers are not just in relation to customers; another company reflected on the difficulty of securing **investors**, who are focused on rapid growth and scaling up. **Policy** issues were highlighted for not creating the conditions for sustainable products and services to compete. 
Policymakers represent a key difficulty, according to five organisations. If the policy context does not require or incentivise sustainability improvements in products and services, organisations struggle to compete against less sustainable alternative suppliers. Several organisations identified a gap between the aspirations of politicians and the **regulations** in force. Org. 9 felt this might be because politicians lack an understanding of digitalisation and therefore _"proposed unreasonable laws"_. The impact of these policy failures was reflected in the second most serious concern, that customers were not prepared to pay more for these sustainable products and services. Four organisations cited external **technological** barriers, such as a lack of charging infrastructure for electric vehicles. 4.3.3.2 Internal difficulties: People and processes are overwhelmingly represented in the organisations' answers, while technologies are barely mentioned. Specifically, they appear 25, 23, and 2 times, respectively. This shows that non-technological difficulties are prevalent in the industry. Regarding **people** barriers, 14 organisations pointed out the lack of **understanding** of sustainability concepts as one of their most significant challenges. This may be due to the extent and vagueness of the concepts themselves and the insufficient knowledge of their employees. Some further explained that the complex conceptualisation of sustainability at a company level also makes searching for sustainability skills in new potential employees a challenging task, as the existing workforce is not qualified to assess the needed skills or their fulfillment by applicants. Org. 4, for example, stated: _"Terminology/concept is still vague, especially what kind of skills and competencies you already have in-house."_ Furthermore, ten other organisations identified the **culture** of their employees, its complexity and inertia, as one of the important internal challenges. Org. 25 states that _"We had many key stakeholders and people at the bottom. We had some commitment on the high management, but it didn't connect because it was blocked in sub-optimisations of middle management who try optimising their own budget or their own question."_ The culture is often oriented toward different KPIs (Key Performance Indicators) and conflicts with sustainability goals. The short-term priorities and missing skills of **decision-makers** have also been mentioned by seven organisations. Org. 18 noted that _"managers usually have more short-term goals and it's not easy to sell them a long-term plan that won't give profits to the company for years."_ With regard to difficulties arising from the internal working **process**, we find that a **financial** trade-off between short-term financial profitability and long-term sustainability goals is one of the most frequently mentioned difficulties (encountered by 15 organisations). The issue occurs in both our interviewed organisations and their customers. Nine organisations encountered an issue regarding the ability to carry out sustainability-relevant **measurements**. For example, Org. 9 admitted that _"[their employees] don't know how to measure sustainability, or how to advance the policy agenda on sustainability using IT or digitalisation"_. Org. 4 highlighted the challenges of calculating the CO\({}_{2}\) footprint of cloud services.
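To make this measurement gap concrete, the sketch below shows the kind of first-order estimate that is often used as a starting point for the operational footprint of a cloud workload: IT energy use scaled by a data-centre overhead factor (PUE) and by the carbon intensity of the local grid. It is our illustration only; the per-core power, PUE, and grid-intensity figures are placeholder assumptions, not data from any interviewed organisation, and a fuller assessment would also have to cover the embodied emissions of the hardware.

```python
# Illustrative first-order estimate of the operational carbon footprint of a
# cloud workload: IT energy x data-centre overhead (PUE) x grid carbon intensity.
# All default figures are placeholder assumptions, not values from the study.

def cloud_co2e_kg(cpu_hours: float,
                  watts_per_core: float = 10.0,       # assumed average draw per vCPU
                  pue: float = 1.4,                    # assumed data-centre overhead
                  grid_gco2e_per_kwh: float = 300.0    # assumed grid carbon intensity
                  ) -> float:
    it_energy_kwh = cpu_hours * watts_per_core / 1000.0
    facility_energy_kwh = it_energy_kwh * pue
    return facility_energy_kwh * grid_gco2e_per_kwh / 1000.0  # grams -> kilograms

# Example: a service consuming 5,000 vCPU-hours per month.
print(f"~{cloud_co2e_kg(5_000):.0f} kg CO2e per month")
```

Even such a rough model makes the main levers visible: reducing compute hours, choosing regions with cleaner grids, and improving facility efficiency.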
At first sight, these external economic barriers and internal short-term financial gains may seem to contradict the results of RQ1, which states that the economic benefits brought by sustainability are the main driving force for the organizations' interest in it. This is a common conflict: sustainability demands long-term investment, yet it can be difficult to convince internal and external stakeholders to sacrifice fast economic growth in favour of the long-term economic benefits brought by sustainability. **RQ2 summary** **Established sustainability goals:** The interviewed organisations highlighted the need for improving their design processes and products to support sustainability and stressed the importance of a change in culture to positively contribute to society and help their customers make the right decisions. **Execution plan and experience to achieve the goals:** To achieve the established sustainability goals, the organisations focus on seeking collaboration with their business partners and other external entities, transforming their internal working processes, and developing tools to support interconnectivity, interdependence, and adaptability. The experiences reported include knowing how to collaborate with external stakeholders effectively, reducing carbon emissions, and applying automation when possible. **Encountered difficulties:** The difficulties reported are caused by internal and external factors. The major internal factors are those related to the trade-off between short-term financial profitability and long-term sustainability goals, an unclear understanding of sustainability concepts and goals, and the culture of the employees, which is often oriented towards KPIs in conflict with sustainability goals. With regard to external factors, economic barriers and inadequate policies are the two most frequently mentioned. _RQ3: "What are the sustainability-related competencies and skills that organisations need to achieve the established sustainability goals?"_ To answer this RQ, we asked the respondents about the skills and competencies available and missing within their organisations, as well as their approaches to acquiring those identified as lacking. In our context, skills are the specific learned abilities needed to perform a given job well. On the other hand, competencies are the person's knowledge and behaviours that lead them to be successful in a job. #### 4.4.1 Skills and competencies available in-house From the interview data, we observe that there is a wide variety of skills and competencies required by our interviewed organisations to achieve their established sustainability goals. Only a subset of these skills and competencies were claimed to be available in our respondents' organisations. In addition, the sets of available skills and competencies are not the same among the organisations. These skills and competencies have been categorised into sustainability-related skills (e.g., organisations state that they have knowledge of the sustainability-related regulations in their domain), soft skills (e.g., organisations see that they have good problem-solving skills in sustainability challenges), and technical skills (e.g., organisations see they have the ability to create technically sustainable solutions). Organisations had two different perspectives on **sustainability-related** skills. They either thought about sustainability in a holistic, higher-level manner or focused on some specific details of sustainability.
From a high-level perspective, many of our interviewed organisations believed that sustainability knowledge is not that important for IT staff. In particular, seven organisations thought that IT staff do **not need** to acquire sustainability knowledge, and four other organisations required **little background** from their employees. This does not mean that these organisations do not emphasise sustainability; rather, they distinguish IT people from sustainability experts. When the organisations focused on specific sustainability-related skills, they mentioned different sustainability dimensions, application domains, and tools and approaches to achieve them. The organisations seem to have an understanding of both **social** and **environmental** dimensions: _"[Our staff] have the environmental knowledge and have involved in GRI11 and CDP12. They can use these competencies to help our customers"_ (Org. 14). These dimensions also link closely to the **regulations** that the organisations have to obey, as mentioned by Org. 14: _"we've been working with our customers and see how EU regulations have evolved"_. Footnote 11: GRI: an international independent standards organisation that helps businesses, governments and other organisations understand and communicate their impacts on issues such as climate change, human rights, and corruption. Source: [https://www.globalreporting.org/](https://www.globalreporting.org/) Footnote 12: CDP: an international non-profit organisation that helps organisations and cities disclose their environmental impact. Source: [https://www.cdp.net/en](https://www.cdp.net/en) The organisations pointed out several **soft skills** they think are valuable while aiming for sustainable solutions. Some of these skills are rather traditional, like **problem-solving** and **collaboration**, while others, such as **common sense**, **reflection**, and **influencing**, relate to the aim of achieving effects on sustainability. The problem-solving and collaboration skills link closely to the sustainability-related skills presented above. The most referenced (seven organisations) category of soft skills was **influencing**. This shows how organisations recognise their ability to influence the customer and the outcomes. For example, Org. 2 mentioned, _"...their daily work has influenced customers' teams about what should be done"_. The last set of skills that the organisations seek to have in-house is of a **technical** nature. Most of the categories link with IT-related skills and were clearly stated by our interviewed organisations, for example, **software quality**, **user-centricity and accessibility**, **architecture**, **data management**, and **systems thinking** skills. While the first four skills are more familiar to the ICT community, the meaning of the systems thinking skill can be explained by the following statement of Org. 18: _"we are software engineers, so systems are quite familiar to us, so we know the different parts of it, how they interact, and how they join together."_ Surprisingly, what our interviewed organisations emphasised most (five references) were the business skills they have in-house, which they considered part of the technical skillset. This finding is highlighted by Org. 15: _"Most of our developers are also business advisors.
We are now adding more competent business advisors having an IT background."_ #### 4.4.2 Skills and competencies missing in-house Similar to the analysis for skills and competencies available in-house, in this section, we cluster the missing skills and competencies into three categories: sustainability-related skills and competencies, soft skills, and technical skills. With regard to **sustainability-related skills and competencies**, our interviewed organisations mentioned that they lacked many of them, such as the right talent who have knowledge of sustainability and can transfer that to new IT business opportunities, the IT staff who are both excellent at technical skills and have sustainability knowledge, and the talented programmers who can deliver energy-efficient code. In particular, six organisations recognised the importance of sustainability knowledge but faced difficulties in hiring the **right talent** with suitable sustainability-related skills. Five organisations mentioned that their staff lacked a **multi-disciplinary** skill set. For example, Org. 18 expected their IT staff to have some sustainability knowledge and stated that _"...would be good if we can have some of those professionals that could combine sustainability and good background on ICT."_ Three organisations wanted their IT systems to be **energy-efficient** and **environment-friendly** but do not have developers who have these relevant programming skills. Regarding missing **soft skills** that are crucial for organisations with respect to sustainability, communication, and systemic thinking were frequently mentioned. In particular, poor **communication** skill is an issue experienced by four organisations. This problem is visible for the people who work directly with customers (e.g., the marketing department): _"We have been facing the challenges that at least we think that we have the perfect idea that the customer would benefit from, but we are having difficulties selling that to the customers."_ (Org. 4). The company also experienced that communication was ineffective among its IT staff. **Systemic thinking** is an ability to have a holistic view of factors and interactions that could contribute to a better possible outcome. However, three organisations could not identify this competence in their IT workforce. Org. 17 stated that _"if we had a framework or skills on how to put it all together in the bigger picture, we could have optimised our solutions for the entire system, not just specific code segments, or applications."_ Software engineering is a technological field, but our respondents mentioned several missing **technical skills**. Specifically, six organisations reported the lack of **metrics** to measure the impact of their IT products on sustainability. For instance, Org. 2 stated: _"We don't have good means to measure the sustainability level of certain software entities or our customers."_ Data has been increasingly collected in recent years, but three organisations did not equip themselves well with **data management** skills. These organisations faced some difficulties in complying with GDPR (General Data Protection Regulation) in terms of data handling. #### 4.4.3 Solutions to acquire what is missing Based on the identified skills and competencies missing in-house, we further investigated how the organisations are acquiring them. 
Overall, the acquisition strategies can be classified into two types: internal (i.e., carried out entirely within the company) or external (i.e., when the skills and competencies are provided from a source external to the company). In relation to the **internal** approaches, the most common (mentioned by 20 organisations), unsurprisingly, is providing and/or organising **in-house training**: _"...what we do to make the change in the organisation, I think we do both retraining and changing the behaviour in the organisation itself"_ (Org. 10). **Hiring** is also a widespread internal strategy, being mentioned by 13 organisations. Organisations use recruiting as an instrument to bring in new employees with suitable sustainability-related skills and competencies. This process also involves internal training in order to adapt newcomers to the organisation's culture and working process. There are several hiring targets being adopted by the interviewed organisations, including looking for specific pre-defined competencies (e.g., _"We hire people that have some specific set of skills and also have a passion or interest in sustainability"_ - Org. 20) and for people with the right mindset for the organisation. In addition, to address the communication issues, new hires are expected to _"...establish and maintain good discussions with customers and stakeholders"_ (Org. 2). Establishing **mentorship** programmes that engage experienced employees in sharing their own experience, knowledge, and know-how with other staff members, and conducting **internal events** for knowledge sharing are two other solutions mentioned by two organisations. When it comes to **external** approaches, collaborating with universities, sending employees to participate in courses about sustainability, and hiring consultants are popular solutions. Firstly, we found that a significant number of organisations (11) expected to acquire the missing competencies either via new hires contributing the right background from their **university** education or by means of research collaborations with universities. For example, Org. 7 stated: _"To get this knowledge into our own company, we really need research or try to get information from universities."_ Secondly, four other organisations frequently paid for **external courses** to train their own workforce. This strategy assumes that either suitable courses are available or external training organisations offer customisable course packages. Lastly, four organisations preferred to hire **consultants** on sustainability when sustainability-related competencies or skills are needed: _"For externally-reused software, cloud services, or to address specific sustainability goals, [...] we would partner up with somebody or buy consultancy hours"_ (Org. 16). This is another external strategy where missing competencies and skills are acquired temporarily and typically for a specific project. **RQ3 summary** **Skills and competencies available in-house:** In order to reduce the expectation level for the staff, many organisations separate IT departments from sustainability experts, so little or no sustainability background is required for IT-skilled employees. However, specific soft skills (e.g., problem-solving, collaboration) and technical competencies (e.g., architecture, data management) are expected and available within the IT workforce to achieve the target sustainability goals.
**Skills and competencies missing in-house:** Despite separating IT departments from sustainability experts to reduce the need for sustainability knowledge, many organisations still want to fill that gap for their IT staff. Improving communication efficiency and defining sustainability measurement metrics have been often mentioned as missing soft and technical skills within the organisations' workforces. **Solutions to acquire the missing skills and competencies:** The organisations have taken both internal and external approaches to fill sustainability knowledge gaps for their IT staff. Popular solutions are organising in-house training courses, collaborating with universities, sending employees to externally organised courses, and hiring sustainability consultants. ## 5 Interpretation of results This section discusses the skills gaps apparent from our results and that future education programs should address. Then, Section 6 proposes concrete topics to be considered in those educational programmes and on-job training and collaborative activities in industry. **(On RQ1) Interests in sustainability are diverse and evolving. Currently, professionals are not able to understand and relate multiple aspects of sustainability and translate these relations into concrete business when needed. Educational programmes should enable this competency while being flexible and ready for changes.** Our data show that economics is the main driver for our interviewed organisations and their shareholders to invest in sustainability. This is not surprising since they must survive in the market. However, as shown in Figure 4, there is pressure from customers and stakeholders to push for social and environmental sustainability impacts. So, one must try to turn these aspects into economic profit; therefore, ideally, one must change the value proposition. The business vision of an organisation must be decided based on multiple factors, its own business', customers', shareholders', and stakeholders' interests. As such, to prepare a better workforce for IT organisations, future educational programmes should help professionals relate concerns belonging to different sustainability dimensions and be able to translate such relations into concrete business plans. Around 40% of the organisations mentioned that their interests in sustainability evolved due to new demands from customers and regulations. Such evolution might pose difficulties to organisations, so future educational programmes should be flexible and ready to adapt to changes. At the same time, 14% of the organisations are not yet concerned with sustainability. In this case, education plays a role in creating awareness within businesses about the relevance and opportunities of sustainability within IT, accelerating the IT industry's interest in sustainability. From the data, we identify that around 10% of IT organisations are interested in sustainability but do not know how to take it into account. Educational programmes and specific industry training can help. **(On RQ2) The IT workforce needs sustainability-related competencies and a deeper understanding of how sustainability impacts the development processes and the resulting products. 
Thus, educational programmes should provide them with the tools to develop sustainability-aware systems and inspire others across their network of collaborating organisations to embrace sustainability initiatives.** Sustainability is activating organisations to change internally (e.g., how development and decision-making processes are revisited to address sustainability goals) and externally (e.g., how organisations offer their customers new or improved products and services with respect to sustainability). At the same time, organisations encounter a number of difficulties (further analysed below) that designers of future educational programmes for IT students should take into account when making improvements. As summarised in Figure 5, lack of funds and sustainability understanding, as well as the need to change the internal culture to favour sustainability more, were the three major internal difficulties reported by the organisations. Financial challenges may force organisations to downgrade sustainability objectives to survive, even though sustainability is something most of them want. This shows a need to create more value for sustainability-aware software products, and this may require a change of regulations to support sustainability-aware systems. Half of our interviewed organisations complained that their IT staff or colleagues need a better understanding of sustainability. At the same time, we observe that 20% of the organisations consider current policies as insufficient for driving sustainable development, and 16% struggle to persuade customers to pay more for sustainable products. This shows that even though customers want to buy more sustainable products, they are not necessarily willing to pay more. This is where politics can play a vital role in creating incentives, such as reducing taxes on green products and putting in place laws and adequate regulations. While non-technological issues seem to worry organisations more, they also drew our attention to the need for improved sustainability-related metrics, design processes, and tools. **(On RQ3) More sustainability skills and competencies are needed due to their rising importance in our daily life. Future educational programmes must be built upon three pillars: technology, soft skills, and sustainability knowledge.** As educators, we believe that IT and sustainability are disruptive forces in today's society and will increasingly converge [26]. Education programmes that give IT professionals strong soft- and sustainability skills will ease this process, both because they will set a basic common ground for collaboration among experts in both fields and because they will encourage every technical system to be built with essential sustainability characteristics in place. Many organisations have recognised the need for sustainability skills and competencies for their IT staff. As shown in Figure 6, the majority of our interviewed organisations agreed that their IT workforce lacked sustainability knowledge and understanding (75%) and technical skills to implement sustainability (65%). Also, one-quarter of the interviewees pointed to the need for additional soft skills. Based on our observations, weaknesses regarding sustainability skills and competencies of the current IT workforce can lead to (1) difficulty in understanding sustainability in the business context, including sustainability strategies, approaches, and tools to support sustainable business models, (2) difficulty in translating business requirements into IT products and services with sustainability considerations, and (3) poor communication and soft skills, which is a classic problem with software engineers and programmers.
Fig. 4: Reasons for the interest in sustainability
Fig. 5: Internal difficulties encountered by organisations
## 6 Discussion An increasing number of Higher Education Institutions (HEIs) recognise the need to integrate sustainability into their existing courses. The integration process is challenging [27], but there is guidance available, such as the UK QAA/HEA guidance on incorporating Education for Sustainable Development into the curricula. However, it is hard to identify guidance on developing skills for IT courses. To help, we developed a classification of the topics that should be considered when designing a curriculum for future courses in sustainability. This classification is the result of the interview findings. The classification is not meant to be complete, but in our opinion, it points to the most important topics based on our years of research and teaching in the area. In the following, we describe the importance of each topic, how it relates to our interview data, and suggestions for how to consider it in future education. #### Core sustainability knowledge Sustainability must be seen as a prerequisite for any IT product or service. We observed in our interviews that there was no clear consensus about what was meant by this term. If IT professionals do not adequately understand the main concepts of sustainability, they are likely to: keep looking at it as a trade-off or a nice-to-have; struggle to collaborate with sustainability experts and not fully see the value of it; shy away from public debate on technology and sustainability, not motivating policy changes; and see it as a complementary skill to IT. Here, education could play a fundamental role in providing knowledge of the basic principles, concepts, and models, particularly definitions and scoping of sustainability, clarification of persistent misconceptions and myths around sustainability facts, current statistics, fundamental concepts (e.g. dimensions of sustainability), as well as different models to explain sustainability in a certain context (the doughnut model [28], the nine planetary boundaries [29], and orders of impact [14]). #### Systems thinking Systems thinking is a fundamental perspective change for being able to grasp sustainability issues. The focus on the overall big picture with its relationships between the stakeholders and systems, and their dynamics and balances -- instead of traditional divide and conquer approaches that are taught in engineering -- allows for the necessary shift in looking at a situation. Systems thinking is a powerful tool and mindset shift that enables a holistic view of sustainability across its different dimensions. Through the study, we observed that there were just a few organisations possessing this kind of competency within their workforces; for example, Org. 16 champions the value of recycling by producing food for fish from waste. In that case, the organisation goes beyond its main operation's boundary to seek ways to positively affect the natural environment. As educators, we are aware that systems thinking is not easy to teach and practice since it requires a good deal of domain knowledge.
Yet, the potential consequences if this matter is not taken into consideration can be much greater, leading to cascading (potentially negative) impacts in different dimensions of sustainability. For example, if a country aims at building more and larger hyper-scale data centers to address the demand for IT products and services, we must also consider the related societal implications for, among others, the (competing) demand for water for citizens, for renewable energy resources for cities, and for land for agriculture -- hence going beyond the specific ICT sector. #### Soft skills Communication is an essential soft skill for the IT workforce since managers, peers, and clients have to exchange ideas on a daily basis. It is especially important to know how to communicate in a positive way, e.g. as in non-violent communication [30] or compassionate communication that seeks to understand the motivation of the communication partner before attempting to get the point across. We believe soft skills should play a much more significant role than today in software engineering education programmes. This is confirmed by 12 of our 28 interviewees, who emphasised the importance of soft skills within their IT workforce. Although the importance of soft skills is increasingly recognized in engineering degrees, educators often struggle to fit the topic properly into their classes. This difficulty may arise for several reasons. Sometimes, engineering teachers do not have the right knowledge and tools to effectively teach soft skills to students. On other occasions, they may feel that the time they have to cover the technical curricula is already too short, resulting in a weak integration of soft skills into their teaching (e.g. without giving the students the required time to reflect on and practice their soft skills). We suggest that designers of engineering programmes collaborate with other courses where soft skills play a more critical role, such as management, leadership, and social psychology, to find more effective ways to integrate them into the curricula. A complementary activity could be to invite experts from industry to teach soft skills to IT students. This kind of collaboration will not only give students more practical perspectives but can also strengthen relationships between academia and industry and show students the importance of such skills.
Fig. 6: Skills and competencies missing in organisations' IT workforce
#### Technical sustainability Technical sustainability refers to the capacity of the software system to endure in changing environments [5]. Software systems are sustainable if they can be cost-efficiently maintained and evolved over their entire life-cycle [31]. Technical sustainability can be achieved through software architecture as it lays the foundation for the successful implementation, maintenance and evolution of sustainable software systems in a continually changing execution environment by providing a mechanism for reasoning about core software quality requirements that contribute to sustainability as a first-class, composite software quality. Addressing software sustainability at the architectural level allows the inhibiting or enabling of system quality attributes, reasoning about and managing change as the system evolves, predicting system qualities, as well as measuring architecturally significant requirements.
However, the ability to determine sustainability as a core software quality of a software system from an architectural perspective remains an open research challenge, and existing architectural principles need to be adapted and novel architectural paradigms devised. In addition, there is a pressing need for new tooling to fit today's emergent and dynamic environments, where software systems are explicitly designed for continuous evolvability and adaptability without creating prohibitive architectural technical debt. The skills needed to develop and estimate technical sustainability require training in software architecture, tools, and metrics to evaluate technical sustainability for the diversity of application domains. Whilst engineers may understand the importance of technical sustainability, metrics and dashboards of technical debt are less in evidence. For instance, Org. 18 mentions they use some metrics, but they lack automatic processes to evaluate the quality of systems. This is precisely one of the aspects in which technical sustainability can help to produce more sustainable systems. #### Building the business case for sustainability Sustainability and the SDGs provide enormous business opportunities to organisations [26]. For example, Org. 2 stated quite frankly: _"Why we are so interested? [...] it's money."_ In the short- to mid-term, companies can benefit from creating new systems to exploit these business opportunities, or from making their own systems more sustainable or, at least, more environmentally aware, appealing to increasingly sustainability-conscious customers and business partners. For example, the Environmental, Social, and Governance (ESG) framework is now used by stakeholders to evaluate how organizations manage risks and opportunities that relate to sustainability issues. Therefore, IT professionals need to understand better what drives the businesses they work for, the opportunities that focusing on sustainability opens to businesses in general, and the threats faced by businesses causing harm to the environment and society. Understanding this might help them to champion the idea of sustainability internally and justify it on economic, environmental, and societal grounds. Furthermore, in the mid- and long-term, companies should aim to create a positive, or at least non-negative, impact on all sustainability dimensions, independently of the purpose of their systems. This evolution of companies provides many opportunities for educators. Traditional practice-based courses such as capstones and hackathons could use the real-world challenges these companies have and, as an outcome, provide possible solutions for them. The challenges could be approached on different levels, from setting the company values and objectives to the design of the actual internal (software) development processes. In general, educators may collaborate with companies to increase practitioners' awareness of the possible sustainability impacts of their products and activities. #### Sustainability Impacts and Measurements Assessing how companies improve their businesses by adopting sustainable practices requires specific Key Performance Indicators (KPIs) and metrics that can measure SDG targets or GRI indicators. While some domains (e.g. energy, transportation) have standardized metrics to evaluate the sustainability of a solution, others do not.
Consequently, many organizations have difficulties estimating sustainability and need to define their own metrics and sustainability indicators. To do so, IT staff and managers need to be trained on current metrics/KPIs and on how to create their own, such that organizations can be clear about their sustainability achievements. There are approaches and tools for assessing, quantitatively and qualitatively, sustainability impacts. They include techniques like scenario mapping, future backcasting, the SDG impact assessment tool [32], sustainability assessment tools like the SAF Toolkit [33], [34], and sustainability awareness frameworks like SusAF [35]. Universities are increasingly being ranked according to sustainability (e.g. the Times Higher Education Impact Rankings), and as organisations adopt different sustainability goals, they need different kinds of sustainability metrics13. Footnote 13: [https://www.timeshighereduction.com/impactrankings](https://www.timeshighereduction.com/impactrankings) From our analysis of companies, we found evidence of this lack. For instance, Org. 2 highlights the lack of such metrics to understand the direct impacts of the product/system adopting sustainable solutions. In addition, Org. 27 observes that there is a lack of awareness of whether the solutions adopted are sustainable enough, partly because they do not have metrics. These and other technical challenges in calculating the carbon or energy footprint of sustainable solutions (e.g. Org. 4) are why organisations demand specific training on well-defined KPIs to estimate the impacts of different sustainability initiatives and justify the efforts and expenses. #### Values and Ethics Ultimately, values and ethics are fundamental concerns for making the world a fair and equitable place [36]. While ethics are culturally agreed-upon moral principles, "values make no moral judgment" [37, p. 113]. Currently, our society relies on software systems to communicate worldwide and operate utilities providing the basics for human life (e.g. complex medical machines, nuclear plants, and electrical grids). Such systems focus on functionality and quality attributes, such as usability, availability, and security, but they do not respect our core values, such as social justice, transparency, or diversity [38]. Sustainable systems, however, should be aligned with core human values. In this context, it is important that IT professionals are guided by a clear code of ethics, such as the ACM code of ethics14, to produce socially and environmentally responsible systems. We call for ethics to be a standard part of software engineering. Footnote 14: [https://ethics.acm.org/code-of-ethics/software-engineering-code/](https://ethics.acm.org/code-of-ethics/software-engineering-code/) Concerning values, user-centred design, user-experience design, and value-sensitive design tackle more than typical software qualities, but they are still far from addressing core human values [39]. Value-driven methods, known in HCI and information systems, can be used in business analysis and requirements engineering, but they offer little guidance for the later stages of development. Some emerging works take a human-values view (e.g., GenderMag [40] used to discover gender bias in software; or Alidoosti et al. [41] incorporating ethical values in software design), but more is still required to address human values systematically. The good news is that software development methods could be adapted to handle human values.
For example, the Karlskrona Manifesto on Sustainability Design15 or participatory design techniques can be taught to ensure that end-user values are taken into account. Over 57% of the interviewed organizations reveal that their customers and stakeholders want to protect the environment, and almost 30% are interested in focusing on sustainability due to moral concerns and social matters, resulting in the need for sustainability-value alignment of their business. Footnote 15: [https://www.sustainabilitydesign.org/karlskrona-manifesto/](https://www.sustainabilitydesign.org/karlskrona-manifesto/) #### Standards and Legal aspects The topic of legal aspects includes standards that may be required to be taken into account or that allow for certification, like the ISO 14000 family on the environment, the ISO 26000 standard on corporate social responsibility, or the LEED standard for buildings. It also includes issues of compliance and compliance assessment, the development of sustainability cases (think safety cases but for sustainability), and the impacts on sustainability of newer and upcoming laws, from privacy and information safety to software liability. One of the reasons for the slow adoption of sustainability initiatives in business has been the lack of mandatory requirements for action or reporting. Some interviewees mentioned specific regulations such as the Waste Electrical and Electronic Equipment directive (WEEE), the General Data Protection Regulation (GDPR), and employment law (Org. 9, 11, 20, 23). Educators need to be clear about the difference between what is required by law or regulation in different jurisdictions, such as EU Green Taxonomy reporting or employment protection, and what businesses may voluntarily choose to do, such as using the Global Reporting Initiative framework for sustainability reporting. #### Advocacy and lobbying This topic raises the question of how impartial and neutral researchers and educators should be versus how involved they should be in advocacy and lobbying. We are in favour of taking a stance while allowing discussion space for all perspectives on an issue. One should not wait for regulation to start acting on sustainability. Regulation is often a late follower of social trends and is highly influenced by them. The last decades have witnessed several social and business movements towards sustainability, such as Corporate Social Responsibility, Fairtrade, Slow Food, Greenpeace, Natural Capitalism, B-corporations, etc. Education can play an important role in promoting and shaping movements such as the above. Universities should train future IT professionals to combine their technical and sustainability expertise to become strong advocates for sustainability. Therefore, curricula should also include tools for effective and positive advocacy in organizations, media and legislation, as well as lobbying. The importance of offering expertise to policymakers has been highlighted by one of our interviewees (Org. 9), who said: _"a lot of policymakers don't have a clue on digitalization matters (sic.), and because of that, they don't know what they're doing while writing the law."_ ## 7 Threats to validity We discuss our reasoning concerning the validity and limitations of this study by following the scheme described in [42]. **Construct validity.** This aspect is related to whether during the interviews we asked the right questions according to our investigation objectives.
To mitigate this threat, we formulated the questions by leveraging the background knowledge of the involved researchers, who have at least ten years of experience in this type of research in software engineering in general and at least five years in software engineering related to sustainability. **Internal validity.** This is related to how we executed the study internally and analysed the results to avoid unknown factors that may influence the study's outcome. To mitigate this threat, we have taken the following actions. First, to improve the instrument (i.e., interview guide) used in the study, we spent time discussing the interview questions to ensure they covered our stated research questions and to avoid leading questions. Our interviewees were interviewed on a voluntary basis, and confidentiality was emphasised to encourage them to respond to the interview questions in the most truthful way. Secondly, during the data analysis, we adopted a procedure consisting of two steps. In the first step, the researchers who were the interviewers of one interview session paired with another researcher, and both performed data coding for the whole transcript obtained. After that, all the researchers involved in this study were divided into three groups, with each group being responsible for one research question stated in Section 3.1 and having at least three members. All group members responsible for one research question validated the coded data related to their section in all the transcripts. At this stage, some re-codings happened in collaboration with the original coders to extract more details from the data. **External validity.** This is concerned with the limitations of how much this study can generalise conclusions. There are a few limitations associated with this study. First, although we achieved quite a spread of geography (mainly in Europe), company sizes, and business domains, the interviewed organisations are not representative of the entire European IT economy. Had we interviewed other organisations, it is likely that the results would have differed to some extent. Although we asked the companies about their interest in sustainability, it is hard to find a common pattern and reasons for the variation of sustainability interest among organizations; we cannot establish a connection between particular types of companies and the kind of interest in sustainability they have. Second, our results can be subject not only to the inherent limitations of the codebook and the coding process but also to the biases of those interviewed and to the ontological uncertainty of the future. In particular, the frequency with which a code has been mentioned, while possibly representative of the perceived relative relevance within the industry nowadays, may not represent the true importance of each topic, which might only become apparent in the future. **Reliability.** This aspect concerns to what extent the data and the analysis depend on specific researchers. Since the researchers of this study were each responsible for conducting 1-3 interviews/focus group interviews, we prepared a presentation containing all the interview questions and showed it during the meetings with our interviewees to ensure all the discussions flowed consistently. In addition, we supply the codebook as supplementary material for validation purposes, which is helpful for replication. ## 8 Related Work A number of studies have investigated how software engineering professionals understand sustainability.
For example, Groher and Weinrich [43] report on a qualitative interview study with ten interviews in nine organisations in Austria. They aimed to understand how practitioners understood sustainability and its importance, the factors influencing sustainability in software development, sustainability-related deficiencies in their projects, and how they improve sustainability in such projects. The results show that while practitioners find the topic of sustainability important, they seem to have a narrow view of sustainability, focusing mainly on technical attributes such as maintainability and extensibility. In addition, practitioners were concerned with organisational and economic issues, but the environmental dimension was not addressed. Our study differs from this one in several aspects. First, we interviewed 28 organisations spread across 9 countries, instead of just one, which potentially provides a broader and less culturally biased view of sustainability in the ICT industry. Most importantly, our interviewees had different profiles. While Groher and Weinrich interviewed technical leaders of ICT projects, we talked to senior management and sustainability experts within the company. That can be observed in the different perceptions of sustainability. In Groher and Weinrich's work, most interviewees related sustainability to maintainability, extensibility, product development, and long-lived systems [43], while in our study, sustainability was more broadly understood, with the different dimensions mentioned. A remarkable difference is that the environment was very rarely mentioned in the previous study, while it was one of the main concerns shown in ours. However, both studies coincide in that the economic benefit is the greatest motivation for these companies. Both studies also looked into difficulties or deficiencies in sustainability. In Groher and Weinrich's, participants mainly pointed to a "lack of effective means for communication and knowledge sharing" and suggested strategies such as "knowledge distribution, avoiding specialists, and building teams that can work across products". Our study coincided with the lack of understanding of sustainability concepts and goals, yet it highlighted the trade-off between short-term financial profitability and long-term sustainability goals as a major difficulty. Our study also pointed to external difficulties, such as economic barriers and inadequate policies, which were not mentioned in the previous study. De Souza et al. [44] discuss software sustainability as perceived by nine software developers from a university in the UK, and suggest a set of recommendations on how to improve the sustainability of software. They used short semi-structured interviews, each lasting an average of about 10 minutes. The main result is the distinction between "Intrinsic Sustainability", referring to intrinsic characteristics software should have (e.g., be documented, be tested, or be modular), and "Extrinsic Sustainability", referring to the environment in which the software is developed or used (e.g., be open, be actively maintained, or be infrastructure-independent). To address this, the authors proposed a set of recommendations as good practices for software development and maintenance that directly emerge from the characteristics interviewees associated with 'intrinsic' or 'extrinsic' sustainability but remain exclusively in the realm of technical sustainability. Our study differs from De Souza et al.'s [44] significantly. 
They interviewed software developers within a single academic organization. That meant that their participants were mostly experienced with research projects, which are of a very different nature from those of our study. Unsurprisingly, participants' views in that study were much more related to technical sustainability than ours. Interestingly, their questions were open and neutral, not really biasing the answers to such limited views. Finally, the coverage of the research questions as well as the depth of the interviews was very different, with ours specifically asking about goals, barriers, and skills related to sustainability, and theirs about the relation of sustainability with software systems. This difference in depth can also be seen in the lengths of the interviews, which typically lasted around 10 minutes in the previous study and 1-2 hours in ours. Karita et al. [45] report on a study performed with ninety-nine companies from the software industry in Brazil to investigate their awareness of four sustainability dimensions (environmental, economic, social and technical). The results indicate that sustainability in the context of Software Engineering is a new subject for practitioners, that they find the topic relevant, and that sustainability should be treated as a quality attribute. In contrast, our study aims further and, more concretely, at companies with such awareness, to retrieve their actual interests, difficulties and achievements, as well as the skills they have in-house and those that they miss. In Chitchyan et al. [46], thirteen requirements engineers from eight different countries (Austria, Brazil, Germany, Spain, Switzerland, Turkey, the UK, and the USA) were interviewed. The study investigated the perception of engineering practitioners towards sustainability as well as obstacles and mitigation strategies regarding the application of sustainable design principles in their engineering work. The study shows that, on an individual level, perceptions of sustainability tend to be narrow, that organisations are not aware of the potential achievements and benefits that come with sustainable design, and that the standards and norms in Software Engineering are not conducive to sustainability. In contrast to our study, the work focuses on what hampers the adoption of sustainable design principles [5] in daily work practices, and not on the broader questions of industry sustainability-related interests and needs, their planned achievements, and the thus-required skills. Other published work is more loosely related to ours, with the following worth highlighting. Betz, Lammert, and Porras [47] investigated the role of perception of software engineers; more specifically, the self-attribution of software engineers and whether they address sustainability issues in their daily work. Their results suggest that software engineers perceive that they are insufficiently involved in the design process and that they do not sufficiently take on responsibility for the software and its sustainability impacts. The authors observed an evolution in terms of communication with interdisciplinary experts, yet their software engineers still see themselves as a "purely executive force" [47], who shy away from responsibility regarding sustainability. This perception differs greatly from the ones in our study, which do recognize the need for sustainability skills and competencies for IT staff. Additionally, a domain-specific study conducted by Kasurinen et al.
[48] investigated -- among other points such as the development processes used -- the extent to which game developers are concerned about sustainability issues and Green IT. The results show that their interviewed gaming companies were more unstructured than general software development ones, not really incorporating sustainability in their daily work practices. Yet, our studies coincide with regard to the lack of a broader understanding of sustainability by IT professionals. In the related field of Information Systems (IS), Cooper and Molla [49] investigated the notion of the "absorptive capacity" of IS to enable environmental sustainability and how organisations can enable IS changes to address environmental issues. They conducted a survey with 148 IS senior managers and provided different taxonomies to acquire knowledge about sustainable IS and about the extent to which sustainable IS technologies are assimilated by organisations. The role of "absorptive capacity" is also discussed in [50], where the authors provide a systematic literature review on competencies for environmental sustainability and managerial skills required for organisations to transform knowledge into environmental capabilities. The work suggests a connection between environmental competencies and capabilities, and it provides a taxonomy relating management and environmental competencies. ## 9 Conclusions and future work Our study has uncovered how sustainability is viewed and practised in 28 organisations from nine countries. The findings of this work include (i) how sustainability is of interest to these organisations, (ii) sustainability goals they want to achieve, (iii) difficulties they have encountered so far, (iv) skills and competencies needed to achieve the established goals, and (v) current practices to fill the perceived sustainability skill gap. Identifying those current practices, and especially the gaps, gives us an indication of possible improvements to current university education programs with respect to sustainability for IT and related fields. This study represents the first step to improving the computing curricula to better meet the demands of industry. To accelerate that process, we have proposed essential topics relevant to sustainability in IT that should be taken into account when developing the curricula, based on our years of experience in teaching courses in sustainability for IT students. We also highlighted several open research opportunities, directions, and challenges: First, while significant, the organisations we interviewed provide only a partial geographical perspective; our analysis should be performed globally. We thus plan to conduct a survey on a global scale to obtain a more comprehensive picture and to be able to conduct quantitative analyses, for example, regarding the variation of sustainability among organisations. As it is easier to include more companies in a survey than in an interview series, this will also allow the mapping of individual business domains and different sustainability perspectives. Finally, the ultimate goal should be the rapid development of software engineering and computer science curricula which include sustainability concepts at their very core. We presented first ideas in Section 6. These curricula should certainly be holistic and not aim exclusively at the skills needed by industry. However, given the growing sustainability interest of the industry and its immense transformational power, the curricula should definitely take its sustainability needs into consideration.
## Acknowledgments The authors would like to thank all the interviewees who took part in the study.
2309.06392
A Fast Algorithm for Moderating Critical Nodes via Edge Removal
Critical nodes in networks are extremely vulnerable to malicious attacks to trigger negative cascading events such as the spread of misinformation and diseases. Therefore, effective moderation of critical nodes is very vital for mitigating the potential damages caused by such malicious diffusions. The current moderation methods are computationally expensive. Furthermore, they disregard the fundamental metric of information centrality, which measures the dissemination power of nodes. We investigate the problem of removing $k$ edges from a network to minimize the information centrality of a target node $v$ while preserving the network's connectivity. We prove that this problem is computationally challenging: it is NP-complete and its objective function is not supermodular. However, we propose three approximation greedy algorithms using novel techniques such as random walk-based Schur complement approximation and fast sum estimation. One of our algorithms runs in nearly linear time in the number of edges. To complement our theoretical analysis, we conduct a comprehensive set of experiments on synthetic and real networks with over one million nodes. Across various settings, the experimental results illustrate the effectiveness and efficiency of our proposed algorithms.
Changan Liu, Xiaotian Zhou, Ahad N. Zehmakan, Zhongzhi Zhang
2023-09-09T13:54:34Z
http://arxiv.org/abs/2309.06392v1
# A Fast Algorithm for Moderating Critical Nodes via Edge Removal ###### Abstract Critical nodes in networks are extremely vulnerable to malicious attacks to trigger negative cascading events such as the spread of misinformation and diseases. Therefore, effective moderation of critical nodes is very vital for mitigating the potential damages caused by such malicious diffusions. The current moderation methods are computationally expensive. Furthermore, they disregard the fundamental metric of information centrality, which measures the dissemination power of nodes. We investigate the problem of removing \(k\) edges from a network to minimize the information centrality of a target node \(v\) while preserving the network's connectivity. We prove that this problem is computationally challenging: it is NP-complete and its objective function is not supermodular. However, we propose three approximation greedy algorithms using novel techniques such as random walk-based Schur complement approximation and fast sum estimation. One of our algorithms runs in nearly linear time in the number of edges. To complement our theoretical analysis, we conduct a comprehensive set of experiments on synthetic and real networks with over one million nodes. Across various settings, the experimental results illustrate the effectiveness and efficiency of our proposed algorithms. Social networks, critical nodes, information diffusion, edge removal, combinatorial optimization. ## 1 Introduction A broad range of dynamic processes on graphs has been analyzed to attain a more profound comprehension of diverse real-world phenomena, such as the spread of misinformation on online social media platforms, the proliferation of computer viruses over the internet, and the dissemination of diseases among individuals [1, 2, 3, 4]. As a result, there has been a burgeoning interest in investigating the influence of the underlying graph structure on various characteristics of these dynamic processes [5, 6, 7, 8]. Specifically, a considerable amount of attention has been focused on comprehending to what extent certain objectives can be achieved through the manipulation of the network structure. Examples of such manipulation strategies comprise eliminating nodes (such as blocking an account on an online social platform or administering a vaccine to an individual), adding edges (such as link recommendation in online social networks or constructing a physical link between two routers), or removing edges (such as restricting two individuals from meeting through quarantine measures or not exposing the posts from one user to another in an online platform) [7, 9, 10, 11]. Furthermore, the intended objective can vary widely, ranging from minimizing the number of nodes infected by a virus to maximizing the fairness of workload among routers across different internet service providers, or reducing the proportion of users exposed to misinformation [2, 4, 11, 12, 13]. Real-world networks exhibit a heterogeneous nature with critical nodes being far more essential in network structure and function [7, 14], such as information diffusion and system stability [7, 15, 16]. This confers an unbridled degree of power to such nodes, which could potentially result in significant financial, sociological, and political damages. For instance, subsequent to the hack of The Associated Press Twitter account, a false report was disseminated claiming that "Breaking: Two Explosions in the White House and Barack Obama is injured".
This rumor resulted in losses of 10 billion USD within a few hours and caused the United States stock market to crash within minutes [17]. As another example, infectious diseases cause 10 million deaths each year globally, accounting for 23% of the total disease related deaths, where critical nodes play a massive role in the extent of the spread [18]. Consequently, there has been an increasing interest in shedding light on managing and alleviating the impact of a set of target nodes, especially influential nodes [19, 20, 21, 22]. Another application space where the problem of mitigating some target nodes have gained substantial attention is the realm of privacy protection in networks. In this setup, the goal is to protect the privacy of users or conceal critical entities in the network by implementing structural anonymization [23, 24]. Please refer to Sections 5.1 and 5.2 for more details on these topics when appropriate. Two commonly employed graph operations to attain the aforementioned objectives are node and edge removal. Edge removal has garnered greater attention recently [25, 26, 22], since it is less intrusive (i.e., disrupts the original functionality and flow of the network less aggressively) and provides controlling power in a more granular level (note that usually removing a node is equivalent to removing all its adjacent edges). In this work, we focus on edge removal as well. Previous studies [23, 24, 27, 28] have investigated the problem of removing a fixed number of edges to achieve an objective with respect to a subset of target nodes. The objective could vary from minimizing the spreading power of the target nodes under a specific information diffusion model such as the Independent Cascade (with the goal of halting the propagation of a piece of misinformation) [12, 21, 22, 26, 29] to minimizing the centrality of the target nodes measured by degree [23] or closeness [24] (with the aim of concealment). Additionally, numerous algorithms, predominantly heuristics, have been put forth [12, 29]. The existing works have two major limitations. Firstly, despite a plethora of centrality indices proposed in the literature [14], these prior works do not consider information centrality [30, 31], in spite of its obvious advantages. For example, information centrality captures the way node-to-node transmission takes place (especially, on social networks) [32] and possesses higher discrimination power than others [9, 33]. Again for instance, information centrality has been applied to various fields, such as estimation from relative measurements [34] and leader selection for noisy consensus [35]. Secondly, they are computationally relatively expensive which renders them impractical in various real-world scenarios. Suppose a misinformation detection algorithm has spotted a potential rumor (as the one from above about White House) and the goal is to moderate the spreading power of the initiating node by temporarily removing some edges (i.e., not exposing the content from some users to some others), then having a very fast mitigation algorithm in place is very crucial. (Note that precautions are not usually a viable solution here, since these graphs undergo constant reformulation). Another example of this would be an internet service provider seeking to control the network traffic in response to a malfunctioning or malevolent router. 
In light of the above limitations, our study aims to fill this gap by devising an objective function capturing information centrality and provide an algorithm that can handle networks of million nodes. In our formulation of the problem of moderating critical nodes, the objective is to remove \(k\) edges to minimize the information centrality of a target node, while preserving network connectivity. (The constraint of network connectivity ensures the preservation of network's functionality [7, 16], a consideration also addressed in other works investigating problems related to edge removal [24, 28, 36].) We prove that this problem is NP-complete. Furthermore, while its objective function is monotonically decreasing, it is not supermodular, posing difficulties in solving it using traditional greedy algorithms. However, we still adopt the standard greedy algorithm in our setup since it has been frequently observed to deliver satisfactory results for many non-supermodular problems [37, 38]. Despite its simplicity and efficacy, it requires execution of matrix inversion operations, incurring a prohibitive computational cost and rendering it unfeasible for large networks. As the first step towards a faster algorithm, we use the random walk-based approximate Schur complement method [39] to present a faster algorithm, called ApproxiSC. To speed up the computation even further, we also leverage the sum estimation method [40, 41], which allows us to provide the algorithm FastICM which runs in nearly linear time in the number of edges, as our main contribution. The rest of our paper proceeds as follows. We first introduce some necessary preliminaries related to our work in Section 2. Then, we provide an exact formulation of our problem in Section 3 and give an overview of our main theoretical and experimental findings and the techniques used in Section 4. We discuss related works in Section 5. Then, we study the computational complexity of our problem and prove related properties of the corresponding objective function in Section 6. In Section 7, we present the deterministic greedy algorithm, followed by the fast greedy algorithms in Section 8. We report our performance experiments evaluating the efficiency and effectiveness of our algorithms in Section 9 and conclude the paper in Section 10. ## 2 Preliminaries In this section, we introduce some useful notations and tools to facilitate the description of our problem and algorithms. ### _Notations_ We use normal lowercase letters like \(a,b,c\) to denote scalars in \(\mathbb{R}\), normal uppercase letters like \(A,B,C\) to denote sets, bold lowercase letters like \(\mathbf{a},\mathbf{b},\mathbf{c}\) to denote vectors, and bold uppercase letters like \(\mathbf{A},\mathbf{B}\), \(\mathbf{C}\) to denote matrices. Let \(\mathbf{J}\) be the matrix of appropriate dimensions with all entries being ones. We use \(\mathbf{A}_{[S,F]}\) to denote the submatrix of \(\mathbf{A}\) with row indices in \(S\) and column indices in \(F\). We write \(\mathbf{A}_{ij}\) to denote the entry at row \(i\) and column \(j\) of \(\mathbf{A}\) and \(\mathbf{a}_{i}\) to denote the \(i\)-th element of vector \(\mathbf{a}\). We use \(\mathbf{A}_{-T}\) to denote the submatrix of \(\mathbf{A}\) obtained from \(\mathbf{A}\) by deleting rows and columns corresponding to elements in set \(T\), and use \(\mathbf{a}_{-T}\) to denote the vector obtained from \(\mathbf{a}\) by deleting elements in set \(T\). 
An \(n\times n\) matrix \(\mathbf{A}\) is positive semi-definite if \(\mathbf{x}^{\top}\mathbf{A}\mathbf{x}\geq 0\) holds for all \(\mathbf{x}\in\mathbb{R}^{n}\). For two positive semi-definite matrices \(\mathbf{A}\) and \(\mathbf{B}\), we use \(\mathbf{B}\preceq\mathbf{A}\) to denote that matrix \(\mathbf{A}-\mathbf{B}\) is a positive semi-definite matrix. Below, we introduce the notion of \(\epsilon\)-approximation. **Definition 2.1**.: _Given two positive semi-definite matrices \(\mathbf{A}\) and \(\mathbf{B}\) and a real number \(\epsilon\in(0,1)\), we say that \(\mathbf{B}\) is an \(\epsilon\)-approximation of \(\mathbf{A}\) (abbr. \(\mathbf{B}\approx_{\epsilon}\mathbf{A}\)) if_ \[(1-\epsilon)\mathbf{A}\preceq\mathbf{B}\preceq(1+\epsilon)\mathbf{A}.\] If \(\mathbf{A}\) and \(\mathbf{B}\) are degenerated to positive scalars \(a,b>0\), \(b\) is called an \(\epsilon\)-approximation of \(a\) (abbr. \(b\approx_{\epsilon}a\)) if \((1-\epsilon)\,a\leq b\leq(1+\epsilon)\,a\). ### _Graphs and Related Matrices_ Consider a connected undirected graph \(\mathcal{G}=(V,E)\) where \(V\) is the set of nodes and \(E\subseteq V\times V\) is the set of edges. Let \(n=|V|\) and \(m=|E|\) denote the number of nodes and the number of edges, respectively. The Laplacian matrix of \(\mathcal{G}\) is the symmetric matrix \(\mathbf{L}=\mathbf{D}-\mathbf{A}\), where \(\mathbf{A}\) is the adjacency matrix whose entry \(\mathbf{A}_{ij}=1\) if node \(i\) and node \(j\) are adjacent, and \(\mathbf{A}_{ij}=0\) otherwise, and \(\mathbf{D}\) is the degree diagonal matrix \(\mathbf{D}=\text{diag}(\mathbf{d}_{1},\cdots,\mathbf{d}_{n})\) where \(\mathbf{d}_{i}\) is the degree of node \(i\). We write \(\mathbf{e}_{i}\) to denote the \(i\)-th standard basis vector. We fix an arbitrary orientation for all edges in \(\mathcal{G}\), and for each edge \(e=(u,v)\in E\), we define \(\mathbf{b}_{e}=\mathbf{b}_{uv}=\mathbf{e}_{u}-\mathbf{e}_{v}\), where \(u\) and \(v\) are the head and tail of \(e\), respectively. Then, \(\mathbf{L}\) can be rewritten as \(\mathbf{L}=\sum_{e\in E}\mathbf{b}_{e}\mathbf{b}_{e}^{\top}\). Matrix \(\mathbf{L}\) is singular and positive semidefinite with its Moore-Penrose pseudoinverse being \(\mathbf{L}^{\dagger}=\left(\mathbf{L}+\frac{1}{n}\mathbf{J}\right)^{-1}-\frac{1}{n}\mathbf{J}\). The transition matrix is \(\mathbf{P}=\mathbf{D}^{-1}\mathbf{A}\), which is a row-stochastic matrix. For any non-empty node sets \(F\subset V\) and \(T=V\backslash F\), we can partition the Laplacian matrix \(\mathbf{L}\) into 4 blocks: \[\mathbf{L}:=\left[\begin{array}{cc}\mathbf{L}_{[F,F]}&\mathbf{L}_{[F,T]}\\ \mathbf{L}_{[T,F]}&\mathbf{L}_{[T,T]}\end{array}\right]\] Then, the _Schur complement_ of graph \(\mathcal{G}\) onto node set \(T\), denoted by \(\mathcal{S}(T)\), is the matrix in closed form as \[\mathcal{S}(T)=\mathbf{L}_{[T,T]}-\mathbf{L}_{[T,F]}\mathbf{L}_{[F,F]}^{-1}\mathbf{L}_{[F,T]}.\] \(\mathcal{S}(T)\) is a Laplacian matrix of a graph with node set \(T\), and we use \(\mathcal{G}(\mathcal{S}(T))\) to denote the corresponding graph of \(\mathcal{S}(T)\). ### _Information Centrality_ Given a graph \(\mathcal{G}=(V,E)\) and two vertices \(x,y\in V\), the effective resistance, which is a form of Euclidean distance [42], \(\mathcal{R}_{xy}^{\mathcal{G}}\) between nodes \(x\) and \(y\) is defined as \(\mathcal{R}_{xy}^{\mathcal{G}}=\mathbf{b}_{xy}^{\top}\mathbf{L}^{\dagger}\mathbf{b}_{xy}\).
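To make these definitions concrete, the following is a minimal NumPy sketch (not the paper's implementation, which is in Julia; the 5-node toy graph and all variable names are chosen purely for illustration). It builds the Laplacian \(\mathbf{L}\), its pseudoinverse via \(\left(\mathbf{L}+\frac{1}{n}\mathbf{J}\right)^{-1}-\frac{1}{n}\mathbf{J}\), one effective resistance \(\mathcal{R}_{xy}^{\mathcal{G}}\), and the Schur complement \(\mathcal{S}(T)\).

```
import numpy as np

# Hypothetical 5-node toy graph: a cycle 0-1-2-3-4 plus the chord (1, 3).
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]

# Laplacian L = sum_e b_e b_e^T
L = np.zeros((n, n))
for i, j in edges:
    b = np.zeros(n)
    b[i], b[j] = 1.0, -1.0
    L += np.outer(b, b)

# Moore-Penrose pseudoinverse via (L + J/n)^{-1} - J/n
J_over_n = np.ones((n, n)) / n
L_pinv = np.linalg.inv(L + J_over_n) - J_over_n

# Effective resistance R_xy = b_xy^T L^+ b_xy
x, y = 0, 3
b_xy = np.zeros(n)
b_xy[x], b_xy[y] = 1.0, -1.0
R_xy = b_xy @ L_pinv @ b_xy

# Schur complement of L onto the terminal set T = {0, 3}
T = [0, 3]
F = [i for i in range(n) if i not in T]
S_T = L[np.ix_(T, T)] - L[np.ix_(T, F)] @ np.linalg.inv(L[np.ix_(F, F)]) @ L[np.ix_(F, T)]

print("R_xy =", R_xy)
print("S(T) =\n", S_T)  # a 2x2 Laplacian of a weighted graph on T
```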
We refer to the maximum value of pairwise effective resistance in a graph as the effective resistance diameter, and denote it by \(\phi\). Based on the physical definition of the effective resistance [42], \(\phi\) is less than the diameter of the graph, which is often small in real-life networks [43]. For any node set \(T\subset V\), the _Schur complement_ onto \(T\) can be viewed as a vertex sparsifier that preserves pairwise effective resistance [39, 44], which means \[\mathcal{R}_{xy}^{\mathcal{G}}=\mathcal{R}_{xy}^{\mathcal{G}(S(T))}, \tag{1}\] holds for any pair of nodes \(x,y\in T\). For a network \(\mathcal{G}=(V,E)\) and a node \(v\in V\), we use \(\mathcal{R}_{v}^{\mathcal{G}}\) to denote the sum of effective resistances between \(v\) and all nodes in \(V\backslash\{v\}\) (we will refer to \(\mathcal{R}_{v}^{\mathcal{G}}\) as the resistance distance of node \(v\) throughout the paper), i.e., \(\mathcal{R}_{v}^{\mathcal{G}}=\sum_{u\in V\backslash\{v\}}\mathcal{R}_{uv}^{ \mathcal{G}},\) which can be restated [45] in a matrix form as \[\mathcal{R}_{v}^{\mathcal{G}}=n\mathbf{L}_{vv}^{\dagger}+\mathrm{Tr}\left(\mathbf{L}^{ \dagger}\right). \tag{2}\] The information centrality \(\mathcal{I}_{v}^{\mathcal{G}}\) correlates to the resistance distance \(\mathcal{R}_{v}^{\mathcal{G}}\)[30, 32], and can be expressed by: \[\mathcal{I}_{v}^{\mathcal{G}}=\frac{n}{\mathcal{R}_{v}^{\mathcal{G}}}=\frac{n} {n\mathbf{L}_{vv}^{\dagger}+\mathrm{Tr}\left(\mathbf{L}^{\dagger}\right)} \tag{3}\] which is defined on connected networks. We will remove the superscripts of \(\mathcal{R}_{xy}^{\mathcal{G}}\), \(\mathcal{R}_{v}^{\mathcal{G}}\) and \(\mathcal{I}_{v}^{\mathcal{G}}\) when \(\mathcal{G}\) is clear from the context. ### _Supermodular Optimization_ Let \(X\) be a finite set, and \(2^{X}\) be the set of all subsets of \(X\). Let \(f:2^{X}\rightarrow\mathbb{R}\) be a set function on \(X\). Then, \(f\) is called monotone decreasing if for any subsets \(S\subset H\subset X\), \(f(S)>f(H)\) holds. Furthermore, we say function \(f\) is supermodular if for any subsets \(S\subset H\subset X\) and any element \(a\in X\backslash H\), it satisfies \(f(S\cup\{a\})-f(S)\leq f(H\cup\{a\})-f(H)\). The standard greedy algorithm has proven to be an effective solution for the cardinality-constrained set function problem in supermodular optimization, with a guaranteed \((1-1/e)\) approximation ratio. However, many crucial set functions do not satisfy the supermodularity requirement. Nevertheless, the greedy algorithm still frequently produces desirable results for a broad range of non-supermodular applications [37, 38]. ## 3 Problem Formulation In this section, we formulate the problem of minimizing the information centrality of a target node by removing edges. Consider a connected unweighted undirected network \(\mathcal{G}=(V,E)\) and a target node \(v\). For any edge set \(P\subset E\), define \(\mathcal{G}\setminus P=(V,E\backslash P)\) as the remaining graph resulted by removing edges in edge set \(P\). Then we define the set function of the information centrality, \(\mathcal{I}_{v}(P)=\mathcal{I}_{v}(\mathcal{G}\setminus P)\). Similarly, we can define the set function of the sum of the effective resistance \(\mathcal{R}_{v}(P)\). These two definitions are valid whenever the removal of edges in set \(P\) maintains the connectivity of the network. 
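As a quick numerical illustration of these quantities (reusing the hypothetical toy graph from the sketch above; not part of the paper), the snippet below evaluates Equation (2), checks it against the direct sum of pairwise effective resistances, computes the information centrality of Equation (3), and then evaluates the set function \(\mathcal{I}_{v}(P)\) for a single removed, non-bridge edge, which comes out strictly smaller than \(\mathcal{I}_{v}(\emptyset)\).

```
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def eff_res(Lp, x, y):
    b = np.zeros(Lp.shape[0]); b[x], b[y] = 1.0, -1.0
    return b @ Lp @ b

def info_centrality(n, edges, v):
    Lp = np.linalg.pinv(laplacian(n, edges))
    # Eq. (2): resistance distance of v; equals the sum of pairwise effective resistances
    R_v = n * Lp[v, v] + np.trace(Lp)
    assert np.isclose(R_v, sum(eff_res(Lp, v, u) for u in range(n) if u != v))
    return n / R_v  # Eq. (3)

n, v = 5, 0
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]      # same toy graph as above
print(info_centrality(n, edges, v))                           # I_v(emptyset)
P = [(1, 3)]                                                  # removing P keeps the graph connected
print(info_centrality(n, [e for e in edges if e not in P], v))  # I_v(P), strictly smaller
```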
Rayleigh's monotonicity law [42] asserts that the effective resistance between any pair of nodes will increase when an edge is removed; hence, the resistance distance \(\mathcal{R}_{v}(P)\) is monotonically increasing, and the information centrality \(\mathcal{I}_{v}(P)\) is monotonically decreasing. Then the following problem arises naturally: How to optimally remove a subset \(P\subset E\) from the network subject to a cardinality constraint \(k\) specifying the maximum number of edges that can be removed so that \(\mathcal{I}_{v}\) is minimized while preserving the connectivity of the graph. Mathematically, the information centrality minimization problem can be stated as follows. **Problem 1**.: _(Information Centrality Minimization, InforCen-Min) Given a connected undirected network \(\mathcal{G}=(V,E)\), a predefined target node \(v\), and an integer \(k\), we aim to find the edge set \(P\subset E\) with \(|P|=k\), so that the information centrality \(\mathcal{I}_{v}(P)\) is minimized while simultaneously retaining the connectivity of the network. This set optimization problem can be formulated as:_ \[P^{*}=\operatorname*{arg\,min}_{P\subset E,|P|=k,\mathcal{G}\setminus P\text{ is connected}}\mathcal{I}_{v}(P). \tag{4}\] Simply removing edges incident to the target node seems effective to reduce its information centrality. However, removing nonadjacent edges can provide greater benefits in some cases. For example, consider the Dolphin [46] network with 62 nodes, 159 edges, and targeting the green node in Fig. 1. We find that none of the top-10 edges (colored in red) leading to the largest reduction in the target node's information centrality are adjacent to it. This suggests that the scope of edge removal should not be simplified. ## 4 Our Contribution We investigate the InforCenMin problem, both theoretically and experimentally, where the goal is to minimize the information centrality of a target node given the possibility to remove \(k\) edges while keeping the underlying graph connected. This is a very natural and intuitive formulation of the problem of moderating critical nodes in a network, which in particular takes the very important parameter of information centrality into account. We first prove that the problem is NP-complete, building on a reduction from the Hamiltonian cycle problem [47]. Furthermore, we show that while the objective function of our problem is monotonically decreasing, it does not enjoy supermodularity property, by providing an explicit counterexample. As a result, the traditional greedy approaches would not facilitate us with any theoretical guarantees. However, since such greedy approaches have proven to be a useful base for devising effective and efficient algorithms in the past [37, 38] (even when super-modularity does not hold) we rely on them as a starting point too. We first propose a simple algorithm where edges are deleted following the classic greedy approach. This algorithm, called ExactSM, runs in \(O(n^{3})\), which makes it impractical for very large networks. As the first step towards a faster algorithm, we use the random walk-based approximate Schur complement method [39, 48] to present a faster algorithm. In this algorithm, called ApproxSC, after each edge removal, the new value of the resistance distance and the connectivity status can be determined very quickly by carefully modifying the set of random walks. 
To further speed up the computation, as the next step we also leverage the sum estimation method [40, 41], which allows us to provide the algorithm FastICM that runs in nearly linear time in the number of edges. The sum estimation method permits us to approximate the resistance distance of the target node by sampling limited pairs of effective resistances. Our theoretical analyses confirm that the combination of the above techniques is well-suited to our problem. Specifically, the absolute error between the approximate resistance distance upon the removal of any edge and the corresponding exact value is at most \(\alpha n\), for a small error parameter \(\alpha\). Besides our theoretical analyses, we conduct a comprehensive set of experiments on real-world networks from Network Repository [46] and SNAP [49] and on synthetic graph models, namely Barabasi-Albert (BA) [50] and Watts-Strogatz (WS) [43]. We compare our algorithms against several other algorithms and observe that our algorithms significantly outperform others while producing results very close to optimal solution. Furthermore, our linear time FastICM algorithm enjoys an extremely fast run time in practice as well. In particular, it completes the task for networks with more than one million nodes in a few hours on a standard 32G-Linux box. Therefore, our algorithms not only allow a rigorous theoretical analysis but also outperform other algorithms in practice, in both aspects of effectiveness and efficiency. ## 5 Related Work In this section, we review the literature related to ours, including minimizing spread of misinformation in social networks, privacy protection of networks, edge removal strategies, and edge centrality measures. Specifically, prior work provided in Sections 5.1 and 5.2 would let us place our contribution in a bigger picture and draw the connection to some adjacent topics, while the results in Sections 5.3 and 5.4 are more closely related. ### _Minimizing Spread of Misinformation_ Consider the setup where a rumor starts spreading in a social network from a known set of nodes and following a predefined spreading dynamics such as the Independent Cascade model [51]. Then, the goal is to contain the spread by blocking \(k\) edges [21, 22]. A greedy algorithm is developed in [22] where in each iteration an edge with maximum containment ability is selected, and some heuristics are proposed in [21]. In [26], considering the cost for blocking each edge, some budget constraints are defined for critical edge identification. Some heuristic algorithms are then proposed to solve the problem. Applying the maximum influence arborescence method [29], an approximation method is proposed in [12]. These algorithms have several shortcomings. Firstly, they are usually tailored for a particular rumor spreading model such as the Independent Cascade model rather using a more generic notion of information centrality. Secondly, they disregard the important constraint of connectivity, and they might produce disconnected networks. Furthermore, they often fail to cover large networks due to their time complexity. ### _Network Privacy Protection_ One area which is closely related to our work is privacy protection in networks. Most works in this area focus on protecting the privacy of users by structural anonymization. The goal is to modify graph structure to anonymize the underlying network, using various methodologies such as \(k\)-Anonymity, and differential privacy-based approaches [52, 53]. 
One important objective here is to anonymize the key nodes in a network by reducing their centrality. Previous studies have investigated the problem of removing edges to decrease the centrality of some target node with regard to degree centrality [23], closeness centrality [24], and the likelihood that the target node appears in an absorbing random walk [28]. However, the notion of information centrality has not been analyzed in this framework.

Fig. 1: The Dolphin network with the green target node.

### _Edge Removal Strategies_ Admittedly, as a practical graph-editing operation, edge removal has been extensively used for different application purposes, such as controlling disease spreading [10, 11], minimizing the number of spanning trees [54], and optimizing eigenvalues of related matrices [55, 56]. In social networks, removing edges can correspond to unfriending, not exposing the posts/comments, or maintaining social distance. In computer networks, removing edges is similar to cutting a fiber or bringing down a physical/virtual link temporarily. Many studies on edge removal require the final graph to remain connected. For instance, in [36], the authors have studied the problem of decreasing the greatest eigenvalue of the adjacency matrix by link removal while preserving network connectivity. The authors of [57] have investigated the problem of expanding the network diameter by eliminating edges such that the resulting graph remains connected. This is because connectivity is usually essential for the network to preserve its main functionality. ### _Edge Centrality_ The drop in the information centrality of a node caused by deleting an edge can be used as a measure of its importance. There have been many metrics proposed in the literature to assess the importance of a single edge or a group of edges. Individual edge centrality measures comprise, among others, edge betweenness [58], spanning edge centrality [59], and biharmonic distance related edge centrality [60]. Additionally, the importance of an edge can be determined based on the centrality of its endpoints, such as the sum or product of the degree, closeness, and betweenness centrality of the endpoints [61]. Group edge centrality measures are typically designed to quantify the effect of deleting these edges on specific objective functions, such as the inverse geodesic length [62], the total pairwise connectivity [63], and the forest index [64]. These existing edge centrality measurements are tailored for distinct use cases. Our proposed metric is defined based on the reduction in the information centrality of a target node. ## 6 Complexity Challenges In this section, we study the computational complexity of the InforCenMin problem. We first consider the decision version of the problem and prove that it is NP-complete in Theorem 6.1. **Problem 2**.: _(Information Centrality Minimization, Decision Version, InforCenMinD) Given a connected undirected graph \(\mathcal{G}=(V,E)\) with \(n\) nodes, \(m\) edges, and node \(v\) being the target node, an integer \(k\in\mathbb{N}^{+}\), a real number \(x\in\mathbb{R}^{+}\), decide whether or not there is a set \(P\) of \(k\) edges to be removed from \(\mathcal{G}\) such that \(\mathcal{I}_{v}\) is at most \(x\) in the connected subgraph \(\mathcal{G}\setminus P\)?_ **Theorem 6.1**.: _The InforCenMinD problem is NP-complete._ **Proof.** We demonstrate a polynomial time reduction from the Hamiltonian cycle problem, which is NP-complete [47].
We presume that edge deletion preserves the network's connectivity. Given that we can guess the \(k\) edges to be removed and compute \(\mathcal{I}_{v}\) in polynomial time, it is apparent that the problem is in NP. The smallest \(\mathcal{I}_{v}\) that could possibly be assigned to a node in a connected undirected network with \(n\) nodes is \(\mathcal{I}_{v}^{\min}=n/\sum_{i=1}^{n-1}i=\frac{2}{n-1}\) (e.g., node \(v\) in Fig. 2). Graph \(\mathcal{G}\) contains a Hamiltonian cycle if and only if \(\mathcal{G}\) has a connected subgraph \(\mathcal{G}^{\prime}\) with \(n\) nodes, \(n-1\) edges and the information centrality of its end node being \(\mathcal{I}_{v}^{\min}\). So by choosing \(k=m-n+1\), and \(x=\mathcal{I}_{v}^{\min}\), we have a reduction from the Hamiltonian cycle problem to the InforCenMinD problem, proving its NP-completeness. \(\square\) A very common technique to tackle NP-hard problems, such as InforCenMin, is to prove that the objective function enjoys both monotonicity and super-modularity, which consequently would provide us with a Hill Climbing algorithm with a constant approximation guarantee [37, 38]. While, as stated in Lemma 6.2, our objective function is monotone, it does not possess the super-modularity property, as proven in Lemma 6.3. **Lemma 6.2**.: _(Monotonicity) For two subsets \(S\) and \(H\) of edges satisfying \(S\subset H\subset E\), and \(\mathcal{G}\setminus H\) is a connected graph, we have_ \[\mathcal{I}_{v}(H)<\mathcal{I}_{v}(S).\] **Lemma 6.3**.: _(Non-supermodularity) \(\mathcal{I}_{v}(\cdot)\) is not supermodular._ **Proof.** To exemplify the non-supermodularity of the objective function (4), consider the network in Fig. 3 (a), a \(5\)-node graph with node \(1\) being the target node and \(e_{1}\) and \(e_{2}\) being edges to delete. We define two edge sets, \(S=\emptyset\) and \(H=\{e_{1}\}\). Then, we have \(\mathcal{I}_{v}(S)=1.8\), \(\mathcal{I}_{v}(S\cup\{e_{2}\})=1.4\), \(\mathcal{I}_{v}(H)=1.3\), and \(\mathcal{I}_{v}(H\cup\{e_{2}\})=0.7\). Thus, we have \[\mathcal{I}_{v}(S)-\mathcal{I}_{v}(S\cup\{e_{2}\})=0.4<0.6=\mathcal{I}_{v}(H)-\mathcal{I}_{v}(H\cup\{e_{2}\}).\] This result clearly contradicts the definition of supermodularity. Consequently, the set function of the InforCenMin problem is not supermodular. \(\square\)

Fig. 2: A \(5\)-node path graph targeting node \(v\).

Fig. 3: Two \(5\)-node toy networks.

## 7 Deterministic Greedy Algorithm The InforCenMin problem is inherently combinatorial. Its optimal solution can be computed using the following naive brute-force approach. For each set \(P\) of the \(\binom{m}{k}\) possible subsets of edges, determine the connectivity of \(\mathcal{G}\setminus P\), and calculate \(\mathcal{I}_{v}(P)\) in the reduced graph by inverting the Laplacian matrix. Finally, output the subset \(P^{*}\) of \(k\) edges whose deletion leads to the greatest decrease in \(\mathcal{I}_{v}\) while keeping the connectivity of the graph. Since inverting the Laplacian matrix could take \(\Omega(n^{3})\) time and there are \(\binom{m}{k}\) possible subsets, the algorithm's time complexity is in \(\Omega\left({{m\choose k}n^{3}}\right)\). Thus, despite its simplicity, this method is computationally unaffordable even for small networks due to its exponential time complexity in \(k\). To tackle this issue, one may consider the heuristic approach of picking the top-\(k\) edges with the greatest individual effect on reducing the information centrality of the target node.
However, due to the interdependence of edges, the cumulative effect of removing a group of edges is often not equivalent to the sum of individual edge effects. For example, see the network in Fig. 3 (b), where node 2 is the target node, and edges \(e_{2}\) and \(e_{3}\) are the top-2 edges whose removal has the greatest individual effect on the information centrality of node 2. Surprisingly, removing these top-\(2\) edges reduces the information centrality of node 2 to 0.71 while removing edges \(e_{1}\) and \(e_{2}\) would reduce it to \(0.56\). An alternative heuristic is the standard greedy algorithm which starts with an empty set \(P\). Then, in each iteration \(i\in\{1,2,\ldots,k\}\), it adds the edge that results in the largest decrease in information centrality of the target node, while preserving connectivity. However, this greedy approach is computationally expensive because of two obstacles present during each iteration: 1) determining whether removal of a certain edge would disconnect the graph could take \(\Omega(n)\) time; 2) the computation of the new information centrality through inversion of the matrix might take \(\Omega(n^{3})\) time. As a result, the total running time could amount to \(\Omega(kmn^{3})\), which is unaffordable for large networks. To overcome the first obstacle, some studies focusing on the dynamic connectivity problem resolve connectivity queries in \(O(\log n/\log\log n)\) time [65, 66]. However, these works remain in the realm of theory and can hardly be applied in practice. Moreover, the computational bottleneck caused by 2) is larger; thus, we first focus on reducing the time complexity of updating information centrality. Let \(\mathcal{I}_{v}^{\Delta}(e)=\mathcal{I}_{v}(\{e\})-\mathcal{I}_{v}(\emptyset)\) denote the marginal gain of the information centrality by removing edge \(e\). We provide an efficient method for computing \(\mathcal{I}_{v}^{\Delta}(e)\) in the following lemma. **Lemma 7.1**.: _Let \(\mathcal{G}=(V,E)\) be a connected graph with Laplacian matrix \(\mathbf{L}\). Let \(e=(x,y)\in E\) be a candidate edge satisfying that \(\mathcal{G}\setminus\{e\}\) is connected. Then,_ \[\mathcal{I}_{v}^{\Delta}(e)=\frac{-(nb+n^{2}c)}{\left(na\mathbf{L}_{vv}^{\dagger}+nc+a\mathrm{Tr}\left(\mathbf{L}^{\dagger}\right)+b\right)\left(n\mathbf{L}_{vv}^{\dagger}+\mathrm{Tr}\left(\mathbf{L}^{\dagger}\right)\right)},\] _where \(a=1-\mathbf{b}_{e}^{\top}\mathbf{L}^{\dagger}\mathbf{b}_{e}\), \(b=\mathbf{b}_{e}^{\top}\mathbf{L}^{\dagger}\mathbf{L}^{\dagger}\mathbf{b}_{e}\) and \(c=\left(\mathbf{L}^{\dagger}\mathbf{b}_{e}\right)_{v}^{2}\)._ **Proof.** By definition, we have \[\mathcal{I}_{v}(\{e\})=\frac{n}{n(\mathbf{L}-\mathbf{b}_{e}\mathbf{b}_{e}^{\top})_{vv}^{\dagger}+\mathrm{Tr}\left((\mathbf{L}-\mathbf{b}_{e}\mathbf{b}_{e}^{\top})^{\dagger}\right)}.\] By the Sherman-Morrison formula [67], we have \[(\mathbf{L}-\mathbf{b}_{e}\mathbf{b}_{e}^{\top})^{\dagger}=\mathbf{L}^{\dagger}+\frac{\mathbf{L}^{\dagger}\mathbf{b}_{e}\mathbf{b}_{e}^{\top}\mathbf{L}^{\dagger}}{1-\mathbf{b}_{e}^{\top}\mathbf{L}^{\dagger}\mathbf{b}_{e}}. \tag{5}\] Note that \(\mathbf{b}_{e}^{\top}\mathbf{L}^{\dagger}\mathbf{b}_{e}\) equals the effective resistance between nodes \(x\) and \(y\), whose value is 1 whenever removing this edge partitions the graph into two components and is less than 1 otherwise.
Thus, the gain in information centrality can be written as \[\mathcal{I}_{v}^{\Delta(e)}= \frac{n}{n\big{(}\mathbf{L}-\mathbf{b}_{e}\mathbf{b}_{e}^{\top}\big{)}_{vv}^{ \dagger}+\mathrm{Tr}\left((\mathbf{L}-\mathbf{b}_{e}\mathbf{b}_{e}^{\top})^{\dagger} \right)}\] \[-\frac{n}{n\mathbf{L}_{vv}^{\dagger}+\mathrm{Tr}\left(\mathbf{L}^{\dagger }\right)}\] \[= \frac{-(nb+n^{2}c)}{\left(na\mathbf{L}_{vv}^{\dagger}+nc+a\mathrm{Tr} \left(\mathbf{L}^{\dagger}\right)+b\right)\left(n\mathbf{L}_{vv}^{\dagger}+\mathrm{Tr} \left(\mathbf{L}^{\dagger}\right)\right)},\] completing the proof. \(\Box\) According to Lemma 7.1, if \(\mathbf{L}^{\dagger}\) is known, we can efficiently compute the marginal gain of the information centrality for one edge by rank-1 update in \(O(n)\) time. Then, we propose a deterministic greedy algorithm \(\textsc{ExactSM}(\mathcal{G},v,k)\). As outlined in Algorithm 1, the first step of this algorithm is to set the result edge set \(P\) to empty and compute \(\mathbf{L}^{\dagger}\) in \(O(n^{3})\) time (Line 1). Then we add \(k\) edges to \(P\) iteratively (Lines 2-11). In each iteration, for each candidate edge \(e\in E\), we determine the connectivity of the graph \(\mathcal{G}\setminus\{e\}\) in \(O(n)\) time (Line 4) and compute the marginal gain of the information centrality in \(O(n)\) time (Line 7). After obtaining \(\mathcal{I}_{v}^{\Delta}(\cdot)\) for each candidate edge, we select the edge that leads to the smallest marginal gain (Line 8), update the solution (Line 9) and graph (Line 10), and update \(\mathbf{L}^{\dagger}\) according to Equation (5) in \(O(n^{2})\) time. In summary, the total running time of Algorithm 1 is \(O(n^{3}+kmn+kn^{2})\). ``` Input : A connected graph \(\mathcal{G}=(V,E)\); a target node \(v\in V\); an integer \(k\leq m\) Output : A subset of \(P\subset E\) with \(|P|=k\) 1 Set \(P\leftarrow\emptyset\); compute \(\mathbf{L}^{\dagger}\) for\(i=1\) to \(k\)do 2for\(e\in E\)do 3if\(\mathcal{G}\setminus\{e\}\) is not connectedthen 4 Set \(\mathcal{I}_{v}^{\Delta}(e)=0\) 5else 6 Compute \(\mathcal{I}_{v}^{\Delta}(e)\) by Lemma 7.1 7 8 Select \(e_{i}\) s.t. \(e_{i}\leftarrow\operatorname*{arg\,min}_{e\in E}\mathcal{I}_{v}^{\Delta}(e)\) Update solution \(P\gets P\cup\{e_{i}\}\) Update the graph \(\mathcal{G}\leftarrow(V,E\backslash\{e_{i}\})\) Update \(\mathbf{L}^{\dagger}\leftarrow\mathbf{L}^{\dagger}+\frac{\mathbf{L}^{\dagger}\mathbf{b}_{e_{i}} \mathbf{b}_{e}^{\top}\mathbf{L}^{\dagger}}{1-\mathbf{b}_{e_{i}}^{\top}\mathbf{L}^{\dagger}\mathbf{b} _{e_{i}}}\) 9 10 11return\(P\) ``` **Algorithm 1**\(\textsc{ExactSM}(\mathcal{G},v,k)\) ## 8 Fast Randomized Greedy Algorithm The deterministic greedy algorithm, while faster than the brute-force approach, is not feasible for large networks due to the high computational cost of determining the information centrality marginal gain and graph connectivity. On the positive side, information centrality and the resistance distance are shown to be correlated in Equation (3). This permits us to leverage random walk-based approximate Schur complement method [39, 48] to present a faster algorithm, called ApproxiSC in Section 8.1. To speed up the computation even further, we then utilize the sum estimation method [40, 41], which allows us to present the algorithm FastICM in Section 8.2 which runs in nearly linear time in the number of edges. 
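Before turning to the sampling-based algorithms of this section, the deterministic greedy strategy of Section 7 can be sketched compactly as follows. This is an illustrative NumPy re-implementation in the spirit of Algorithm 1, not the authors' Julia code, and it reuses the hypothetical toy graph from the earlier sketches. Each round evaluates every remaining edge, skips edges whose effective resistance is (numerically) 1 since such edges are bridges and removing them would disconnect the graph, maintains the pseudoinverse with the rank-1 update of Equation (5), and removes the edge with the most negative marginal gain.

```
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def centrality(Lp, v, n):
    # Eq. (3) evaluated from a given pseudoinverse
    return n / (n * Lp[v, v] + np.trace(Lp))

def greedy_exact(n, edges, v, k):
    """Greedy edge removal in the spirit of Algorithm 1 (ExactSM)."""
    edges = list(edges)
    Lp = np.linalg.pinv(laplacian(n, edges))
    removed = []
    for _ in range(k):
        best = None
        for (i, j) in edges:
            b = np.zeros(n); b[i], b[j] = 1.0, -1.0
            r = b @ Lp @ b                       # effective resistance of the edge
            if r >= 1.0 - 1e-9:                  # bridge: removal would disconnect the graph
                continue
            Lp_new = Lp + np.outer(Lp @ b, b @ Lp) / (1.0 - r)   # rank-1 update, Eq. (5)
            gain = centrality(Lp_new, v, n) - centrality(Lp, v, n)
            if best is None or gain < best[0]:
                best = (gain, (i, j), Lp_new)
        if best is None:
            break
        _, e, Lp = best
        removed.append(e)
        edges.remove(e)
    return removed

# Hypothetical usage on the toy graph used in the earlier sketches
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
print(greedy_exact(5, edges, v=0, k=2))
```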
### _A Simple Sampling Algorithm_ To efficiently calculate and update the information centrality, the main computational bottleneck is the fast calculation of effective resistance Equation (3). Several approximation methods have been proposed to estimate pairwise effective resistance in sublinear time [39, 68]. However, a naive approach would need to approximate \(n\) distinct pairwise effective resistance and then update them for \(m\) potential candidate edge. This seems to be computationally expensive. To address this issue, we approach the problem from a different angle, which facilitates us with a much faster mechanism to approximate the effective resistances. As mentioned in Section 2.3, the Schur complement can be considered as a vertex sparsifier preserving pairwise effective resistance. This motivates us that if we can efficiently compute and update the Schur complement, the aforementioned challenges could be resolved. However, calculating the Schur complement directly is time-consuming. Building upon the ideas of sparsifying random walk polynomials [69] and Schur complement [48, 70], we approximate it using a collection of random walks that can be maintained upon edge removal with a reasonable time complexity. The following lemma, borrowed from [39, 48, 69, 70], asserts that these walks provide an accurate estimate of the Schur complement. **Lemma 8.1**.: _[_39_]_ _Let \(\mathcal{G}=(V,E)\) be an undirected unweighted graph with a subset of nodes \(T\subset V\). Assume \(\epsilon\in(0,1)\), and let \(\rho=O\left((\log n)\epsilon^{-2}\right)\) be some sampling concentration parameter. Suppose that \(\mathcal{H}\) is an initially empty graph. For every edge \(e=(i,j)\in E\), repeat the following procedure \(\rho\) times:_ 1. _Simulate a random walk_ \(w_{1}\) _starting from node_ \(i\) _until it first hits_ \(T\) _at some node_ \(t_{1}\)_._ 2. _Simulate a random walk_ \(w_{2}\) _starting from node_ \(j\) _until it first hits_ \(T\) _at some node_ \(t_{2}\)_._ 3. _Combine these two walks (including e) as two sides to get a walk_ \(w=(t_{1}=u_{0},\cdots,u_{l}=t_{2})\)_, where_ \(\tilde{l}\) _is the length of the combined walk._ 4. _Add the edge_ \((t_{1},t_{2})\) _to graph_ \(\mathcal{H}\) _with weight_ \(1/\left(\rho\tilde{l}\right)\)_._ _Then, the Laplacian matrix \(\boldsymbol{L}_{\mathcal{H}}\) of the resulting graph \(\mathcal{H}\) satisfies \(\boldsymbol{L}_{\mathcal{H}}\approx_{\epsilon}\mathcal{S}(T)\) with probability of at least \(1-O(1/n)\)._ Based on Lemma 8.1, we can approximate the Schur complement for an arbitrary terminal set \(T\subseteq V\) and obtain a graph \(\mathcal{H}\) satisfying \(\boldsymbol{L}_{\mathcal{H}}\approx_{\epsilon}\mathcal{S}(T)\). Let \(\tilde{\mathcal{R}}_{xy}\) be the effective resistance between any pair of nodes \(x,y\in T\) on graph \(\mathcal{H}\), then based on the fact that matrix approximations also preserve approximations of their quadratic forms, we have \[\tilde{\mathcal{R}}_{xy}\approx_{\epsilon}\mathcal{R}_{xy}^{\mathcal{G}( \mathcal{S}(T))}=\mathcal{R}_{xy}^{\mathcal{G}}. \tag{6}\] #### 8.1.1 Approximation of Effective Resistance To approximate \(\mathcal{R}_{uv}\), a direct way is to set \(T=\{u,v\}\), and then estimate \(\mathcal{R}_{uv}\) as the reciprocal of the weight between \(u\) and \(v\) in the resulting graph \(\mathcal{H}\) from Lemma 8.1. 
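The sampling procedure of Lemma 8.1, together with the "direct way" just described, can be sketched as follows. This is a self-contained toy example, not the paper's implementation; the adjacency list, the value of \(\rho\), and the function names are chosen only for illustration. With \(T=\{u,v\}\), the reciprocal of the weight accumulated on edge \((u,v)\) gives an estimate of \(\mathcal{R}_{uv}\).

```
import random

def sample_schur_weights(adj, T, rho, rng):
    """Lemma 8.1 sketch: for every edge, sample rho pairs of random walks from its
    endpoints until they hit T; each combined walk of length l adds weight 1/(rho*l)
    between the two terminals it connects (self-loops are dropped)."""
    T = set(T)
    weights = {}
    def walk_to_T(start):
        path = [start]
        while path[-1] not in T:
            path.append(rng.choice(adj[path[-1]]))
        return path
    for i in adj:
        for j in adj[i]:
            if i < j:                          # visit each undirected edge once
                for _ in range(rho):
                    w1 = walk_to_T(i)[::-1]    # reversed, so the terminal t1 comes first
                    w2 = walk_to_T(j)
                    length = (len(w1) - 1) + 1 + (len(w2) - 1)   # edges in w1 + e + edges in w2
                    t1, t2 = w1[0], w2[-1]
                    if t1 != t2:
                        key = (min(t1, t2), max(t1, t2))
                        weights[key] = weights.get(key, 0.0) + 1.0 / (rho * length)
    return weights

# Hypothetical toy graph as an adjacency list; estimate R_uv with T = {u, v}.
adj = {0: [1, 4], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [0, 3]}
u, v = 0, 3
rng = random.Random(1)
w = sample_schur_weights(adj, T=[u, v], rho=2000, rng=rng)
print("estimated R_uv ~", 1.0 / w[(u, v)])
```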
However, even if the target node \(v\) is fixed, the approximation of the effective resistance between \(v\) and all other nodes is expensive as it may result in redundant walk sampling. To address this issue, we carefully modify the sampling steps in Lemma 8.1. First, we set \(T=\{v\}\), and sample an initial collection of walks using steps (1)-(3) in Lemma 8.1. For each node \(u\), we set \(T_{2}=\{u,v\}\), and traverse and shorten all the walks at the first position they hit \(T_{2}\), then add the weight of edge \((u,v)\) to the two-node graph \(\mathcal{H}\). This yields a new collection of random walks and an approximation of \(\mathcal{S}(T_{2})\). However, repeating this process for every node takes at least \(\Omega(mn(\log n)/\epsilon^{2})\) time, which is computationally infeasible for large networks. We notice that most nodes may appear in a small fraction of the sampled walks; thus, we do not necessarily need to traverse all the walks for each node \(u\). So we propose an approach where we traverse each sampled walk exactly once. More precisely, for any walk \(w\), we traverse it once and keep track of which nodes appear accompanied by the positions they first appear on both sides. If a node \(u\) is only encountered on one side of the walk, setting \(T_{2}=\{u,v\}\) will contribute an edge weight to the resulting graph \(\mathcal{H}\). For example, consider the walk in Fig. 4(a)-(b) which starts from the red edge and initially stops at \(v\). By setting \(T_{2}=\{u,v\}\), this walk contributes a weight of \(\frac{1}{8\epsilon}\) to edge \((u,v)\) in \(\mathcal{H}\). After summing up the weights contributed by all the walks, we can approximate \(\mathcal{R}_{uv}\) as the reciprocal of the weight of edge \((u,v)\) in \(\mathcal{H}\). In summary, we sample random walks according to Lemma 8.1 by setting \(T=\{v\}\), and approximate the effective resistances between node \(v\) and all nodes \(u\in V\backslash\{v\}\) by traversing the sampled walks once. According to Equation (6), we can approximate the resistance distance \(\mathcal{R}_{v}\) by \(\tilde{\mathcal{R}}_{v}=\sum_{u\in V\backslash\{v\}}\tilde{\mathcal{R}}_{uv}\), which satisfies \(\left|\mathcal{R}_{v}-\tilde{\mathcal{R}}_{v}\right|\leq\varepsilon\mathcal{ R}_{v}\leq n\epsilon\phi\), where \(\phi\) is the effective resistance diameter of the network [71]. Following Lemma 8.1, we need to sample \(O(m(\log n)/\epsilon^{2})\) walks. Another critical factor that we need to take into account is the length of sampled walks with an average value of \(l_{\text{avg}}=\sum_{u\in V\backslash\{v\}}2\boldsymbol{d}_{u}F_{u,v}/m\), where \(F_{u,v}\) represents the hitting time from node \(u\) to node \(v\). However, some walks may be excessively long, making our computations expensive. To address this issue, we adopt the \(l\)_-truncated random walk_ concept [72], where walks are accepted if they shorter than \(l\), and disregarded otherwise. Of course, this would result in less accurate solutions. However, we pursue to balance accuracy and efficiency in the choice of \(l\). We should stress that this extension is based on the observation that a walk's contribution to a related edge in \(\mathcal{H}\) decreases as its length increases, with a walk longer than \(l\) contributing less than \(1/(\rho l)\) to the corresponding edge. The following lemma ensures that the expected ratio of invalid walks can be made arbitrarily small for a suitably chosen value of \(l\). 
**Lemma 8.2**.: _[_73_]_ _Given a connected graph \(\mathcal{G}=(V,E)\), a node set \(T=\{v\}\), and a ratio of invalid walks \(\gamma>0\), if the maximum length \(l\) of random walks satisfies \(l=\log(m\gamma/\sqrt{n-1}\left\|\boldsymbol{d}_{-T}\right\|_{2})/\log(\lambda)\), where \(\lambda\) is the spectral radius of matrix \(\boldsymbol{P}_{-T}\), then the expected ratio of invalid walks is less than \(\gamma\)._ Based on above analysis, we propose an algorithm Initialization which returns an array \(\mathcal{R}\) containing the effective resistances between \(v\) and all nodes in a given set \(Q\) (with \(Q=V\backslash\{v\}\) in this section), together with a collection of random walks \(W\) for future computations. The outline of Initialization is presented in Algorithm 2. #### 8.1.2 Updating Effective Resistance The removal of an edge alters the effective resistances. Recalculating them by resampling walks from scratch is computationally inefficient. To address this issue, we propose an efficient algorithm, called DeleteEdge, which updates effective resistances by modifying the existing walks upon edge removal, as outlined in Algorithm 3. Before discussing the algorithm in more detail, we introduce a node-walk mapping data structure, which maps nodes to the walks that contain them to facilitate the subsequent computation of effective resistances, and is constructed as follows. * First, assign two pristine arrays to each node: the walk array and the position array, aimed at capturing the walks that the node participates in and their corresponding positions, respectively. * Systematically traverse any walk \(w\in W\), and scrutinize any node \(u\) that is encountered for the first time at position \(p\) on either side of the walk. Then append \(w\) and \(p\) to the walk array and the position array corresponding to \(u\). The following example illustrates how this data structure works. **Example.** Fig. 5 demonstrates two instances of the node-walk map. Specifically, node \(x\) is successively encountered at positions \(p_{1}\) and \(p_{2}\) on the left side of walk \(w_{1}\), so we append \(w_{1}\) and \(p_{1}\) to the walk array and the position array corresponding to \(x\). For walk \(w_{2}\), node \(x\) appears on both sides, leading us to record \(p_{2}\) and \(p_{3}\). In turn, for walk \(w_{3}\), we document position \(p_{4}\), where node \(x\) is first encountered on the right side. Similar steps can be taken to fill the walk array and position array for node \(y\). Utilizing the node-walk mapping data structure, we present an efficient method for updating the effective resistances upon removal of an edge \(e=(x,y)\). The core idea is to reduce the edge removal operation to adding nodes to set \(T\) as shown in Fig. 4 (c)-(d). This reduction is feasible because if nodes \(x,y\in T\), then removing edge \(e\) in the original graph \(\mathcal{G}\) equates to a decrement in edge \(e\)'s weight from the Schur complement graph \(\mathcal{G}(\mathcal{S}(T))\) by \(1\). In the following analysis, we fix node \(u\). We first sample the random walks with \(T=\{v\}\) and then set \(T_{4}=\{u,v,x,y\}\) and denote the resulting approximate Schur complement graph as \(\mathcal{H}_{u}\). To modify these walks, we utilize the node-walk map to expeditiously determine the walks and positions of nodes \(u,x,y\). The corresponding walks are then shortened and the edge weights in graph \(\mathcal{H}_{u}\) are updated. 
The final step involves reducing the edge weight between \(x\) and \(y\) by \(1\). The sufficient condition for the reduced graph being connected is that node \(x\) remains accessible from node \(y\), for which it suffices to demonstrate that \(v\) is reachable from both \(x\) and \(y\), which can be readily assessed using the updated four-node graph. Consequently, the connectivity of the reduced graph and the new effective resistance between nodes \(u\) and \(v\) can be easily determined. Although this method is simple, it could still take \(\Omega(mn)\) time for updating all pairs of effective resistances for any edge. However, thanks to the initialization process, we store the information of the Schur complement onto each two-node set \(T_{2}=\{u,v\}\) for each \(u\in V\backslash\{v\}\). This allows us to update the four-node graph efficiently. Specifically, we use the node-walk mapping data structure to locate the positions of nodes \(x\) and \(y\). Then, we shorten walks and update edge weights for all possible four-node sets \(T_{4}=\{q,v,x,y\}\), where \(q\) is a node co-occurring with \(x\) or \(y\) on any walk, and update the edge weights of \(\mathcal{H}_{q}\). More accurately, for edges \((q,v),(x,v),(y,v)\), the weights are derived by subtracting the weight reduction due to walk truncation from the corresponding weight in \(\mathcal{G}(\mathcal{S}(T_{2}))\), and for edges \((q,x),(q,y),(x,y)\), the weights are computed from the newly specified walk segments. Finally, we reduce the edge weight between \(x\) and \(y\) by 1, check the connectivity and compute the effective resistance on the updated four-node graph. We note that the average occurrence of each node in all sampled random walks is \(O(mn^{-1}l(\log n)\epsilon^{-2})\), thus the overall time complexity is reduced to \(O(m^{2}n^{-1}l^{2}(\log n)\epsilon^{-2})\). To visualize the process of modifying walks, see the walk in Fig. 4, which is affected by the removal of edge \((x,y)\). This walk no longer contributes weight to edge \((u,v)\), but contributes weight to edge \((x,y)\).

Fig. 4: Pipeline of our algorithm. (a) Set \(T=\{v\}\) and sample an initial walk (colored in blue) starting from the red edge. (b) For an initial approximation of the effective resistance \(\mathcal{R}_{uv}\), set \(T_{2}=\{u,v\}\) and shortcut the initial walk at the first positions it hits \(T_{2}\) such that it contributes a weight of \(\frac{1}{8\rho}\) to edge \((u,v)\) (colored in yellow) in the approximate graph. (c) When trying to remove edge \(e=(x,y)\), set \(T_{4}=\{u,v,x,y\}\) and shortcut the walk at the first positions it hits \(T_{4}\). (d) After modifying all walks, reduce the weight of edge \((x,y)\) in the approximate graph by \(1\).

Fig. 5: Illustration of the node-walk mapping data structure.

#### 8.1.3 A Sampling Algorithm for InforCenMin By integrating the techniques for initializing and modifying random walks to approximate effective resistances, we offer a fast algorithm ApproxiSC to address the problem InforCenMin. The pseudocode of this algorithm is summarized in Algorithm 4. Besides the input graph \(\mathcal{G}\), the cardinality constraint \(k\) and the target node \(v\), its input also contains an integer \(l\) that constrains the maximum length of random walks and an error parameter \(\epsilon\). We first set \(P\leftarrow\emptyset\). Then in each iteration, we initialize the random walks for the current graph by making a call to Initialization.
Then, we use DeleteEdge to obtain a set of tuples consisting of an edge and the corresponding \(\tilde{\mathcal{R}}_{v}\) in the reduced graph. The edge that results in the largest margin gain of \(\tilde{\mathcal{R}}_{v}\) while preserving graph connectivity will be selected. This process is repeated until \(k\) edges have been added to \(P\). ``` Input : A connected graph \(\mathcal{G}=(V,E)\); an integer \(k<m\); a node \(v\in V\); the length constraint \(l\); an error parameter \(\epsilon\in(0,1)\) Output : An edge set \(P\subset E\) satisfying constraint \(k\) 1 Set \(P\leftarrow\emptyset\)for\(i=1\) to \(k\)do 2\(\mathcal{R},W\leftarrow\textsc{Initialization}(\mathcal{G},v,V,l,\epsilon)\)\(\{(e,\mathcal{R}_{v}(P\cup\{e\}))|e\in E\}\leftarrow\)DeleteEdge\((\mathcal{G},W,v,V,P)\) 3 Select \(e_{i}\) s.t. \(e_{i}\leftarrow\operatorname*{arg\,max}_{e\in E}\tilde{\mathcal{R}}_{v}(P\cup\{e\})\) Update solution \(P\gets P\cup\{e_{i}\}\) Update the graph \(\mathcal{G}\leftarrow(V,E\backslash\{e_{i}\})\)return\(P\) ``` **Algorithm 4**ApproxiSC(\(\mathcal{G},k,v,l,\epsilon\)) We next analyze the time complexity of ApproxiSC, which consists of two parts: approximating the effective resistances and updating them upon edge removal for all existing edges. In the updating process, we also need to check the connectivity of the resulting graph. Generating the random walks takes \(O(ml(\log n)\epsilon^{-2})\) time, initializing the effective resistances takes \(O(ml(\log n)\epsilon^{-2})\) time, and updating them takes \(O(m^{2}n^{-1}l^{2}(\log n)\epsilon^{-2})\) time. Furthermore, the connectivity can be checked relatively quickly using the derived graph \(\mathcal{H}\). In summary, the overall time complexity of our proposed algorithm ApproxiSC is in \(O(kml(\log n)\epsilon^{-2}+km^{2}n^{-1}l^{2}(\log n)\epsilon^{-2})\). ### _A More Efficient Algorithm_ The simple sampling algorithm ApproxiSC is much faster than the original deterministic algorithm we began with. Nevertheless, in this section we further reduce its time complexity by proposing a faster sampling algorithm, with a similar accuracy guarantee. #### 8.2.1 Fast Simulating Random Walks Each iteration of ApproxiSC involves simulating \(m\lceil(\log n)/\epsilon^{2}\rceil\)\(l\)-truncated random walks, which is computationally expensive as \(k\) grows. However, we observe adding an edge \(e\) to \(P\) only impacts walks that go through it. Hence, we can speed up the process by only modifying a small fraction of walks when \(e\) is deleted, while reusing the rest for subsequent approximations. Next, we elaborate on this more efficient approach. First, we sample random walks for initialization. Then in each iteration, after selecting an edge \(e\), we modify the affected walks. To efficiently locate these walks, we set up an edge-walk mapping data structure. This data structure, similar to the node-walk mapping data structure shown in Fig. 5, records the walks and the positions where \(e\) first appears, which can be built once the walks are sampled. For each affected walk \(w\), we truncate it at the first appearance of edge \(e\), and extend it with a new random walk from the truncated position to \(v\), or until the total length reaches \(l\). Both edge-walk and node-walk map, as well as the effective resistance array \(\mathcal{R}\), are then updated. Since the average number of walks that include edge \(e\) is \(O(mn^{-1}l(\log n)\epsilon^{-2})\), updating the walks is efficient. 
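A simplified sketch of this walk-reuse idea is given below. It is illustrative only: it omits the weight bookkeeping and, for brevity, scans all walks instead of maintaining the edge-walk index described above. Walks that never traverse the removed edge are kept unchanged, while affected walks are cut just before the removed edge and re-extended on the updated graph, up to the length budget \(l\).

```
import random

def sample_walk(adj, start, v, l, rng):
    """One l-truncated random walk from `start`, stopping at the target v or at length l."""
    path = [start]
    while path[-1] != v and len(path) - 1 < l:
        path.append(rng.choice(adj[path[-1]]))
    return path

def repair_walks(walks, adj, removed, v, l, rng):
    """After deleting edge `removed`, re-sample only the walks that traverse it:
    cut each affected walk before its first use of the edge and extend it with a
    fresh walk on the updated graph (the spirit of Section 8.2.1)."""
    x, y = removed
    repaired = []
    for w in walks:
        cut = None
        for t in range(len(w) - 1):
            if {w[t], w[t + 1]} == {x, y}:     # first traversal of the removed edge
                cut = t
                break
        if cut is None:
            repaired.append(w)                 # walk unaffected, reuse it as is
        else:
            prefix = w[:cut + 1]
            tail = sample_walk(adj, prefix[-1], v, l - (len(prefix) - 1), rng)
            repaired.append(prefix + tail[1:])
    return repaired

# Hypothetical usage: walks toward v on a toy graph, then removal of edge (1, 3).
adj = {0: [1, 4], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [0, 3]}
v, l, rng = 0, 20, random.Random(7)
walks = [sample_walk(adj, s, v, l, rng) for s in (2, 3) for _ in range(3)]
adj = {u: [w for w in nbrs if {u, w} != {1, 3}] for u, nbrs in adj.items()}  # delete (1, 3)
print(repair_walks(walks, adj, (1, 3), v, l, rng))
```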
#### 8.2.2 Fast Approximation of the Resistance Distance ApproxiSC approximates effective resistances between the target node \(v\) and all other nodes in the network. However, it is only the sum of these resistances, \(\mathcal{R}_{v}\), that is of interest, rather than individual ones. Next, we show that evaluating \(\mathcal{R}_{v}\) can be achieved through computing the effective resistances between \(v\) and a smaller subset \(\tilde{V}\subset V\). Based on the techniques developed in previous sum estimation works [40, 41], we show that the sum of \(n\) bounded elements can be approximated by a sample of them in the following lemma. **Lemma 8.3**.: _Given \(n\) bounded elements \(x_{1},x_{2},\ldots,x_{n}\in[0,a]\), an error parameter \(\beta>an^{-1/2}\log^{1/2}n\), we randomly select \(t=O(a\sqrt{n(\log n)}/\beta)\) elements \(x_{c_{1}},x_{c_{2}},\ldots,x_{c_{t}}\), by Bernoulli trials with success probability \(p=an^{-1/2}\log^{1/2}n/\beta\) satisfying \(0<p<1\). We have \(\bar{x}=\sum_{i=1}^{t}x_{c_{i}}/p\) as an approximation of the sum of the original \(n\) elements \(x=\sum_{i=1}^{n}x_{i}\), satisfying \(|x-\bar{x}|\leq n\beta\)._ Based on the above lemma, let \(\phi\) be the effective resistance diameter, \(\alpha\) be an error parameter, \(\beta=\frac{\alpha}{2}\), and \(\epsilon=\frac{\alpha}{2\phi}\). Let \(t=O(\phi\sqrt{n(\log n)}/\beta)\), \(\tilde{V}=\{x_{1},x_{2},\ldots,x_{t}\}\) be a randomly selected subset, and \(\hat{\mathcal{R}}_{v}=\sum_{w\in\tilde{V}}\tilde{\mathcal{R}}_{wv}\,n/(\phi\sqrt{n(\log n)}/\beta)\). Then \(\tilde{\mathcal{R}}_{v}\) can be approximated by \(\hat{\mathcal{R}}_{v}\), satisfying \[|\tilde{\mathcal{R}}_{v}-\hat{\mathcal{R}}_{v}|\leq n\beta, \tag{7}\] and further \(\mathcal{R}_{v}\) can be approximated by \(\hat{\mathcal{R}}_{v}\) satisfying \[|\mathcal{R}_{v}-\hat{\mathcal{R}}_{v}|\leq n\alpha. \tag{8}\] By setting \(Q=\tilde{V}\), Initialization and DeleteEdge can be simplified by just approximating and updating effective resistances between node \(v\) and nodes in set \(\tilde{V}\). #### 8.2.3 Fast Algorithm for InforCenMin Equipped with the techniques of fast sampling of random walks and efficient computing of the sum of effective resistances, we are now prepared to propose a faster algorithm FastICM for Problem 1, outlined in Algorithm 5. FastICM performs \(k\) rounds (Lines 5-9) for iteratively selecting \(k\) edges. Given a network \(\mathcal{G}=(V,E)\) with an effective resistance diameter \(\phi\), we randomly select a node set \(\tilde{V}\subset V\) of \(O(\phi\sqrt{n(\log n)}/\beta)\) nodes, simulate \(m\lceil\phi^{2}(\log n)/\alpha^{2}\rceil\) \(l\)-truncated random walks, and approximate the effective resistances. It takes \(O(ml(\log n)\phi^{2}\alpha^{-2}+mn^{-1/2}l\log^{3/2}n\alpha^{-3}\phi^{3})\) time for the three operations. Then, the algorithm takes \(O(km^{2}n^{-3/2}l^{2}\log^{3/2}n\alpha^{-3}\phi^{3})\) time to update the sum of effective resistances for each candidate edge in all \(k\) rounds. Thus, the overall time complexity of FastICM is \(O(ml(\log n)\phi^{2}\alpha^{-2}+mn^{-1/2}l\log^{3/2}n\alpha^{-3}\phi^{3}(1+kml/n))\). Theorem 8.4 characterizes the performance of FastICM.
**Theorem 8.4**.: _For any \(k>0\), an error parameter \(\alpha\in(0,1)\), a maximum length \(l\) of walks, and the effective resistance diameter \(\phi\), FastICM runs in \(O(ml(\log n)\phi^{2}\alpha^{-2}+mn^{-1/2}l\log^{3/2}n\alpha^{-3}\phi^{3}(1+ kml/n))\) time, and outputs a solution set \(P\) by greedily selecting \(k\) edges, such that for the edge selected in each iteration, the information centrality of the target node is maximally decreased._ ``` Input : A connected graph \(\mathcal{G}=(V,E)\); an integer \(k<m\); a node \(v\in V\); the maximum length of random walks \(l\); a real number \(\alpha\in(0,1)\); the effective resistance diameter \(\phi\) Output : An edge set \(\mathcal{P}\subset E\) satisfying constraint \(k\) 1 Set \(\beta=\frac{\alpha}{2}\) and \(\epsilon=\frac{\alpha}{2\phi}\); \(P\leftarrow\emptyset\) 2 Sample the node set \(\tilde{V}\subset V\) of \(t=O(\phi\sqrt{n(\log n)}/\beta)\)\(\mathcal{R},W\leftarrow\textsc{Initialization}(\mathcal{G},v,\tilde{V},l,\epsilon)\)for\(i=1\) to \(k\)do 3\(\{(e,\hat{\mathcal{R}}_{v}(P\cup\{e\}))|e\in E\}\leftarrow\)\(\{\) DeleteEdge\((\mathcal{G},W,v,\tilde{V},P)\) 4 Select \(e_{i}\). set \(e_{i}\leftarrow\arg\max_{e\in E}\hat{\mathcal{R}}_{v}(P\cup\{e\})\) 5 Update solution \(P\gets P\cup\{e_{i}\}\) 6 Update the graph \(\mathcal{G}\leftarrow(V,E\backslash\{e_{i}\})\) 7 Update the collection of walks \(W\) following the method stated in Section 8.2.1 8 return\(P\) ``` **Algorithm 5**FastICM\((\mathcal{G},k,v,l,\alpha,\phi)\) ## 9 Experiment In this section, we evaluate the efficiency and effectiveness of the proposed algorithms through experiments on diverse real-world and synthetic networks of varying types and sizes. ### _Experimental Setup_ In this section, we present the basic experimental settings, which encompass the machine configuration, datasets, baseline algorithms, and choices of parameters. **Machine Configuration.** Experiments, implemented in _Julia_, are conducted on a Linux server with 32G RAM and 4.2 GHz Intel i7-7700 CPU and with a single thread. The source code is publicly available on [https://github.com/hahaabc/fasticm](https://github.com/hahaabc/fasticm). **Datasets.** We test our algorithms on both real-world networks, from publically available datasets of Network Repository [46] and SNAP [49], and synthetic graphs. The name and some statistics of the experimented real-life networks are presented in Table I, sorted by the number of nodes. Various synthetic graph models have been introduced to mimic the real-world networks by capturing fundamental properties consistently observed in such networks, such as small diameter and scale-free degree distribution [50]. We use the well-established and popular BA [50] and WS [43] models. The parameters for these graphs have been chosen such that the average degree closely matches that of real-world networks of comparable size, as in Table I. (All experiments are conducted on the largest component of these graphs due to the connectivity requirement in our problem.) **Baselines.** To better illustrate the effectiveness of our two algorithms, we compare them against the following algorithms. We should stress that all these algorithms are enforced to satisfy the connectivity constraint. * Optimum: find an optimal edge set of size \(k\) using brute-force search. * Random: randomly choose \(k\) edges. * Betweenness: select \(k\) edges with the highest betweenness centrality. 
The betweenness of an edge here accounts for the number of shortest paths between the target node and other nodes which pass through that edge. * Spanning: select \(k\) edges with the highest spanning edge centrality [59], which is defined as the fraction of spanning trees of a graph that contain a certain edge. * Solver: pick \(k\) edges using Algorithm 1 equipped with the approximation technique in [9] plus the connectivity verification process. **Parameters.** For algorithms ApproxiSC and FastICM, we need to set some parameters. Firstly, smaller values of error parameter \(\epsilon\) in ApproxiSC and error parameter \(\alpha\) in FastICM provide more accurate results, while larger values result in higher efficiency. We set \(\epsilon=0.005\) and \(\alpha=0.05\) in our experiments because as it will be observed in our experiments in Section 9.4, they provide a suitable trade-off between accuracy and efficiency. The parameter \(\phi\) in FastICM is the effective resistance diameter of the graph. Since it is inherently difficult to determine its exact value, we instead use the diameter of the graph (reported in Table I), which gives an upper bound on \(\phi\). The length \(l\) of sampled walks can be determined by Lemma 8.2, which involves the spectral radius of matrix \(\mathbf{P}_{-\{v\}}\), and the ratio of invalid walks \(\gamma\). We set the ratio of invalid walks to a relatively small value, namely \(0.1\%\). (According to our experiments, \(0.1\%\) achieves a good trade-off between effectiveness and efficiency. Otherwise, there is nothing particularly unique about this value.) Computing the spectral radius takes much time for large networks, so we generously estimate it as \(\lambda=0.95\). Unless otherwise specified, the values of the other parameters are set by default when varying a parameter. ### _Effectiveness of Proposed Algorithms_ To evaluate the performance of our algorithms, we compare the quality of the returned edges by the Optimum and Random strategies against ours. The comparison is conducted on four small networks, consisting of two random graphs BA and WS with 50 nodes and two real-world networks, namely Karate and Dolphins. Due to the small size of these networks, we can compare the outcome of our algorithms against the optimal solution (obtained using brute-force search). We select 10 random target nodes. For every network and every \(k\) from \(1\) to \(5\), we run each algorithm 10 times, each time for one of the selected target nodes, and calculate the average final information centrality of target nodes. The lower the obtained value, the better the performance of the algorithm. The results, depicted in Fig. 6, demonstrate that the performance of the algorithms is consistent across all datasets and that our algorithms perform almost equally to the optimal solutions, reaching parity in the case of the Karate network. On the other hand, the Random scheme consistently underperforms, while the performance of Solver is similar to that of ours. As further evidence of the effectiveness of our algorithms, we test them on four larger real-world networks. Since finding optimal solutions through brute-force searching is not feasible due to computational limitations, we compare the results of our algorithms against Random, Betweenness, Spanning, and Solver algorithms. Similar to above, we randomly select 10 distinct target nodes to mitigate the impact of node positions on the results. 
We first determine the initial information centrality of each target node and then degrade it by removing up to \(k=10\) edges using our greedy algorithms and four others. After each edge removal, we calculate and record the updated information centrality. Finally, we average over the information centrality of all target nodes in each round and exhibit the results in Fig. 7. Our algorithms outperform the other algorithms on all tested networks. \begin{table} \begin{tabular}{c c c c|c|c c c} \hline \hline Network & \(n\) & \(m\) & \(Dim.\) & Network & \(n\) & \(m\) & \(Dim.\) \\ \hline karate & 34 & 78 & 5 & Erdos & 6,927 & 11,850 & 4 \\ Dolphins & 62 & 159 & 8 & Oregon & 10,900 & 31,180 & 9 \\ Bomb Train & 64 & 243 & 6 & ca-HepPh & 11,204 & 117,619 & 13 \\ Polbooks & 105 & 441 & 7 & Caida & 26,475 & 53,381 & 17 \\ Hamster & 921 & 4,032 & 8 & Twitter & 404,719 & 713,319 & 8 \\ Virgili & 1,133 & 5,451 & 8 & Delicious & 536,108 & 1,365,961 & 14 \\ ca-GrQc & 5,242 & 14,496 & 17 & FourSquare & 639,014 & 3,214,986 & 4 \\ as20000102 & 6,474 & 13,895 & 9 & YoutubeSnap & 1,134,890 & 2,987,624 & 20 \\ \hline \hline \end{tabular} \end{table} TABLE I: Some statistics of the experimented real-world networks. We denote the number of nodes and edges in the largest connected component by \(n\) and \(m\), respectively, and use \(Dim.\) to represent the diameter of a network. Fig. 6: Average information centrality of target nodes following edge removals, returned by different algorithms on four networks: BA (a), WS (b), Karate (c), and Dolphins (d). ### _Efficiency of Proposed Algorithms_ In the previous section, we observed that our algorithms consistently outperform other algorithms and produce near optimal solutions. Here, we focus on the run time analysis of these algorithms on different networks. For each network, 20 target nodes are selected randomly, for each of which, \(k=10\) edges are removed to minimize its information centrality using these algorithms. The average value of final information centrality for the targeted nodes and the average run time are reported in Table II for all four algorithms. As expected, the value of information centrality is pretty close for all three algorithms. However, ApproxiSC and FastICM are extremely more efficient than the other two algorithms, especially on larger networks. As explained before, Solver is slower than ApproxiSC and FastICM mainly due to its additional connectivity verification cost. FastICM can handle massive networks, such as FourSquare (with 639,014 nodes) and YoutubeSnap (with 1,134,890 nodes), in only a few hours while ExactSM and Solver fail to end before our cut-off time of 24 hours. ### _Impact of Error Parameters_ Recall that ApproxiSC and FastICM have the error parameters \(\epsilon\) and \(\alpha\), respectively. These error parameters balance efficiency and approximation guarantee. We examine their effect on two exemplary datasets, namely Hamster and Virgili. Intuitively speaking, increasing error parameters yields a looser approximation guarantee, which as a result may impact the quality of the selected edge set. Let \(\Delta\) be the difference between the final information centrality derived by ApproxiSC (analogously, FastICM) and ExactSM after edge removal. We report in Fig. 8 the resulting \(\Delta\) after removing 10 edges and the corresponding time consumed for selecting these edges. As expected, smaller \(\epsilon\) (analogously, \(\alpha\)) yield more accurate results, while larger values demonstrate greater efficiency. 
The results in Fig. 8 should also justify our default choices of \(\epsilon=0.005\) and \(\alpha=0.05\) in our experiments, since they provide an acceptable balance between the time cost and the approximation guarantee. (In these experiments, we have again used 10 randomly chosen target nodes.)

## 10 Conclusion and Future Work

Inspired by the imperative of possessing effective mechanisms to mitigate the potential deleterious effects incurred by a compromised/malicious node within a network, we have delved into the problem of moderating critical nodes. We investigated the setup where the goal is to minimize the information centrality of a target node by removing \(k\) edges while maintaining the network's connectivity. We proved the problem to be NP-complete and its objective function to be monotonically decreasing but non-supermodular. By developing several novel proof techniques, such as random walk-based Schur complement approximation, we provided two fast approximation algorithms and their theoretical analysis. Furthermore, our extensive experiments on various real-world and synthetic networks demonstrated that our proposed algorithms provide solutions very close to optimal in most cases. One of our algorithms, which has a nearly linear run time, can cover networks with over one million nodes on a single machine. Therefore, our algorithms not only permit a rigorous theoretical analysis, but also perform effectively and efficiently in practice.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Network} & \multicolumn{4}{c}{Information Centrality} & \multicolumn{4}{c}{Time (seconds)} \\ \cline{2-9} & FIM & ASC & ESM & SOL & FIM & ASC & ESM & SOL \\ \hline Polbooks & 2.667 & 2.661 & 2.603 & 2.621 & 3 & 6 & 8 & 20 \\ Hamster & 2.172 & 2.145 & 2.145 & 2.145 & 4 & 6 & 9 & 42 \\ Virgili & 2.704 & 2.704 & 2.686 & 2.687 & 4 & 9 & 16 & 73 \\ ca-GrQc & 1.614 & 1.600 & 1.598 & 1.600 & 70 & 83 & 517 & 371 \\ as20000102 & 1.349 & 1.330 & 1.336 & 1.339 & 144 & 194 & 31.75 & 298 \\ Erdős & 1.115 & 1.099 & 1.086 & 1.086 & 281 & 319 & 6,336 & 350 \\ Oregon & 1.612 & 1.612 & 1.586 & 1.603 & 814 & 912 & 19,357 & 1,568 \\ ca-HepPh & 2.621 & 2.610 & 2.605 & 2.610 & 412 & 621 & 7,570 & 1,075 \\ Caida & 1.379 & 1.364 & - & 1.374 & 2.047 & 2.580 & - & 4,789 \\ Twitter & 1.932 & 1.911 & - & - & 3,358 & 8,981 & - & - \\ Delicious & 2.012 & 1.931 & - & - & 7,287 & 25,823 & - & - \\ FourSquare & 1.491 & 1.401 & - & - & 19,461 & 68,981 & - & - \\ YoutubeSnap & 1.194 & 1.111 & - & - & 11,454 & 57,401 & - & - \\ \hline \hline \end{tabular} \end{table} TABLE II: The average final information centrality of 20 randomly chosen target nodes and running times for removing \(k=10\) edges using the FastICM (FIM), ApproxiSC (ASC), ExactSM (ESM) and Solver (SOL) algorithms on several real-world networks.

Fig. 8: Impact of error parameters \(\epsilon\) and \(\alpha\) in the ApproxiSC and FastICM algorithms tested on two networks, Hamster (a) and Virgili (b).

Fig. 7: Average information centrality of target nodes following edge removals for different algorithms on four networks: Bomb Train (a), Virgili (b), ca-GrQc (c), and Erdős (d).

We hope this work creates the necessary grounds for further studies of moderating critical nodes in networks and serves as the starting point for a long line of research on this topic. Below, we propose some potential avenues for future research to tackle the limitations of the present work.
* **Connectivity.** We considered the constraint of keeping the underlying network connected while removing edges. Motivated by various real-world applications, it would be important to investigate other notions of connectivity, e.g., \(t\)-connectivity (for \(t\geq 2\)), conductance, or algebraic definitions of connectivity.
* **Edge Costs.** In our setup, all edges have the same removal cost. However, in the real world, removing some edges might be more costly than removing others. Therefore, it would be interesting to investigate the problem when each edge has a cost assigned to it.
* **Multiple Target Nodes.** A natural generalization of the moderation problem is to analyze the setting where one aims to minimize the overall information centrality of several target nodes at once.
* **Weighted Networks.** In practice, edges have weights associated with them. For example, in a social network, an edge weight corresponds to the strength of the social tie between the corresponding two nodes, and in an internet network, it could correspond to the bandwidth of the link between two routers. We believe that generalizing our algorithms to the weighted setting requires the introduction of novel ideas and the advancement of existing techniques.
* **Correlation with diffusion models.** In the future, we will investigate how information centrality correlates with specific diffusion models (cascades, thresholds, etc.).
2309.04585
Asynchronous Distributed Optimization via ADMM with Efficient Communication
In this paper, we focus on an asynchronous distributed optimization problem. In our problem, each node is endowed with a convex local cost function, and is able to communicate with its neighbors over a directed communication network. Furthermore, we assume that the communication channels between nodes have limited bandwidth, and each node suffers from processing delays. We present a distributed algorithm which combines the Alternating Direction Method of Multipliers (ADMM) strategy with a finite time quantized averaging algorithm. In our proposed algorithm, nodes exchange quantized valued messages and operate in an asynchronous fashion. More specifically, during every iteration of our algorithm each node (i) solves a local convex optimization problem (for the one of its primal variables), and (ii) utilizes a finite-time quantized averaging algorithm to obtain the value of the second primal variable (since the cost function for the second primal variable is not decomposable). We show that our algorithm converges to the optimal solution at a rate of $O(1/k)$ (where $k$ is the number of time steps) for the case where the local cost function of every node is convex and not-necessarily differentiable. Finally, we demonstrate the operational advantages of our algorithm against other algorithms from the literature.
Apostolos I. Rikos, Wei Jiang, Themistoklis Charalambous, Karl H. Johansson
2023-09-08T20:27:42Z
http://arxiv.org/abs/2309.04585v1
# Asynchronous Distributed Optimization via ADMM ###### Abstract In this paper, we focus on an asynchronous distributed optimization problem. In our problem, each node is endowed with a convex local cost function, and is able to communicate with its neighbors over a directed communication network. Furthermore, we assume that the communication channels between nodes have limited bandwidth, and each node suffers from processing delays. We present a distributed algorithm which combines the Alternating Direction Method of Multipliers (ADMM) strategy with a finite time quantized averaging algorithm. In our proposed algorithm, nodes exchange quantized valued messages and operate in an asynchronous fashion. More specifically, during every iteration of our algorithm each node (i) solves a local convex optimization problem (for the one of its primal variables), and (ii) utilizes a finite-time quantized averaging algorithm to obtain the value of the second primal variable (since the cost function for the second primal variable is not decomposable). We show that our algorithm converges to the optimal solution at a rate of \(O(1/k)\) (where \(k\) is the number of time steps) for the case where the local cost function of every node is convex and not-necessarily differentiable. Finally, we demonstrate the operational advantages of our algorithm against other algorithms from the literature. ## I Introduction The problem of distributed optimization has received extensive attention in recent years. Due to the rise of large-scale machine learning [1], control [2], and other data-driven applications [3], there is a growing need to solve optimization problems that involve massive amounts of data. Solving these problems in a centralized way is proven to be infeasible since it is difficult or impossible to store and process large amounts of data on a single node. Distributed optimization is a method that distributes data across multiple nodes. Each node performs computations on its stored data and collaborates with others to solve the optimization problem collectively. This approach optimizes a global objective function by combining each node's local objective function and coordinating with the network. The advantage is reducing computational and storage requirements for individual nodes. However, frequent communication with neighboring nodes is necessary to update optimization variables. This can become a bottleneck with increasing nodes or data. To address this issue, recent attention from the scientific community focuses on developing optimization algorithms with efficient communication. This leads to enhancements on scalability and operational efficiency, while mitigating issues like network congestion, latency, and bandwidth limitations. **Existing Literature.** Most works in the literature assume that nodes can process and exchange real values. This may result in communication overhead, especially for algorithms requiring frequent and complex communication (see, e.g., [4, 5, 6, 7, 8, 9, 10]). In practical applications, nodes must exchange quantized messages to efficiently utilize network resources like energy and processing power. For this reason, recent research focuses on communication-efficient algorithms (e.g., [6, 7, 11, 12, 13, 14, 15]), but they often assume perfectly synchronized nodes or bidirectional communication, limiting their applicability. 
Addressing communication overhead remains a key challenge, necessitating the development of communication-efficient algorithms that can operate over directed networks asynchronously. Therefore, continued research in this area is crucial to overcoming this bottleneck and enhancing the performance of distributed optimization methods. **Main Contributions.** Existing algorithms in the literature often assume that nodes can exchange precise values of their optimization variables and operate synchronously. However, transmitting exact values (often irrational numbers) necessitates an infinite number of bits and becomes infeasible. Moreover, synchronizing nodes within a distributed network involves costly protocols, time-consuming to execute. In this paper, we present a distributed optimization algorithm, which aims to address these challenges. More specifically, we make the following contributions. **A.** We present a distributed optimization algorithm that leverages the advantages of the ADMM optimization strategy and operates over a directed communication graph. Our algorithm allows nodes to operate in an asynchronous fashion, and enables efficient communication as nodes communicate with quantized messages; see Algorithm 1. **B.** We prove that our algorithm converges to the optimal solution at a rate of \(O(1/k)\) even for non-differentiable and convex local cost functions (as it is the case for similar algorithms with real-valued states). This rate is justified in our simulations in which our algorithm exhibits com parable performance with real-valued communication algorithms while guaranteeing efficient (quantized) communication among nodes; see Section VI. Furthermore, we show that the optimal solution is calculated within an error bound that depends on the quantization level; see Theorem 1. ## II Notation and Preliminaries **Notation.** The sets of real, rational, integer and natural numbers are denoted by \(\mathds{R},\mathds{Q},\mathds{Z}\) and \(\mathds{I}\mathds{N}\), respectively. The symbol \(\mathds{Z}_{\geq 0}\) (\(\mathds{Z}_{>0}\)) denotes the set of nonnegative (positive) integer numbers. The symbol \(\mathds{R}_{\geq 0}\) (\(\mathds{R}_{>0}\)) denotes the set of nonnegative (positive) real numbers. The symbol \(\mathds{R}_{\geq 0}^{n}\) denotes the nonnegative orthant of the \(n\)-dimensional real space \(\mathds{R}^{n}\). Matrices are denoted with capital letters (e.g., \(A\)), and vectors with small letters (e.g., \(x\)). The transpose of matrix \(A\) and vector \(x\) are denoted as \(A^{\top}\), \(x^{\top}\), respectively. For any real number \(a\in\mathds{R}\), the floor \(\lfloor a\rfloor\) denotes the greatest integer less than or equal to \(a\) while the ceiling \(\lceil a\rceil\) denotes the least integer greater than or equal to \(a\). For any matrix \(A\in\mathds{R}^{n\times n}\), the \(a_{ij}\) denotes the entry in row \(i\) and column \(j\). By \(\mathds{1}\) and \(\mathds{I}\) we denote the all-ones vector and the identity matrix of appropriate dimensions, respectively. By \(\|\cdot\|\), we denote the Euclidean norm of a vector. **Graph Theory.** The communication network is captured by a directed graph (digraph) defined as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). This digraph consists of \(n\) (\(n\geq 2\)) agents communicating only with their immediate neighbors, and is static (i.e., it does not change over time). 
In \(\mathcal{G}\), the set of nodes is denoted as \(\mathcal{V}=\{v_{1},v_{2},...,v_{n}\}\), and the set of edges as \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\cup\{(v_{i},v_{i})\mid v_{i}\in\mathcal{V}\}\) (note that each agent also has a virtual self-edge). The cardinalities of the sets of nodes and edges are denoted as \(|\mathcal{V}|=n\) and \(|\mathcal{E}|=m\), respectively. A directed edge from node \(v_{i}\) to node \(v_{l}\) is denoted by \((v_{l},v_{i})\in\mathcal{E}\), and captures the fact that node \(v_{l}\) can receive information from node \(v_{i}\) at time step \(k\) (but not the other way around). The subset of nodes that can directly transmit information to node \(v_{i}\) is called the set of in-neighbors of \(v_{i}\) and is represented by \(\mathcal{N}_{i}^{-}=\{v_{j}\in\mathcal{V}\mid(v_{i},v_{j})\in\mathcal{E}\}\). The subset of nodes that can directly receive information from node \(v_{i}\) is called the set of out-neighbors of \(v_{i}\) and is represented by \(\mathcal{N}_{i}^{+}=\{v_{l}\in\mathcal{V}\mid(v_{l},v_{i})\in\mathcal{E}\}\). The _in-degree_ and _out-degree_ of \(v_{i}\) are denoted by \(\mathcal{D}_{i}^{-}=|\mathcal{N}_{i}^{-}|\) and \(\mathcal{D}_{i}^{+}=|\mathcal{N}_{i}^{+}|\), respectively. The diameter \(D\) of a digraph is the longest shortest path between any two nodes \(v_{l},v_{i}\in\mathcal{V}\). A directed _path_ from \(v_{i}\) to \(v_{l}\) of length \(t\) exists if we can find a sequence of agents \(i\equiv l_{0},l_{1},\ldots,l_{t}\equiv l\) such that \((l_{r+1},l_{r})\in\mathcal{E}\) for \(r=0,1,\ldots,t-1\). A digraph is _strongly connected_ if there exists a directed path from every node \(v_{i}\) to every node \(v_{l}\), for every \(v_{i},v_{l}\in\mathcal{V}\).

**ADMM Algorithm.** The standard ADMM algorithm [16] is designed to solve the following problem:
\[\min_{x\in\mathds{R}^{p},z\in\mathds{R}^{m}}\;f(x)+g(z),\quad\text{s.t.}\;Ax+Bz=c, \tag{1}\]
where \(A\in\mathds{R}^{q\times p}\), \(B\in\mathds{R}^{q\times m}\) and \(c\in\mathds{R}^{q}\) (for \(q,p,m\in\mathds{N}\)). In order to solve (1), the augmented Lagrangian is:
\[L_{\rho}(x,z,\lambda)=f(x)+g(z)+\lambda^{\top}(Ax+Bz-c)+\frac{\rho}{2}\|Ax+Bz-c\|^{2}, \tag{2}\]
where \(\lambda\in\mathds{R}^{q}\) is the Lagrange multiplier, and \(\rho\in\mathds{R}_{>0}\) is the penalty parameter. The primal variables \(x\), \(z\) and the Lagrange multiplier \(\lambda\) are initialized as \([x,z,\lambda]^{\top}=[x^{[0]},z^{[0]},\lambda^{[0]}]^{\top}\). Then, during every ADMM time step, \(x\), \(z\) and \(\lambda\) are updated as:
\[x^{[k+1]}=\operatorname*{arg\,min}_{x\in\mathds{R}^{p}}L_{\rho}(x,z^{[k]},\lambda^{[k]}), \tag{3}\]
\[z^{[k+1]}=\operatorname*{arg\,min}_{z\in\mathds{R}^{m}}L_{\rho}(x^{[k+1]},z,\lambda^{[k]}), \tag{4}\]
\[\lambda^{[k+1]}=\lambda^{[k]}+\rho(Ax^{[k+1]}+Bz^{[k+1]}-c), \tag{5}\]
where \(\rho\) in (5) is the penalty parameter from (2).

**Asymmetric Quantizers.** Quantization is a strategy that lessens the number of bits needed to represent information. This reduces the required communication bandwidth and increases power and computation efficiency. Quantization is mainly used to describe communication constraints and imperfect information exchanges between nodes [17]. The three main types of quantizers are (i) asymmetric, (ii) uniform, and (iii) logarithmic. In this paper, we rely on asymmetric quantizers to reduce the required communication bandwidth. Note that the results of this paper are transferable to other quantizer types (e.g., logarithmic or uniform).
Asymmetric quantizers are defined as \[q_{\Delta}^{a}(\xi)=\Big{\lfloor}\frac{\xi}{\Delta}\Big{\rfloor}, \tag{6}\] where \(\Delta\in\mathds{Q}\) is the quantization level, \(\xi\in\mathds{R}\) is the value to be quantized, and \(q_{\Delta}^{a}(\xi)\in\mathds{Q}\) is the quantized version of \(\xi\) with quantization level \(\Delta\) (note that the superscript "\(a\)" indicates that the quantizer is asymmetric). ## III Problem Formulation **Problem Statement.** Let us consider a distributed network modeled as a digraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with \(n=|\mathcal{V}|\) nodes. In our network \(\mathcal{G}\), we assume that the communication channels among nodes have limited bandwidth. Each node \(v_{i}\) is endowed with a scalar local cost function \(f_{i}(x):\mathds{R}^{p}\mapsto\mathds{R}\) only known to node \(v_{i}\). In this paper we aim to develop a distributed algorithm which allows nodes to cooperatively solve the following optimization problem \[\min_{x\in\mathds{R}^{p}} \sum_{i=1}^{n}f_{i}(x), \tag{7}\] where \(x\in\mathds{R}^{p}\) is the global optimization variable (or common decision variable). We will solve (7) via the distributed ADMM strategy. Furthermore, in our solution we guarantee efficient communication between nodes (due to communication channels of limited bandwidth in the network). **Modification of the Optimization Problem.** In order to solve (7) via the ADMM and guarantee efficient communication between nodes, we introduce (i) the variable \(x_{i}\) for every node \(v_{i}\), (ii) the constraint \(|x_{i}-x_{j}|\leq\epsilon\) for every \(v_{i},v_{j}\in\mathcal{V}\) (where \(\epsilon\in\mathds{R}\) is an error tolerance which is predefined), and (iii) the constraint that nodes communicate with quantized values. The second constraint is introduced to allow an asynchronous implementation of the distributed ADMM strategy, and the third constraint to guarantee efficient communication between nodes. Considering the aforementioned constraints (i), (ii) and (iii), (7) becomes: \[\min_{x_{i}} \sum_{i=1}^{n}f_{i}(x_{i}),i=1,...,n\] (8) s.t. \[|x_{i}-x_{j}|\leq\epsilon,\forall v_{i},v_{j}\in\mathcal{V},\] (9) nodes communicate with quantized values. (10) Let us now define a closed nonempty convex set \(\mathcal{C}\) as \[\mathcal{C}=\left\{\begin{bmatrix}x_{1}^{\mathrm{ T}}&x_{2}^{\mathrm{ T}}&\ldots&x_{n}^{\mathrm{ T}}\end{bmatrix}^{\mathrm{ T}}\in\mathds{R}^{np}\,:\,\left\|x_{i}-x_{j}\right\|\leq\epsilon\right\}. \tag{11}\] Furthermore, denote \(X\coloneqq\begin{bmatrix}x_{1}^{\mathrm{ T}}&x_{2}^{\mathrm{ T}}&\ldots&x_{n}^{\mathrm{ T}}\end{bmatrix}^{\mathrm{ T}}\) and its copy variable \(z\in\mathds{R}^{np}\). This means that (9) and (11) become \[X=z,\,\forall z\in\mathcal{C}. \tag{12}\] Now let us define the indicator function \(g(z)\) of set \(\mathcal{C}\) as \[g(z)=\left\{\begin{array}{ll}0,&\mbox{if}\,z\in\mathcal{C},\\ \infty,&\mbox{otherwise}.\end{array}\right. \tag{13}\] Incorporating (12) and (13) into (8), we have that (8) becomes \[\min_{z,x_{i}} \left\{\sum_{i=1}^{n}f_{i}(x_{i})+g(z)\right\},i=1,\ldots,n\] (14) s.t. \[X-z=0,\,\forall z\in\mathcal{C},\] nodes communicate with quantized values. As a result, in this paper, we aim to design a distributed algorithm that solves (14) via the distributed ADMM strategy. ## IV Preliminaries on Distributed Coordination We now present a definition of asynchrony (borrowed from [8]) that defines the operation of nodes in the network. 
Furthermore, we present a distributed coordination algorithm that operates with quantized values and is necessary for our subsequent development.

### _Definition of Asynchronous Operation_

During their optimization operation, nodes aim to coordinate in an asynchronous fashion. Specifically, let us assume that the iterations of the optimization operation start at time step \(t(0)\in\mathds{R}_{+}\). Furthermore, we assume that one or more nodes transmit values to their out-neighbors at a set of time instances \(\mathcal{T}=\{t(1),t(2),t(3),\ldots\}\). During the nodes' asynchronous operation, a message that is received at time step \(t(\eta_{1})\) from node \(v_{i}\) is processed at time step \(t(\eta_{2})\), where \(\eta_{2}>\eta_{1}\). This means that the message received at time step \(t(\eta_{1})\) suffers from a processing delay of \(t(\eta_{2})-t(\eta_{1})\) time steps. An example of how processing delays affect transmissions is shown in Fig. 1 (borrowed from [8]).

Fig. 1: Example of how processing and transmission delays affect the operation of nodes \(v_{1}\), \(v_{2}\), \(v_{3}\). Blue dots indicate the iterations and blue arrows indicate the transmissions. Transmissions occur at time steps \(t_{i}(\eta)\), and \(t_{i}(\eta+1)-t_{i}(\eta)\) is the processing delay, where \(i\in\{1,2,3\}\), \(\eta\in\mathds{Z}_{\geq 0}\). The time difference from the blue dot to the blue arrow is the transmission delay [8].

Note here that the nodes' states at time step \(t(\eta)\) are indexed by \(\eta\). This means that the state of node \(v_{i}\) at time step \(t(\eta)\) is denoted as \(x_{i}^{\eta}\in\mathds{R}^{p}\). We now present the following assumption, which is necessary for the asynchronous operation of every node.

**Assumption 1**: _The number of time steps required for a node \(v_{i}\) to process the information received from its in-neighbors is upper bounded by \(\mathcal{B}\in\mathds{N}\). Furthermore, the actual time (in seconds) required for a node \(v_{i}\) to process the information received from its in-neighbors is upper bounded by \(T\in\mathds{R}_{\geq 0}\)._

Assumption 1 states that there exists a finite number of steps \(\mathcal{B}\) before which all nodes have updated their states and proceed to perform transmissions to their neighboring nodes. The upper bound \(\mathcal{B}\) is translated to an upper bound \(T\) in actual time (in seconds). This is mainly because it is not possible for nodes to count the number of time steps elapsed in the network (and thus understand when \(\mathcal{B}\) time steps have passed). The value \(T\), however, can be counted by each node individually.

### _Asynchronous \(\max\)/\(\min\) - Consensus_

In asynchronous \(\max\)/\(\min\) consensus (see [18]), the update rule for every node \(v_{i}\in\mathcal{V}\) is:
\[x_{i}^{[k+\theta_{i}^{[k]}]}=\max_{v_{j}\in\mathcal{N}_{i}^{-}\cup\{v_{i}\}}\{x_{j}^{[k+\theta_{ij}^{[k]}]}\}, \tag{15}\]
where \(\theta_{i}^{[k]}\) is the update instance of node \(v_{i}\), \(x_{j}^{[k+\theta_{ij}^{[k]}]}\) are the states of the in-neighbors \(v_{j}\in\mathcal{N}_{i}^{-}\cup\{v_{i}\}\) during the time instant of \(v_{i}\)'s update, and \(\theta_{ij}^{[k]}\) are the asynchronous state updates of the in-neighbors of node \(v_{i}\) that occur between two consecutive updates of node \(v_{i}\)'s state. The asynchronous \(\max\)/\(\min\) consensus in (15) converges to the maximum value among all nodes in a finite number of steps \(s^{\prime}\leq D\mathcal{B}\) (see [18]), where \(D\) is the diameter of the network and \(\mathcal{B}\) is the upper bound on the number of time steps required for a node \(v_{j}\) to process the information received from its in-neighbors.

## V Distributed Asynchronous Optimization via ADMM with Efficient Communication

In this section we present a distributed algorithm which solves problem (14). Before presenting the operation of the proposed algorithm, we analyze the ADMM operation over problem (14). In (14), let us denote \(F(X)\coloneqq\sum_{i=1}^{n}f_{i}(x_{i})\). This means that the Lagrangian function is equal to
\[L(X,z,\lambda)=F(X)+g(z)+\lambda^{\mathrm{T}}(X-z), \tag{16}\]
where \(\lambda\in\mathds{R}^{np}\) is the Lagrange multiplier. We now make the following assumptions to solve problem (14).

**Assumption 2**: _Every cost function \(f_{i}:\mathds{R}^{p}\to\mathds{R}\) is closed, proper and convex._

**Assumption 3**: _The Lagrangian \(L(X,z,\lambda)\) has a saddle point. This means that there exists \((X^{*},z^{*},\lambda^{*})\) for which_
\[L(X^{*},z^{*},\lambda)\leq L(X^{*},z^{*},\lambda^{*})\leq L(X,z,\lambda^{*}), \tag{17}\]
_for all \(X\in\mathds{R}^{np}\), \(z\in\mathds{R}^{np}\), and \(\lambda\in\mathds{R}^{np}\)._

Assumption 2 means that the local cost function \(f_{i}\) of every node \(v_{i}\) can be non-differentiable (see [19]). Furthermore, Assumptions 2 and 3 mean that \(L(X,z,\lambda^{*})\) is convex in \((X,z)\) and that \((X^{*},z^{*})\) is a solution to problem (14) (see [19, 20]). Note that this is also based on the definition of \(g(z)\) in (13). Note here that our results extend naturally to strongly convex cost functions, since strong convexity implies convexity. Let us now focus on the Lagrangian of the problem in (14). At time step \(k\), the augmented Lagrangian of (14) is
\[L_{\rho}(X^{[k]},z^{[k]},\lambda^{[k]})=\sum_{i=1}^{n}f_{i}(x_{i}^{[k]})+g(z^{[k]})+\lambda^{[k]^{\mathrm{T}}}(X^{[k]}-z^{[k]})+\frac{\rho}{2}\|X^{[k]}-z^{[k]}\|^{2}=\sum_{i=1}^{n}\Big{(}f_{i}(x_{i}^{[k]})+\lambda_{i}^{[k]^{\mathrm{T}}}(x_{i}^{[k]}-z_{i}^{[k]})+\frac{\rho}{2}\|x_{i}^{[k]}-z_{i}^{[k]}\|^{2}\Big{)}+g(z^{[k]}), \tag{18}\]
where \(z_{i}\in\mathds{R}^{p}\) is the \(i^{th}\) element of vector \(z\). In (3)-(5) we ignore terms that are independent of the optimization variables \(x_{i},z\) of node \(v_{i}\). This means that (3)-(5) become:
\[x_{i}^{[k+1]}=\operatorname*{argmin}_{x_{i}}\;f_{i}(x_{i})+\lambda_{i}^{[k]^{\mathrm{T}}}x_{i}+\frac{\rho}{2}\|x_{i}-z_{i}^{[k]}\|^{2}, \tag{19}\]
\[z^{[k+1]}=\operatorname*{argmin}_{z}\;g(z)+\lambda^{[k]^{\mathrm{T}}}(X^{[k+1]}-z)+\frac{\rho}{2}\|X^{[k+1]}-z\|^{2}=\operatorname*{argmin}_{z}\;g(z)+\frac{\rho}{2}\|X^{[k+1]}-z+\frac{1}{\rho}\lambda^{[k]}\|^{2}, \tag{20}\]
\[\lambda_{i}^{[k+1]}=\lambda_{i}^{[k]}+\rho(x_{i}^{[k+1]}-z_{i}^{[k+1]}). \tag{21}\]
Note that for (20) we use the identity \(2a^{\mathrm{T}}b+\|b\|^{2}=\|a+b\|^{2}-\|a\|^{2}\) for \(a=\lambda^{[k]}/\rho\) and \(b=X^{[k+1]}-z\). Equations (19) and (21) can be executed independently by node \(v_{i}\) in a parallel fashion. Specifically, node \(v_{i}\) can solve (19) for \(x_{i}^{[k+1]}\) by a classical method (e.g., a proximity operator [19, Section 4]), and implement (21) for \(\lambda_{i}^{[k+1]}\) trivially. In (13), \(g(z)\) is the indicator function of the closed nonempty convex set \(\mathcal{C}\).
This means that (20) becomes \[z^{[k+1]}=\Pi_{\mathcal{C}}(X^{[k+1]}+\lambda^{[k]}/\rho), \tag{22}\] where \(\Pi_{\mathcal{C}}\) is the projection (in the Euclidean norm) onto \(\mathcal{C}\). It is important to note here that the elements of \(z\) (i.e., \(z_{1},z_{2},\ldots,z_{n}\)) should belong into the set \(\mathcal{C}\) in finite time. This is due to the definition of \(g(z)\) in (13). Specifically, if the elements of \(z\) do not belong in \(\mathcal{C}\) then \(g(z)=\infty\) (thus (20) cannot be executed). Therefore, we need to adjust the values of the elements of \(z\) so that they belong in the set \(\mathcal{C}\) in finite time (i.e., we need to set the elements of \(z\) such that \(\|z_{i}-z_{j}\|\leq\epsilon,\forall v_{i},v_{j}\in\mathcal{V}\)). Note that if \(z_{i}-z_{j}=0,\forall v_{i},v_{j}\in\mathcal{V}\), then every node \(v_{i}\in\mathcal{V}\) has reached consensus. Specifically, we can have that in finite time the state \(z_{i}\) becomes \[z_{i}=\frac{1}{n}\sum_{l=1}^{n}z_{l}^{[0]},\forall v_{i}\in\mathcal{V}, \tag{23}\] where \(z_{l}^{[0]}=x_{l}^{[k+1]}+\lambda_{l}^{[k]}/\rho\). Furthermore, \(\|z_{i}-z_{j}\|\leq\epsilon,\forall v_{i},v_{j}\in\mathcal{V}\) means that \[z_{i}\in[\frac{1}{n}\sum_{l=1}^{n}z_{l}^{[0]}-\frac{\epsilon}{2},\frac{1}{n} \sum_{l=1}^{n}z_{l}^{[0]}+\frac{\epsilon}{2}],\forall v_{i}\in\mathcal{V}, \tag{24}\] where \(z_{l}^{[0]}=x_{l}^{[k+1]}+\lambda_{l}^{[k]}/\rho\). This means that for every node \(v_{i}\), \(z_{i}\) enters a circle with its center at \(\frac{1}{n}\sum_{l=1}^{n}(x_{l}^{[k+1]}+\lambda_{l}^{[k]}/\rho)\) and its radius as \(\epsilon/2\). Finally, from (14), we have that each node in the network needs to communicate with its neighbors in an efficient manner. For this reason, we aim to allow each node \(v_{i}\) coordinate with its neighboring nodes by exchanging quantized values in order to fulfil (24). ### _Distributed Optimization Algorithm_ We now present our distributed optimization algorithm. The algorithm is detailed below as Algorithm 1 and allows each node in the network to solve the problem presented in (14). The operation of the proposed algorithm is based on two parts. During these parts, each node \(v_{i}\) (i) calculates \(x_{i}^{[k+1]}\), \(z_{i}^{[k+1]}\), \(\lambda_{i}^{[k+1]}\) according to (19)-(21) (see Algorithm 1), and (ii) coordinates with other nodes in a communication efficient manner in order to calculate \(z_{i}^{[k+1]}\) that belongs in \(\mathcal{C}\) in (11) (see Algorithm 2). Note that Algorithm 2 is a finite time coordination algorithm with quantized communication and is executed as a step of Algorithm 1. Note that during Algorithm 1, nodes operate in an asynchronous fashion. Synchronous operation requires synchronization among nodes or the existence of a global clock so that all nodes to agree on their update time. In our setting, asynchronous operation arises when each node (i) starts calculating \(x_{i}^{[k+1]}\), \(z_{i}^{[k+1]}\), \(\lambda_{i}^{[k+1]}\) according to (19)-(21) in Algorithm 1, and (ii) calculates \(z_{i}^{[k+1]}\) that belongs in \(\mathcal{C}\) in (11) in Algorithm 2. This can be achieved by making the internal clocks of all nodes have similar pacing, which will allow them to execute the optimization step at roughly the same time [21]. Furthermore, making the internal clocks of all nodes have similar pacing does not mean that we have to synchronize the clocks of the nodes (or their time-zones). 
Note that this is a common procedure in most modern computers as the clock pacing specification is defined within the Advanced Configuration and Power Interface (ACPI) specification [22]. We now make the following assumption which is important for the operation of our algorithm. **Assumption 4**: _The diameter \(D\) (or an upper bound) is known to every node \(v_{i}\) in the network._ Assumption 4 is necessary so that each node \(v_{i}\) is able to determine whether calculation of \(z_{i}\) that belongs in \(\mathcal{C}\) in (11) has been achieved in a distributed manner. We now present the details of Algorithm 1. **Input:** Strongly connected \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), parameter \(\rho\), diameter \(D\), error tolerance \(\epsilon\in\mathbb{Q}\), upper bound on processing delays \(\mathcal{B}\). Assumptions 1, 2, 3, 4 hold. \(k_{\text{max}}\) (ADMM maximum number of iterations). **Initialization:** Each node \(v_{i}\in\mathcal{V}\) sets randomly \(x^{[0]},z^{[0]},\lambda^{[0]}\), and sets \(\Delta=\epsilon/3\). **Iteration:** For \(k=0,1,2,\ldots,k_{\text{max}}\), each node \(v_{i}\in\mathcal{V}\) does the following: 1. Calculate \(x_{i}^{[k+1]}\) via (19); 2. Calculate \(z_{i}^{[k+1]}\) = Algorithm \(2(x_{i}^{[k+1]}+\lambda_{i}^{[k]}/\rho,D,\Delta,\mathcal{B})\); 3. Calculate \(\lambda_{i}^{[k+1]}\) via (21). **Output:** Each node \(v_{i}\in\mathcal{V}\) calculates \(x_{i}^{*}\) which solves problem (14) in Section III. **Input:**\(x_{i}^{[k+1]}+\lambda_{i}^{[k]}/\rho,D,\Delta,\mathcal{B}\). **Initialization:** Each node \(v_{i}\in\mathcal{V}\) does the following: 1. Assigns probability \(b_{li}\) to each out-neigbor \(v_{l}\in\mathcal{N}_{i}^{+}\cup\{v_{i}\}\), as follows \[b_{li}=\left\{\begin{array}{ll}\frac{1}{1+\mathcal{D}_{i}^{i}},&\text{if $l=i$ or $v_{l}\in\mathcal{N}_{i}^{+}$},\\ 0,&\text{if $l\neq i$ and $v_{l}\notin\mathcal{N}_{i}^{+}$};\end{array}\right.\] 2. flag\({}_{i}=0\), \(\xi_{i}=2\), \(y_{i}=2\)\(q_{\Delta}^{a}(x_{i}^{[k+1]}+\lambda_{i}^{[k]}/\rho)\) (see (6)); **Iteration:** For \(\eta=1,2,\ldots\), each node \(v_{i}\in\mathcal{V}\), does: 1. **if**\(\eta\mod(D\mathcal{B})=1\)**then** sets \(M_{i}=\lceil y_{i}/\xi_{i}\rceil\), \(m_{i}=\lfloor y_{i}/\xi_{i}\rfloor\); 2. broadcasts \(M_{i}\), \(m_{i}\) to every \(v_{l}\in\mathcal{N}_{i}^{+}\); receives \(M_{j}\), \(m_{j}\) from every \(v_{j}\in\mathcal{N}_{i}^{-}\); sets \(M_{i}=\max_{v_{j}\in\mathcal{N}_{i}^{-}\cup\{v_{i}\}}M_{j}\), \(m_{i}=\min_{v_{j}\in\mathcal{N}_{i}^{-}\cup\{v_{i}\}}m_{j}\); 3. sets \(d_{i}=\xi_{i}\); 4. **while**\(d_{i}>1\)**do \(4.1)**\(c_{i}^{[n]}=\lfloor y_{i}\ /\ \xi_{i}\rfloor\); 5. sets \(y_{i}=y_{i}-c_{i}^{[n]}\), \(\xi_{i}=\xi_{i}-1\), and \(d_{i}=d_{i}-1\); 6. transmits \(c_{i}^{[\eta]}\) to randomly chosen out-neighbor \(v_{l}\in\mathcal{N}_{i}^{+}\cup\{v_{i}\}\) according to \(b_{li}\); 7. receives \(c_{j}^{[n]}\) from \(v_{j}\in\mathcal{N}_{i}^{-}\) and sets \[y_{i}=y_{i}+\sum_{j=1}^{n}\sum_{r=0}^{B}w_{\eta-r,ij}^{[r]}\,\] (25) \[\xi_{i}=\xi_{i}+\sum_{j=1}^{n}\sum_{r=0}^{B}w_{\eta-r,ij}^{[r]}\,\] (26) where \(w_{\eta-r,ij}^{[r]}=1\) when the processing time of node \(v_{i}\) is equal to \(r\) at time step \(\eta-r\), so that node \(v_{i}\) receives \(c_{i}^{[\eta]}\), \(1\) from \(v_{j}\) at time step \(\eta\) (otherwise \(w_{\eta-r,ij}^{[r]}=0\) and \(v_{i}\) receives no message at time step \(\eta\) from \(v_{j}\)); 8. **if**\(\eta\mod(D\mathcal{B})=0\)**and**\(M_{i}-m_{i}\leq 1\)**then** sets \(z_{i}^{[k+1]}=m_{i}\Delta\) and stops operation. 
**Output:**\(z_{i}^{[k+1]}\).

**Algorithm 2** QuAsAvCo - Quantized Asynchronous Average Consensus

**Remark 1**: _It is important to note here that during the initialization of Algorithm 1, the error tolerance \(\epsilon\) is chosen to be a rational number (i.e., \(\epsilon\in\mathbb{Q}\)). This is not a limitation for the ADMM optimization process in Algorithm 1. The real-valued \(\epsilon\) can be chosen such that it can be represented as a rational value. Furthermore, this choice facilitates the operation of Algorithm 2. Specifically, a rational value for \(\epsilon\) facilitates the choice of a suitable quantization level \(\Delta\) (since \(\Delta=\epsilon/3\)). During the execution of Algorithm 2 nodes quantize their states, thus an error \(e_{q_{1}}\leq\Delta\) is imposed to every state. Then, Algorithm 2 converges to the quantized average thus, the final states of the nodes have an error \(e_{q_{2}}\leq\Delta\). This means that after executing Algorithm 2, we have \(|z_{i}-z_{j}|\leq 2\Delta<\epsilon\), and thus we have \(z_{i}^{[k+1]}\in\mathcal{C}\) in (11), \(\forall v_{i}\in\mathcal{V}\). 
For this reason, any choice of \(\Delta\) for which \(\Delta<\epsilon/2\) is suitable for the operation of our algorithm for a given error tolerance \(\epsilon\)._ **Remark 2**: _In practical applications, nodes do not know the value of \(\mathcal{B}\). However, \(B\) time-steps (which is its upper bound) is guaranteed to be executed within \(T\) seconds (see Assumption 1). As noted previously, consistent pacing of each node's clock ensures that the check for convergence at each node will happen at roughly the same time (see [21]). Therefore, at every \(DT\) seconds, each node checks whether Algorithm 2 can be terminated._ ### _Convergence of Algorithm 1_ We now analyze the convergence time of Algorithm 1 via the following theorem. Our theorem is inspired from [8] but is adjusted to the quantized nature of Algorithm 1. However, due to space limitations we omit the proof (we will include it at an extended version of our paper). **Theorem 1**: _Let us consider a strongly connected digraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). Each node \(v_{i}\in\mathcal{V}\), is endowed with a scalar local cost function \(f_{i}(x):\mathds{R}^{p}\mapsto\mathds{R}\), and Assumptions 1-4 hold. Furthermore, every node \(v_{i}\) has knowledge of a parameter \(\rho\), the network diameter \(D\), an error tolerance \(\epsilon\in\mathds{Q}\), and an upper bound on processing delays \(\mathcal{B}\). During the operation of Algorithm 1, let us consider the variables \(\{X^{[k]},z^{[k]},\lambda^{[k]}\}\), where \(X^{[k]}=[x_{1}^{[k]^{\intercal}},x_{2}^{[k]^{\intercal}},\ldots,x_{n}^{[k]^{ \intercal}}]^{\intercal}\) and \(\lambda^{[k]}=[\lambda_{1}^{[k]^{\intercal}},\lambda_{2}^{[k]^{\intercal}}, \ldots,\lambda_{n}^{[k]^{\intercal}}]^{\intercal}\); then, define \(\bar{X}^{[k]}=\frac{1}{k}\sum_{s=0}^{k-1}X^{[s+1]},\bar{z}^{[k]}=\frac{1}{k} \sum_{s=0}^{k-1}z^{[s+1]}\). During the operation of Algorithm 1 we have \[0 \leq L(\bar{X}^{[k]},\bar{z}^{[k]},\lambda^{*})-L(X^{*},z^{*}, \lambda^{*}) \tag{27}\] \[\leq\frac{1}{k}\left(\frac{1}{2\rho}\|\lambda^{*}-\lambda^{[0]}\| ^{2}+\frac{\rho}{2}\|X^{*}-z^{[0]}\|^{2}\right)+\mathcal{O}(2\Delta\sqrt{n}),\] for every time step \(k\), where \(\Delta\) is the quantization level for calculating \(z_{i}\in\mathcal{C}\) in (11) during the operation of Algorithm 2. It is important to note that in Theorem 1 we focus on the convergence of the optimization steps, i.e., the steps executed during the operation of Algorithm 1. Due to the operation of Algorithm 2 we have that in (27) an additional term \(\mathcal{O}(2\Delta\sqrt{n})\) appears. This term (as will be seen later in Section VI) affects the precision according to which the optimal solution is calculated. However, we can adjust Algorithm 2 to operate with a dynamically refined quantization level \(\Delta\). For example, we can initially set \(\Delta=\epsilon/3\) (where \(\epsilon\in\mathds{Q}\)). Then, execute Algorithm 2 during every time step \(k\) with quantization level \(\Delta^{\prime}=\frac{\Delta}{10(k+1)}\). Since we have \(\frac{\Delta}{10(k+1)}<\frac{\Delta}{10(k)}\) for every \(k\), then, Algorithm 2 will lead to a reduction of the error on the optimal solution that depends on the quantization level (i.e., the term \(\mathcal{O}(2\Delta\sqrt{n})\) in (27) will be reduced after every execution of Algorithm 2). However, please note that this analysis is outside of the scope of this paper and will be considered in an extended version. 
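To connect the pieces before the simulation results, the following sketch runs the per-node updates (19)-(21) on a toy instance. The scalar quadratic local costs and the replacement of Algorithm 2 by a directly computed quantized mean of \(x_{i}+\lambda_{i}/\rho\) (cf. (23) and the quantizer (6)) are simplifying assumptions made only for illustration; they are not the asynchronous, message-passing implementation described above.

```python
# A minimal, centralized simulation sketch of the per-node updates (19)-(21).
# Assumptions: scalar quadratic local costs f_i(x) = 0.5*p_i*x^2 + q_i*x (chosen so the
# x-update has a closed form), and the asynchronous quantized averaging of Algorithm 2
# is replaced by a direct quantized mean of x_i + lambda_i/rho, i.e. the value it
# converges to up to the quantization error.
import numpy as np

rng = np.random.default_rng(1)
n, rho, Delta, iters = 20, 1.0, 1e-3, 100

p = rng.uniform(1.0, 3.0, n)          # curvature of each local cost (positive)
q = rng.uniform(-2.0, 2.0, n)         # linear term of each local cost
x_opt = -q.sum() / p.sum()            # minimizer of sum_i f_i(x)

x = rng.normal(size=n)                # primal variables x_i
z = np.zeros(n)                       # copy variables z_i
lam = np.zeros(n)                     # multipliers lambda_i

for k in range(iters):
    # (19): closed-form x-update for the quadratic local costs
    x = (rho * z - q - lam) / (p + rho)
    # (23): z-update as a network average, quantized to a multiple of Delta
    # (stand-in for the output z_i = m_i * Delta of Algorithm 2)
    avg = np.mean(x + lam / rho)
    z[:] = np.floor(avg / Delta) * Delta
    # (21): multiplier update
    lam = lam + rho * (x - z)

print(f"optimal x*     : {x_opt:.5f}")
print(f"final mean x_i : {x.mean():.5f}")
```

As expected from Theorem 1, the iterates settle in a neighborhood of the optimal solution whose size shrinks with the quantization level \(\Delta\).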
## VI Simulation Results In this section, we present simulation results in order to demonstrate the operation of Algorithm 1 and its advantages. Furthermore, we compare Algorithm 1 against existing algorithms and emphasize on the introduced improvements. In Fig. 2, we focus on a network comprised of \(100\) nodes modelled as a directed graph. Each node \(v_{i}\) is endowed with a scalar local cost function \(f_{i}(x)=0.5x^{\top}P_{i}x+q_{i}^{\top}x+r_{i}\). This cost function is quadratic and convex. Furthermore, for \(f_{i}(x)\) we have that (i) \(P_{i}\) was initialized as the square of a randomly generated symmetric matrix \(A_{i}\) (ensuring it is positive definite), (ii) \(q_{i}\) is initialized as the negation of the product of the transpose of \(A_{i}\) and a randomly generated vector \(b_{i}\) (i.e., it is a linear term), (iii) and \(r_{i}\) is initialized as half of the squared norm of the randomly generated vector \(b_{i}\) (i.e., it is a scalar constant). We execute Algorithm 1 and we show how the nodes' states converge to the optimal solution for \(\epsilon=0.03,0.003,0.0003\), and \(\Delta=0.01,0.001,0.0001\), respectively. We plot the error \(e^{[k]}\) defined as \[e^{[k]}=\frac{\sqrt{\sum_{j=1}^{n}(x_{j}^{[k]}-x^{*})^{2}}}{\sqrt{\sum_{j=1}^{ n}(x_{j}^{[0]}-x^{*})^{2}}}, \tag{28}\] where \(x^{*}\) is the optimal solution of the optimization problem in (14). Note that from Remark 1, we have that any \(\Delta<\epsilon/2\) is suitable for the operation of Algorithm 1 for a given \(\epsilon\). In Fig. 2, we execute Algorithm 1 for \(\Delta=\epsilon/3\). We can see that Algorithm 1 converges to the optimal solution for the three different values of \(\epsilon\). However, Algorithm 1 is able to approximate the optimal solution with precision that depends on the quantization level (i.e., during Algorithm 1, nodes are able to calculate a neighborhood of the optimal solution). Reducing the quantization level \(\Delta\) allows calculation of the optimal solution with higher precision. Furthermore, we can see that after calculating the optimal solution our algorithm exhibits an oscillatory behavior due to quantized communication. This means quantized communication introduces nonlinearities to the consensus calculation which in turn affect the values of other parameters such as \(x\) and \(z\), and \(\lambda\) (see iteration steps \(1\), \(2\), \(3\)), and for this reason we have this oscillatory behavior. Finally, we can see that Algorithm 1 exhibits comparable performance with [24] (which is plotted until optimization step \(14\)) until the neighborhood of the optimal solution is calculated. However, in [24] nodes are able to exchange real-valued messages. Specifically, in [24] nodes are required to form the Hankel matrix and perform additional computations when the matrix loses rank. This requires nodes to exchange the exact values of their states. Therefore, the main advantage of Algorithm 1 compared to [24], is that it exhibits comparable performance while guaranteeing efficient (quantized) communication among nodes. ## VII Conclusions and Future Directions In this paper, we presented an asynchronous distributed optimization algorithm which combines the Alternating Direction Method of Multipliers (ADMM) strategy with a finite time quantized averaging algorithm. 
We showed that our proposed algorithm is able to calculate the optimal solution while operating over directed communication networks in an asynchronous fashion, and guaranteeing efficient (quantized) communication between nodes. We analyzed the operation of our algorithm and showed that it converges to a neighborhood of the optimal solution (that depends on the quantization level) at a rate of \(O(1/k)\). Finally, we demonstrated the operation of our algorithm and compared it against other algorithms from the literature. In the future, we aim to enhance the operation of our algorithm to avoid the oscillatory behavior after calculating the optimal solution. Furthermore, we plan to develop strategies that allow calculation of the _exact_ optimal solution while guaranteeing efficient communication among nodes. Finally, we will focus on designing efficient communication strategies for non-convex distributed optimization problems.
2305.20080
Findings of the VarDial Evaluation Campaign 2023
This report presents the results of the shared tasks organized as part of the VarDial Evaluation Campaign 2023. The campaign is part of the tenth workshop on Natural Language Processing (NLP) for Similar Languages, Varieties and Dialects (VarDial), co-located with EACL 2023. Three separate shared tasks were included this year: Slot and intent detection for low-resource language varieties (SID4LR), Discriminating Between Similar Languages -- True Labels (DSL-TL), and Discriminating Between Similar Languages -- Speech (DSL-S). All three tasks were organized for the first time this year.
Noëmi Aepli, Çağrı Çöltekin, Rob Van Der Goot, Tommi Jauhiainen, Mourhaf Kazzaz, Nikola Ljubešić, Kai North, Barbara Plank, Yves Scherrer, Marcos Zampieri
2023-05-31T17:55:21Z
http://arxiv.org/abs/2305.20080v1
# Findings of the VarDial Evaluation Campaign 2023 ###### Abstract This report presents the results of the shared tasks organized as part of the VarDial Evaluation Campaign 2023. The campaign is part of the tenth workshop on Natural Language Processing (NLP) for Similar Languages, Varieties and Dialects (VarDial), co-located with EACL 2023. Three separate shared tasks were included this year: Slot and intent detection for low-resource language varieties (SID4LR), Discriminating Between Similar Languages - True Labels (DSL-TL), and Discriminating Between Similar Languages - Speech (DSL-S). All three tasks were organized for the first time this year. ## 1 Introduction The workshop series on _NLP for Similar Languages, Varieties and Dialects_ (VarDial), traditionally co-located with international conferences, has reached its tenth edition. Since the first edition, VarDial has hosted shared tasks on various topics such as language and dialect identification, morphosyntactic tagging, question answering, and cross-lingual dependency parsing. The shared tasks have featured many languages and dialects from different families and data from various sources, genres, and domains (Aepli et al., 2022; Chakravarthi et al., 2021; Gaman et al., 2020; Zampieri et al., 2019, 2018, 2017; Malmasi et al., 2016; Zampieri et al., 2015, 2014). As part of the VarDial Evaluation Campaign 2023, we offered three shared tasks which we present in this paper: * **SID4LR:** Slot and intent detection for low-resource language varieties1 Footnote 1: Task organizers: Nömi Aepli, Rob van der Goot, Barbara Plank, Yves Scherrer. * True Labels2 Footnote 2: Task organizers: Marcos Zampieri, Kai North, Tommi Jauhainen. Footnote 3: Task organizers: Cágni Coltekin, Mourhaf Kazzz, Tommi Jauhainen, Nikola Ljubesic. DSL-TL and DSL-S continue the long line of language and dialect identification (Jauhiainen et al., 2019) shared tasks at VarDial, whereas the SID4LR features a task novel to the evaluation campaigns. This overview paper is structured as follows: in Section 2, we briefly introduce the three shared tasks. Section 3 presents the teams that submitted systems to the shared tasks. Each task is then discussed in detail, focusing on the data, the participants' approaches, and the obtained results. Section 4 is dedicated to SID4LR, Section 5 to DSL-TL, and Section 6 to DSL-S. ## 2 Shared Tasks at VarDial 2023 The evaluation campaign took place in January - February 2023. Due to the ACL placing the workshop at the EACL conference in early May, the schedule from the shared tasks' first announcement to completion was relatively tight. The call for participation in the shared tasks was first published in early January, the training data sets for the shared tasks were released on January 23rd, and the results were due to be submitted on February 27th.4 Footnote 4: [https://sites.google.com/view/vardia](https://sites.google.com/view/vardia) ### SID for Low-resource Language Varieties (SID4LR) The SID4LR shared task focused on Slot and Intent Detection (SID) for digital assistant data in three low-resource language varieties: Swiss German (GSW) from the city of Bern, South Tyrolean (DEST), and Neapolitan (NAP). Intent detection is the task of automatically classifying the intent of an utterance and slot detection aims at finding the relevant (labeled) span. Figure 1 illustrates these two tasks with an example. 
The objective of this shared task is to address the following question: _How can we best do zero-shot transfer to low-resource language varieties without standard orthography?_ The xSID-0.4 corpus5, which includes data from both Snips Coucke et al. (2018) and Facebook Schuster et al. (2019), constitutes the training data, providing labeled information for slot and intent detection in 13 different languages. The original training data is in English, but we also provided automatic translations of the training data into German, Italian, and other languages. These translations are obtained with the Fairseq library Ott et al. (2019), using spoken data for training (more details in van der Goot et al. (2021)). Bleu scores Papineni et al. (2002) were 25.93 and 44.73 for respectively German and Italian. Slot label annotations were transferred using the attention weights. Participants were allowed to use other data to train on as long as it was not annotated for SID in the target languages. Specifically, the following resources were allowed: Footnote 5: [https://bitbucket.org/robvanderg/sid](https://bitbucket.org/robvanderg/sid) 1. annotated data from other (related and unrelated) languages in the xSID-0.4 corpus; 2. raw text data from the target languages, if available (e.g., Wikipedia, web crawls); 3. pre-trained language models containing data from the target languages. It was not mandatory for the participants to provide systems for all tasks and languages; they had the option to only take part in a specific subset. We used the standard evaluation metrics for these tasks, namely the span F1 score for slots and accuracy for intents. ### Discriminating Between Similar Languages - True Labels (DSL-TL) Discriminating between similar languages (e.g., Croatian and Serbian) and national language varieties (e.g., Brazilian and European Portuguese) has been a popular topic at VarDial since its first edition. The DSL shared tasks organized from 2014 to 2017 Zampieri et al. (2017); Malmasi et al. (2016); Zampieri et al. (2015, 2014) have addressed this issue by providing participants with the DSL Corpus Collection (DSLCC) Tan et al. (2014), a collection of journalistic texts containing texts written in groups of similar languages (e.g., Indonesian and Malay) and language varieties (e.g., Brazilian and European Portuguese).6 The DSLCC was compiled assuming each instance's gold label is determined by where the text is retrieved from. While this is a straightforward and primarily accurate practical assumption, previous research Goutte et al. (2016) has shown the limitations of this problem formulation as some texts may present no linguistic marker that allows systems or native speakers to discriminate between two very similar languages or language varieties. Footnote 6: [http://ttg.uni-saarland.de/resources/DSLCC/](http://ttg.uni-saarland.de/resources/DSLCC/) At VarDial 2023, we tackle this important limitation by introducing the DSL True Labels (DSL-TL) shared task. DSL-TL provided participants with the DSL-TL dataset Zampieri et al. (2023), the first human-annotated language variety identification dataset where the sentences can belong to several varieties simultaneously. The DSL-TL dataset contains newspaper texts annotated by multiple native speakers of the included language and language varieties, namely English American and British varieties, Portuguese Brazilian and European varieties), and Spanish (Argentinian and Peninsular varieties). 
More details on the DSL-TL shared task and dataset are presented in Section 5. Figure 1: Example of the SID tasks. The **three target languages (NAP, GSW, DE-ST)** are in bold, the corresponding high-resource languages (DE and IT) and the translation (EN) are included for comparison. The _slot_ annotations are coloured: datetime and reminder/todo. The _intent_ for this sentence is reminder/set_reminder. ### Discriminating Between Similar Languages - Speech (DSL-S) In the DSL-S 2023 shared task, participants were using the training, and the development sets from the Mozilla Common Voice (CV, Ardila et al., 2020) to develop a language identifier for speech.7 The nine languages selected for the task come from four different subgroups of Indo-European or Uralic language families (Swedish, Norwegian Nynorsk, Danish, Finnish, Estonian, Moksha, Erzya, Russian, and Ukrainian). Footnote 7: Further information available at: https://dsl-s.g \begin{table} \begin{tabular}{l|c c c c} **Team** & **SID4LR** & **DSL-TL** & **DSL-S** & **System Description Paper** \\ \hline UBC & ✓ & & & Kwon et al. (2023) \\ Notre Dame & ✓ & & & Srivastava and Chiang (2023) \\ VaidyaKane & & ✓ & & Vaidya and Kane (2023) \\ ssl & & ✓ & & Hohl and Shim (2023) \\ UnibucNLP & & ✓ & & Gaman (2023) \\ SATLab & & ✓ & & \\ \end{tabular} \end{table} Table 1: The teams that participated in the VarDial Evaluation Campaign 2023. lation, and pre-training on the target languages. For the latter, they made use of additional external data from various sources for all three target languages for the training. Notre Dame:Team Notre Dame (Srivastava and Chiang, 2023) submitted a research paper to the VarDial workshop, within which they also described their participation in the intent detection subtask. The team applied zero-shot methods, i.e., they did not use any data from the target language in the training process. They fine-tuned monolingual language models9 with noise-induced data. The noising technique they applied is similar to that of Aepli and Sennrich (2022) with three main differences: they 1) add an additional noise type: _swapping_ between adjacent letters; 2) they employ higher levels of noise and include multiple copies of the fine-tuning data; and 3) remove the step of continued pre-training to avoid using any target language data. Footnote 9: German BERT: [https://huggingface.co/dbm](https://huggingface.co/dbm) dz/bert-base-german-uncased and Italian BERT: [https://huggingface.co/dbmdz/bert-base-i](https://huggingface.co/dbmdz/bert-base-i) Italian-uncased Baseline:The baseline we provided is the same as in the original xSID paper, trained on the English data, with an updated version of MaChAmp (van der Goot et al., 2021). The model uses an mBERT encoder and a separate decoder head for each task, one for slot detection (with a CRF layer) and one for intent classification. ### Results We evaluated the submitted systems according to accuracy for intents and according to the span F1 score for slots (where both span and label must match exactly). Table 2 contains the scores. For intent classification, the winner for all three languages is the team Notre Dame. Both teams beat the baseline by a large margin. All systems reached the highest scores on DE-ST and the lowest scores on GSW, but both participating teams managed to significantly close the gaps between the languages compared to the baseline. For slot detection, the UBC team outperformed the baseline for DE-ST and GSW but not for NAP. 
Again, GSW turned out to be the most difficult language variety of the three. We must note, however, that the UBC submission contained a large amount of ill-formed slots. Between 13% (DE-ST, NAP) and 28% (GSW) of predicted slots start with an I- label instead of B-; the evaluation script simply ignores such slots. Furthermore, a small number of predicted spans have inconsistent labels (e.g., I-datetime immediately followed by I-location). This suggests that the model architecture chosen by the UBC team was not appropriate for span labeling tasks and that a different architecture could have led to further improvements compared to the baseline. The baseline system, which uses a CRF prediction layer, did not produce any such inconsistencies. ### Summary The UBC submissions are based on a pre-trained multilingual language model (mT0), which was fine-tuned on the 12 languages of the xSID dataset. Among these languages are Italian and German, but all training sets except the English one have been produced by machine translation. This setup worked better than using only the related languages of xSID (IT and DE) or only English. Also, further data augmentation with paraphrasing and machine translation did not have any positive effect. These findings suggest that task-specific knowledge is more important than having access to linguistic material in the target languages (or even in related high-resource languages). The Notre Dame participation provides a somewhat contrasting result. They start with a monolingual BERT model of the related high-resource language (IT or DE) and use fine-tuning to make the model more robust to character-level noise. The possibility of including unrelated languages was not explored here. The contributions proposed by the participants are thus largely complementary, and it would be interesting to see if their combination leads to further improvements on the task. For instance, task-specific fine-tuning (using all of the xSID data) \begin{table} \begin{tabular}{l l|c c c} & & **Baseline** & **UBC** & **Notre Dame** \\ \hline \multirow{3}{*}{\begin{tabular}{l} **Intent** \\ **detection** \\ \end{tabular} } & **DE-ST** & 0.6160 & 0.8940 & **0.9420** \\ & **GSW** & 0.4720 & 0.8160 & **0.8860** \\ & **NAP** & 0.5900 & 0.8540 & **0.8900** \\ \hline \multirow{3}{*}{ \begin{tabular}{l} **Slot** \\ **detection** \\ \end{tabular} } & **DE-ST** & 0.4288 & **0.4692** & – \\ & **GSW** & 0.2530 & **0.2899** & – \\ & **NAP** & **0.4457** & 0.4215 & – \\ \end{tabular} \end{table} Table 2: Results for intent classification (accuracy) and slot detection (Span-F1 score). UBC submitted several models for intent detection, and here we report their best-performing system for each language. could be combined with language-specific fine-tuning (based on the noise induction task) and complemented with the baseline's CRF architecture to provide consistent slot labels. A striking finding of this shared task are the poor results on Swiss German compared to the other two low-resource varieties, Neapolitan and South-Tyrolean German. This may be due to the particular Swiss German dialect used in this dataset and/or to some translator-specific preferences or biases. Further analysis will be required to fully explain these differences. 
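For reference, the slot evaluation used above (exact-match span F1, with spans recovered from BIO tags) can be sketched roughly as follows. This is an illustrative re-implementation, not the official scoring script; it also shows why predicted spans that open with an I- tag receive no credit.

```python
def bio_spans(tags):
    """Extract (start, end, label) spans from a BIO tag sequence; spans must open with B-."""
    spans, start, label = [], None, None
    for i, tag in enumerate(list(tags) + ["O"]):   # sentinel closes a trailing span
        closes = tag == "O" or tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != label)
        if closes and start is not None:
            spans.append((start, i, label))
            start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
        # An I- tag continuing the open label extends the span; an I- tag with no
        # matching open span is simply ignored (the "ill-formed" case discussed above).
    return spans

def span_f1(gold_seqs, pred_seqs):
    """Exact-match span F1: a predicted span counts only if boundaries and label both agree."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_seqs, pred_seqs):
        g, p = set(bio_spans(gold)), set(bio_spans(pred))
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# A prediction that starts with I- yields no span and therefore no credit:
assert bio_spans(["I-datetime", "I-datetime"]) == []
assert bio_spans(["B-datetime", "I-datetime", "O"]) == [(0, 2, "datetime")]
```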
## 5 Discriminating Between Similar Languages - True Labels The DSL-TL shared task contained two tracks: * Three-way Classification:** In this track, systems were evaluated with respect to the prediction of all three labels for each language, namely the variety-specific labels (e.g., PT-PT or PT-BR) and the common label (e.g., PT). * Binary Classification:** In this track, systems were scored only on the variety-specific labels (e.g., EN-GB, EN-US). In addition to the two tracks mentioned above, we provided participants with the option of using external data sources (open submission) or only the DSL-TL dataset (closed submission). ### Dataset DataDSL-TL contains 12,900 instances split between three languages and six national language varieties, as shown in Table 3. Instances in the DSL-TL are short extracts (1 to 3 sentences long) from newspaper articles randomly sampled from two sources (Zellers et al., 2019; Tan et al., 2014). Considering the source's ground truth label, the DSL-TL creators randomly selected 2,500 instances for each Portuguese and Spanish variety and 1,500 instances for each English variety. AnnotationDSL-TL was annotated using crowdsourcing through Amazon Mechanical Turk (AMT).10 The annotation task was restricted to annotators based on the six national language variety countries, namely Argentina, Brazil, Portugal, Spain, United Kingdom, and the United States. The annotators were asked to label each instance with what they believed to be the most representative variety label, namely European (pt-PT) or Brazilian Portuguese (pt-BR), Castilian (es-ES) or Argentine Spanish (es-AR), and British (en-GB) or American English (en-US). The label distributions are shown in Table 3. The annotators were presented with three choices: (1) language variety A, (2) language variety B, or (3) both or neither for cases in which no clear language variety marker (either linguistic or named entity) was present in the text. The annotator agreement calculations and filtering carried out after the annotation stage are described in detail in the dataset description paper (Zampieri et al., 2023). Finally, the instances in DSL-TL have been split into training, development, and testing partitions, as shown in Table 4. Footnote 10: [https://www.mturk.com/](https://www.mturk.com/) ### Participants and Approaches Four teams provided submissions to the shared task. VaidyaKane:All submissions from the team VaidyaKane used a pre-trained multilingual XLM-RoBERTa fine-tuned to language identification11 to classify the language of the sentence (Conneau et al., 2020). After the initial language identification, they experimented with several language-specific BERT models to identify the exact variety. Their best submission on track one used "bert-base-uncased"12 for English (Devlin et al., 2019), "bertin-project/bertin-roberta-base-spanish"13 for Spanish (la Rosa et al., 2022), and "neuralmind/bert-base-portuguese-cased"14 for Portuguese (Souza et al., 2020). On track two, the models for Spanish and Portuguese were the same, but "roberta-base"15 was used for English (Liu et al., 2019). 
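A rough sketch of this two-stage routing, an off-the-shelf language identifier followed by a per-language variety classifier, is given below. The language-detection checkpoint is the one referenced in the team description; the per-language variety classifiers stand in for fine-tuned models that are not publicly released, so they are passed in as placeholder callables.

```python
from transformers import pipeline

# Stage 1: off-the-shelf language identification (checkpoint cited in the team description);
# for this model the predicted labels are ISO codes such as "en", "pt", "es".
lang_id = pipeline("text-classification",
                   model="papluca/xlm-roberta-base-language-detection")

def predict_variety(sentence, variety_classifiers):
    """Stage 2: route the sentence to a per-language variety classifier.

    `variety_classifiers` maps a language code to a fine-tuned text-classification
    pipeline (e.g., an English model separating EN-GB from EN-US); the teams'
    fine-tuned checkpoints are not public, so any callable with this interface works.
    """
    lang = lang_id(sentence)[0]["label"]
    clf = variety_classifiers.get(lang)
    if clf is None:                      # no variety model available: fall back to the neutral label
        return lang.upper()
    return clf(sentence)[0]["label"]     # e.g., "EN-GB", "PT-BR", "ES-AR"
```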
Footnote 11: [https://huggingface.co/papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection) Footnote 12: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased) Footnote 13: [https://huggingface.co/bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) Footnote 14: [https://huggingface.co/neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) Footnote 15: [https://huggingface.co/roberta-base](https://huggingface.co/roberta-base) In addition, for both tracks they also used names of people obtained from Wikidata (Vrandečić and Krötzsch, 2014). UnibucNLP:On track one, the UnibucNLP team submitted a run using an XGBoost stacking ensemble (Chen and Guestrin, 2016). The classifier stack for the ensemble consisted of one SVM and one KRR classifier. For track two, the stack classifiers were the same, but Logistic Regression was used for the stacking ensemble. SATLab:On both tracks, the SATLab team used a Logistic Regression classifier from the LIBLinear package with character n-grams from one to five weighted by BM25 and L2 normalization. The n-grams had to appear in at least two different sentences in the training data. The system was very similar to the one used by Bestgen (2021) in the Dravidian Language Identification (DLI) shared task in 2021 (Chakravarthi et al., 2021). ### Results Tables 5 to 8 show the recall, precision, and F1 scores for the baselines and best submissions for all track combinations. Team UnibucNLP (Gaman, 2023) achieved first place out of nine submissions on the closed version of track one. Their XGBoost stacking ensemble attained an F1 score of 0.5318. The results were still slightly worse than the multilingual BERT (mBERT) (Devlin et al., 2019) and XLM-RoBERTa (XLM-R) (Liu et al., 2019) baselines. All other submissions achieved slightly worse F1 scores. In second place, team SATLab's logistic regressor obtained an F1 score of 0.4905. In third place, team ssl's SVM produced an F1 score of 0.4817.
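For orientation, a character n-gram system in the spirit of the SATLab and ssl submissions can be sketched with scikit-learn as follows; BM25 weighting is approximated here with TF-IDF, so this is only indicative of the submitted systems, not a reproduction.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

# Character 1-5 grams, L2-normalised, kept only if they occur in at least two
# training sentences (min_df=2), feeding a LIBLINEAR logistic regression.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 5), min_df=2, norm="l2"),
    LogisticRegression(solver="liblinear", C=1.0),
)

# train_texts/train_labels and test_texts/test_labels stand for the DSL-TL splits
# (labels such as "PT-BR", "PT-PT", "PT", ...); shown here only as variable names.
# model.fit(train_texts, train_labels)
# macro_f1 = f1_score(test_labels, model.predict(test_texts), average="macro")
```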
The similarity between the top three F1 scores shows that automatically differentiating between similar language varieties is a challenging task, especially when taking into consideration neutral labels (EN, ES, or PT), as well as only using the provided data. \begin{table} \begin{tabular}{l|l l l l} **Rank** & **Model** & **R** & **P** & **F1** \\ \hline & baseline-ANB & 0.8200 & 0.7990 & 0.7990 \\ & baseline-NB & 0.8110 & 0.7920 & 0.7940 \\ & baseline-XLM-R & 0.7830 & 0.7820 & 0.7800 \\ 1 & run-1-ssl & 0.7521 & 0.7885 & 0.7604 \\ & baseline-mBERT & 0.7600 & 0.7530 & 0.7550 \\ 2 & run-2-SATLab & 0.7520 & 0.7430 & 0.7452 \\ 3 & run-1-UnibucNLP & 0.6502 & 0.7756 & 0.6935 \\ \end{tabular} \end{table} Table 6: The macro average scores of the best run for each team on **closed track 2**. \begin{table} \begin{tabular}{l|l l l|l} **Variable** & **Train** & **Dev** & **Test** & **Total** \\ \hline Portuguese & 3,467 & 991 & 495 & 4,953 \\ Spanish & 3,467 & 985 & 495 & 4,947 \\ English & 2,097 & 603 & 300 & 3,000 \\ \hline Total & & & & 12,900 \\ \end{tabular} \end{table} Table 4: DSL-TL’s train, dev, and test splits are 70/20/10% of the total number of instances, respectively. \begin{table} \begin{tabular}{l|l l l l} **Rank** & **Model** & **R** & **P** & **F1** \\ \hline 1 & run-1-VaidyaKa & 0.8705 & 0.8523 & 0.8561 \\ & baseline-NB & 0.8200 & 0.8030 & 0.8030 \\ 2 & run-1-ssl & 0.7647 & 0.7951 & 0.7729 \\ \end{tabular} \end{table} Table 7: The macro average scores of the best run for **open track 1**. \begin{table} \begin{tabular}{l|l l l l} **Rank** & **Model** & **R** & **P** & **F1** \\ \hline 1 & baseline-mBERT & 0.5490 & 0.5450 & 0.5400 \\ & baseline-XLM-R & 0.5280 & 0.5490 & 0.5360 \\ 1 & run-3-UnibucNLP & 0.5291 & 0.5542 & 0.5318 \\ & baseline-NB & 0.5090 & 0.5090 & 0.5030 \\ 2 & run-1-sATLab & 0.4987 & 0.4896 & 0.4905 \\ 3 & run-1-ssl & 0.4978 & 0.4734 & 0.4817 \\ \end{tabular} \end{table} Table 5: The macro average scores of the best run for each team on **closed track 1**. \begin{table} \begin{tabular}{l|l l l} **Rank** & **Model** & **R** & **P** & **F1** \\ \hline 1 & run-1-VaidyaKa & 0.8705 & 0.8523 & 0.8561 \\ & baseline-NB & 0.8200 & 0.8030 & 0.8030 \\ 2 & run-1-ssl & 0.7647 & 0.7951 & 0.7729 \\ \end{tabular} \end{table} Table 8: The macro average scores of the best run for each team on **open track 2**. Team ssl (Hohl and Shim, 2023) achieved the best performance out of ten submissions on the closed version of track two. Their SVM was able to more effectively differentiate between six labels that did not include the aforementioned neutral labels (en-GB, en-US, es-AR, es-ES, pt-PT, or pt-BR). They achieved an F1 score of 0.7604. Their results were closely followed by the performance of SATLab's logistic regressor, having attained an F1 score of 0.7452, and UnibucNLP's XGBoost stacking ensemble with an F1 score of 0.6935. All submissions were clearly behind the adaptive and traditional Naive Bayes baselines, which were identical to the systems winning the Identification of Languages and Dialects of Italy (ITDI) shared task in 2022 (Jauhiainen et al., 2022; Aepli et al., 2022). SVMs are well-known to perform well when there is a clear distinction between class boundaries. This likely explains why team ssl's SVM has outperformed UnibucNLP's ensemble since neutral labels that contained features of both classes were no longer considered. Team VaidyaKane's (Vaidya and Kane, 2023) submission to the open version of track 1 outperformed all other open and closed submissions for this track. 
Their two-stage transformer-based model achieved an F1 score of 0.5854. Team ssl was the only other team to submit predictions for open tracks 1 and 2. Their open submission for track 1 achieved an F1 score of 0.4889 which surpassed that of their closed submission for this track. The use of additional data was, therefore, found to improve overall performances. Team VaidyaKane produced the highest F1 score on the open version of track 2. They achieved an F1 score of 0.8561, which was greater than all other open and closed submissions for either track. Team ssl also saw a further improvement in their SVM's model performance when using additional data for track 2. Their SVM model produced an F1 score of 0.7729, which was superior to their closed-track submission. These performances show that the use of additional data is beneficial and further proves that the classification of language varieties is an easier task than the classification of language varieties with neutral labels. ### Summary The DSL-TL shared task introduced a novel problem formulation in language variety identification. The new human-annotated dataset with the presence of the 'both or neither' class represent a new way of looking at the problem. Given the similarity between language varieties, we believe this new problem formulation constitutes a fairer way of evaluating language identification systems, albeit rather challenging in terms of performance as demonstrated in this shared task. ## 6 Discriminating Between Similar Languages - Speech ### Dataset The DSL-S shared task uses Mozilla Common Voice data (version 12 released in Dec 2022) in 9 languages from two language families. The data comes from volunteers reading a pre-selected set of sentences in each language. The audio is recorded through a web-based interface. For training and development sets, we follow the training and development set of the source data. Even though the test data used in this task comes from the Common Voice test data for the nine languages, we do not use the entire test set of the CV release but sample 100 audio files for each language. There is no overlap of sentences and speakers between the data sets. Table 9 presents the test set's statistics. The total amount of unpacked speech data is around 15 gigabytes. The data includes severe class imbalance, as well as substantial differences in the number of speakers. Generalization from a small number of speakers is a known challenge in similar speech data sets, including earlier VarDial evaluation campaigns.18 The CV data set makes this task further challenging since the variety of speakers in the test set is much larger than the training and the development sets. Footnote 18: See Jauhiainen et al. (2018) and Wu et al. (2019) for earlier approaches to this problem. Similar to the earlier VarDial shared tasks with audio data (Zampieri et al., 2017, 2018, 2019), we provided 400-dimensional i-vector and 512-dimensional x-vector features, both extracted using Kaldi (Povey et al., 2011). Unlike earlier tasks, however, the raw audio data was also available to the potential participants. ### Participants and Approaches Two teams registered for the shared task, but neither provided any submissions. In this section, we briefly introduce the baselines we provided. For the closed track, we provided a linear SVM baseline with x-vectors features (Snyder et al., 2018). The SVM baseline was implemented using scikit-learn [20], and tuned only for the SVM margin parameter 'C'. 
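A minimal sketch of such a closed-track baseline, a linear SVM over the provided 512-dimensional x-vectors with a small grid over the margin parameter C, is shown below; the feature file names and the grid values are assumptions, not the exact setup of the official baseline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import f1_score

# X_*: (n_samples, 512) x-vector arrays; y_*: language labels.
# The file names below are placeholders for the released feature files.
X_train = np.load("xvectors_train.npy")
y_train = np.load("labels_train.npy", allow_pickle=True)
X_dev = np.load("xvectors_dev.npy")
y_dev = np.load("labels_dev.npy", allow_pickle=True)

# Tune only the margin parameter C, as described above.
search = GridSearchCV(LinearSVC(), {"C": [0.01, 0.1, 1, 10, 100]},
                      scoring="f1_macro", cv=5)
search.fit(X_train, y_train)

pred = search.best_estimator_.predict(X_dev)
print("dev macro-F1:", f1_score(y_dev, pred, average="macro"))
```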
The open track baseline uses two baselines - the XLS-R multilingual pre-trained transformer speech model [14]19 with a classification head for direct speech classification, and a multilingual speech recognition system 20 based on XLS-R [1] to transcribe the speech, and uses Naive Bayes [13, 14] to identify the language.21 Footnote 19: [https://huggingface.co/facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) Footnote 20: [https://huggingface.co/voidful/wav2vc2-xlsr-multilingual-56](https://huggingface.co/voidful/wav2vc2-xlsr-multilingual-56) Footnote 21: [https://github.com/tosaja/TunPRF-NADI](https://github.com/tosaja/TunPRF-NADI) ### Results The scores for the baselines are presented in Table 10. The SVM baseline performs particularly badly on the test set (the development precision, recall, and F1 scores are 0.4088, 0.4011, 0.3777, respectively). The reason behind this is likely due to the fact that, although they were used for language identification in earlier research, the x-vectors are designed for speaker identification. Given the variability of speaker features in the test set, any classifier relying on speaker features are likely to fail. The baselines relying on pre-trained transformer models perform substantially better, with the direct speech classifier being more than 10 points behind the transcription and text classification approach. While the direct speech classification approach could be further improved through hyperparameter optimisation (currently we fine-tune for 3 epochs with a batch size of 24 and a learning rate of 1e-04) and a selection of the layer from which the features are extracted (related work suggests that lower transformer layers are more informative for discriminating between languages [1]), these baseline results show that transcription and text classification might still be a shorter path to a reasonably performing system for discriminating between similar languages than direct speech classification. ### Summary Although we did not have any submissions for this shared task, we believe that the task includes many interesting challenges. Based only on our baseline results, identifying languages from a limited amount of data (without pre-trained speech models) seems challenging, yet this is particularly interesting for low-resource settings and for investigating differences and similarities for closely related language varieties. We hope to see more interest in the community for language/dialect identification from speech. ## 7 Conclusion This paper presented an overview of the three shared tasks organized as part of the VarDial Evaluation Campaign 2023: Slot and intent detection for low-resource language varieties (SID4LR), Discriminating Between Similar Languages - True La \begin{table} \begin{tabular}{l|r r r} **System** & **P** & **R** & **F1** \\ \hline SVM + x-vectors & 0.0914 & 0.1189 & 0.0876 \\ XLS-R & 0.6736 & 0.5953 & 0.5856 \\ XLS-R + NB & 0.7331 & 0.7167 & 0.7031 \\ \end{tabular} \end{table} Table 10: Baseline scores of the DSL-S shared task. 
\begin{table} \begin{tabular}{l|r r r r r r r r} & \multicolumn{3}{c}{**Train**} & \multicolumn{3}{c}{**Dev**} & \multicolumn{3}{c}{**Test**} \\ \cline{2-9} & n & spk & duration & n & spk & duration & n & spk & duration \\ \hline **DA** & 2734 & 3 & 3:17:38 & 2105 & 10 & 2:50:46 & 100 & 48 & 0:07:50 \\ **ET** & 3137 & 221 & 5:49:04 & 2638 & 167 & 4:57:54 & 100 & 88 & 0:11:12 \\ **FI** & 2121 & 3 & 2:43:47 & 1651 & 13 & 1:59:23 & 100 & 63 & 0:07:46 \\ **MDF** & 173 & 2 & 0:15:39 & 54 & 1 & 0:04:39 & 100 & 7 & 0:08:40 \\ **MYV** & 1241 & 2 & 1:58:26 & 239 & 1 & 0:22:55 & 100 & 9 & 0:09:07 \\ **NO** & 314 & 3 & 0:22:43 & 168 & 4 & 0:13:28 & 100 & 18 & 0:07:35 \\ **RU** & 26043 & 252 & 37:16:50 & 10153 & 394 & 15:23:17 & 100 & 98 & 0:09:15 \\ **SV** & 7421 & 22 & 8:11:54 & 5012 & 73 & 5:32:33 & 100 & 89 & 0:07:24 \\ **UK** & 15749 & 28 & 18:38:31 & 8085 & 103 & 10:58:25 & 100 & 28 & 0:08:22 \\ \end{tabular} \end{table} Table 9: Number of instances (n), number of speakers (spk) and total duration (hour:minute:seconds) for each split of the DSL-S shared task. The speaker numbers are approximated based on client id detection by CV. bels (DSL-TL), and Discriminating Between Similar Languages - Speech (DSL-S). ## Acknowledgements We thank all the participants for their interest in the shared tasks. The work related to the SID4LR shared task has received funding from the Swiss National Science Foundation (project nos. 191934 and 176727) and ERC Grant 101043235. The work related to the DSL-TL and DSL-S shared tasks has received partial funding from the Academy of Finland (funding decision no. 341798). The work related to the DSL-S shared task has received funding from the Slovenian Research Agency within the research project J7-4642 and the research programme P6-0411.
2304.00079
Dielectric Barrier Discharge Actuators: Experimental and Numerical Study of Momentum Injection into Co-flow and Counter-flow Freestream
Dielectric barrier discharge (DBD) plasma actuators can generate a wall jet without moving parts by interacting with ionized and neutral molecules in an electric field. The coupling between electrohydrodynamic (EHD), turbulence, inertial and viscous effects in the flow boundary layer remains poorly understood and requires investigation. We present an experimental investigation of momentum injection by DBD actuators into the free stream flow with Re = 35,000 and 75,000 in co-flow and counter-flow scenarios over a range of VAC = 12 kV - 19.5 kV peak-to-peak at a frequency of 2 kHz. In the co-flow configuration, the DBD actuator injects momentum into the boundary layer. In co-flow, the momentum injection results in the thinning boundary layer, while in the counter-flow configuration, flow separation can occur. For the tested condition, a separation bubble is observed at Re = 35,000. The momentum displacement in the counter-flow configuration is six times greater than the EHD jet momentum in a quiescent environment. Both co-flow and counter-flow momentum injections show diminishing effects with increasing external velocities. This work highlights that the resulting flow pattern is not a simple superposition of the EHD jet and the free stream but is determined by the coupling of inertial, viscous, and Coulombic effects in the EHD-driven wall jet and the external flow. The velocity profiles and momentum measurements presented here can be used to validate numerical models and inform the design of DBD actuators for active flow control.
Anthony Tang, Nathan Li, Benjamin Price, Alexander Mamishev, Alberto Aliseda, Igor Novosselov
2023-03-31T19:05:47Z
http://arxiv.org/abs/2304.00079v1
Dielectric Barrier Discharge Actuators: Experimental and Numerical Study of Momentum Injection into Co-flow and Counter-flow Freestream ###### Abstract Dielectric barrier discharge (DBD) plasma actuators can generate a wall jet without moving parts by interacting with ionized and neutral molecules in an electric field. The coupling between electrohydrodynamic (EHD), turbulence, inertial and viscous effects in the flow boundary layer remains poorly understood and requires investigation. We present an experimental investigation of momentum injection by DBD actuators into the free stream flow with \(\mathrm{Re=35,000}\) and \(\mathrm{75,000}\) in co-flow and counter-flow scenarios over a range of \(\mathrm{V_{AC}=12~{}kV}\) - \(\mathrm{19.5~{}kV}\) peak-to-peak at a frequency of \(\mathrm{2~{}kHz}\). In the co-flow configuration, the DBD actuator injects momentum into the boundary layer. In co-flow, the momentum injection results in the thinning boundary layer, while in the counter-flow configuration, flow separation can occur. For the tested condition, a separation bubble is observed at \(\mathrm{Re=35,000}\). The momentum displacement in the counter-flow configuration is six times greater than the EHD jet momentum in a quiescent environment. Both co-flow and counter-flow momentum injections show diminishing effects with increasing external velocities. This work highlights that the resulting flow pattern is not a simple superposition of the EHD jet and the free stream but is determined by the coupling of inertial, viscous, and Coulombic effects in the EHD-driven wall jet and the external flow. The velocity profiles and momentum measurements presented here can be used to validate numerical models and inform the design of DBD actuators for active flow control. DBD, active flow control, plasma/flow interaction, separation control ## 1 Introduction Non-thermal plasma devices have been proposed as actuators for active flow control [1-7]; these plasma actuators have the potential to instantaneously change flow profiles while staying silent and compact [8-10]. Corona discharge or dielectric barrier discharge (DBD), actuators generate ions when the electric field exceeds the dielectric strength of the working fluid. The interaction between free ions, accelerated by the electric field, the working fluid, and walls can be utilized in aerodynamic drag reduction [11-13], lift augmentation [10, 14], separation control [15, 16], and electric propulsion [17-20]. Despite lower electromechanical efficiency than corona-driven actuators, DBD actuators are more stable and can effectively provide a consistent electrohydrodynamic (EHD) forcing [4, 9]. Current DBD applications are limited to flow control at low-speed conditions due to their relatively weak EHD forces [17, 21, 22]. Scientific studies have explored these multiphysics phenomena to optimize electrical and mechanical effects in DBD systems [16, 23-29]. Modeling DBD discharge from first principles is cost-prohibitive due to the large separation of timescales, so studies of corona-driven ionic flows can be relevant to gain insight into the flow-ion interactions [7, 25, 27, 30]. Early numerical efforts led to the development of simplified DBD ion injection models, including the Suzen & Huang [31], Orlov [32], and Shyy [33] models that were able to predict quiescent EHD flow through the interactions of positive and negative ion charges and the external environment [9, 29]. 
More recent work has pushed past these earlier models to numerically explore the performance of DBDs, such as the evolution of the velocity field in the DBD-driven jet [28]. In addition, other strategies have been proposed for more robust, computationally-lean momentum injection models for EHD-driven jets [34]. Nearly all these models were developed and validated in quiescent conditions and are yet to be tested in an external flow condition with significant inertial and viscous flow interactions. Several authors have noted and presented varying success in quiescent conditions with different numerical approaches, including direct numerical simulations and turbulence RANS closure models [35-37]. Most reports describing EHD-flow interaction are currently limited to analysis of electroconvective instabilities at very low Reynolds numbers. Electroconvection (EC) phenomenon was first reported by G. I. Taylor in 1966, describing cellular convection in the liquid droplet [38]. Since then, EC has been observed in other systems where electric force interacts with fluids. In nonequilibrium EHD systems [38-59], poorly conductive leaky dielectric fluid acquires unipolar charge injection at the surface interface in response to the electric field. In charge-neutral electrokinetic (EK) systems, EC is triggered by the electro-osmotic slip of electrolyte in the electric double layer at membrane surfaces [60-71]. In 3D shear flow, the EHD addition to crossflow forms streamwise rolling patterns as observed numerically [57, 72-74] and experimentally [75, 76]. 2D and 3D flow analysis of the DBD momentum injection in the shear flow is missing from the published literature. A mechanistic understanding of the interaction between discharge and fluid flow in the presence of an external flow is needed to inform the development of DBD actuators for active flow control. To maximize the effect of DBD actuators, recent experimental work varied actuator geometries such as electrode shapes, number of electrodes, and their placement on an aerodynamic surface. However, most of the fundamental EHD studies were performed in a quiescent environment; these actuators have not been well studied in the presence of an external freestream flow, especially at low to moderate velocity relevant to flow separation control [77, 78]. Several airfoil and small-scale unmanned aerial vehicle (UAV) studies have explored the ability of DBD actuators to manipulate lift and drag forces; however, these studies did not provide insight into the fluid flowfield and the underlying physics responsible for the lift and drag changes [79, 80]. The two traditional external flow conditions include co-flow, when the jet direction is the same as the external flow, and counter-flow when the momentum injection is opposite to the external flow. Pereira et al. reported force measurements in both co- and counter-flow DBD injection. They found that the total EHD thrust (or the difference between EHD thrust and shear stress at the surface) was identical for both co-flow and counter-flow [81]. However, the range of freestream velocities (\(10-60\) m/s) in increments of \(10\) m/s did not address the regime where the EHD velocity equals the external flow (\(0-10\) m/s) [81]. Probing the underlying fluid dynamics requires measurement of the velocity profiles near the momentum injection location. Bernard et al. 
reported on the velocity profiles of DBD actuators in co-flow and found that the effect of the DBD injection diminished at higher external velocities; however, the authors did not investigate counter-flow conditions [82]. The literature does not report experimental work characterizing velocity profiles in the counter-flow momentum injection by DBDs. Over the past decade, several studies have been conducted on DBD mounted to various airfoils [14, 83, 84], multi-element airfoils [85], flaps [86, 87], and full-scale or near full-scale aircraft [17, 21]. In all studies, the DBD actuator demonstrated its ability to change aerodynamic performance by increasing airfoil lift, decreasing drag, or changing pitching moment. Gu et al. employed a simplified ion injection model to explore the effects of DBD actuators on Gurney flaps by modeling actuators on the pressure and suction side of the trailing edge of an airfoil [36]. Multiple DBD array systems have been tested with moderate success; many of these studies found the need to balance the simultaneous interactions between jets acting in opposite directions [77, 88]. Counter-flow momentum injection potentially manipulates the flow more efficiently by increasing drag, decreasing lift, and changing the pitching moment on an aerodynamic surface. This study explores the performance and effect of EHD momentum injection by the DBD actuator at U\({}_{\infty}\)\(=5\) m/s and U\({}_{\infty}\)\(=11\) m/s in co-flow and counter-flow. The AC voltage was varied in the range V\({}_{\mathrm{AC}}\)\(=12\) kV - \(19.5\) kV, and the frequency was constant at 2 kHz. This study is the first to explore the fluid characteristics of DBD-driven flow in counter-flow and quantify the onset separation due to an adverse pressure gradient. Finally, a simple momentum injection model based on the empirically derived DBD discharge in a quiescent environment was tested in external flow computational fluid dynamics (CFD) simulation. This work provides insight into the interaction between the DBD flow and a freestream flow over a flat plate, informing the potential placement of actuators on airfoils and providing validation for numerical studies. ## 2 Experimental Setup and Diagnostics Traditional metrics to characterize plasma actuators' performance include current, power consumption, forces on the surfaces, and DBD jet velocity. A current measurement shows a superposition of capacitive current, discharge current, and noise. The capacitive current is filtered out or ignored because it corresponds to the transiently stored energy in the dielectric or air, not the energy used to accelerate the fluid [5]. The discharge current indicates the amount of charged species that can participate in the energy transfer to fluid motion. The discharge current comprises of numerous peaks in the positive-going cycle due to streamer propagation with the addition of glow discharge during the negative-going cycle [89]. Recent research characterized the relationship between DBD discharge, capacitance, power consumption, and DBD actuator performance [90, 91]. High-resolution measurements have shown that both positive and negative half-cycles contribute to EHD force, and their relevant contributions are the topic of active scientific discussions [5, 92, 93]. Velocity measurements can be obtained by pitot tubes or particle imaging velocimetry (PIV); these measurements characterize momentum transferred from charged species to neutral molecules. 
While PIV measurements can capture an entire fluid field, integrating a PIV system with DBD-induced flow can be challenging. In addition, there is a risk of tracer particles being charged and interacting with the electric field, i.e., not following the flow streamlines. DBD wall jet similarity analysis was recently proposed [94]; however, additional experimental data is needed to perform a robust analysis. ### Wind Tunnel Our measurements were conducted in an open-circuit wind tunnel with a 100mm x 100mm cross-sectional test section. The wind tunnel consists of an inlet section followed by a 1000 mm long section and a test section allowing for testing the DBD actuator. The DBD actuator plate surface rests parallel to the bottom wall of the wind tunnel, see Figure 1. The inlet section comprises four 120 mm x 120 mm fans with inlet cows, a large honeycomb screen, and a contraction cone. The contraction cone with a 9:1 contraction ratio results in a 100mm x 100mm wind tunnel section constructed of plexiglass. An aluminum extrusion frame supports the wind tunnel section. As described below, the velocity measurements were obtained using a custom glass pitot tube with a 0.4mm ID and 0.5mm OD. The boundary layer height (\(\delta_{99}\)) at U\({}_{\infty}\)\(=5\) m/s external flow was measured at the location of the actuator to be \(\sim\)8.0 mm. Using the height of the wind tunnel and the characteristic length; the Re is 35,000 at U\({}_{\infty}\)\(=5\) m/s and 75,000 at U\({}_{\infty}\)\(=11\) m/s. The turbulence intensity is \(\sim\)1% measured using a calibrated hot-wire anemometer. The hot-wire anemometer was not used for velocity measurements of the DBD actuator as the hot-wire wires and electronics would introduce a risk of arc discharge from the high-voltage electrode. ### DBD actuator The DBD actuator is comprised of two electrodes separated by a thin dielectric barrier, as shown in Figure 1, similar to previously published work [95]. When high voltage is applied to the active electrode, the electric field is strongest at the edge of the active electrode, resulting in plasma generation [92, 96, 97]. The electrodes' thickness and the dielectric media can impact the actuator's performance [97-101]. A straight-edged DBD actuator with a spanwise uniform electric field produces a two-dimensional forcing on the fluid, resulting in a planar jet. Other actuator designs have been considered, including serrated electrodes that produce a three-dimensional force onto the flow field [102-104]. The spanwise length or width of the electrodes serves as a nominal reference length in the analysis [5]. The DBD actuator was installed on the 6" by 8" acrylic plate for this study. The dielectric material used in this study is Kapton (~3.5 dielectric constant at 1 kHz). Each actuator has one layer of ~0.075mm Kapton-FN (FEP layered Kapton) and four layers of 1 mil Kapton-HN with a total thickness of ~ 0.4mm (including the adhesive and FEP layers). The ground electrode (copper, 50 \(\mathrm{\SIUnitSymbolMicro m}\) thick, 25 mm long, 110 mm wide) is flush-mounted on the acrylic plate. The upper electrode (copper, 50 \(\mathrm{\SIUnitSymbolMicro m}\) thick, 15 mm long, 110 mm wide) is glued onto the top of the Kapton dielectric layer. Both electrodes have straight edges producing a uniform spanwise discharge. The active and ground electrodes' edges are aligned with each other, i.e., there is no overlap or gap between the electrodes in the x-direction. 
The air-exposed HV electrode is connected to a Trek 615-10 (PM04014) power supply that provides up to 20 kV (peak-to-peak) AC voltage. ### Electrical Measurements The electric current in the DBD circuit is a superposition of a capacitive current and a discharge current. The discharge current is associated with plasma microdischarges, and they appear as a series of fast current pulses [105], as shown in Figure 2(a). DBD current is measured using a 200 MHz bandwidth non-intrusive Pearson 2877 current monitor with a rise time of 2 ns. The current monitor is connected to a Tektronix DPO 7054 oscilloscope with a sampling rate of 1 GS/s to resolve 500 MHz. These conditions are essential for accurately capturing individual discharges that have been shown to occur over a 30 ns duration on average [106]. The high bandwidth and the sampling rate minimize the noise during the current measurements and can be used to compute the time-averaged Figure 1: **Schematic of the experimental setup, DBD actuator is mounted on an acrylic glass plate** **flushed with the wind tunnel bottom. The blue region is the dielectric layer separating the electrodes.** electrical power [52]. To determine the currents associated with the plasma micro-discharges, determining the capacitive current through analytical methods has been explored [32, 107]; removing the capacitive current through signal processing methods, including low-pass filters or Fast-Fourier Transform (FFT) has also been attempted [93, 105, 106]. To determine the power consumed by the actuator, a charge-voltage Lissajous curve is created by introducing a capacitor between the grounded electrode and the ground [108, 109]. The Integrating charge-voltage relationship multiplied by the frequency yields the total power usage of the DBD system. The time-averaged electrical power consumed by the actuator is computed as \[W_{elec}=f_{AC}\int\limits_{t^{*}=0}^{t^{*}=1}VdQ, \tag{1}\] where \(f_{AC}\) is the frequency of the applied voltage in Hz, and \(V\) and \(Q\) are the voltage and charge at each point in the period. The normalized time (\(t^{*}\)) represents a single cycle. We compute the averaged resulting power from at least four separate periods to reduce the noise impact. In the wind tunnel study by Pereira et al., the DBD actuator in co-flow and counter-flow was found not to have significantly different electrical characteristics [81]. Figure 2(a) below shows a typical DBD current measurement with a voltage curve. Figure 2(b) shows the representative filtered Lissajous curve of four consecutive discharge cycles. These data were used to determine the power of the DBD actuator as a function of the operating condition. ### Wall jet and momentum displacement characterization The flowfield induced by EHD depends on the plasma actuators' configuration and operational conditions. We employed a custom-made glass pitot tube with a 0.4 mm inner diameter and 0.5 mm outside diameter to measure the time-averaged x - velocity profile. Compared to traditional stainless steel pitot tubes, the glass tube reduces electrical interaction with the discharge. This method has been previously used to characterize plasma actuators' performance [5, 95, 105, 110]. The pitot tube is mounted on an optical table and controlled on the x and y - axis by linear stages connected to an Ashcroft CXLdp differential pressure transmitter (0 - 25 Pa with 0.25% accuracy). 
The pressure transducer outputs a 4 - 20 mA current, linear in its pressure range, and it was placed in series with a 1.5 k\(\Omega\) resistor. The pressure within the pitot tube equilibrated nearly instantly after changing the flow condition. The voltage across the resistor is recorded for at least 30 seconds with a Hydra Data Logger II. With the time-averaged pressure (\(P\)), a time-averaged wind velocity (\(v\)) is calculated using Figure 2: (a) Typical DBD current with voltage signals at 18 kV (p-p) and 2 kHz applied frequency (b) 4 consecutive Q-V discharge cycles measured from a 100 nF capacitor. Bernoulli's equation with a calibration correction factor (\(\mathcal{C}\)) that is characteristic for a custom pitot tube expressed as \[\Delta P=\mathcal{C}\rho v^{2}, \tag{2}\] where \(\rho\) is the fluid density. In our experiments, the typical velocity measurements had a standard deviation \(<\) 0.02 m s\({}^{\text{-1}}\) over a 30 s sampling period. X-velocity measurements are taken at varying x and y positions downstream and upstream on the active electrode edge. At each x-position, the y-velocity profile is obtained from the surface to 10 mm above the plate at increments of 0.25 mm or 0.5 mm (at a higher location). The streamwise measurements were taken by holding a constant y-position and spanning in the x - direction at 0.5 mm intervals to complete the datasets over a regular measurement grid. Due to the pitot tube dimension, we could not capture velocity at y \(<\) 0.25 mm. We assume the velocity is linear between the no-slip condition at y = 0 mm and the data at y = 0.25 mm for plotting purposes. Considering a 2D control volume (with a spanwise unit length), integration of the streamwise velocity along a vertical profile in the wind tunnel with the DBD actuator provides the total mass flow rate per meter spanwise, \(Q\), of the total system: \[Q_{system}\ =\ \rho\int_{y\ =\ 0}^{y\ =\ \text{h}}U(y)dy, \tag{3}\] where \(U(y)\) is the velocity measured along the vertical profile at a constant x - location. Similarly, the system's total momentum can be found by integrating the square of the velocity along a vertical profile: \[M_{system}\ =\ \rho\int_{y\ =\ 0}^{y\ =\ \text{h}}U^{2}(y)dy. \tag{4}\] To identify the momentum produced by only the DBD actuator, the same measurements were taken with the DBD actuator ON and OFF. The difference in momentum integrated from the wall to the point where the two velocity profiles intersect is the momentum injected by the DBD. Above the intersection of the two profiles, there is entrainment due to mass conservation, and this entrainment is confirmed by taking the velocity profile of the entire wind tunnel with and without actuation. The resulting momentum is expressed as \[M_{DBD}\ =\ \rho\int_{y\ =\ 0}^{y\ =\ \text{h}}U_{DBD\ \mathit{ON}}^{2}(y)-\ \mathit{U}_{DBD\ \mathit{OFF}}^{2}(y)dy \tag{5}\] and this approach similarly holds for mass flow rate and mechanical power. The mechanical power of the system (\(W_{mech}\)) can be computed by \[W_{mech}=\ \frac{1}{2}\rho L\int_{y\ =\ 0}^{y\ =\ \infty}U^{3}(y)dy. \tag{6}\] The derived values of mass flow rate, momentum, and power of the DBD in external flow are compared to the results of similar measurements in quiescent flow. ## 3 Results and Discussion ### Power Consumption Figure 3 illustrates the power usage of the DBD for a range of operating voltages, external flow speed, and DBD actuator orientation. Integration of Lissajous Q-V curves yielded an average power consumption of the actuator. 
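A numerical sketch of this Q-V integration (Eq. 1) over a sampled trace is given below; the variable names, the trapezoidal rule, and the per-span normalization shown in the comment are illustrative rather than the exact processing used for Figure 3.

```python
import numpy as np

def lissajous_power(voltage, charge, f_ac, n_cycles):
    """Time-averaged electrical power from a sampled Q-V (Lissajous) trace, cf. Eq. (1).

    voltage  : applied-voltage samples V(t) over an integer number of AC cycles [V]
    charge   : charge samples Q(t) from the monitor capacitor over the same window [C]
    f_ac     : frequency of the applied voltage [Hz]
    n_cycles : number of complete cycles contained in the trace
    """
    dq = np.diff(charge)
    v_mid = 0.5 * (voltage[1:] + voltage[:-1])
    energy = np.sum(v_mid * dq)          # contour integral of V dQ over the whole trace [J]
    return f_ac * energy / n_cycles      # cycle-averaged electrical power [W]

# Example at the operating point used here (f_AC = 2 kHz), assuming v and q were
# extracted from an oscilloscope record of four consecutive cycles:
# w_elec_per_span = lissajous_power(v, q, f_ac=2e3, n_cycles=4) / 0.11   # W per metre of span
```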
The power usage is normalized to the spanwise length of the actuator (0.11 m). In general, the power consumption increases quadratically with applied voltage, which is consistent with previous reports for AC DBD [5, 92, 95, 111] and the EHD flows driven by corona discharge [26, 30, 112]. Power usage between the AC cycles for any configuration had a maximum variance of \(\sim 8\%\), see Figure 3. These data were taken for the DBD actuator in quiescent, counter-flow, and co-flow conditions at the two external flow speeds. Pereira et al. also found power usage to vary less than 10% between co-flow and counter-flow forcing [81]. The magnitude of normalized power usage measured for this study is slightly higher than Pereira et al., which used a thicker 3mm dielectric; however, other studies, such as Forte et al., have found similar power usage for thinner thicknesses and noted that the increase in dielectric thickness decreases DBD capacitance, thus decreasing power usage [5, 92, 95]. ### Operation in Quiescent Condition First, the momentum injection of the AC DBD actuator is considered in a quiescent environment. Figure 4 shows the velocity profile of the DBD jet at \(\mathrm{x=10\;mm}\) and \(\mathrm{x=25\;mm}\) downstream of the active electrode edge [95]. Although the velocity decay and the spreading of the jet are apparent, as expected in wall jets, the presence of Coulombic forces associated with the DBD and the fact that the fluid is being accelerated over the fluid volume rather than the point source make the parameterization of the flow challenging. The addition of co- and counter-flow further complicates the analysis. Figure 3: DBD power usage at U\({}_{\infty}\)= 5 and U\({}_{\infty}\)= 11 m/s external flow in co-flow and counter-flow ### Co-flow EHD Momentum Injection This section discusses the DBD actuator performance in co-flow over the range \(\mathrm{V_{AC}=14-19.5\ kV}\), a frequency of 2 kHz, \(\mathrm{U_{\infty}=5\ m/s}\) (Re=35,000) and \(\mathrm{U_{\infty}=11\ m/s}\) (Re-75,000). The virtual origin and coordinate system are defined in Figure 5. The velocity profiles of the EHD jet are measured at \(\mathrm{x=10\ mm}\) and \(\mathrm{x=25\ mm}\) downstream of the active electrode edge. Due to EHD momentum injection, the velocity in the boundary layer increases; however, the difference in momentum displacement is affected by the thickness of the freestream boundary layer. Figure 6 shows the velocity data versus DBD voltage at two downstream locations. The dotted line is the wind tunnel velocity profile without DBD actuation. Note that in quiescent conditions (Figure 4), the DBD wall jet has a maximum velocity, \(\mathrm{U_{max}\sim 4.8\ m/s}\) at \(\mathrm{x=10\ mm}\), \(\mathrm{y=0.25\ mm}\) when \(\mathrm{V_{AC}=19.5\ kV}\) and \(\mathrm{f_{AC}=2\ kHz}\), and the peak velocity of the jet decays quickly away, downstream and wall-normal, from this maximum. At \(\mathrm{U_{\infty}=5\ m/s}\), EHD-induced velocities are comparable to the freestream, and the effects of DBD actuation are dominant. The increase in boundary layer velocity of \(\sim 2\ m/s\) is nearly identical to that of Bernard et al. [82] for similar conditions. The \(\mathrm{U_{max}\sim 5.4\ m/s}\) is greater than in quiescent condition (\(\sim 4.8\ m/s\)) but located at \(\mathrm{y=0.75\ mm}\) for the same x-position, as viscous effects shift the location of \(\mathrm{U_{max}\ away}\) from the wall. 
At \(\mathrm{y=0.25\ mm}\), the U velocity is 5.3 Figure 4: DBD with no external flow at \(\mathrm{x=10\ mm}\) (a) and \(\mathrm{x=25\ mm}\) (b) downstream. The maximum velocity at 19.5kV is \(\sim 4.7\ m/s\) at \(\mathrm{y=0.5\ mm}\) at \(\mathrm{x=10\ mm}\) downstream. Figure 5: DBD actuator in co-flow configuration, the first measurement is taken at x=10 mm to avoid plasma region disruption with pitot probe. The plasma region is colored purple. m/s and is greater than that of the quiescent jet at y\(=0.25\) mm; thus, the viscous effects between the DBD jet and freestream can be seen as mixing and entraining fluid from the external freestream into the boundary layer. In co-flow, the DBD-induced momentum does not diffuse into the outer flow as quickly as in the quiescent environment, and this can be observed from the velocity profile at x \(=25\) mm. The interaction of the wall jet with the freestream means that its momentum does not diffuse into a quiescent environment but rather continues to mix and entrain fluid into the boundary layer, high-speed fluid from the freestream. This entrainment can be seen by a slight decrease in the external flow with the energized DBD actuator starting at approximately y \(=3.75\) mm. Conservation of mass in the wind tunnel means that as the boundary layer accelerates by the EHD-added momentum, the freestream velocity decreases slightly (the momentum thickness of the boundary layer is smaller). The complete velocity profile confirms that mass is conserved as the entrainment section eventually merges with the base wind tunnel profile. Figure 7 shows the effect of the EHD wall jet in the boundary layer at U\({}_{\infty}\) = 11 m/s. In this case, the EHD velocity is about half the freestream for the highest DBD settings. The effect of momentum injection is reduced, as the enhanced mixing in this higher Reynolds number case is more effective at spreading the effect of the EHD momentum injection throughout a thinner boundary layer. Even at maximum DBD power, the velocity increase is less than \(\,1\) m/s at x \(=10\) mm. At x \(=25\) mm, the effect of the EHD momentum injection is almost negligible. These results agree with Bernard et al. [82] for U\({}_{\infty}\) = 10 m/s. At higher external flows, the EHD momentum addition results in a lower overall impact on the boundary layer, as the enhanced mixing in the thin boundary layer rapidly restores the boundary layer profile to the un-actuated shape. Figure 6: DBD actuator in U\({}_{\infty}\) = 5 m/s co-flow at (a) x = 10 mm and (b) x = 25 mm downstream. The dashed line shows the freestream profile without plasma injection. The DBD voltage is varied in the 14kV-19.5kV range; the AC frequency is set constant at 2kHz. The EHD momentum addition cannot be treated as a linear superposition of the EHD jet in a quiescent environment and the momentum associated with the boundary layer of the free stream. For external flows compatible with EHD wall jet velocities, the momentum injection into the co-flow leads to effective boundary layer thinning; this effect diminishes at higher freestream velocities (higher Reynolds numbers and thinner boundary layers). The wall jet mixing is influenced by (i) interaction with the freestream and (ii) viscous wall shear increase in the viscous sublayer. This point is explored further in Figure 13. 
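The momentum bookkeeping used throughout these comparisons follows Eqs. (3)-(5): profile integrals with the near-wall point connected linearly to the no-slip condition, and the DBD contribution taken as the integrated difference of the squared ON and OFF profiles up to their crossover. A hedged numerical sketch (trapezoidal integration and an assumed air density; not the exact reduction scripts used here) is:

```python
import numpy as np

RHO_AIR = 1.2   # assumed air density [kg/m^3]

def profile_integrals(y, u, rho=RHO_AIR):
    """Per-unit-span mass flow rate and momentum from a wall-normal transect, Eqs. (3)-(4).

    y : wall-normal positions [m] of the pitot measurements (first point at y = 0.25 mm)
    u : time-averaged streamwise velocities U(y) [m/s]
    """
    # Prepend the no-slip condition, i.e., a linear segment between y = 0 and the first point.
    y = np.concatenate(([0.0], np.asarray(y, dtype=float)))
    u = np.concatenate(([0.0], np.asarray(u, dtype=float)))
    q = rho * np.trapz(u, y)        # mass flow rate per unit span, Eq. (3)
    m = rho * np.trapz(u**2, y)     # momentum per unit span, Eq. (4)
    return q, m

def dbd_momentum(y, u_on, u_off, rho=RHO_AIR):
    """DBD-injected momentum per unit span from actuator ON/OFF profiles, Eq. (5).

    Integration is truncated at the first height where the energized profile falls back
    to the baseline; above that crossover the deficit is attributed to entrainment.
    Profiles are assumed to start above the wall, where U_on exceeds U_off.
    """
    y, u_on, u_off = (np.asarray(a, dtype=float) for a in (y, u_on, u_off))
    diff = u_on**2 - u_off**2
    below = np.where(diff <= 0.0)[0]
    stop = below[0] if below.size else len(y)
    return rho * np.trapz(diff[:stop], y[:stop])
```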
### Counter-Flow EHD Jet

This section describes the behavior of the counter-flow EHD jet for DBD voltages V\({}_{\mathrm{AC}}\) = 14-19.5 kV at f\({}_{\mathrm{AC}}\) = 2 kHz and wind speeds of U\({}_{\infty}\) = 5 m/s and U\({}_{\infty}\) = 11 m/s. The virtual origin and coordinate system of the DBD in counter-flow are defined in Figure 8. The datum for analysis is set at the plasma generation edge of the active electrode; however, the EHD momentum injection is now in the negative x-direction.

Figure 8: DBD actuator in counter-flow configuration. The plasma region is colored purple.

Figure 7: DBD actuator in U\({}_{\infty}\) = 11 m/s co-flow at x = 10 mm (a) and x = 25 mm (b) downstream. The DBD voltage is varied in the 14-19.5 kV range; the AC frequency is set constant at 2 kHz.

Figure 9 and Figure 10 show the velocity profiles for the EHD momentum injection into the counter-flow. The dotted line is the measured wind tunnel velocity profile without DBD actuation. At U\({}_{\infty}\) = 5 m/s, the velocity of the EHD jet has a magnitude similar to the external flow, resulting in a significant adverse pressure gradient and the formation of a recirculation zone. The exact boundaries of the separation region are difficult to determine experimentally in the plasma region (x < 0 mm, y < 2 mm), as the insertion of the pitot probe into the plasma interferes with the experiment (see Figure 8). However, to preserve continuity, the EHD jet must entrain fluid from above and behind the jet; thus, transects downstream and along x at constant y can be used to determine the boundaries of the separation bubble. First, we examine the y-scan at a fixed x-position. Figure 9 and Figure 10 show the profiles at x = 10 mm (above the active electrode) and x = 25 mm for U\({}_{\infty}\) = 5 m/s and U\({}_{\infty}\) = 11 m/s, respectively. As with the co-flow experiments, the EHD jet strength is varied by varying V\({}_{\mathrm{AC}}\) = 14-19.5 kV, while the AC frequency is 2 kHz. For all voltages, the DBD in counter-flow creates a more significant deficit in the boundary layer than its co-flow counterpart; e.g., the counter-flow EHD jet creates \(\Delta\)U > 5 m/s at V\({}_{\mathrm{AC}}\) = 19.5 kV compared to \(\Delta\)U \(\sim\) 2 m/s in the co-flow case. Figure 9 (a) also shows that in the counter-flow cases at 18 kV and 19.5 kV, the wall shear stress changes direction due to the increased strength of the EHD jet. In the co-flow scenario and the lower-voltage counter-flow configurations, the wall shear stress remains opposite to the freestream direction. Note that the maximum negative velocity is likely located in the EHD momentum injection region (x = -10 mm to 0 mm). However, measurements could not be taken within the plasma region due to the plasma interactions with the pitot tube. Figure 9 shows that in the V\({}_{\mathrm{AC}}\) = 19.5 kV case, the separation bubble extends past x = 10 mm downstream of the active electrode edge, while the other conditions exhibit flow reattachment. The flow is fully attached at x = 25 mm; however, the pressure gradient in the flow boundary layer has not yet recovered. For the higher external flow, the effects of the DBD jet are less dramatic. Within the boundary layer, the largest decrease in velocity in counter-flow with U\({}_{\infty}\) = 11 m/s is approximately 2.0 m/s at y = 0.5 mm and x = 10 mm. No separation was observed in the U\({}_{\infty}\) = 11 m/s counter-flow cases, though the plasma region was not probed.
While the effects of the DBD in counter-flow at U\({}_{\infty}\) = 11 m/s are less dominant than for the slower freestream experiments, the effects of the EHD wall jet are still more significant than in co-flow at the same U\({}_{\infty}\).

Figure 9: DBD actuator in U\({}_{\infty}\) = 5 m/s counter-flow at x = 10 mm (a) and x = 25 mm (b) downstream. The DBD voltage is varied in the 14-19.5 kV range; the AC frequency is set constant at 2 kHz.

The x-scans were performed while holding the y position constant to determine the boundaries of the separation region. Figure 11 shows the velocity profiles for y = 0.5 mm and y = 1.0 mm, while the x position was varied from x = 0 mm (edge of the active electrode) to x = 15 mm. The DBD voltage was V\({}_{\mathrm{AC}}\) = 12, 14, 16, 18 kV at f = 2 kHz. The data for V\({}_{\mathrm{AC}}\) = 19.5 kV are not shown due to the limited range of the pressure gauge. At y = 0.5 mm, immediately above the dielectric layer, separation is observed at all voltages. The x-location where the velocity direction changes from backward to forward determines the separation bubble's edge. At V\({}_{\mathrm{AC}}\) = 18 kV, the separation length is approximately 7.5 mm downstream. At y = 1.0 mm, there is no signature of the separation bubble for V\({}_{\mathrm{AC}}\) = 12 kV; however, it exists for the higher voltages.

Figure 11: DBD actuator in U\({}_{\infty}\) = 5 m/s counter-flow at y = 0.5 mm (a) and y = 1.0 mm (b). The DBD voltage is varied in the 14-19.5 kV range; the AC frequency is set constant at 2 kHz.

Figure 10: DBD actuator in U\({}_{\infty}\) = 11 m/s counter-flow at x = 10 mm (a) and x = 25 mm (b) downstream. The DBD voltage is varied in the 14-19.5 kV range; the AC frequency is set constant at 2 kHz.

To better visualize the flow pattern at 5 m/s, additional x-scans were performed, and the 2D velocity fields were reconstructed. Figure 12 shows the velocity contour plots for the counter-flow EHD jet obtained by merging x and y scans at U\({}_{\infty}\) = 5 m/s. Each grid point in the figure is associated with a velocity measurement; the spatial resolution was 0.5 mm in both the x and y directions, totaling approximately 400 measurements for each condition. All DBD actuations cause a separation region (for U\({}_{\infty}\) = 5 m/s, Re = 35,000) with a negative x-velocity downstream of the plasma injection. With the increase in DBD voltage, the edge of the reversed-flow region within the separation bubble extends from 3.0 mm (12 kV) to >10.0 mm (18 kV) in the x-direction from the edge of the active electrode and from 0.6 mm (12 kV) to >1.75 mm (18 kV) in the y-direction. This growth in the length and height of the reversed-flow region of the separation bubble is nonlinear with increasing voltage. The size and shape of the recirculation bubble are determined by the competition of the EHD jet strength vs. the forward momentum in the boundary layer. As the EHD injects negative momentum at high DBD voltages, it can overtake the momentum in the boundary layer at greater heights. Without sampling within the plasma region, it is challenging to characterize the entire length of the separation region. It can be expected that the separation region extends into the forcing plasma region. Multiphysics CFD simulations can potentially address this issue; however, robust models need to be developed and validated.
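The reattachment location quoted above can be extracted from such an x-scan by locating the zero crossing of the near-wall velocity. Below is a minimal Python sketch, assuming a hypothetical scan (`x_mm` and `u` are illustrative values at a fixed y, not measured data):

```python
import numpy as np

# Hypothetical x-scan at fixed y (illustrative values only): negative u means reversed flow.
x_mm = np.array([0.0, 1.5, 3.0, 4.5, 6.0, 7.5, 9.0, 10.5, 12.0, 15.0])    # mm downstream of electrode edge
u    = np.array([-1.8, -1.5, -1.1, -0.7, -0.3, 0.2, 0.8, 1.4, 1.9, 2.6])  # m/s

def reattachment_x(x, u):
    """Return the x-location of the first negative-to-positive zero crossing of u(x), or None."""
    for i in range(len(u) - 1):
        if u[i] < 0.0 <= u[i + 1]:
            # Linear interpolation between the two bracketing points.
            return x[i] + (x[i + 1] - x[i]) * (0.0 - u[i]) / (u[i + 1] - u[i])
    return None  # no crossing: flow is either fully attached or fully reversed over the scan

edge = reattachment_x(x_mm, u)
print(f"Estimated separation-bubble edge: x = {edge:.1f} mm" if edge is not None else "No reattachment found")
```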
Figure 12: **X-velocity contour plot for EHD jet in counter-flow for U\({}_{\infty}\) = 5 m/s for varying voltages. Gridlines correspond to recorded points spaced 0.5 mm apart in the x- and y-directions.**

Although such a multiphysics analysis is beyond the scope of this paper, preliminary results with a simplified momentum model proposed in [95] also suggest that the momentum injection in the counter-flow scenario triggers flow separation, though additional validation is required.

### Momentum Difference

This section discusses the momentum difference in the boundary layer due to the EHD momentum injection. The momentum difference is calculated at x = 10 mm downstream of the DBD wall jet by integrating the velocity profiles in the y-direction up to a height where mass and momentum are injected and not entrained. Note that the x = 10 mm location is in the direction of the external flow, with the datum at the plasma generation edge of the DBD actuator. Thus, in the co-flow case, the x = 10 mm location is downstream of (in front of) the plasma region and above the dielectric and grounded electrode (Figure 5); however, in the counter-flow case, the x = 10 mm location is behind the plasma region and above the active electrode (Figure 8). Figure 13 compares the DBD with external flow cases against the EHD jet in a quiescent environment. The absolute value of the momentum difference is shown, as the momentum difference is calculated by subtracting the counter-flow actuation profile from the un-actuated boundary layer profile. The literature does not contain any momentum comparison between the co- and counter-flow DBD injections. In the co-flow scenarios, the momentum addition into the boundary layer is equal to or lower than the momentum injected by the EHD jet. The momentum difference appears linear with V\({}_{\mathrm{AC}}\); however, the change in total momentum is relatively flat, suggesting that momentum dissipation is driven primarily by the inner-layer wall jet interaction with the wall. At 11 m/s co-flow, lower momentum differences are found for all voltages, and an increase in turbulent dissipation can explain this. The increase in dissipation is shown in the velocity profiles in Figure 6 and Figure 7: the effect of the DBD momentum can still be seen at x = 25 mm downstream when U\({}_{\infty}\) = 5 m/s (Re = 35,000), but it is barely visible at x = 25 mm downstream when U\({}_{\infty}\) = 11 m/s (Re = 75,000). Unlike the experiments in quiescent conditions [95], the fluid momentum of the EHD jet in the co-flow injection is not conserved as it travels downstream. In the counter-flow configuration, the momentum difference is more significant due to the reversing flow near the wall. The largest difference is observed in counter-flow at U\({}_{\infty}\) = 5 m/s, where the momentum difference is \(\sim 6.5\times\) greater than its co-flow counterpart.

Figure 13: DBD momentum difference at U\({}_{\infty}\) = 5 m/s and U\({}_{\infty}\) = 11 m/s external flow.

The ratio of momenta within the DBD jet heights, \(M^{*}\), is proposed as a nondimensional relation that could predict separation in the counter-flow injection. \(M^{*}\) represents the DBD momentum injection compared to the inertial force in the external flow.
\(M^{*}\) is defined as: \[M^{*}=\frac{M_{DBD}}{M_{External}}=\frac{\int_{0}^{h_{\rm jet}}{U_{quiescent~ {}DBD}}^{2}(\nu)dy}{\int_{0}^{h_{\rm jet}}{U_{external~{}flow}}^{2}(\nu)dy} \tag{7}\] The M\({}_{\rm DBD}\) value and the height of the jet (h\({}_{\rm jet}\)) in a quiescent environment can be directly measured or estimated from the empirical relationship as proposed by Tang et al. [95]. E.g., the height of the DBD jet in Figure 4 varies from 2 mm to 2.25 mm at the range of the voltages tested. The M\({}_{\rm External}\) value can be estimated analytically, numerically, or experimentally for a given value of the h\({}_{\rm jet}\). The ratio of the terms can be evaluated to determine flow separation criteria \(M^{*}\); the higher values are likely to result in flow separation. The values of \(M^{*}\) in the 5 m/s and 11 m/s cases are shown in Table 1. In this limited set of experiments, the separation was observed for cases with \(M^{*}>0.1\) (for \(M^{*}<0.1\), counter-flow DBD did not induce separation). Additional testing and numerical simulations at various DBD and external flow conditions could further define separation threshold criteria. Note that MDBD varies with DBD parameters and the x-position within the jet as it expands and loses momentum due to viscous dissipation. At the same time, the value MExternal depends on the external flow conditions and the x-position of the DBD jet as the jet thickness changes with the x-position. ## 4 Momentum Injection Model Numerical modeling of DBD has generally been categorized into three categories with increasing complexity: momentum injection models, simplified ion injection models, and species transport models. Momentum injection models such as that of Yoon _et al._[34] and Kriegseis _et al._[96] have been shown to accurately predict steady-state fluid flowfield of DBD actuators in a few configurations using empirically estimated forcing fields while remaining extremely computationally inexpensive. Simplified ion injection models such as the Orlov [113], Shyy [23], and the Suzen and Huang model [29] employ analytically or semi-empirically estimated charge density boundary conditions or charged density regions to model the transport of electrons and generalized ions. The electric potential and the charge density are then used to calculate the electrostatic Lorentz force with no magnetic field, and this EHD force is then coupled to the Navier-Stokes. In species transport models such as that of Bie _et al._[114] and Soloviev _et al._[115], the transport of the dominant chemical species, the resulting ions, and the radicals is modeled, and the distribution of ions is used to compute the electrostatic force at each time, similar to the simple ion models. Simplified ion and momentum injection models have been popular as they can readily be used for different applications while remaining relatively computationally lean. An early implementation of a momentum injection model based on previously published empirical measurements is tested in co-flow and counter-flow. No published DBD model has been tested in an external co-flow or counter-flow. This supplemental information further supports the previously proposed momentum injection model in an external flow while providing insights into the fluid interactions at the focus of this manuscript. The two-dimensional schematic is identical to Figure 5 and Figure 8. The domain height is set to match the experimental wind tunnel of 10cm. 
The velocity \begin{table} \begin{tabular}{c c c c c} \hline \hline **Reynolds** & **BL Height (mm)** & **DBD Momentum (mN/m)** & **M*** & **Separation** \\ \hline 35,000 & 8.0 & 6 – 22 & 0.14 – 0.52 & Yes \\ 75,000 & 2.5 & 6 – 22 & 0.02 – 0.07 & No \\ \hline \hline \end{tabular} \end{table} Table 1: **Conditions of Separation due to a DBD Jet in Counter-Flow** profile of the wind tunnel is defined as a custom user-defined velocity profile. A mesh of 332,000 cells is employed with refinement near the forcing region, and courser meshes were tested to ensure mesh independence. Since the DBD actuator does not have a fluid mass flux, the resulting fluid governing equations are the incompressible Navier-Stokes continuity and momentum equation with an added momentum source, defined as \[\nabla\cdot\mathbf{u}=0 \tag{8}\] \[\rho\frac{D\boldsymbol{u}}{Dt}=-\nabla P+\ \mu\nabla^{2}\boldsymbol{u}+\ \vec{f}_{EHD} \tag{9}\] The area of the momentum injection is within a right triangle region, similar to the approach of Shyy et al. [23], which assumes a linear approximation between plasma length and height. The author's previous work outlines that plasma length approaches \(\sim 8\) mm for these electrical parameters, and the length-to-height ratio is approximately constant at L/H = 4. For all simulations, the steady-state k-o turbulence model is used. A steady-state assumption for the DBD forcing is often assumed as the electrostatic and unsteady forcing timescales, and variances are considered sufficiently small. High-temporal resolution PIV data has shown that the time fluctuations of the DBD jet are often approximately 10% within a single voltage period; however, this variance depends on the applied voltage waveform [82, 116]. Yoon _et al._ used the one-equation Spalart-Allmaras turbulence model to model the DBD jet in quiescent flow; however, this significantly underpredicted the separation region in the counter-flow. The resulting velocity profiles and momentum difference calculations are presented below in Figure 14. In the co-flow configuration, the momentum injection model matches the maximum velocity within 10%. However, the location of the maximum velocity in the model is higher than experimentally measured, and the velocity difference appears more compact. This is believed to be due to the inability of a two-dimensional CFD simulation to capture turbulence effects well. In addition, the higher location of the maximum velocity supports that the forcing distribution shape should likely be more concentrated at the near-wall region, possibly similar to the modified Gaussian used in Yoon et al. [34]. The total momentum difference matches the experimental measurement using the velocity profile integrated at x = 10mm. Figure 14: **Numerical and experimental DBD velocity profiles with 19.5kv 2kHz forcing (\(\sim\)22 mn/m) in 5 m/s external co- and counter-flow.** In the counter-flow configurations, the strength of the separation region is overpredicted. This is believed to be due to the k-\(\omega\) turbulence model incorrectly predicting the separation region, which is generally in line with many other adverse-pressure gradient studies, including one on the backward-facing step that over-predicted shear stress and reattachment locations [117]. The momentum deficit measured at the x = 10mm has an error of about 30%. 
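As an aside on the forcing term itself, the right-triangle injection region described above can be made concrete in a few lines of code. The sketch below is not the exact forcing distribution used in the simulations (which follows the triangular-region approach of Shyy et al. [23] with empirically informed parameters); it is a simplified, uniform-density stand-in showing how the plasma length, the L/H = 4 aspect ratio, and the total force per span (about 22 mN/m at the highest setting) enter the source term; all grid parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumed values): plasma-region size and total EHD force per span.
L_plasma = 8.0e-3           # m, streamwise extent of the forcing region (~8 mm plasma length)
H_plasma = L_plasma / 4.0   # m, wall-normal extent (L/H = 4)
F_per_span = 22e-3          # N/m, total streamwise force per unit span (highest voltage setting)

area = 0.5 * L_plasma * H_plasma   # m^2, right-triangle cross-section
f_uniform = F_per_span / area      # N/m^3, uniform body-force density inside the triangle

def f_ehd(x, y):
    """Streamwise EHD body-force density at (x, y) measured from the active-electrode edge.

    The force is applied only inside a right triangle whose height falls linearly
    from H_plasma at x = 0 to zero at x = L_plasma (a crude stand-in for the
    measured plasma extent)."""
    inside = (0.0 <= x <= L_plasma) and (0.0 <= y <= H_plasma * (1.0 - x / L_plasma))
    return f_uniform if inside else 0.0

# Sanity check: numerically integrate the source term and recover the total force per span.
xs = np.linspace(0.0, L_plasma, 400)
ys = np.linspace(0.0, H_plasma, 200)
dx, dy = xs[1] - xs[0], ys[1] - ys[0]
total = sum(f_ehd(x, y) * dx * dy for x in xs for y in ys)
print(f"Recovered force per span: {total * 1e3:.1f} mN/m (target {F_per_span * 1e3:.0f} mN/m)")
```

A function of this form can be sampled onto the CFD mesh as the \(\vec{f}_{EHD}\) term in Eq. (9); more concentrated near-wall distributions (e.g., the modified Gaussian of Yoon et al. [34]) can be substituted in the same way.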
In addition, experimental results show mass entrainment at higher locations in the wind tunnel compared to the numerical results, which show entrainment only as low as \(\sim 7\) mm above the plate. The downstream dissipation is not matched well, as the x = 25 mm experimental profile shows higher dissipation than the numerical solution. Overall, the presence of a separation region in the counter-flow case was the most challenging feature to match. A basic numerical simulation is unlikely to accurately predict flow in a boundary layer with a strong adverse pressure gradient inducing separation. Thus, advanced numerical efforts are needed to predict this DBD-induced separation region accurately. The main strength, yet also the main limitation, of this model is its simplicity. However, that potential can only be fulfilled by further understanding the dependencies on turbulence models, unsteady forcing, and plasma forcing volume. More advanced turbulence models such as Large Eddy Simulation (LES), Detached Eddy Simulation (DES), or Direct Numerical Simulation (DNS) may be needed to correctly resolve viscous effects, which is especially important in counter-flow. Time-averaged forcing with a triangular plasma force shape may be appropriate for simple cases such as co-flow, but in cases such as counter-flow or crossflow, shear stress and turbulent effects over the momentum-deficit volume are magnified, and proper time resolution may be required. With these aspects tackled, this model can serve as a valuable DBD design tool while providing accurate results and shedding important insight into the DBD forcing.

Figure 15: **Numerical and experimental DBD momentum difference with 19.5 kV 2 kHz forcing (\(\sim\)22 mN/m) in 5 m/s external co-flow and counter-flow**

## 5 Conclusion

We have experimentally investigated the performance of a DBD plasma actuator over a range of voltages (12 kV - 19.5 kV) at 2 kHz in co-flow and counter-flow with freestream velocities of 5 m/s and 11 m/s. The power consumption associated with the DBD discharge is measured through capacitive measurements, with high temporal resolution throughout several cycles. For all voltages and freestream conditions in this experiment, there was no significant difference in power expenditure between the co-flow, counter-flow, and quiescent conditions, consistent with previous results. The DBD jet increased the boundary layer velocity by >2.0 m/s in co-flow and decreased the boundary layer velocity by >5 m/s in counter-flow (leading to fully reversed flow near the wall). The momentum difference in counter-flow leads to flow separation; separation zone boundaries and velocity magnitudes were evaluated using velocity magnitude contour plots. At low freestream velocities, the EHD jet significantly influences the boundary layer flow, and the dissipation is driven by the interaction of the DBD wall jet inner layer with the wall. However, at the higher freestream velocity, the external flow affects the outer layer of the EHD jet due to the more effective turbulent mixing. The counter-flow momentum difference is 6.5 times higher than its co-flow counterpart at U\({}_{\infty}\) = 5 m/s. The momentum difference in counter-flow offers promising results for active flow control applications. A non-dimensional flow separation criterion M* is proposed as the ratio of DBD jet momentum to integrated boundary layer momentum. This experimental data set can be used to develop models and validate multiphysics simulations for EHD flow.
Future research should extend the understanding of the relationship between the unsteady forcing of the DBD and the turbulent characteristics of the external flow. ## 6 Acknowledgments This work was funded by the Joint Center for Aerospace Technology Innovation (JCATI). **NOMENCLATURE** \begin{tabular}{|l|l|} \hline \(\mathcal{C}\) & Pitot tube correction factor \\ \hline \(E\) & Electric field \\ \hline \(f_{AC}\) & Frequency of the applied voltage \\ \hline \(\vec{f}_{EHD}\) & Electro-hydrodynamic force term \\ \hline \(i(t)\) & Current \\ \hline \(l_{dis}\) & Discharge current \\ \hline \(L\) & Spanwise Length \\ \hline \(M\) & The momentum of the induced jet \\ \hline \(P\) & Pressure reading from the pitot tube \\ \hline \(W\) & Discharge energy consumption \\ \hline \(W_{mech}\) & Mechanical power \\ \hline \(W_{elec}\) & Electrical power \\ \hline \(U(y)\) & Velocity at y height \\ \hline \(U_{max}\) & Maximum velocity of the wall jet \\ \hline \(U_{\infty}\) & External flow velocity \\ \hline \(V_{AC}\) & AC Voltage in the DBD actuator \\ \hline \(v\) & Time-averaged velocity \\ \hline \(t^{*}\) & Normalized time value \\ \hline \(\rho\) & Density \\ \hline \(Q\) & Mass flow rate \\ \hline \end{tabular}
2309.00091
On the local aspect of valleytronics
Valley magnetic moments play a crucial role in valleytronics in 2D hexagonal materials. Traditionally, based on studies of quantum states in homogeneous bulks, it is widely believed that only materials with broken structural inversion symmetry can exhibit nonvanishing valley magnetic moments. Such constraint excludes from relevant applications those with inversion symmetry, as specifically exemplified by gapless monolayer graphene despite its technological advantage in routine growth and production. This work revisits valley-derived magnetic moments in a broad context covering inhomogeneous structures as well. It generalizes the notion of valley magnetic moment for a state from an integrated total quantity to the local field called "local valley magnetic moment" with space-varying distribution. In suitable inversion-symmetric structures with inhomogeneity, e.g., zigzag nanoribbons of gapless monolayer graphene, it is shown that the local moment of a state can be nonvanishing with sizable magnitude, while the corresponding total moment is subject to the broken symmetry constraint. Moreover, it is demonstrated that such local moment can interact with space-dependent electric and magnetic fields manifesting pronounced field effects and making possible a local valley control with external fields. Overall, a path to "local valleytronics" is illustrated which exploits local valley magnetic moments for device applications, relaxes the broken symmetry constraint on materials, and expands flexibility in the implementation of valleytronics.
Zheng-Han Huang, Feng-Wu Chen, Yu-Shu G. Wu
2023-08-31T19:17:49Z
http://arxiv.org/abs/2309.00091v1
# On the local aspect of valleytronics ###### Abstract Valley magnetic moments play a crucial role in valleytronics in 2D hexagonal materials. Traditionally, based on studies of quantum states in homogeneous bulks, it is widely believed that only materials with broken structural inversion symmetry can exhibit nonvanishing valley magnetic moments. Such constraint excludes from relevant applications those with inversion symmetry, as specifically exemplified by gapless monolayer graphene despite its technological advantage in routine growth and production. This work revisits valley-derived magnetic moments in a broad context covering inhomogeneous structures as well. It generalizes the notion of valley magnetic moment for a state from an integrated total quantity to the local field called 'local valley magnetic moment' with space-varying distribution. In suitable inversion-symmetric structures with inhomogeneity, e.g., zigzag nanoribbons of gapless monolayer graphene, it is shown that the local moment of a state can be nonvanishing with sizable magnitude, while the corresponding total moment is subject to the broken symmetry constraint. Moreover, it is demonstrated that such local moment can interact with space-dependent electric and magnetic fields manifesting pronounced field effects and making possible a local valley control with external fields. Overall, a path to 'local valleytronics' is illustrated which exploits local valley magnetic moments for device applications, relaxes the broken symmetry constraint on materials, and expands flexibility in the implementation of valleytronics. ## I Introduction After pioneering studies on the quantum Hall effect in graphene layers [1, 2, 3], atomically thin 2D hexagonal crystals with broken inversion symmetry, e.g. gapped graphene [4, 5, 6, 7] and monolayer transition metal dichalcogenides [8] have been recognized [9, 10] to form a crucial class of topological materials with significant impacts, due to the presence of two degenerate and inequivalent band structure valleys generally designated by K and K', respectively. The valley degree of freedom has important technological implications for binary information processing and, as such, has inspired the emergence of valleytronics [11]. In addition, extensive research efforts have led to the exciting discovery of a diverse range of novel valley phenomena including valley magnetic moments [9, 10] and those connected with the moments [12, 13, 14], such as robust valley topological currents [15, 16, 17, 18, 19, 20, 21], valley-polarized interface states [15], valley-orbit [22, 23] and valley Zeeman interactions [9, 23], with the findings having also motivated important device proposals for valleytronic applications such as valley filters/valves [11, 18, 24, 25], qubits [23, 26, 27, 28, 29], and FETs [30]. Traditionally, studies of valley magnetic moments have been performed from a homogeneous perspective, with important deductions specifically drawn from investigating moments of homogeneous bulk states as topological quantities [9, 10]. Such a perspective has long guided the field with important influence. For instance, studies have skipped any potential nontrivial spatial dependence in the valley magnetic moment and have been focused primarily on its integrated total value. 
Constraints such as breaking of the structural inversion symmetry [9] have been established as rules for nonvanishing total moments and widely applied to the selection of materials in experiments and applications, with gapped AB stacked bilayer graphene [12, 13, 14] and monolayer transition metal dichalcogenides [31, 32, 33] being well-known options for experiments. Moreover, when external field-valley magnetic moment interactions are explored, primarily those between homogeneous fields and total moments have been investigated. Within the above perspective, a restricted description of the spatial dependence can in principle be provided, though. In the limit of weak, slowly varying structural inhomogeneity, for example, such description would consist of a suitable partition of the space and application to each region the deduction drawn from the homogeneous case. However, rigorously speaking, the quasihomogeneous treatment of spatial dependence may overall under-describe the spectrum of valley physics, specifically that in the limit of strong inhomogeneity. From the scientific standpoint, the under-description may have overlooked interesting hidden aspects beyond the homogeneous perspective which are worthy of exploration. From the application standpoint, the under-description may raise the issue of validity concerning taking broken inversion symmetry as a universal material constraint, and clarifying such issue is critical as it impacts material options and opportunities for applications with valley magnetic moments. Inspired by both foregoing prospects, this work revisits valley-derived magnetic moments across the spectrum of inhomogeneity covering both weak and strong limits. It generalizes the notion of valley magnetic moment for a quantum state from an integrated total quantity to a local field with space-varying distribution called 'local valley magnetic moment' in the work. In suitable inversion-symmetric structures, e.g., zigzag nanoribbons of gapless graphene, where abrupt boundaries induce strong inhomogeneity, it is shown that even though the total moment of a state vanishes due to inversion symmetry, the state can nevertheless exhibit a sizable, nonvanishing local moment distribution. Moreover, it is demonstrated that such local moment can interact with space-dependent electric or magnetic fields, manifesting pronounced field effects and making possible a local valley control with external fields. Altogether, a path to 'local valleytronics' is opened up with advantages including expanded material options, among which an important one is gapless monolayer graphene. In particular, in view of available routine production with exfoliation or state-of-the-art 2D crystal growth [34, 35, 36, 37, 38, 39] for such graphene, the path considerably relaxes valley-derived magnetic moment based experiments and applications. The presentation is organized as follows. **Sec. II** discusses the notion of local valley magnetic moments in an analytical way. Specifically, it develops a current density formulation for both notional and quantitative discussions of local valley magnetic moments. In addition, it provides a compatibility discussion from the symmetry stand point, for the existence of nonvanishing local valley magnetic moments in inversion-symmetric structures. Last, interactions between local moments and magnetic and electric fields - local valley-orbit interactions, respectively, are presented near the end of **Sec. II**. **Sec. 
III** performs numerical studies and presents results in connection with and validating analytical discussions in **Sec. II**. **Sec. IV** gives conclusion and outlook. **Appendix A** provides a derivation of the current density used in **Sec. II**. **Appendix B** applies the formulation developed in **Sec. II** to the calculation of local valley magnetic moments in the homogeneous bulk case. **Appendix C** provides a supplement to the compatibility discussion in **Sec. II**. **II. LOCAL VALLEY MAGNETIC Moments** For clarity, we start the discussion with graphene serving as an example, describe the notion of local valley magnetic moments in terms of an intuitive picture, and then support the picture by deriving an analytical expression of local moments in the Dirac model of graphene with inhomogeneity, which also provides in the weak inhomogeneous limit a connection with the current theoretical understanding of valley magnetic moments, as well as goes beyond the limit with an important clue given for the likely existence of nonvanishing local moments in the case of an inversion-symmetric structure. Following it, an exact, symmetry-based argument is presented to explicitly support the compatibility between foregoing existence likelihood and inversion symmetry. Last, built on these foregoing discussions, a generic, operationally defined expression of local valley magnetic moments is developed independent of materials and structures for numerical calculations. **Figure** 1 shows a monolayer graphene crystal structure, where each unit cell consists of two carbon sites denoted by A (red) and B (blue) throughout the work. It also depicts a representative graphene electron state, in the tight-binding model including only nearest neighbor hopping and carbon atomic 2p\({}_{x}\) orbitals [40] with on-site energy \(\varepsilon_{{}_{A}}=\Delta\) for the orbital on A and \(\mathcal{E}_{B}=-\Delta\) for that on B. \(\Delta\) is also the gap parameter characterizing the corresponding graphene band structure, with 2 \(\Delta\) being the gap between conduction and valence bands. In gapless graphene, \(\Delta=0\) and \(\varepsilon_{{}_{A}}=\varepsilon_{{}_{B}}\) giving inversion symmetry between A and B sites and, thus, to the structure, too. As illustrated in **Figure 1**, the local valley magnetic moment of an electron arises out of a spin-like, local electron orbital rotation, as explained in the following. Take a near-K electron state \(\phi_{K}\) as an example. Write the state as \(\phi_{K,A}+\phi_{K,B}\) with \(\phi_{K,A}\) composed of A site orbitals and \(\phi_{K,B}\) B site orbitals. For a conduction (valence) band electron, the component \(\phi_{K,A}\) ( \(\phi_{K,B}\) ) dominates over the other. The two components carry E\({}^{\prime\prime}\) and A\({}^{\prime\prime}\) symmetry, respectively, in the group-theoretical representation language [41], which means they carry opposite phase increments \(\pm\)2/3\(\pi\), respectively, when moving among sites. Such phase variations lead to corresponding loop currents of opposite senses (orange and blue circles, respectively) that compete with each other. Since each current is weighted by electron probability on the corresponding site, i.e., \(\rho_{A}\) or \(\rho_{B}\) ( \(\rho_{A(B)}=\) local probability on site A (B)), the competition yields a net loop current \(\propto\rho_{A}-\rho_{B}\) (light green circle) and gives a 'local pseudospin' with a local magnetic moment \(\propto\rho_{A}-\rho_{B}\). 
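The bookkeeping behind this picture is simple enough to state in a few lines of code. The following is a minimal Python sketch (the two-component amplitudes used here are hypothetical illustrative numbers, not a solution of any Hamiltonian in this work): it normalizes a state written on the A and B sublattices, forms the local probabilities \(\rho_{A}\) and \(\rho_{B}\), and prints their difference, the quantity the local moment is proportional to.

```python
import numpy as np

# Hypothetical sublattice amplitudes F_A(y_j), F_B(y_j) on a few unit cells (illustrative only).
F_A = np.array([0.60, 0.45, 0.30, 0.15, 0.05], dtype=complex)
F_B = np.array([0.05, 0.15, 0.30, 0.45, 0.60], dtype=complex)

# Normalize the two-component state.
norm = np.sqrt(np.sum(np.abs(F_A) ** 2 + np.abs(F_B) ** 2))
F_A, F_B = F_A / norm, F_B / norm

rho_A = np.abs(F_A) ** 2      # local probability on A sites
rho_B = np.abs(F_B) ** 2      # local probability on B sites
rho_diff = rho_A - rho_B      # local A/B imbalance; the local moment is proportional to this

for j, d in enumerate(rho_diff):
    state = "broken" if abs(d) > 1e-12 else "intact"
    print(f"cell {j}: rho_A - rho_B = {d:+.3f} (probability-based inversion symmetry {state})")
```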
To facilitate the discussion, we further introduce the term 'probability based inversion symmetry breaking' for an electron state. Irrespective of the actual situation in structural inversion symmetry, when \(\rho_{A}=\rho_{B}\) ( \(\rho_{A}\neq\rho_{B}\) ), the probability based inversion symmetry is said to exist (be broken) locally in the state. Then, the local magnetic moment actually correlates with the local degree of probability based inversion symmetry breaking. For example, when \(\rho_{A}=\rho_{B}\)( \(\rho_{A}\neq\rho_{B}\) ) the moment is zero (nonvanishing) reflecting the existence (breaking) of probability based inversion symmetry. \(\varepsilon_{{}_{B}}=-\Delta\) for the orbital on B site (blue atom), with \(\Delta=0\) in gapless graphene. Overall, the electron performs a spin-like, local orbital rotation (light green circle) while executing a global translation (grey dashed line). Consider a near-K state, for example. Generally, it consists of the two components - \(\phi_{K,A}\) composed of A site orbitals and \(\phi_{K,B}\) composed of B site orbitals. The two components carry E\({}^{\prime\prime}\) and A\({}^{\prime\prime}\) symmetry in the group-theoretical representation language, with phase increments \(\pm\)2/3\(\pi\), respectively, as well as corresponding loop currents of opposite senses (orange and blue circles, respectively), resulting in the net loop current \(\propto\rho_{A}-\rho_{B}\) (light green circle) and corresponding local valley magnetic moment (green, out-of-plane arrow). Last, we make a note about how probability based inversion symmetry breaking may arise in the presence of structural inversion symmetry. Take a zigzag nanoribbon of gapless graphene for example. While the structure is invariant under structural inversion, it is well known that the bounded structure terminates on A sites on one edge and B sites on the other. Therefore, through boundary conditions on the electron state, an edge-induced AB asymmetry enters the corresponding electron probability distribution, giving distinct \(\rho_{A}\) and \(\rho_{B}\) and resulting in probability based inversion symmetry breaking. Further discussions about zigzag nanoribbons will be given below in this section as well as in **Sec. III**. ## 1 The Dirac model A discussion in the Dirac model of graphene with inhomogeneity is now performed to illustrate the foregoing picture. Consider a simple Q1D inhomogeneous structure in the absence of external fields, with the inhomogeneity derived from a spatial variation in the gap parameter of the model. For simplicity, the varying gap parameter is taken to be \(\Delta(y)\) which preserves translational symmetry in the \(x\) direction. Moreover, we take \(\Delta(y)\) to be a regular function free of singularities and, thus, avoid complications such as those due to abrupt boundaries. Let \(F\) be the Dirac two-component wave amplitude on carbon A and B sites ( \(F=\left(F_{A},\ \ \ F_{B}\right)^{\dagger}\), '\(t^{\prime}=\) transpose), valley index \(\tau=1\) (-1) for valley K (K'), \(E=\) electron energy relative to the mid-gap point, \(k_{x}\)\(=\) wave vector relative to the Dirac point, and (\(x\),\(y\)) \(=\) cell position. \(F\) satisfies the following Dirac equation ( \(h=1\), \(e=\) -1 (electron charge), Figure 1: **Cell-orbital magnetic moment** Monolayer graphene is used for illustration. Each unit cell consists of two carbon sites (A and B). 
In the tight-binding model used here, on-site energy \(\varepsilon_{A}=\Delta\) for the 2p\({}_{x}\) orbital on the A site (red atom) and \(\Delta=0\) in gapless graphene. Overall, the electron performs a spin-like, local orbital rotation (light green circle) while executing a global translation (grey dashed line). Consider a near-K state, for example. Generally, it consists of the two components - \(\phi_{K,A}\) composed of A site orbitals and \(\phi_{K,B}\) composed of B site orbitals. The two components carry E\({}^{\prime\prime}\) and A\({}^{\prime\prime}\) symmetry in the group-theoretical representation language, with phase increments \(\pm 2\pi/3\), respectively, as well as corresponding loop currents of opposite senses (orange and blue circles, respectively), resulting in the net loop current \(\propto\rho_{A}-\rho_{B}\) (light green circle) and the corresponding local valley magnetic moment (green, out-of-plane arrow).

and \(v_{F}=1\) (Fermi velocity) throughout the work) [3, 23, 42]:

\[H_{hom}F=EF,\qquad H_{hom}=\begin{pmatrix}\Delta(y)&k_{x}-\partial_{y}\\ k_{x}+\partial_{y}&-\Delta(y)\end{pmatrix}.\tag{1}\]

Note that the usual \(k_{x}\pm ik_{y}\) in the off-diagonal matrix elements of the Dirac Hamiltonian for bulk graphene is now replaced by \(k_{x}\pm\partial_{y}\), with the substitution \(k_{y}\rightarrow-i\partial_{y}\) for the structure considered here, following the standard effective mass theory [43]. Eqn. (1) is a generalization of those given in References [23] and [42] for graphene ribbons to the case where \(\Delta(y)\) is space varying. From Eqn. (1), the current density operator '\(j_{x}\)' in the \(x\) direction is easily constructed, giving

\[j_{x}=-\frac{k_{x}\rho}{E}-\tau\frac{\partial_{y}\rho_{diff}}{2E}\,,\tag{2}\]

where \(\rho(x,y)=\rho_{A}(x,y)+\rho_{B}(x,y)\), \(\rho_{diff}(x,y)=\rho_{A}(x,y)-\rho_{B}(x,y)\), and \(\rho_{A(B)}(x,y)\equiv|F_{A(B)}(x,y)|^{2}\). Details of the derivation of Eqn. (2) are given in **Appendix A**. Note that both \(\rho_{A}\) and \(\rho_{B}\) are actually independent of \(x\) due to the translational symmetry in the \(x\) direction. So is \(j_{x}\). Following the standard theory of magnetostatics [44], where the current density \(\vec{j}\) in the presence of a magnetization distribution \(\vec{m}\) is written as \(\vec{j}=\vec{j}_{free}+\nabla\times\vec{m}\), we identify the first term in Eqn. (2) with \((\vec{j}_{free})_{x}\) - a translational current composed of free charges - and the second term with \((\nabla\times\vec{m})_{x}\) - a magnetization current, with the corresponding magnetization distribution \(\vec{m}\) given by

\[\vec{m}=-\frac{\tau\rho_{diff}}{2E}\,\hat{z}.\tag{3}\]

Important implications follow Eqn. (3), as given below.

1. As \(\vec{m}\propto\rho_{diff}\), it confirms the picture of local valley magnetic moments depicted in **Figure 1**, with \(m=\vec{m}\cdot\hat{z}=-\tau\rho_{diff}/2E\).

2. For a homogeneous bulk, where \(\Delta(y)=\Delta_{0}\), Eqn.
(3) gives \[m=\rho\ \mu_{{}_{\mathit{bulk}}}(E,\Delta_{0};\tau),\] \[\mu_{{}_{\mathit{bulk}}}(E,\Delta_{0};\tau)\equiv-\frac{\tau \Delta_{0}}{2E^{2}}\,,\] (4) where \(\mu_{{}_{\mathit{bulk}}}(E,\Delta_{0};\tau)=\) total valley magnetic moment of the bulk state. Derivation of Eqn. (4) is given in **Appendix B**. Eqn. (4) agrees exactly with that obtained in the traditional, homogeneous perspective with a topological, valley Berry curvature-based approach [9]. Importantly, it shows the two notable features of \(\mu_{{}_{\mathit{bulk}}}\) traditionally established within the perspective, namely, the one-to-one correspondence between \(\tau\) and \(\mathrm{sgn}(\mu_{{}_{\mathit{bulk}}})\), and vanishing \(\mu_{{}_{\mathit{bulk}}}\) in the presence of structural inversion symmetry (\(\Delta_{0}=0\)) [9]. Such features constitute what we call 'homogeneous perspective-based expectations or constraints. 3. The expression of \(m\) given in Eqn. (4) takes the form of a \(\rho\) -weighted distribution of \(\mu_{{}_{\mathit{bulk}}}\) in the (\(x\),\(y\)) space, which suggests a simple extension to the weak, slowly varying inhomogeneous case, that is, \(m(x,y)=\rho(x,y)\)\(\mu_{{}_{\mathit{bulk}}}(E,\Delta(x,y);\tau)\). Such a quasi-homogeneous extension would, however, subject the local moment to the rule of broken structural inversion symmetry, that is, when \(\Delta(x,y)=0\), \(\mu_{{}_{\mathit{bulk}}}(E,\Delta(x,y);\tau)=0\) and so \(m(x,y)=0\). In contrast, the expression of \(m\) given in Eqn. (3) which is suited to general inhomogeneity is less restricted. It predicts, on the contrary, the likely existence of nonvanishing \(m\) when \(\rho_{{}_{\mathit{diff}}}(x,y)\) is finite, irrespective of the actual situation in structural inversion symmetry. 4. **Local valley magnetic moments and inversion symmetry** The likely existence of nonvanishing local Figure 1: Moreover, a quantitative expression of local valley magnetic moment is provided by the corresponding projection ‘\(m\)’, with \(m=\stackrel{{\rightarrow}}{{m}}\cdot\stackrel{{ \rightarrow}}{{z}}=-\frac{\tau\rho_{{}_{\mathit{diff}}}}{2E}\). moments, even in the presence of structural inversion symmetry, marks an important deviation of the present local valleytronics from the traditional, homogeneous perspective based valleytronics. The likelihood can generally be argued from the standpoint of symmetry, regardless of singularities such as abrupt boundaries in structures, as follows. Inversion-symmetric structures with translational symmetry in the \(x\) direction are considered. Let \(m_{t}(y)\) be the local moment distribution in the transverse (\(y\)) dimension, for a quantum state near one of the two Dirac points with valley index \(\tau\). Note that \(m_{\tau}\) is uniform in the \(x\) direction, given the translational symmetry in the direction. Firstly, we briefly apply the traditional symmetry argument [9] to the total valley magnetic moment \(\int m_{\tau}\left(y\right)dy\) and show that it vanishes in the structure considered here. Denote \(\int m_{\tau}\left(y\right)dy\) by M. Then an apparent conflict would come up when applying the inversion operation (Inv) as follows. 1) With the inversion being a symmetry of the structure, it follows that M remains invariant under Inv. 2) On the other hand, Inv flips the wave vector of the state and, hence, valley index, too, giving \(\mathrm{Inv}(\tau)=\) -\(\tau\). 
Since valleys \(\tau\) and -\(\tau\) are also time reversal (TR) transforms of each other, i.e., \(\mathrm{TR}(\tau)=\) -\(\tau\), it follows that the corresponding current loop of M reverses sense when going from \(\tau\) to -\(\tau\) thus leading to a sign flip in M, in conflict with the earlier conclusion of M being invariant. The conflict can only be resolved by putting \(\mathrm{M}=0\). However, the above symmetry argument does not forbid the existence of a nonvanishing \(m_{t}(y)\). For example, an oscillating, antisymmetric \(m_{t}(y)\), i.e., \(m_{t}(\)-\(y)\) -\(m_{t}(y)\) with nonvanishing amplitude would not violate the conclusion of vanishing M. Below, we show the compatibility between an antisymmetric \(m_{t}(y)\) and structural inversion symmetry. In **Figure 2**, such \(m_{t}(y)\) is depicted in the middle graph. Applying Inv changes \(m_{t}(y)\) to '\(m_{t}(\)-\(y)\)', with the transformed distribution shown in the left graph. On the other hand, as Inv flips the valley index as TR, it therefore changes \(m_{t}(y)\) to '\(m_{t}(y)\)' or '\(-m_{t}(y)\)', with the transformed distribution shown in the right graph. The agreement between the transformed \(\mathrm{Inv}(m_{t}(y))\) and \(\mathrm{TR}(m_{t}(y))\) demonstrates a consistency in the case of an antisymmetric \(m_{t}(y)\) and, hence, concludes compatibility between such \(m_{t}(y)\) and inversion symmetry. A more detailed argument is presented in **Appendix C**, in the case of zigzag nanoribbons in gapless graphene, where it provides a brief overview of state transformation under Inv and TR, and applies it to \(m_{t}(y)\), as a supplement to the above discussion. It would be worthwhile to note about the role of translational symmetry in the two examples, namely, homogeneous bulks and zigzag nanoribbons, both in gapless graphene. As already concluded, \(\int m_{\tau}\left(y\right)dy=0\) in both cases. But, concerning \(m_{t}(y)\), a distinction resulting from the symmetry may exist between the cases, as follows. In the homogeneous bulk case, with \(m_{t}\) a uniform distribution due to the translational symmetry, it is obvious that only a trivial antisymmetric distribution, i.e., \(m_{t}(y)=0\) everywhere can concur with the vanishing \(\int m_{\tau}\left(y\right)dy\). In contrast, for inhomogeneous structures such as zigzag nanoribbons, \(m_{t}(y)\) is likely space varying due to the lack of translational symmetry in the \(y\) direction. Therefore, even though \(\int m_{\tau}\left(y\right)dy=0\), it leaves plenty of room for \(m_{t}(y)\) to dodge the trivial destiny if it is antisymmetric. As will be illustrated explicitly with numerical results in **Sec. III**, \(m_{t}(y)\) in zigzag nanoribbons indeed oscillates with a nonvanishing amplitude as graphed in **Figure 2**. ## 3 Generic definition A both model- and material- independent, functional derivative expression is given below to define the local valley magnetic moment in terms of the local Zeeman response to a weak probing magnetic field, as follows: \[\begin{split}& m(\vec{r})=-\frac{\delta E_{Z_{zeman\_unit}}[B_{z}^{ (probe)}(\vec{r})]}{\delta B_{z}^{(probe)}(\vec{r})}\Bigg{|}_{B_{z}^{(probe)}( \vec{r})=0},\\ & E_{Z_{zeman\_unit}}[B_{z}^{(probe)}(\vec{r})]=-\int m(\vec{r})B_ {z}^{(probe)}(\vec{r})d^{2}r\end{split} \tag{5}\] Figure 2: **Local valley magnetic moment** in a zigzag graphene nanoribbon of gapless graphene. Middle – antisymmetric distribution \(m_{t}(y)\); left – transformed \(\mathrm{Inv}(m_{t}(y))\); right – transformed \(\mathrm{TR}(m_{t}(y))\)). 
Thin, short horizontal arrows indicate corresponding wave vectors (\(k\)) of quantum states. ( \(E_{Zeeman\_valley}=\) valley Zeeman energy, \(B_{z}^{(probe)}=\) probing magnetic field). Eqn. (5) exploits the physics of local Zeeman interaction \({}^{*}-m(\vec{r})B_{z}^{(probe)}\)\(\cdot\)\(\cdot\)\(\cdot\) to operationally define \(m(\vec{r})\). Without going into details, we state that it can be shown that such definition when applied to the Q1D inhomogeneous structure earlier considered in the Dirac model reproduces the same expression of \(m\) derived there. Eqn. (5) can be applied to numerical studies, including those with abrupt boundaries. In the graphene case, we perform such studies with the same tight-binding model used in **Figure 1**, with the magnetic field included in the model through the Peierls substitution method [45]. In the case of a Q1D structure, \(B_{z}^{(probe)}\) is taken to be a strip of flux as shown in **Figure 3**. Usage of the strip flux results in \(m(y)\) independent of \(x\), consistent with translational symmetry in the \(x\)-direction in the structure. **Figure 3. B.(probe) A strip of local, vertical magnetic field is used in the case of a Q1D structure.** ## 4 Effects of external fields Interactions between local valley magnetic moments and space-dependent electric and magnetic fields are discussed below. Because derivations of the interactions are somewhat involved, the presentation below takes the following strategy. Previous results in the homogeneous bulk case are briefly mentioned, followed by conjectures for extensions to the inhomogeneous case based on the results. Rigorous results are stated at the end, with derivations given and accessible elsewhere [46]. In the homogeneous bulk case, it is known that for a bulk state the corresponding valley magnetic moment \(\mu_{bulk}\) can interact with a uniform, out-of-plane magnetic field, e.g., \(B_{z}\), shifting the state energy by the valley Zeeman term \(\cdot\)\(-\mu_{bulk}\,B_{z}\), [9, 23]. In the inhomogeneous case, the foregoing result is replaced by the local expression \({}^{*}-m(\vec{r})B_{z}(\vec{r})\), following the earlier discussion in **Sec. II** that defines the local valley magnetic moment. Similarly, it is known that \(\mu_{bulk}\) can also couple with an electric field giving rise to the valley-orbit interaction. For a bulk graphene state with wave vector \(k_{x}\), the corresponding interaction energy is given by \(\cdot\)\(\frac{k_{x}}{\Delta_{0}}\varepsilon_{y}\mu_{bulk}\), [23], in the case where \(\varepsilon_{y}\) is a uniform, in-plane electric field in the y direction. This result leads, in the Q1D case, to the conjecture of a corresponding local expression given by \(\cdot\)\(\frac{k_{x}}{\Delta_{0}}\varepsilon_{y}(y)m(y)\), for the interaction between the local moment \(m(\mathrm{y})\) and a space-dependent electric field \(\varepsilon_{y}(y)\). In the following, we restrict the attention to Q1D structures and present a rigorous statement in the linear response regime for local valley \(-\) external field interactions. Consider a quantum state with wave vector \(k_{x}\). Let \(m^{(0)}(y)\) and \(E^{(0)}\) be the corresponding field-free local valley magnetic moment and electron state energy, respectively. 
In the linear response regime, the local valley \(-\) external field interaction energy is given by

\[E_{valley-field}=\int_{-\infty}^{\infty}\frac{k_{x}}{E^{(0)}}\,\varepsilon_{y}(y)\,m^{(0)}(y)\,dy-\int_{-\infty}^{\infty}B_{z}(y)\,m^{(0)}(y)\,dy\tag{6}\]

with

\[\frac{k_{x}}{E^{(0)}}\,\varepsilon_{y}(y)\,m^{(0)}(y)\tag{7}\]

being the _local valley-orbit interaction_ due to the electric field \(\varepsilon_{y}(y)\) and

\[-B_{z}(y)\,m^{(0)}(y)\tag{8}\]

the _local valley Zeeman interaction_ due to the magnetic field \(B_{z}(y)\). Both interactions can serve as useful _mechanisms_ for _local valley control_ with space-dependent electric/magnetic fields, in analogy to their bulk counterparts, which have already been demonstrated to be useful for valleytronic device applications [34]. Note that in the low energy limit where \(E^{(0)}\rightarrow\Delta_{0}\), Eqn. (7) reduces to the earlier conjecture '\(\frac{k_{x}}{\Delta_{0}}\varepsilon_{y}(y)m(y)\)' based on the homogeneous bulk result.

## III Numerical results

This section carries out numerical studies to illustrate i) nonvanishing local valley magnetic moments in the presence of inversion symmetry and ii) local magnetic and electric effects. As the local magnetic moment is dominated by the difference \(\rho_{diff}\), we look for structures with strong modulation on the atomic scale in order to create a pronounced contrast between A and B sites for \(\rho_{diff}\) to be nonvanishing. This leads to the consideration of zigzag graphene nanoribbons, where one boundary abruptly terminates at A sites and the other at B sites, thus creating a strong asymmetry between A and B sites. In all figures presented below, the same nanoribbon is studied, with the gap parameter \(\Delta=0\) eV and ribbon width W = 65.8 \(a\) (\(a\) = 1.42 Å being the bulk lattice constant). Throughout the presentation, magnetic moments are expressed in units of the Bohr magneton (\(\mu_{B}=5.79\times 10^{-5}\) eV/Tesla). **Figure 4** presents local valley magnetic moments of nanoribbon subband states. **(a)** shows a few valence and conduction subbands in the ribbon. States of opposite Dirac valleys are located near \(k_{x}\sim-2.10\)\(a^{-1}\) and \(k_{x}\sim 2.10\)\(a^{-1}\), respectively. **(b)** shows VMM\({}_{1/2}\) - local valley magnetic moments accumulated over half the width of the ribbon (i.e., VMM\({}_{1/2}=\int_{0}^{W/2}m(y)dy\)), for the subband states already presented in **(a)**, using the same color index scheme as in **(a)**. Note that for each subband, VMM\({}_{1/2}\) flips in sign for states of opposite valleys (near \(k_{x}\sim-2.10\)\(a^{-1}\) and \(k_{x}\sim 2.10\)\(a^{-1}\), respectively). Moreover, for each \(k_{x}\), VMM\({}_{1/2}\) flips in sign, too, for corresponding conduction and valence band states related by electron-hole symmetry, for example, the second conduction and valence subband states. Both flips can be attributed to the underlying time reversal symmetry and will play a role in **Figures 5** and **6** below when field effects are considered. Note that VMM\({}_{1/2}\) in **(b)** is sizable and can sometimes exceed 10 \(\mu_{B}\) in the nanoribbon considered.
**(c)** illustrates local valley magnetic moments (LVMMs) of second valence subband states (in the red curve shown in **(a)**) at a few selected \(k_{x}\)'s (-1.88 \(a^{\shortmid}\), -2.10 \(a^{\shortmid}\), and -2.31 \(a^{\shortmid}\)) near the Dirac point at \(k_{x}\) \(\sim\) -2.10 \(a^{\shortmid}\). All LVMMs shown here exhibit antisymmetry in the \(y\)-direction, giving vanishing total valley magnetic moments irrespective of \(k_{x}\). **(d)** presents \(\rho_{A}(y)\) and \(\rho_{B}(y)\) of the second valence subband state at \(k_{x}\) = -2.10 \(a^{\shortmid}\), which implies a sign oscillation in \(\rho_{diff}(y)\) and, hence, in the corresponding LVMM as well in agreement with the oscillation shown by the red curve in **(c)**. **Gapless zigzag graphene nanoribbon** **Figure 5** illustrates effects of the local valley Zeeman interaction given in Eqn. (6) and highlights the usefulness of local valley magnetic moments when total moments vanish. Introduce the magnetic field strength parameter B\({}_{\rm{z0}}\) with \(\mu_{B}\)B\({}_{\rm{z0}}=1\) meV, which corresponds to B\({}_{\rm{z0}}\sim 17\) Tesla. **(a)** compares the field-free subbands (black) with those when a locally varying, step-like magnetic field B\({}_{\rm{z}}(y)\) is applied, with the field flux confined exclusively to the lower half ribbon, i.e., B\({}_{\rm{z}}(y<\) W/2) = B\({}_{\rm{z0}}\) and B\({}_{\rm{z}}(y>\) W/2) = 0 (red). In order to interpret the graph, we apply the expression of local valley Zeeman interaction energy \({}^{*-}\int\limits_{0}^{W/2}B_{z0}m^{(0)}(y)dy\), which yields the product '-B\({}_{\rm{z0}}\) VMM\({}_{1/2}\)'. Since VMM\({}_{1/2}\) carries opposite signs for opposite valleys, as noted earlier in **Figure 4**, the interaction lifts the valley degeneracy resulting in the valley Zeeman splitting \(\sim\) 17 meV shown in the second conduction or valence subband. **(b)** compares the field-free subbands (black) with those when the whole ribbon is immersed in the uniform magnetic field given by B\({}_{\rm{z}}(y)=\) B\({}_{\rm{z0}}\) (blue). The valley Zeeman interaction energy in this case is proportional to the total valley magnetic moment and thus vanishes. As the result, the magnetic field only induces a Landau magnetic energy shift common to both valleys without breaking the valley degeneracy. **Local magnetic effects** In **Figure 6**, we illustrate effects of the Figure 4: **Local valley magnetic moments** in a zigzag nanoribbon of gapless graphene. **(a)** shows a few nanoribbon subbands, with each subband indexed by a corresponding color. The two Dirac points are located near \(k_{x}=-2.10\)\(a^{-1}\) and \(k_{x}=2.10\)\(a^{-1}\), respectively. **(b)** shows local valley magnetic moments integrated over half width of the ribbon (VMM\({}_{1/2}\)) for subband states already presented in **(a)**, using the same color index scheme given in **(a)**. **(c)** Nontrivial, antisymmetric local valley magnetic moments (LVMMs) are obtained for second valence subband states (in the red curve shown in **(a)**) at a few selected \(k_{x}\)’s near the Dirac point with \(k_{x}\)\(\sim-2.10\)\(a^{-1}\). **(d)** depicts the density distributions, \(\rho_{A}(y)\) and \(\rho_{B}(y)\), of the second valence subband state at \(k_{x}=-2.10\)\(a^{-1}\). Figure 5: **Local magnetic effects** in the same zigzag nanoribbon used in **Figure 4**. **(a)** compares field-free subbands (black) with those when a locally varying, step-like magnetic field (B\({}_{\rm{z}}(y)\)) is applied to the lower half ribbon (red). 
The valley degeneracy is lifted by B\({}_{\rm{z}}(y)\) leading to a valley Zeeman splitting \(\sim\) 17 meV. **(b)** compares field-free subbands (black) with those when a uniform magnetic field is applied (blue). It only introduces a Landau magnetic energy shift without breaking the valley degeneracy. **Local electric effects** In **Figure 6**, we illustrate effects of the local valley-orbit interaction given in Eqn. (6). We take \(\varepsilon_{y}(y)\) to be generated by a symmetric, piecewise linear, electrical potential \(V(y)\) with the slope given by \(\pm\,\varepsilon_{y0}\) of opposite signs for \(y<\) W/2 and \(y>\) W/2 (i.e., \(\varepsilon_{y}(y)=-\partial_{y}V=\varepsilon_{y0}\,\mathrm{sgn}(y-W/2)\)). The figure compares field-free subbands (black) with those in the presence of \(V(y)\) (red). In the field-free case, the subband structure has a direct gap between the second conduction and second valence bands. But in the presence of \(V(y)\), band edge states of each subband shift in \(k_{x}\) in opposite directions for opposite valleys (near \(k_{x}=\pm 2.10\ a^{-1}\), respectively), giving a 'valley Rashba splitting'. Moreover, band edge states of the two subbands shift in \(k_{x}\) in opposite directions, creating a relative wave vector difference \(\delta k_{x}\sim 0.02\)\(a^{-1}\) between the two subbands' edges and, correspondingly, an indirect gap between the two subbands. Both foregoing \(V(y)\)-induced shifts can be explained in terms of the local valley-orbit interaction energy \(\int\limits_{-\infty}^{\infty}\frac{k_{x}}{E^{(0)}}\,\varepsilon_{y}(y)m^{(0)}(y)dy\), as follows. When applied to the present case, the expression reduces to '\(2\,\frac{k_{x}}{E^{(0)}}\,\varepsilon_{y0}\,VMM_{1/2}\)', giving a linear-in-\(k_{x}\) energy shift in the subband dispersion with a sign dependent on both the state energy \(E^{(0)}\) and the corresponding \(VMM_{1/2}\). As noted earlier in **Figure 4**, \(VMM_{1/2}\) carries opposite signs for opposite valleys as well as for second conduction and valence subbands, thus resulting in the shifts observed above. In passing, we note that due to the antisymmetry in the local valley magnetic moment, a relatively simple linear potential \(V(y)\) corresponding to a uniform \(\varepsilon_{y}\) would not produce the splitting, while it would in the homogeneous bulk case with broken inversion symmetry, where it interacts with a nonvanishing \(\mu_{bulk}\) and produces the splitting. In closing the section, we briefly remark on a flexibility brought by local valley \(-\) external field interactions for valley control. In the case where the local moment exhibits a sign variation in space, as illustrated in **Figure 4**, alternative local magnetic (electrical) fields with signs and distributions locally correlated to those of the local moment may produce the same valley Zeeman splitting (valley Rashba splitting) and effect the same magnetic (electric) valley control. For example, in **Figure 5 (a)**, an alternative magnetic field given by B\({}_{\rm{z}}(y<\) W/2) = 0 and B\({}_{\rm{z}}(y>\) W/2) = \(-\)B\({}_{\rm{z0}}\) would produce the same valley Zeeman splitting, as can be verified using the Zeeman term in Eqn. (6). **IV. CONCLUSION** In conclusion, while being valley topological quantities, valley magnetic moments also carry a hidden dimension of local physics. As this work has shown, the presence of inhomogeneity in space makes possible distinct local valley phenomena which are not dictated by the traditional perspective developed from studies of homogeneous bulks.
Figure 6: **Local electric effects** in the same zigzag nanoribbon used in **Figure 4**. We compare field-free subbands (black) with those in the presence of a symmetric \(V(y)\) (or antisymmetric \(\varepsilon_{y}(y)\)) that varies linearly between \(\pm 0.05\) eV with a piecewise constant slope \(\pm\,\varepsilon_{y0}\). In the field-free case, it shows a direct gap between the second conduction and second valence subbands. However, effects of the electric field shift various subband edges, resulting in a valley Rashba splitting as well as an indirect gap between the two subbands, with the latter characterized by a relative conduction-valence band edge wave vector difference \(\delta k_{x}\sim 0.02\)\(a^{-1}\). In order to explore the dimension, the notion of local valley magnetic moments has been introduced as a vehicle to address degrees of freedom beyond total valley magnetic moments. An operational definition is given to such moments in terms of local magnetic response. Both analytical and numerical analyses have been performed for local valley magnetic moments, giving interesting findings as summarized below. In graphene, for example, the local valley magnetic moment is shown to be tied to the local site probability difference '\(\rho_{A}-\rho_{B}\)', which suggests the breaking of local probability-based inversion symmetry, in place of structural inversion symmetry, as the condition for the existence of, and applications based on, valley-derived magnetic moments. By relaxing the structural inversion symmetry constraint on materials, the study has expanded the family of valleytronic materials. In particular, it adds to the list gapless monolayer graphene, an important material which is relatively accessible, for magnetic moment-based experiments and applications. In addition, the local valley magnetic moment variable introduced is also application-suited, as it is directly linked to local valley \(-\) external field interactions. Specifically, where total valley magnetic moments vanish, local valley Zeeman and local valley-orbit interactions have been shown to exist and manifest pronounced magnetic and electric effects, respectively. Such effects can be exploited for local valley control and provide a conduit to 'local valleytronics' for the implementation of valleytronics. Last but not least, the novel local valley phenomena revealed suggest the exciting direction of _valley engineering_: the design of and search for inhomogeneous structures to tailor local valley physics for applications. ## Acknowledgment We thank Yen-Ju Lin for technical support in numerical calculations. We acknowledge the financial support of MoST, ROC through Contract No. MOST 110-2112-M-007-038. F.-W. C. acknowledges partial support from Academia Sinica. \({}^{\dagger}\) Corresponding author. Email: [email protected] ## Appendix A Current density Consider a simple Q1D inhomogeneous structure exhibiting translational symmetry in the \(x\) direction. The Dirac Eqn. (1) is expanded giving rise to the following component equations and their complex conjugates: \[\begin{split}\Delta(y)F_{A}+(k_{x}-\widehat{\nu}_{y})F_{B}&=EF_{A},\\ (k_{x}+\widehat{\nu}_{y})F_{A}-\Delta(y)F_{B}&=EF_{B},\\ \Delta(y)F_{A}^{*}+(k_{x}-\widehat{\nu}_{y})F_{B}^{*}&=EF_{A}^{*},\\ (k_{x}+\widehat{\nu}_{y})F_{A}^{*}-\Delta(y)F_{B}^{*}&=EF_{B}^{*}.\end{split} \tag{10}\] Combining the four wave equations in Eqn.
(10), we obtain \[2k_{x}\rho+\widehat{\nu}_{y}\rho_{diff}=2Ej_{x}^{particle}\,, \tag{11}\] where \(j_{x}^{particle}=F_{A}^{*}F_{B}+F_{B}^{*}F_{A}\) is the particle current density in the Dirac model [3]. This gives the charge current density \[j_{x}=-\frac{k_{x}\rho}{E}-\tau\frac{\partial_{y}\rho_{diff}}{2E} \tag{12}\] shown in Eqn. (2). ## Appendix B Valley magnetic moment in the bulk In the homogeneous bulk case, the Dirac equation is given by [3] \[\begin{split} H_{bulk}F&=EF,\\ H_{bulk}&=\left(\begin{array}{cc}\Delta_{0}&k_{x}-i\tau k_{y}\\ k_{x}+i\tau k_{y}&-\Delta_{0}\end{array}\right),\end{split} \tag{13}\] with the following solution \[\begin{split} F_{A}&=\rho^{1/2}\,(k_{x}-i\tau k_{y})\,/\left[k^{2}+(E-\Delta_{0})^{2}\right]^{1/2},\\ F_{B}&=-\rho^{1/2}\,(\Delta_{0}-E)\,/\left[k^{2}+(E-\Delta_{0})^{2}\right]^{1/2},\end{split} \tag{14}\] where \(E\) is the electron energy given by \((k^{2}+\Delta_{0}^{\,2})^{1/2}\). The substitution of Eqn. (14) into the expression of \(m\) in Eqn. (3) yields \[m=-\frac{\tau\rho_{diff}}{2E}=-\frac{\tau\rho[k^{2}-(\Delta_{0}-E)^{2}]}{2E[k^{2}+(\Delta_{0}-E)^{2}]}, \tag{15}\] which can be transformed by straightforward mathematics into the form of Eqn. (4) using the energy dispersion \(E=(k^{2}+\Delta_{0}^{\,2})^{1/2}\) to express \(k^{2}\) in terms of \(E\) and \(\Delta_{0}\). ## Appendix C State transformation under Inv and TR We give an overview of state transformation under Inv and TR for zigzag nanoribbons in gapless graphene with translational symmetry in the \(x\) direction, and then apply it to the local moment. For valley \(\tau\), a nanoribbon state satisfies the following Dirac equation \[\begin{split}& H_{\tau}F=EF,\\ & H_{\tau}=\begin{pmatrix}0&k_{x}-\widehat{\nu}_{y}\\ k_{x}+\widehat{\nu}_{y}&0\end{pmatrix},\\ & F_{A}(W/2)=F_{B}(-W/2)=0.\end{split} \tag{10}\] The last line in Eqn. (10) provides the boundary condition on \(F\) [42]. For the discussion below, we shall denote the corresponding local valley moment of \(F\) by \(m_{\tau}(y)\). Under Inv, \(k_{x}\rightarrow-k_{x}\) and \(\tau\rightarrow-\tau\), so Eqn. (10) transforms to the one below \[\begin{split}& H_{-\tau}F^{(Inv)}=EF^{(Inv)},\\ & H_{-\tau}=\begin{pmatrix}0&-k_{x}+\widehat{\nu}_{y}\\ -k_{x}-\widehat{\nu}_{y}&0\end{pmatrix},\\ & F_{A}^{(Inv)}(W/2)=F_{B}^{(Inv)}(-W/2)=0,\end{split} \tag{11}\] with \(F^{(Inv)}\) given by \[\begin{split}& F_{A}^{(Inv)}(y)=-F_{B}(-y),\\ & F_{B}^{(Inv)}(y)=F_{A}(-y).\end{split} \tag{12}\] \(F^{(Inv)}\) given above satisfies both the transformed Dirac equation and the boundary condition, as can be easily verified. As indicated by \(F^{(Inv)}\) in Eqn. (12), Inv switches A and B sites and at the same time induces the mirror reflection \(y\rightarrow-y\). The site switch effectively flips the valley index of the state and offsets the previous valley flip in the Dirac Hamiltonian.
Overall, with only the reflection in effect, it results in \[\text{Inv}(m_{\tau}(y))=m_{\tau}(-y). \tag{13}\] Under TR, again \(k_{x}\rightarrow-k_{x}\) and \(\tau\rightarrow-\tau\), so Eqn. (10) becomes also one for valley '\(-\tau\)' given by \[\begin{split}& H_{-\tau}F^{(TR)}=EF^{(TR)},\\ & H_{-\tau}=\begin{pmatrix}0&-k_{x}+\widehat{\nu}_{y}\\ -k_{x}-\widehat{\nu}_{y}&0\end{pmatrix},\\ & F_{A}^{(TR)}(W/2)=F_{B}^{(TR)}(-W/2)=0,\end{split} \tag{14}\] with the solution given by \[\begin{split}& F_{A}^{(TR)}(y)=F_{A}^{*}(y)=F_{A}(y),\\ & F_{B}^{(TR)}(y)=-F_{B}^{*}(y)=-F_{B}(y).\end{split} \tag{15}\] Above, we have used the fact that for the zigzag nanoribbon bound state, \(F_{A}\) and \(F_{B}\) can be taken to be real. As TR produces only a valley flip here, we obtain \[\text{TR}(m_{\tau}(y))=-m_{\tau}(y). \tag{16}\] Last, as Eqns. (11) and (14) are identical, with the assumption that the solutions for a given \(k_{x}\) are nondegenerate, indeed as numerically shown in **Figure 4 (a)**, we conclude that \(F^{(Inv)}\) and \(F^{(TR)}\) describe the same state, leading to Inv(\(m_{\tau}(y)\)) = TR(\(m_{\tau}(y)\)), that is, \(m_{\tau}(-y)=-m_{\tau}(y)\). We thus conclude that \(m_{\tau}(y)\) is antisymmetric in zigzag nanoribbons of gapless graphene. ## References * [1] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature **438**, 197 (2005). * [2] Y. Zhang, Y.-W. Tan, H. L. Stormer, and P. Kim, Nature **438**, 201 (2005). * [3] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. **81**, 109 (2009). * [4] E. McCann and V. I. Fal'ko, Physical Review Letters **96**, 086805 (2006). * [5] E. V. Castro, K. S. Novoselov, S. V. Morozov, N. M. R. Peres, J. M. B. L. dos Santos, J. Nilsson, F. Guinea, A. K. Geim, and A. H. C. Neto, Physical Review Letters **99**, 216802 (2007). * [6] G. Giovannetti, P. A. Khomyakov, G. Brocks, P. J. Kelly, and J. van den Brink, Physical Review B **76**, 073103 (2007). * [7] B. Sachs, T. O. Wehling, M. I. Katsnelson, and A. I. Lichtenstein, Physical Review B **84**, 195414 (2011). * [8] K. F. Mak, C. Lee, J. Hone, J. Shan, and T. F. Heinz, Physical Review Letters **105**, 136805 (2010). * [9] D. Xiao, W. Yao, and Q. Niu, Phys. Rev. Lett. **99**, 236809 (2007). * [10] D. Xiao, G.-B. Liu, W. Feng, X. Xu, and W. Yao, Physical Review Letters **108**, 196802 (2012). * [11] A. Rycerz, J. Tworzydlo, and C. W. J. Beenakker, Nature Physics **3**, 172 (2007). * [12] R. V. Gorbachev, J. C. W. Song, G. L. Yu, A. V. Kretinin, F. Withers, Y. Cao, A. Mishchenko, I. V. Grigorieva, K. S. Novoselov, L. S. Levitov, and A. K. Geim, Science **346**, 448 (2014). * [13] Y. Shimazaki, M. Yamamoto, I. V. Borzenets, K. Watanabe, T. Taniguchi, and S. Tarucha, Nature Physics **11**, 1032 (2015). * [14] M. Sui, G. Chen, L. Ma, W.-Y. Shan, D. Tian, K. Watanabe, T. Taniguchi, X. Jin, W. Yao, D. Xiao, and Y. Zhang, Nature Physics **11**, 1027 (2015). * [15] F. Zhang, A. H. MacDonald, and E. J. Mele, Proceedings of the National Academy of Sciences **110**, 10546 (2013). * [16] I. Martin, Ya. M. Blanter, and A. F.
Morpurgo, Physical Review Letters **100**, 036804 (2008). * [17] L. Ju, Z. Shi, N. Nair, Y. Lv, C. Jin, J. Velasco, C. Ojeda-Aristizabal, H. A. Bechtel, M. C. Martin, A. Zettl, J. Analytis, and F. Wang, Nature **520**, 650 (2015). * [18] J. Li, R.-X. Zhang, Z. Yin, J. Zhang, K. Watanabe, T. Taniguchi, C. Liu, and J. Zhu, Science **362**, 1149 (2018). * [19] P. San-Jose and E. Prada, Physical Review B **88**, 121408 (2013). * [20] S. Huang, K. Kim, D. K. Efimkin, T. Lovorn, T. Taniguchi, K. Watanabe, A. H. MacDonald, E. Tutuc, and B. J. LeRoy, Physical Review Letters **121**, 037702 (2018). * [21] P. Rickhaus, J. Wallbank, S. Slizovskiy, R. Pisoni, H. Overweg, Y. Lee, M. Eich, M.-H. Liu, K. Watanabe, T. Taniguchi, T. Ihn, and K. Ensslin, Nano Letters **18**, 6725 (2018). * [22] P. Gosselin, A. Berard, H. Mohrbach, and S. Ghosh, The European Physical Journal C **59**, 883 (2009). * [23] G. Y. Wu, N.-Y. Lue, and L. Chang, Phys. Rev. B **84**, 195463 (2011). * [24] C. Gold, A. Knothe, A. Kurzmann, A. Garcia-Ruiz, K. Watanabe, T. Taniguchi, V. Fal'ko, K. Ensslin, and T. Ihn, Physical Review Letters **127**, 046801 (2021). * [25] J. Pereira, F. Peeters, R. Costa Filho, and G. Farias, Journal of Physics: Condensed Matter **21**, 045301 (2009). * [26] N. Rohling and G. Burkard, New Journal of Physics **14**, 083008 (2012). * [27] Y. Wu, Q. Tong, G.-B. Liu, H. Yu, and W. Yao, Physical Review B **93**, 045313 (2016). * [28] G. Szechenyi, L. Chirolli, and A. Palyi, 2D Materials **5**, 035004 (2018). * [29] J. Pawlowski, D. Zebrowski, and S. Bednarek, Physical Review B **97**, 155412 (2018). * [30] M.-K. Lee, N.-Y. Lue, C.-K. Wen, and G. Y. Wu, Physical Review B **86**, 165411 (2012). * [31] K. F. Mak, K. He, J. Shan, and T. F. Heinz, Nature Nanotechnology **7**, 494 (2012). * [32] Z. Wu, B. T. Zhou, X. Cai, P. Cheung, G.-B. Liu, M. Huang, J. Lin, T. Han, L. An, Y. Wang, S. Xu, G. Long, C. Cheng, K. T. Law, F. Zhang, and N. Wang, Nature Communications **10**, 611 (2019). * [33] J. Lee, W. Heo, M. Cha, K. Watanabe, T. Taniguchi, J. Kim, S. Cha, D. Kim, M.-H. Jo, and H. Choi, Nature Communications **12**, 1635 (2021). * [34] X. Li, W. Cai, J. An, S. Kim, J. Nah, D. Yang, R. Piner, A. Velamakuni, I. Jung, E. Tutuc, S. K. Banerjee, L. Colombo, and R. S. Ruoff, Science **324**, 1312 (2009). * [35] T. Wu, X. Zhang, Q. Yuan, J. Xue, G. Lu, Z. Liu, H. Wang, H. Wang, F. Ding, Q. Yu, X. Xie, and M. Jiang, Nature Mater **15**, 43 (2016). * [36] Y. Kim, E. Moyen, H. Yi, J. Avila, C. Chen, M. C. Asensio, Y. H. Lee, and D. Pribat, 2D Mater. **5**, 035008 (2018). * [37] D. A. Boyd, W.-H. Lin, C.-C. Hsu, M. L. Teague, C.-C. Chen, Y.-Y. Lo, W.-Y. Chan, W.-B. Su, T.-C. Cheng, C.-S. Chang, C.-I. Wu, and N.-C. Yeh, Nat Commun **6**, 6620 (2015). * [38] M. Wang, M. Huang, D. Luo, Y. Li, M. Choe, W. K. Seong, M. Kim, S. Jin, M. Wang, S. Chatterjee, Y. Kwon, Z. Lee, and R. S. Ruoff, Nature **596**, 519 (2021). * [39] J. Li, M. Chen, A. Samad, H. Dong, A. Ray, J. Zhang, X. Jiang, U. Schwingenschlogl, J. Domke, C. Chen, Y. Han, T. Fritz, R. S. Ruoff, B. Tian, and X. Zhang, Nat. Mater. **21**, 740 (2022). * [40] P. R. Wallace, Phys. Rev. **71**, 622 (1947). * [41] M. S. Dresselhaus, G. Dresselhaus, and A. Jorio, _Group Theory: Application to the Physics of Condensed Matter_ (Springer-Verlag, Berlin, 2008). * [42] L. Brey and H. A. Fertig, Physical Review B **73**, 235411 (2006). * [43] J. M. Ziman, _Principles of the Theory of Solids_ (Cambridge university press, 1972). * [44] J. D. 
Jackson, _Classical Electrodynamics_, Third Edition (New York, 1998). * [45] R. Peierls, Zeitschrift fur Physik **80**, 763 (1933). * [46] F.-W. Chen, Z.-H. Huang, and Y.-S. G. Wu, "Valley field mechanics: a local perspective beyond valley flavor", arXiv:2208.02915 (2022).
2309.10282
Constraining hybrid potential scalar field cosmological model in Lyra's geometry with recent observational data
In the current study, we investigate a scalar field cosmological model with Lyra's geometry to explain the present cosmic expansion in a homogeneous and isotropic flat FRW universe. In Einstein's field equations, we presupposed a variable displacement vector as an element of Lyra's geometry. In the context of the conventional theory of gravity, we suggest a suitable parameterization of the scalar field's dark energy density in the hybrid function of redshift $z$, confirming the essential transition behavior of the universe from a decelerating era to the present accelerated scenario. We present constraints on model parameters using the most recent observational data sets from OHD, BAO/CMB, and Pantheon, taking Markov Chain Monte Carlo (MCMC) analysis into account. For the proposed model, the best estimated values of parameters for the combined dataset (OHD, BAO/CMB, and Pantheon) are $ H_0 = 71.15\pm 0.26$ km/s/Mpc, $ \Omega_{m0}=0.2625\pm 0.0024$, $ \Omega_{\phi0} = 0.676\pm0.038$, $ \alpha=-0.22\pm0.13$, $n = 0.096\pm0.079$, and $k = 0.38\pm0.32$. The model exhibits a flipping nature, and the redshift transition occurs at $z_t = 0.756^{+0.005}_{-0.015}$. The current value of the decelerated parameter for the proposed model is calculated as $q_0 = -0.625^{+0.067}_{-0.085}$ for the combined dataset. Some dynamical properties of the model like energy density ($\rho_{\phi}$), scalar field pressure ($p_{\phi}$), EoS parameter of scalar field ($\omega_{\phi}$), and effective EoS parameter ($\omega_{eff}$) are analyzed and presented. Further, we have also examined the statefinder diagnosis and jerk parameters of the derived model. The total density parameter for the derived model is found to be unity which is in nice agreement with recent standard findings.
Vinod Kumar Bhardwaj, Anil Kumar Yadav, Lalit Kumar Gupta, Rajendra Prasad, Sudhir Kumar Srivastava
2023-09-19T03:11:07Z
http://arxiv.org/abs/2309.10282v2
###### Abstract In the current study, we investigate a scalar field cosmological model with Lyra's geometry to explain the present cosmic expansion in a homogeneous and isotropic flat FRW universe. In Einstein's field equations, we presupposed a variable displacement vector as an element of Lyra's geometry. In the context of the conventional theory of gravity, we suggest a suitable parameterization of the scalar field's dark energy density in the hybrid function of redshift \(z\), confirming the essential transition behavior of the universe from a decelerating era to the present accelerated scenario. We present constraints on model parameters using the most recent observational data sets from OHD, BAO/CMB, and Pantheon, taking Markov Chain Monte Carlo (MCMC) analysis into account. For the proposed model, the best estimated values of parameters for the combined dataset (OHD, BAO/CMB, and Pantheon) are \(H_{0}=71.15\pm 0.26\) km/s/Mpc, \(\Omega_{m0}=0.2625\pm 0.0024\), \(\Omega_{\phi 0}=0.676\pm 0.038\), \(\alpha=-0.22\pm 0.13\), \(n=0.096\pm 0.079\), and \(k=0.38\pm 0.32\). The model exhibits a flipping nature, and the redshift transition occurs at \(z_{t}=0.756^{+0.005}_{-0.015}\). The current value of the deceleration parameter for the proposed model is calculated as \(q_{0}=-0.625^{+0.067}_{-0.085}\) for the combined dataset. Some dynamical properties of the model like energy density (\(\rho_{\phi}\)), scalar field pressure (\(p_{\phi}\)), EoS parameter of scalar field (\(\omega_{\phi}\)), and effective EoS parameter (\(\omega_{eff}\)) are analyzed and presented. Further, we have also examined the statefinder diagnosis and jerk parameters of the derived model. The total density parameter for the derived model is found to be unity, which is in good agreement with recent standard findings. **Current Observational constraints on Hybrid potential scalar field cosmological model in Lyra Geometry** Vinod Kumar Bhardwaj Department of Mathematics, GLA University, Mathura-281 406, Uttar Pradesh, India E-mail:[email protected] ## 1 Introduction In order to explain how gravity interacts with space and time, Einstein developed the general relativity (GR) theory at the beginning of the 20th century. He made a clear connection between space-time geometry and matter and radiation. Relating the energy and momentum, fundamental characteristics of matter and radiation, to the curvature of space-time, Einstein presented the field equation for GR as \(R_{ij}-\frac{1}{2}Rg_{ij}=8\pi GT_{ij}\) [1]. Since the theoretical development of general relativity, which connects geometry with gravity, several approaches and theories have been proposed in search of additional geometrization models. The idea of geometrizing gravity together with the other fundamental forces has also been proposed. In this direction, Weyl suggested the idea of geometrizing gravity and electromagnetism together using Riemannian geometry, to unite them under the umbrella of a "unified field theory" [2]. Weyl suggested the use of Riemannian geometry and a vector potential to describe electromagnetic forces. In Riemannian geometry, the net displacement of a vector transported across space is determined by its initial and final positions; in Weyl's geometry, however, the transfer of length depends on the history of the path travelled, i.e., length transfer becomes non-integrable. Weyl's approach of using the vector potential complicated the issue and was hence discarded [2, 3].
In the subsequence, several additional theories have also been proposed, either to replace Einstein's theory of relativity or to reduce the complexity of Weyl's theory. Lyra proposed the use of the Gauge function with Riemannian geometry in the progress [4]. Lyra's idea resembles with Weyl's method and maintains the integrability of length transfer, as would be expected in Riemannian geometry. Since Lyra's geometrization behaves in accordance with Einstein's principle, it has frequently been used by many researchers to predict and explain cosmic phenomena [5, 6, 7, 8]. In Lyra's context, Sen suggested a static cosmic model behaving like Einstein's model under static conditions, though model suffers red shift [9]. On the other hand, Halford used Lyra's geometry to explore non-static cosmic events. Halford also pointed that a constant Gauge vector field developed when the Gauge function was introduced into Lyra's geometry and behaves similarly to the cosmological constant \(\Lambda\)[10]. Imposing the Lyra's suggestions, Soleng, additional explored the importance of Gauge vector field as a source of creation in cosmic theories [8]. Under the observational limits, several cosmological theories are developed on the basis of Lyra's geometry depicting similar consequences as anticipated in Einstein's theory of relativity [11, 12, 13]. Several astrophysical experiments have confirmed that the cosmos is expanding at an accelerated rate in present time [14, 15, 16, 17]. According to many recent studies [18, 19, 20], dark energy (DE) is anticipated to play a major role in the universe's expansion, whereas dark matter is anticipated to be a key component in the growth of large-scale structures (LSS). The mysterious kind of energy called as DE, which exerts a tremendous amount of repulsive (negative) pressure, is what causes the cosmos to expand. With the experimentally confirmations of present cosmic reality, theoretical researchers are motivated to create universe models in various frameworks. The cosmological term has been believed to be a suitable replacement of DE because of its repulsive behavior [21]. In literature, to explain the universe's present accelerated expansion, a variety of alternative hypotheses without the cosmological constant (CC) have been suggested. Each of these theories has a different prediction in describing the characteristics of DE and cosmic behaviour of the universe. In order to fit observational data, some of these theories also have extra parameters that can be adjusted. Although the cosmological constant matches the scientific results well and is validated by several experiments, it has failed to characterize the inflationary period of the cosmos. In addition to modified theories, several scalar field models are also introduced in theoretical cosmology, to address these issues of inflationary age and to describe the current expanding era of the cosmos [22, 23]. In these studies, the scalar field (\(\phi\)) is considered as an assumption for the dark energy component which produces the negative pressure along with a reducing potential (\(V(\phi)\)). In literature, various cosmological research depending on the scalar field is suggested to characterize the dynamics of the cosmos [22, 23, 24, 25]. The quintessence is an interesting scalar field model that precisely avoids the conventional issues of fine-tuning and cosmic coincidence and depicts the present cosmic reality [24, 25]. 
Johri [26] was the first to propose the idea of tracking, indicating a certain direction to explain the current cosmic scenario using the potential of the tracker. This idea was strongly supported by the observational estimates. Numerous quintessence models have been proposed in the literature. These include models with a non-minimal coupling between dark matter and quintessence [27, 28, 29] and the possibility of a scalar field evolving under the influence of an unconventional kinetic term. The important applications of a variable EoS (Equation of State) parameter in the framework of scalar-tensor theory can be seen in Refs. [30, 31]. The existence of the scalar field in astrophysical investigations is also acknowledged by several fundamental theories. Numerous cosmic models have recently been developed in various scalar field theory frameworks [30, 31, 32, 33, 34]. Kamenshchik et al. examined the Chaplygin gas DE model with the aid of a special form of EoS [35]. In the current study, we investigated a scalar field cosmological model with Lyra's geometry to explain the present cosmic expansion in a homogeneous and isotropic flat FRW universe. In Einstein's field equations, we assumed a variable displacement vector as an element of Lyra's geometry. The model parameters are extracted using the most recent observational data sets from OHD, BAO/CMB, and Pantheon. The manuscript is developed in the following manner. The model and its solutions taking the hybrid scalar field density are described in section 2. Observational data and methodology for constraining the model parameters are mentioned in section 3. The features and dynamical characteristics of the model are discussed in section 4. A brief concluding summary of the proposed model is given in section 5. ## 2 Field equations and its solution Following Sen [9], we consider the action proposed for gravitational field equations in Lyra's geometry, \[S_{\psi}=\int d^{4}x\sqrt{-g}\bigg{[}\frac{1}{2}\psi^{4}R+\mathcal{L}_{m}\bigg{]} \tag{1}\] where \(\mathcal{L}_{m}\) stands for the matter Lagrangian and \(8\pi G=1=c\). The field equations in Lyra's geometry [3, 4, 5, 9, 10] are recast as \[R_{ij}-\frac{1}{2}Rg_{ij}+\frac{3}{2}\psi_{i}\psi_{j}-\frac{3}{4}g_{ij}\psi_{k}\psi^{k}=T_{ij} \tag{2}\] where the perfect fluid's energy-momentum tensor \(T_{ij}\) is described by \(T_{ij}=\frac{-2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{m})}{\delta g^{ij}}\), \(R_{ij}\) represents the Ricci tensor, the scalar curvature is denoted by \(R\), and the displacement vector \(\psi_{i}\) is a function of time, defined as \(\psi_{i}=(\beta(t),0,0,0)\). For the purpose of modelling the structure of the cosmos and studying its evolutionary geometry, we consider a 4-dimensional FRW space-time universe which is flat, homogeneous, and isotropic in nature, \[ds^{2}=a(t)^{2}\left(dx^{2}+dy^{2}+dz^{2}\right)-dt^{2} \tag{3}\] where the average scale factor \(a(t)\) is used to estimate the cosmic growth of the universe with time. For the above line element, the Ricci scalar can be determined in the form \(R=-(\dot{H}+2H^{2})\), where \(H=\frac{\dot{a}}{a}\) is the Hubble parameter. In our study, we assumed that the universe has a flat geometry because this is a usual forecast of the inflationary model, which is also confirmed by several experimental observations, such as the LSS surveys and the CMB measurements [16, 36, 37, 20].
The flat universe model gets more attention for cosmological studies at present because it is simple and it only needs a few extra parameters than the \(\Lambda\)CDM base model. For an ideal fluid, the tensor of energy-momentum can be recasts in terms of energy density, velocity, and fluid pressure as \(T_{ij}^{m}=(p_{m}+\rho_{m})u_{i}u_{j}-p_{m}g_{ij}\), where \(\rho_{m}\) and \(p_{m}\) are the energy density and pressure of the matter. In co-moving coordinate system, the field equations (2) for metric (3) are developed as \[3H^{2}-\frac{3}{4}\beta^{2}=\rho_{m} \tag{4}\] \[2\dot{H}+3H^{2}+\frac{3}{4}\beta^{2}=-p_{m} \tag{5}\] A mathematical statement that demonstrates the scalar field's interaction with gravity provides its action. A fictitious field called the scalar field has been proposed to explain a variety of physics phenomena, including inflation, DE, and the Higgs technique. The action tries to generalize the theory of GR using basic concepts of scalar-tensor gravitational theories. In this situation, the scalar field is vital in adjusting the gravitational force's strength, which results in a wide range of physical events. Usually, for a scalar field, the action is expressed in terms of the scalar field and its derivatives as \[S_{\phi}=\int d^{4}x\sqrt{-g}\bigg{[}\frac{1}{2}\delta_{\nu}\phi\delta^{\nu} \phi-V(\phi)\bigg{]} \tag{6}\] where \(\phi\) and \(V(\phi)\) are scalar field and scalar potential respectively. In the scalar field background, Klein-Gordon equation is read as \[\frac{d^{2}\phi(t)}{dt^{2}}+3\frac{d\phi(t)}{dt}H+\frac{dV(\phi)}{d\phi}=0 \tag{7}\] For the scalar field, the energy-momentum tensor is developed as \(T^{\phi}_{ij}=(\rho_{\phi}+p_{\phi})u_{i}u_{j}-p_{\phi}g_{ij}\). The pressure \(p_{\phi}\) and energy density \(\rho_{\phi}\) for the scalar field are expressed as [38, 39] \[p_{\phi}=\frac{1}{2}\dot{\phi}^{2}-V(\phi) \tag{8}\] \[\rho_{\phi}=\frac{1}{2}\dot{\phi}^{2}+V(\phi) \tag{9}\] where, \(V(\phi)\) and \(\frac{\dot{\phi}^{2}}{2}\) respectively represent the potential and kinetic energies and both are functions of \(\phi\). The scalar field is frequently coupled with gravitational field using the coupling of scalar curvature, which connect the Ricci curvature to the scalar field. The Einstein equations change when this term affects the gravitational constant. The action due to coupling can be stated as follows: \[A = S_{\psi}+S_{\phi} \tag{10}\] \[= \int d^{4}x\sqrt{-g}\bigg{[}\frac{1}{2}\psi^{4}R+\mathcal{L}_{m} +\bigg{(}\frac{1}{2}\delta_{\nu}\phi\delta^{\nu}\phi-V(\phi)\bigg{)}\bigg{]}\] Thus, the Friedmann field equations considering matter and scalar field as the source may be expressed as \[3H^{2}=\rho_{eff}=\rho_{m}+\rho_{\phi}+\rho_{\beta} \tag{11}\] \[2\dot{H}+3H^{2}=p_{eff}=-(p_{m}+p_{\phi}+p_{\beta}) \tag{12}\] where \(p_{\beta}=\rho_{\beta}=\frac{3}{4}\beta^{2}\). Here \(p_{\beta}\) and \(\rho_{\beta}\) are pressure and energy density due to displacement vector \(\beta\). It is crucial to keep in mind that for the stiff fluid \(p_{\beta}=\rho_{\beta}\). Our universe has already experienced a time when the pressure and matter density were equal. Assuming the universe to be dust filled (\(p_{m}=0\)), the equations of energy conservation for scalar field and matter are established as \[\frac{d\rho_{\phi}}{dt}+3(p_{\phi}+\rho_{\phi})H=0 \tag{13}\] \[\frac{d}{dt}\rho_{m}+3\rho_{m}H=0 \tag{14}\] From Eqs. 
(13) and (14), we get the following solutions \[\omega_{\phi}=-\bigg{[}1+\frac{1}{3H}\bigg{(}\frac{\rho_{\phi}^{\cdot}}{\rho_{ \phi}}\bigg{)}\bigg{]} \tag{15}\] where \(\omega_{\phi}=\frac{p_{\phi}}{\rho_{\phi}}\) denotes the scalar field's EoS parameter. \[\rho_{m}=\rho_{m0}\frac{1}{a^{3}} \tag{16}\] where integrating constant \(\rho_{m0}\) is referred as the present value of matter energy density. In general, it is difficult to solve the system of Eqs. (11) and (12) involving \(\rho_{\phi},\ \ \rho_{\beta},\ p_{\phi},\) and \(H\) as unknows. Thus, we need some more variable or parametrized relations to provide a solution to the system. It is crucial to investigate cosmic models other than the cosmological constant because it is insufficient to fully explain the universe's accelerated expansion. Although various physical explanations are mentioned for choosing such constraints, model-independent approaches are well-known choices that are based on the specific parametrization of the EoS parameter, deceleration parameter, and energy density [40]. In the case of DE cosmic models, the EoS parameter expresses a relation between energy density and pressure. The model with the cosmological constant is assumed as the standard DE model where the EoS parameter remains constant and its value is found as -1. However, the study of variable EoS parameters provides information on the underlying physics of DE. A simplest two-parameter model known as Chevallier-Polarski-Linder (CPL) parametrization has the potential to detect variations from a fixed EoS value [41, 42]. To examine DE possibilities beyond the cosmological constant, parametrizations that are considerably more complex can also be utilized, including the Jassal-Bagla-Padmanabhan (JBP) [43, 44], the hybrid [45], and the BA [46]. Another approach is to parametrize the DE energy density as a cosmic time t (or, alternatively, redshift z) function. Polynomial expansions and principal component analysis are two techniques that can be used to achieve this [47, 48, 49, 45]. These approaches can provide insight into how DE behaved over different cosmic eras. Here, the scalar field's energy density is considered as the source of dark energy and suitably parametrized in the following form \[\rho_{\phi}=\rho_{\phi 0}(1+z)^{\alpha}e^{nz} \tag{17}\] where \(\alpha\) and \(n\) are constants, and \(\rho_{\phi 0}\) is the present value of universe's critical density. These model parameters will be constrained from observational datasets. In the background of scalar fields theory, several cosmological models and high energy theories are explored using hybrid energy density as an explicit choice [45]. In the present study, our focus is to utilize the hybrid parametrization of the potential as a phenomenological method to examine the evolutionary behavior of the cosmos in the framework of scalar-tensor theory. It is crucial to emphasize that our goal is not to describe certain high-energy theories of physics that anticipate the specific form of the hybrid potential mentioned in Eq. (17). We instead use this parametrization as a phenomenological technique to study the behaviour and implications of the scalar field dark energy hypothesis. For redshift transformation, we can utilize the relation \(a=\frac{a_{0}}{1+z}\), where \(a\) is the average value of scale factor and present value \(a_{0}\) is assumed to be 1. 
Thus, using Eq.(16), the matter energy density in terms of redshift z can be calculated as \[\rho_{m}=\left(1+z\right)^{3}\rho_{m0} \tag{18}\] Now, from Eqs. (17), (18), and (11), we get \[3H^{2}=\left(1+z\right)^{3}\rho_{m0}+\rho_{\phi 0}(1+z)^{\alpha}e^{nz}+\rho_{\beta} \tag{19}\] Using the gauge function \(\rho_{\beta}=\beta_{0}a^{-2k}\), Thus, Eq.(19) can be recast as \[H(z)=H_{0}\sqrt{\left(1+z\right)^{3}\Omega_{m0}+(1+z)^{\alpha}e^{nz}\,\Omega_ {\phi 0}+\Omega_{k0}(1+z)^{2k}} \tag{20}\] where \(\Omega_{m}=\frac{\rho_{m}}{\rho_{\alpha}}\), \(\Omega_{\phi}=\frac{\rho_{\phi}}{\rho_{\alpha}}\), and \(\Omega_{\beta}=\frac{\rho_{\beta}}{\rho_{\alpha}}\) are the unit-less density parameters for the proposed model. These dimensionless parameters play a key role in explaining the whole content of the cosmos. The \(\rho_{c}=3H^{2}\) is the critical density of the universe. Here, \(H_{0}\) denotes the present value of Hubble constant, subscripted \(\Omega_{i0}\) represents the values of density parameters at \(z=0\). Thus, Eq.(2) can be reduced as follows for \(z=0\). \[\Omega_{m0}+\Omega_{\phi 0}+\Omega_{k0}=1 \tag{21}\] From Eq. (15), the EoS parameter of scalar field can be derived as \[\omega_{\phi}=\frac{1}{3}(\alpha+nz+n-3) \tag{22}\] So, for the proposed model the effective EoS parameter can be read as \[\omega_{eff} = \frac{p_{eff}}{\rho_{eff}}=\frac{p_{\phi}+p_{\beta}}{\rho_{m}+ \rho_{\phi}+\rho_{\beta}}\] \[= \frac{\left(2k-3\right)\left(1-\Omega_{m0}-\Omega_{\phi 0}\right) \left(z+1\right)^{2k}+\Omega_{\phi 0}e^{nz}(\alpha+nz+n-3)(z+1)^{\alpha}}{3 \bigg{[}\left(1-\Omega_{m0}-\Omega_{\phi 0}\right)\left(z+1\right)^{2k}+\Omega_{m0}(z+1)^{3}+ e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\bigg{]}}\] Now, for the proposed model the expressions of density parameters for matter, scalar field, and Lyra factor respectively, are given in the following forms. \[\Omega_{m}(z)=\frac{\rho_{m}}{3H^{2}}=\frac{(z+1)^{3}\Omega_{m0}}{ \left(-\Omega_{m0}-\Omega_{\phi 0}+1\right)(z+1)^{2k}+\Omega_{m0}(z+1)^{3}+ e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}} \tag{24}\] \[\Omega_{\phi}(z)=\frac{\rho_{\phi}}{3H^{2}}=\frac{e^{nz}(z+1)^{ \alpha}\Omega_{\phi 0}}{\left(z+1\right)^{2k}\left(-\Omega_{m0}-\Omega_{\phi 0}+1 \right)+(z+1)^{3}\Omega_{m0}+ e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}}\] (25) \[\Omega_{\beta}(z)=\frac{\rho_{\beta}}{3H^{2}}=\frac{(z+1)^{2k} \left(-\Omega_{m0}-\Omega_{\phi 0}+1\right)}{\left(z+1\right)^{2k}\left(-\Omega_{m0}- \Omega_{\phi 0}+1\right)+(z+1)^{3}\Omega_{m0}+ e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}} \tag{26}\] From Eqs. (8), (9), (13), and (14), the kinetic and potential energies of the scalar field are given through the following expressions. \[\frac{\dot{\phi}^{2}}{2} = -\frac{1}{2}H_{0}^{2}\bigg{[}(2k-3)\left(\Omega_{m0}+\Omega_{\phi 0}- 1\right)(z+1)^{2k}-e^{nz}\Omega_{\phi 0}(z+1)^{\alpha}(\alpha+nz+n)\bigg{]} \tag{27}\] \[V(\phi) = \frac{1}{2}H_{0}^{2}\bigg{[}(2k-3)\left(\Omega_{m0}+\Omega_{\phi 0}- 1\right)(z+1)^{2k}-e^{nz}\Omega_{\phi 0}(z+1)^{\alpha}(\alpha+nz+n-6)\bigg{]} \tag{28}\] The parameters (\(H_{0}\), \(\Omega_{m0}\), \(\Omega_{\phi 0}\), \(\alpha\), \(n\), \(k\)) have a significant impact on the model presented in Eq. (20), which determine the behavior and cosmological characteristics of the model. In the next segment, our aim is to analyze current experimental data to better recognize the significance of the present model. 
We specifically intend to study how the behavior of cosmological parameters is affected by constraining the values of important parameters (\(H_{0}\), \(\Omega_{m0}\), \(\Omega_{\phi 0}\), \(\alpha\), \(n\), \(k\)). ## 3 Datasets and Cosmological Constraints Methods ### Supernovae type Ia We have taken into consideration, the sample consisting of 1048 points of Pantheon compilation with redshifts between 0.01 and 2.26. 276 SNIa points from the PanSTARRSI Medium Deep Survey, Low-\(z\), and HST samples are included in this sample [50, 51]. For the sample of Pantheon predictions, the \(\chi^{2}\) measure is defined by the following relation. \[\chi^{2}_{SN}=(\mu_{obs}-\mu_{th})^{T}\left(C^{-1}_{SN}\right)(\mu_{obs}-\mu_ {th}) \tag{29}\] where \(\mu_{th}=5\log_{10}\frac{cDL}{H_{0}Mpc}+25\), \(\mu_{obs}\) be the observed distance modulus, and for the Pantheon sample, \(C_{SN}\) denotes the covariance matrix [50]. \(H_{0}\) indicates the Hubble rate while \(c\) corresponds to speed for a particle of light. For a flat FRW universe, the luminosity distance is expressed as \(D_{L}(z)=(1+z)H_{0}\int_{0}^{z}\frac{dx^{\prime}}{H(x^{\prime})}\). To limit the parameters of the proposed model for the Pantheon compilation sample, we use the following statistical measure. \[\chi^{2}_{Pantheon}=\Delta\mu^{T}.C^{-1}_{Pantheon}.\Delta\mu \tag{30}\] in which \(\Delta\mu=\mu_{data}-\mu_{obs}-M\) and \(M\) corresponds to a nuisance parameter. The entire collection of full and binned Pantheon supernova data is available online [52]. ## BAO/CMB data To determine the restrictions on parameters of the model, we took into account the BAO [53, 54, 55] and CMB [56, 57] measurements dataset. Six BAO/CMB data points have been considered (Table 1). For the BAO sample, the predictions from a sample of Galaxy Surveys like SDSS DR7 and 6dF, and WiggleZ have been utilized [53, 54, 55]. However, the CMB measurement under consideration is based on WAMP7 observations [56]. A similar explanation of the given sample can be seen in [58, 45], but [58] provides more information on the approach used and sample to constrain the parameters. The angular diameter distance for the sample is defined as \(D_{A}=\frac{D_{L}}{(1+z)2}\), where \(D_{L}\) indicates the proper angular diameter distance [58], and the dilation scale is described by \(D_{V}(z)=\left[D_{L}^{2}(z)*(1+z)^{2}*\frac{cz}{H(z)}\right]^{1/3}\). \begin{tabular}{|c|c|c|c|c|c|c|} \hline \hline \multicolumn{10}{|c|}{Values of \(\Upsilon(z)\) for different points of \(z_{BAO}\)} \\ \hline \(z_{BAO}\) & \(0.106\) & \(0.2\) & \(0.35\) & \(0.44\) & \(0.6\) & \(0.73\) \\ \hline \(\Upsilon(z)\) & \(30.95\pm 1.46\) & \(17.55\pm 0.60\) & \(10.11\pm 0.37\) & \(8.44\pm 0.67\) & \(6.69\pm 0.33\) & \(5.45\pm 0.31\) \\ \hline \hline \end{tabular} Here, \(\Upsilon(z)=d_{A}(z_{*})/D_{V}(z_{BAO})\) and \(z_{*}\approx 1091\). For limiting the parameters of the model, the chi-square estimator for the BAO sample is described in the following form [58, 59, 45]. 
\[\chi^{2}_{BAO/CMB}=X^{T}C^{-1}X \tag{31}\] where \[X=\begin{pmatrix}\frac{d_{A}(z_{*})}{D_{V}(0.106)}-30.95\\ \frac{d_{A}(z_{*})}{D_{V}(0.20)}-17.55\\ \frac{d_{A}(z_{*})}{D_{V}(0.35)}-10.11\\ \frac{d_{A}(z_{*})}{D_{V}(0.44)}-8.44\\ \frac{d_{A}(z_{*})}{D_{V}(0.60)}-6.69\\ \frac{d_{A}(z_{*})}{D_{V}(0.73)}-5.45\\ \end{pmatrix}\] and \(C^{-1}\) is given by [58] \[C^{-1}=\begin{pmatrix}0.48435&-0.101383&-0.164945&-0.0305703&-0.097874&-0.10 6738\\ -0.101383&3.2882&-2.45497&-0.787898&-0.252254&-0.2751\\ -0.164945&-2.454987&9.55916&-0.128187&-0.410404&-0.447574\\ -0.0305703&-0.0787898&-0.128187&2.78728&-2.75632&1.16437\\ -0.097874&-0.252254&-0.410404&-2.75632&14.9245&-7.32441\\ -0.106738&-0.2751&-0.447574&1.16437&-7.32441&14.5022\\ \hline \end{pmatrix}.\] ## Observational Hubble Data (OHD) We have take-over 57 \(H(z)\) datapoints for z ranging in between 0.07 and 2.36 calculated from cosmic chronometric technique, galaxy clusters [32], and differential age procedure. Then the Hubble constant can be realized in the form of redshift as \((1+z)H(z)=-\frac{dz}{dt}\). Now, the estimator \(\chi^{2}\) is taken into consideration for the purpose of limiting the model's parameters by comparing the model's theoretical predictions (\(E_{th}\)) with experimental values (\(E_{obs}\)).' \[\chi^{2}_{OHD}=\sum_{i=1}^{57}\frac{\left[E_{th}(z_{i})-E_{obs}(z_{i})\right]^{ 2}}{\sigma_{i}^{2}} \tag{32}\] where \(\sigma_{i}\) is the error detected in experimental estimations of \(H(z)\). Thus, the joint estimator for a combined sample of experimental predictions including BAO/CMB, OHD, and SNIa samples, the combined statistic measure is defined in the following manner [45, 58, 59]. \[\chi^{2}_{tot}=\chi^{2}_{Pantheon}+\chi^{2}_{BAo/CMB}+\chi^{2}_{OHD} \tag{33}\] The \(\chi^{2}_{tot}\) statistic can be minimized to find the parameter value that best fits the combined sample of the SNIa, OHD, and BAO/CMB datasets. By taking maximum likelihood approach into account, the total likelihood function \(\mathcal{L}_{tot}=exp(-\chi^{2}_{tot}/2)\) may be calculated as the product of individual likelihood functions of each dataset expressed in the form \(\mathcal{L}_{tot}=\mathcal{L}_{Pantheon}*\mathcal{L}_{BAO/CMB}*\mathcal{L}_{OHD}\). The likelihood function \(\mathcal{L}_{tot}(x*)\) is maximized or, alternatively \(\chi^{2}_{tot}(x^{*})=-2\ln\mathcal{L}_{tot}(x^{*})\) is minimized to get the most plausible values of parameters. For the set of cosmic parameters (pointed at \(x^{*}\)), the \(1\sigma\) and \(2\sigma\) contours are constrained and bounded respectively by \(\chi^{2}_{tot}(x)=\chi^{2}_{tot}(x^{*})+2.3\) and \(\chi^{2}_{tot}(x)=\chi^{2}_{tot}(x^{*})+6.17\). We get best-fit parameter values for the derived model by minimizing the \(\chi^{2}\) statistic. Figure 1: Confidence contour plot for joint data set of OHD, Pantheon and BAO. Figure 1 displays the statistical results in confidence contours with \(1\sigma\) and \(2\sigma\) limits for the proposed model utilizing the joint dataset of SN, BAO/CMB, and OHD. The best plausible values of parameters estimated from the joint dataset are summarised in Table 1. The comparative behavior of the suggested model with the existing standard models and relevant datasets is plotted and presented in Fig.2. The Hubble rate ( \(H(z)/(1+z)\)) as a function of \(z\) is plotted for the purpose. 
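Before turning to the figures in detail, the following sketch illustrates one way the statistical ingredients of Eqs. (32)-(33) can be assembled in code. It is only an illustration: the data arrays below are placeholders standing in for the 57-point OHD compilation, the Pantheon and BAO/CMB terms are omitted for brevity, and the choice of the emcee ensemble sampler is an assumption of ours rather than something specified in the text.

```python
import numpy as np
import emcee

# Placeholder OHD-style data arrays (z_i, H_i, sigma_i); not the actual compilation.
z_obs   = np.array([0.07, 0.4, 1.0, 1.5, 2.0])
H_obs   = np.array([69.0, 95.0, 154.0, 180.0, 222.0])
sig_obs = np.array([19.6, 17.0, 20.0, 18.0, 8.0])

def H_model(z, theta):
    """Hubble rate of Eq. (20)."""
    H0, Om0, Ophi0, alpha, n, k = theta
    Ok0 = 1.0 - Om0 - Ophi0
    return H0 * np.sqrt(Om0*(1+z)**3 + Ophi0*(1+z)**alpha*np.exp(n*z) + Ok0*(1+z)**(2*k))

def chi2_ohd(theta):
    """OHD chi-square of Eq. (32)."""
    return np.sum(((H_model(z_obs, theta) - H_obs) / sig_obs)**2)

def log_prob(theta):
    H0, Om0, Ophi0, alpha, n, k = theta
    # Flat priors keeping the model well defined (illustrative ranges only).
    if not (50 < H0 < 90 and 0 < Om0 < 1 and 0 < Ophi0 < 1 and Om0 + Ophi0 < 1):
        return -np.inf
    return -0.5 * chi2_ohd(theta)        # L ~ exp(-chi^2/2), cf. Eq. (33)

ndim, nwalkers = 6, 32
start = np.array([71.0, 0.26, 0.68, -0.2, 0.1, 0.4])
p0 = start + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
print(np.median(sampler.get_chain(discard=500, flat=True), axis=0))
```

In a full analysis, the Pantheon and BAO/CMB chi-square terms would simply be added inside `log_prob`, reproducing the joint estimator of Eq. (33).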
In Figure 2(a), the solid red line depicts the behaviour of our suggested model utilizing the values of parameters obtained from the joint dataset, the 57 data points of OHD are represented by error bars of blue colour, and the dashed black line denotes the findings of the traditional \(\Lambda\)CDM model. For a similar reason, Figure 2(b) plots the distance modulus \(\mu(z)\) as a function of \(z\). For an observed supernova, the difference between the apparent and absolute magnitudes defines the distance modulus, which is related to the luminosity distance \(d_{L}=a_{0}(1+z)r=(1+z)\int_{0}^{z}\frac{dz}{H(z)}\). This distance parameter, \(\mu(z)\), is numerically equal to \(25+5\log_{10}(d_{L}/{\rm Mpc})\). Here in Figure 2(b), the blue error bars show the points of the SN data that were taken into consideration as discussed earlier, the dashed black line shows the output of the traditional \(\Lambda\)CDM model, and the solid red line illustrates the characteristics of our derived model for the joint dataset. In our analysis, for the joint dataset, the estimated values of parameters are \(H_{0}=71.15\pm 0.26\) km/s/Mpc, \(\Omega_{m0}=0.2625\pm 0.0024\), \(\Omega_{\phi 0}=0.676\pm 0.038\). These findings are consistent with the most recent results [60, 61, 63, 64, 65]. In relation to the Hubble parameter, the DP can be realized as \(q=-\frac{\ddot{a}}{H^{2}a}=-1+\frac{1}{H}(1+z)\frac{d}{dz}H(z)\). Hence, for the proposed model the deceleration parameter can be derived as: \[q = -1+\frac{1}{H(z)}(1+z)\frac{dH}{dz} = \frac{2(k-1)(z+1)^{2k}\Omega_{k0}+(z+1)^{3}\Omega_{m0}+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}(\alpha+nz+n-2)}{2\left((z+1)^{2k}\Omega_{k0}+(z+1)^{3}\Omega_{m0}+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\right)} \tag{34}\] The major evolutionary features of the expanding universe can be analyzed and explained by the study of the Hubble parameter in consort with the DP. The study of these parameters can accurately predict the various cosmic features of the evolutionary universe like age and phase transition (deceleration to acceleration or vice versa). The accelerating or decelerating behavior of the model is indicated by the sign of the DP \(q\). While a negative sign of \(q\) depicts the universe's current accelerated expansion, a positive sign of \(q\) corresponds to the universe's decelerated expansion. For a combined sample of SNIa, OHD, and BAO/CMB datasets, the trajectory of the deceleration parameter is plotted in Fig.4. The earlier universe is thought to evolve with deceleration dynamics due to matter domination, as clearly seen in Fig.4. Here, we see that the Universe in the derived model is currently going through an accelerated scenario of expansion. Additionally, for the derived model a signature flipping is observed at \(z_{t}=0.756^{+0.005}_{-0.015}\) and the model evolves in the acceleration phase in late time. At \(z=0\), the present value of the DP is obtained as \[q_{0} = \frac{2(k-1)\Omega_{k0}+\Omega_{m0}+\Omega_{\phi 0}(\alpha+n-2)}{2\left(\Omega_{k0}+\Omega_{m0}+\Omega_{\phi 0}\right)} \tag{35}\] From the above analysis, the present value of the DP (\(q\)) is observed to be \(-0.625^{+0.067}_{-0.085}\), which agrees well with the recent findings. It is interesting to notice that \(\frac{dH}{dt}\big{|}_{t=t_{0}}=0\) for \(q_{0}=-1\), which predicts a rapid expansion of the universe and the largest value of the Hubble parameter. Therefore, the dynamics of the late-time evolution of the observed Universe may be described using the Universe in the derived model.
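As a quick numerical cross-check of Eqs. (34)-(35), the snippet below evaluates \(q(z)\) at the quoted best-fit central values and locates the sign flip by bisection; it should reproduce \(q_{0}\approx-0.62\) and \(z_{t}\approx 0.75\) (central values only, with no error propagation).

```python
import numpy as np

# Best-fit central values quoted for the joint OHD + BAO/CMB + Pantheon analysis.
Om0, Ophi0, alpha, n, k = 0.2625, 0.676, -0.22, 0.096, 0.38
Ok0 = 1.0 - Om0 - Ophi0            # closure relation, Eq. (21)

def E2(z):
    """Dimensionless H^2/H0^2 of Eq. (20)."""
    return Om0*(1+z)**3 + Ophi0*(1+z)**alpha*np.exp(n*z) + Ok0*(1+z)**(2*k)

def q(z):
    """Deceleration parameter of Eq. (34)."""
    num = (2*(k-1)*Ok0*(1+z)**(2*k) + Om0*(1+z)**3
           + Ophi0*np.exp(n*z)*(1+z)**alpha*(alpha + n*z + n - 2))
    return num / (2.0 * E2(z))

# Present-day value, Eq. (35).
print("q0 =", q(0.0))

# Transition redshift: bisect q(z) = 0 on a bracketing interval (q < 0 at low z, q > 0 at high z).
lo, hi = 0.3, 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if q(mid) > 0.0:
        hi = mid
    else:
        lo = mid
print("z_t ~", 0.5 * (lo + hi))
```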
The derived outcomes of our theoretical model are nicely matched with the recent experimental results [1, 19, 45, 52, 58, 59, 61, 62, 65]. Figure 3: Deceleration parameter \(q\) as a function of \(z\) for OHD + Pantheon + BAO data sets. The proposed model describes an expanding universe with the decrease in the densities of the scalar field and matter as shown in Fig. 3. The density of the scalar field approaches to the least value in late time while the matter density advances to zero. Energy conservation in GR accounts for the drop in densities of the scalar field and matter with the expansion of the universe. The future evolution of the Universe will be significantly impacted by the assumption that in the late cosmos, the density of matter becomes zero. A phenomenon known as "heat death" is when the universe progresses gradually cold and dark, and no matter is left to create new galaxies or stars. The thermodynamics second rule, which stipulates that disorder or entropy can continuously increase over time, leads to this scenario. As the scalar field tends to have a smaller value of DE density in late time, the Universe can also keep expanding at an accelerating rate in the future. The scenario in which all the matter including stars and galaxies pulled apart, and the expansion of the universe grows very fast is recognized as the "big rip" scenario. In the same direction, the EoS parameter is another useful tool, which explains the evolutionary dynamics of the cosmos in terms of its rate of expansion. The EoS parameter (\(\omega\)) can be expressed by a relation of the cosmic fluid's energy density and the pressure in the form \(p=\omega\rho\). Based on the nature of pressure, the EoS parameter can be characterized by different cosmic realities. Dark matter is an example of non-relativistic matter for which \(\omega=0\), while for relativistic matter like radiation \(\omega=1/3\). The accelerated or decelerated expanding nature of the cosmos can be characterized by distinct value of \(\omega\). The scenario of accelerated expansion of the cosmos can be categorized into different conceivable DE scenarios, which include (i) quintessence scenario (\(-1<\omega<-1/3\)), (ii) cosmological constant (\(\omega=-1\)), (iii) phantom scenario (\(\omega<-1\)). We have focused on both the effective EoS parameter and the EoS parameter of the scalar field in our model. According to GR, the only prerequisite for inflation in the cosmos is that which results in the cosmos having repulsive energy and jerk. In our investigation, we have validated the quintom behavior of the model by establishing the current values of the scalar field EoS parameter. The current value of the scalar field EoS parameter is estimated as \(\omega_{\phi 0}=-1.042\) for best-constrained values of model parameters for a combined sample of datasets. This conclusion supports prior research in the field and lends substantial support to the quintom behavior of Figure 4: (a) Energy density, (b) EoS. DE [66, 67, 68, 69, 45]. Additionally, we have displayed the reconstructed evolution history of the effective EoS parameter \(\omega_{eff}\) for this model using the combined sample of SN+OHD+BAO/CMB datasets in Figure 4. From the figure, it has been noticed that the model does not suffer any kind of 'future singularity' because, at low redshift \(\omega_{eff}\) attains a negative value (\(\omega_{eff}<-1\)) while at high z, it approaches to zero. 
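The EoS discussion above can likewise be checked numerically. The short sketch below (same best-fit central values as before; purely illustrative) evaluates the scalar-field EoS of Eq. (22) and the effective EoS of Eq. (23) at a few redshifts; at \(z=0\) the former gives \((\alpha+n-3)/3\approx-1.04\), in line with the quoted \(\omega_{\phi 0}\).

```python
import numpy as np

Om0, Ophi0, alpha, n, k = 0.2625, 0.676, -0.22, 0.096, 0.38
Ok0 = 1.0 - Om0 - Ophi0

def w_phi(z):
    """Scalar-field EoS parameter, Eq. (22)."""
    return (alpha + n*z + n - 3.0) / 3.0

def w_eff(z):
    """Effective EoS parameter, Eq. (23)."""
    num = ((2*k - 3.0) * Ok0 * (1+z)**(2*k)
           + Ophi0 * np.exp(n*z) * (1+z)**alpha * (alpha + n*z + n - 3.0))
    den = 3.0 * (Ok0*(1+z)**(2*k) + Om0*(1+z)**3 + Ophi0*np.exp(n*z)*(1+z)**alpha)
    return num / den

for z in (0.0, 0.5, 1.0, 2.0):
    print(f"z = {z:3.1f}   w_phi = {w_phi(z):+.3f}   w_eff = {w_eff(z):+.3f}")
```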
Thus, from the trajectory of \(\omega_{eff}\), it has been detected that the model proceeds in the quintessence era during the evolution of the cosmos and approaches to a phantom scenario in late time. Hence, the profile of \(\omega_{eff}\) depicts a quintom-like behavior of the cosmos. Due to the dominance of non-relativistic matter like baryonic and dark matter in the early universe, the value of the scalar field's density parameter is low whereas the value of the matter density parameter is high i.e., \(\Omega_{m}>\Omega_{\phi}\) which renders a strong physical background for the decelerated scenario of the early universe. But, with the evolutionary growth of the universe, the density parameter decreases due to the volumetric increase of the universe and it tends to zero in late time whereas the density parameter of the scalar field becomes dominant in late time and leads to the universe's accelerated expansion. The best-estimated values of density parameters of the proposed model for the combined sample of observation datasets (SN, OHD, and BAO) are found as \(\Omega_{m0}=0.2625\pm 0.0024\), \(\Omega_{\phi 0}=0.676\pm 0.038\), and \(\Omega_{k0}=0.0615\pm 0.0404\). These outcomes are nicely matched with the findings of the Planck measurements [70, 34]. The total density parameter for the proposed model tends to unity in late time i.e., at the current era. Thus, in the present study, a scalar field is taken as a substitute for DE to describe the accelerated expansion of the cosmos. The development of the scalar field's kinetic and potential energy is also depicted in Figures 8 and 9. From the graphical representation of potential and kinetic energies, we observed that both are positive and decreasing, and the scalar field transit to a low energy scenario from a high energy era over time. Figure 5: Deceleration parameter \(q\) as a function of \(z\) for OHD + Pantheon + BAO data sets. The energy associated with a scalar field, which has a single value at each space point, is described by the theory of the scalar potential in physics [22, 23, 24]. The nature of the scalar field and its potential is highly influenced by the specific physical system taken into consideration. Obviously, a positive scalar potential is associated with stable structures in physics because negative energy can result in non-physical or unstable solutions. For the proposed model, the positive behavior of scalar potential during the entire evolution can be seen in Figure. The nature of kinetic energy is positive and decreasing in the evolution of the universe as depicted in the figure. Thus, the derived model shows a quintessence-like behavior in which massless scalar field \(\phi\) can be assumed with a potential \(V(\phi)\) that mimics the gravitational field and efficiently describes the current inflation of universe [24, 25, 26]. **Statefinders:** Cosmological parameters like Hubble and deceleration are combined sufficiently to depict the evolutionary dynamics of the universe. Scale factor \(a\), its first order derivative (\(\dot{a}\)), and second order derivative (\(\ddot{a}\)) are used to describe both of these parameters. The accuracy characteristics of the proposed theoretical models are suppressed as a result of this dependency, which causes Figure 6: (a)KE, (b)PE. Figure 7: (a)Trajectory in \((r-s)\) plane, (b) Trajectory in \((r-q)\) plane. all the models to converge around the same value of \(q\) and other revealing parameters. 
Important information about the ability of theoretical models to be told apart is lost in the process. Two new parameters, the statefinders \(r\) and \(s\), have been introduced to quantify the degree of precision with which various dark-energy cosmological models can be distinguished. This pair of statefinders assists us in enhancing model-prediction accuracy by identifying the evolutionary trajectory in the \(r-s\) plane. Assuming various forms of dark energy, as discussed in the literature [71, 72, 73], it is possible to distinguish between the suggested cosmological model and the \(\Lambda\)CDM model from the \((r-s)\) plot [71, 74]. In terms of \(q\) and \(z\), the parameters \(r\) and \(s\) for the presently suggested model can be written as follows:
\[r = q(2q+1)+(1+z)\frac{dq}{dz} = \frac{2(\Omega_{m0}+\Omega_{\phi 0}-1)(2k^{2}-3k+1)(z+1)^{2k}-2(1+z)^{3}\Omega_{m0}+\Omega_{\phi 0}\Big(2-e^{nz}(z+1)^{\alpha}\big\{\big((\alpha-1)+n(z+1)\big)^{2}-(\alpha-1)\big\}\Big)}{2\Big[(\Omega_{\phi 0}+\Omega_{m0}-1)(z+1)^{2k}-\Omega_{\phi 0}e^{nz}(z+1)^{\alpha}-(z+1)^{3}\Omega_{m0}\Big]} \tag{36}\]
\[s = \frac{r-1}{3\left(q-\frac{1}{2}\right)} = \frac{2k(3-2k)(z+1)^{2k}(\Omega_{m0}+\Omega_{\phi 0}-1)+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\Big(\big\{n(z+1)+(\alpha-1)\big\}^{2}-(\alpha+1)\Big)}{3\Big[(3-2k)(z+1)^{2k}(\Omega_{m0}+\Omega_{\phi 0}-1)+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\big\{\alpha-3+n(z+1)\big\}\Big]} \tag{37}\]
Figure 7(a) depicts the suggested model's evolutionary behavior in the \(r-s\) plane, utilizing the derived expressions for \(r\) and \(s\). For the suggested model, the present values of \((r,s)\) are calculated as \((1.09673,\,-0.028773)\) when the combined sample of observational datasets is taken into account. Thus, the given model starts in the matter-dominated era (decelerating scenario), crosses the \(\Lambda\)CDM point \((r=1,\,s=0)\), enters the quintessence era \((r<1,\,s>0)\), and finally approaches the Chaplygin gas model \((r>1,\,s<0)\) at late times. Therefore, the suggested model behaves like standard \(\Lambda\)CDM in the present scenario. Using the best plausible values of the parameters estimated from the combined sample of observational datasets, we have plotted the \((q,r)\) trajectory in Figure 7(b) for the proposed model. In the figure, the horizontal red line \((r=1)\) indicates the \(\Lambda\)CDM line, which divides the evolutionary plane into two regions, namely the Chaplygin gas \((r>1)\) and quintessence DE \((r<1)\) regions. The \((q,r)\) trajectory for our proposed model starts from SCDM \((0.5,\,1)\) and shows a Chaplygin gas-like behavior at late times. The model exhibits a flip from a deceleration era to an acceleration scenario, as depicted by the trajectory in the \((q-r)\) plane.

**Jerk Parameter:** Another diagnostic that is widely utilized in astrophysical studies is the jerk parameter \(j\). The cosmic jolt (or jerk) is the basic idea behind the jerk parameter, which drives the transition of the universe from the decelerating era to an accelerating scenario. Physically, the jerk is the rate at which the acceleration changes with time. In cosmology it is obtained from the third-order term of the Taylor expansion of the scale factor about \(a_{0}\). This parameter offers an additional edge in identifying kinematically degenerate cosmic models [75]. Because it involves the third-order derivative of the scale factor, it provides greater accuracy in characterizing the expansion of the cosmos than the Hubble parameter alone.
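For readers who wish to reproduce this kind of diagnostic numerically, the short sketch below evaluates Eqs. (36)-(37) directly from a deceleration history \(q(z)\). It is not part of the original analysis: the function `q_of_z` is a purely hypothetical stand-in (it is not the model's Eq. (35)), so the printed numbers are only illustrative; substituting the model's own \(q(z)\) with the best-fit parameters should return the present values \((r_{0},s_{0})\approx(1.09673,\,-0.028773)\) quoted above. The same combination \(q(2q+1)+(1+z)\,dq/dz\) also defines the jerk parameter introduced next.

```python
import numpy as np

# Illustrative sketch: statefinders from a deceleration history q(z), via Eqs. (36)-(37):
#   r = q(2q+1) + (1+z) dq/dz,   s = (r-1) / (3(q - 1/2)).
# q_of_z below is a hypothetical parameterization, NOT the model's Eq. (35).
def q_of_z(z, q0=-0.625, q1=0.9):
    return q0 + q1 * z / (1.0 + z)

def statefinders(z, dz=1e-4):
    q = q_of_z(z)
    dqdz = (q_of_z(z + dz) - q_of_z(z - dz)) / (2.0 * dz)   # numerical derivative
    r = q * (2.0 * q + 1.0) + (1.0 + z) * dqdz              # Eq. (36); same combination gives j(z), Eq. (38)
    s = (r - 1.0) / (3.0 * (q - 0.5))                       # Eq. (37)
    return r, s

r0, s0 = statefinders(0.0)
print(f"r0 = {r0:.4f}, s0 = {s0:.4f}")   # compare with (1.09673, -0.028773) when the model's own q(z) is used
```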
It is possible to define \(j\) for the suggested model as [76, 77] \[j(z)=(2q+1)q+(z+1)\frac{dq}{dz} \tag{38}\] From Eqs. (35) and (38), the expression for the jerk diagnostic is obtained as \[j(z) = \frac{2\left(2k^{2}-3k+1\right)(z+1)^{2k}\left(\Omega_{m0}+\Omega_{\phi 0}-1\right)-2(z+1)^{3}\Omega_{m0}-e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\Big(\big((\alpha-1)+n(z+1)\big)^{2}-(\alpha-1)\Big)}{2\Big[(z+1)^{2k}\left(\Omega_{m0}+\Omega_{\phi 0}-1\right)-(z+1)^{3}\Omega_{m0}-e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\Big]} \tag{39}\] We present the graphical behavior of the jerk parameter \(j\) in Figure 8, using the best plausible values of the free parameters estimated from the combined sample of Pantheon, BAO, and OHD datasets. The positive nature of the jerk parameter during the entire evolution indicates a smooth transition of the cosmos into an accelerated phase (see Fig. 8). The suggested model reflects the characteristics of the \(\Lambda\)CDM model of the universe because, in our study, the current value of the jerk parameter is found to be \(j_{0}=1.11^{+0.23}_{-0.16}\), which is \(\approx 1\). It can also be observed from the figure that our model acts like the standard \(\Lambda\)CDM model at early times and represents a DE model that is distinct from \(\Lambda\)CDM at late times because \(j_{0}\neq 1\).

Figure 8: Jerk parameter.

## 5 Concluding remarks

In the current study, we investigated a scalar field cosmological model with Lyra's geometry to explain the current cosmic expansion in a flat FRW universe. We assumed a variable displacement vector as a component of Lyra's geometry. In the context of the conventional theory of gravity, we suggest a suitable parameterization of the scalar field's dark energy density as a hyperbolic function of redshift \(z\), confirming the essential transition behavior of the universe. The main highlights are summarized as follows:

* We present constraints on the model parameters using the most recent observational data sets from OHD, BAO/CMB, and Pantheon, employing a Markov Chain Monte Carlo (MCMC) analysis. For the proposed model, the best estimated values of the parameters for the combined dataset (OHD, BAO/CMB, and Pantheon) are \(H_{0}=71.15\pm 0.26\) km/s/Mpc, \(\Omega_{m0}=0.2625\pm 0.0024\), \(\Omega_{\phi 0}=0.676\pm 0.038\), \(\alpha=-0.22\pm 0.13\), \(n=0.096\pm 0.079\), and \(k=0.38\pm 0.32\).
* The model exhibits a flipping (decelerating-to-accelerating) nature; the redshift transition occurs at \(z_{t}=0.756^{+0.005}_{-0.015}\), and the current value of the deceleration parameter for the proposed model is \(q_{0}=-0.625^{+0.067}_{-0.085}\) for the combined dataset.
* In our investigation, we have validated the quintom behavior of the model by establishing the current value of the scalar field EoS parameter, estimated as \(\omega_{\phi 0}=-1.042^{+0.068}_{-0.069}\) for the combined datasets. From the trajectory of \(\omega_{eff}\), it is seen that the model stays in the quintessence era during the evolution of the cosmos and approaches a phantom scenario at late times. Hence, the profile of \(\omega_{eff}\) depicts a quintom-like behavior of the cosmos.
* The total density parameter of the proposed model tends to unity at late times, i.e., in the current era. Thus, in the present study, a scalar field is taken as a substitute for DE to describe the accelerated expansion of the cosmos.
* The kinetic energy is positive and decreasing throughout the evolution of the universe, as depicted in the corresponding figure. Thus, the derived model shows a quintessence-like behavior in which a massless scalar field \(\phi\) can be assumed with a potential \(V(\phi)\) that mimics the gravitational field and efficiently describes the current inflation of the universe [24, 25, 26].
* The given model starts in the matter-dominated era (decelerating scenario), crosses the \(\Lambda\)CDM point (\(r=1,s=0\)), enters the quintessence era (\(r<1,s>0\)), and finally approaches the Chaplygin gas model (\(r>1,s<0\)) at late times. Therefore, the suggested model behaves like standard \(\Lambda\)CDM in the present scenario.
* The suggested model reflects the characteristics of the \(\Lambda\)CDM model of the universe because, in our study, the current value of the jerk parameter is found to be \(j_{0}=1.11^{+0.23}_{-0.16}\), which is \(\approx 1\). It can also be observed from the figure that our model acts like the standard \(\Lambda\)CDM model at early times and represents a DE model that is distinct from \(\Lambda\)CDM at late times because \(j_{0}\neq 1\).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \hline Parameters & \(q_{0}\) & \(z_{t}\) & \(\omega_{\phi 0}\) & \(j_{0}\) & \(\omega_{eff0}\) \\ \hline SN+BAO/CMB+OHD & \(-0.625^{+0.067}_{-0.085}\) & \(0.756^{+0.005}_{-0.015}\) & \(-1.042^{+0.068}_{-0.069}\) & \(1.11^{+0.043}_{-0.040}\) & \(-0.750^{+0.045}_{-0.057}\) \\ \hline \end{tabular} \end{table}
Table 2: Numerical findings of the derived cosmological model for the joint dataset.

Hence, the current study describes a model of a transitioning universe using the scalar field as a substitute for dark energy. Recent findings support the outcomes of the proposed model.
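As a side note on how summary values like those in Table 2 are typically obtained: the asymmetric error bars correspond to the median and the 16th/84th percentiles of the MCMC posterior samples. The snippet below is a minimal sketch of that bookkeeping only; the array `chain_q0` is a mocked, hypothetical chain standing in for the real posterior samples, which are not reproduced here.

```python
import numpy as np

# Minimal sketch: central value and asymmetric 68% errors from a 1-D MCMC chain,
# in the style of the entries quoted above (e.g. q0 = -0.625^{+0.067}_{-0.085}).
rng = np.random.default_rng(0)
chain_q0 = rng.normal(-0.625, 0.075, size=50_000)   # hypothetical stand-in for posterior samples of q0

lo, med, hi = np.percentile(chain_q0, [15.865, 50.0, 84.135])
print(f"q0 = {med:.3f} (+{hi - med:.3f} / -{med - lo:.3f})")
```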
2305.19615
The non-Hermitian landscape of autoionization
We report on the existence of exceptional points (EPs) in single-resonance autoionization and provide analytical expressions for their positions in parameter space, in terms of the Fano asymmetry parameter. We additionally propose a reliable method for the experimental determination of EPs, based solely on information about their ionization probability as a function of the system parameters. The links between EPs, the maxima of the asymmetric profile and the effective decay rate of the ground state are investigated in detail. Quantitative numerical examples pertaining to the doubly excited $2s2p({}^1P)$ state of Helium confirm the validity of our formulation and results. In addition to unveiling hidden aspects of autoionization, our treatment and results provide a benchmark for the exploration of EPs and their properties in a variety of materials exhibiting Fano profiles with a broad perspective of possible applications.
G. Mouloudakis, P. Lambropoulos
2023-05-31T07:31:05Z
http://arxiv.org/abs/2305.19615v2
# The non-Hermitian landscape of autoionization ###### Abstract We report on the existence of exceptional points (EPs) in single-resonance autoionization and provide analytical expressions for their positions in parameter space, in terms of the Fano asymmetry parameter. We additionally propose a reliable method for the experimental determination of EPs, based solely on information about their ionization probability as a function of the system parameters. The links between EPs, the maxima of the asymmetric profile and the effective decay rate of the ground state are investigated in detail. Quantitative numerical examples pertaining to the doubly excited \(2s2p(^{1}P)\) state of Helium confirm the validity of our formulation and results. _Introduction \(-\)_ Autoionization (AI) belongs to a broad class of quantum phenomena involving discrete states (resonances) embedded in continua into which they decay. Examples, among others, are the Breit-Wigner resonance in nuclear physics [1], in particle physics [2; 3], in photonics [4] and of course in atoms and molecules [5; 6], where the continuum is ionization or even dissociation; hence the term autoionization. The literature on autoionization spans a vast range of topics, including the time-dependent formation of the autoionization profile [7; 8; 9; 10], strong driving of autoionizing resonances (ARs) [11; 12; 13; 14; 15; 16; 17], the dynamics of doubly-resonant autoionization [18; 19], and the effects of phase [20; 21] and statistical fluctuations [22; 23; 24; 25] of the laser field on the process. ARs can be excited by radiation absorption or collisions and are infinite in number, with the spacing between them decreasing with increasing excitation energy. Yet, there are cases in which one or more resonances are separated in energy by significantly more than their width, qualifying as isolated resonances, with the doubly excited \(2s2p(^{1}P)\) state of Helium being the prototype of an isolated AR, which continues revealing novel aspects, as attested by the ongoing streams of papers to this day [13; 14; 15; 16; 17]. It is in addition a perfect example of an open quantum system, with its dynamics governed by a non-Hermitian effective Hamiltonian. Surprisingly, the natural connection of AI to non-Hermitian physics, a field in the process of explosive activity, has escaped attention. Non-Hermitian physics and its connection to parity-time (\(\mathcal{PT}\)) symmetry, was introduced as an axiomatic theory in the seminal papers of C. Bender et al. [26; 27; 28; 29; 30]. Soon thereafter, it was pointed out that effective Hamiltonians describing the dynamics of open quantum systems, inevitably are non-Hermitian [31]. The boundary between the unbroken and broken \(\mathcal{PT}\) symmetry of such Hamiltonians [32; 33] is marked by the presence of exceptional points (EPs) [34; 35; 36; 37], i.e., points in the parameter space where two or more eigenvalues coalesce, while their corresponding eigenvectors become parallel. Tracking the positions of these points in the parameter space of an open quantum system is crucial, as they provide insight into the range of parameters where the system undergoes abrupt phase transitions [38] and enhanced sensitivity [39; 40; 41; 42; 43]. 
Several approaches for understanding phenomena related to quasi-bound states embedded in continua using complex spectral analysis have been presented in the past, applied to various systems such as two-channel quantum wires [44; 45], semi-infinite superlattices with embedded impurities [46], discrete states coupled to continua containing Van Hove singularities at their threshold [47], as well as systems involving laser-induced population trapping via strong coupling of ARs in atoms [48]. In this paper, we employ the powerful analysis of EPs in order to unveil hidden aspects of ARs. Focusing on the conditions for encountering EPs in single-resonance autoionization, we derive analytical expressions revealing their positions in parameter space. Moreover, we show how the amount of ionization of the atom, which can be determined experimentally, contains information about the positions of EPs, documented by numerical examples for the \(2s2p(^{1}P)\) state of Helium. Finally, we demonstrate the connection between the presence of EPs, the maxima of the typical asymmetric profile of autoionization and the effective decay rate of the atomic ground state. _Theory \(-\)_ We begin by considering an atom whose ground state \(\left|g\right\rangle\) is coupled to an isolated autoionizing resonance \(\left|a\right\rangle\) through a linearly polarized field with frequency \(\omega\), as well as a continuum of states denoted by \(\left|E\right\rangle\), both coupled to \(\left|g\right\rangle\) and \(\left|a\right\rangle\). The wavefunction of the system at times \(t\geq 0\) is given by: \[\left|\psi(t)\right\rangle=c_{g}(t)\left|g\right\rangle+c_{a}(t)\left|a\right\rangle +\int dEc_{E}(t)\left|E\right\rangle. \tag{1}\] Introducing the transformations \(\tilde{c}_{g}(t)=c_{g}(t)e^{i\omega_{g}t}\), \(\tilde{c}_{a}(t)=c_{a}(t)e^{i(\omega_{g}+\omega)t}\) and \(\tilde{c}_{E}(t)=c_{E}(t)e^{i(\omega_{g}+\omega)t}\) in the time-dependent Schrodinger equation, eliminating as usual the continuum adiabatically and adopting the rotating-wave approximation, as detailed in the Supplementary material (SM), we show that the dynamics of the system under the above conditions, are described by the effective Hamiltonian (\(\hbar=1\)): \[\hat{\mathcal{H}}_{\text{eff}}\equiv\begin{bmatrix}S_{g}-i\frac{\gamma}{2}&\tilde{ \Omega}\left(1-\frac{i}{q}\right)\\ \tilde{\Omega}\left(1-\frac{i}{q}\right)&-\Delta-i\frac{\Gamma}{2}\end{bmatrix}, \tag{2}\] where \(S_{g}\) and \(\gamma\) are, respectively, the light-induced shift and the direct into the continuum ionization rate of the ground state, \(\Gamma\) is the autoionization rate of quasi-bound state \(|a\rangle\), \(\tilde{\Omega}\) the generalized Rabi frequency of the \(|g\rangle\longleftrightarrow|a\rangle\) transition (see SM), and \(q\) the Fano asymmetry parameter [49], expressing the relative strength of the direct transition from \(|g\rangle\) to the continuum compared to the transition to \(|a\rangle\). \(\Delta\equiv\omega-\left(\omega_{a}-F_{a}-\omega_{g}\right)\) is the detuning between the frequency of the driving field and the frequency of the \(|g\rangle\longleftrightarrow|a\rangle\) transition, including the self-energy shift \(F_{a}\) of \(|a\rangle\). Note that the asymmetry parameter is related to the parameters of \(\hat{\mathcal{H}}_{\text{eff}}\) through the strict equation \(q^{2}=4\Omega^{2}/(\gamma\Gamma)\) (See SM). The light-induced shift of the ground state is hereafter neglected as it is of no relevance to our study. 
A schematic representation of our system is depicted in Fig. 1. The effective Hamiltonian of Eq. (2) is obviously non-Hermitian, not only due to the presence of the diagonal decay terms in the energies of the ground state and \(|a\rangle\), but also due to the presence of non-zero imaginary parts in the off-diagonal terms reflecting the driving of the \(|g\rangle\longleftrightarrow|a\rangle\) transition. Diagonalization of \(\hat{\mathcal{H}}_{\text{eff}}\) leads to the following set of eigenvalues: \[\begin{split}\lambda_{1,2}&=-\frac{1}{2}\left[\Delta+i\frac{(\gamma+\Gamma)}{2}\right]\\ &\pm\frac{1}{4}\sqrt{16\left(1-\frac{i}{q}\right)^{2}\tilde{\Omega}^{2}-\left(\gamma-\Gamma+2i\Delta\right)^{2}}.\end{split} \tag{3}\] At first sight, owing to the presence of imaginary parts in the radicands, the spectra of \(\hat{\mathcal{H}}_{\text{eff}}\) appear not to exhibit EPs. However, if the detuning is set to \(\Delta=\Delta^{s}\equiv 2q\gamma\Gamma/(\Gamma-\gamma)\), \(\gamma\neq\Gamma\), and we eliminate \(\gamma\) via the relation \(\gamma=4\Omega^{2}/(q^{2}\Gamma)\), we obtain: \[\lambda_{1,2}=\frac{4\tilde{\Omega}^{2}q\Gamma}{4\tilde{\Omega}^{2}-q^{2}\Gamma^{2}}-i\left(\frac{\tilde{\Omega}^{2}}{q^{2}\Gamma}+\Gamma/4\right)\pm\frac{1}{4q|q|\Gamma}\sqrt{-\left(\frac{4\tilde{\Omega}^{2}+q^{2}\Gamma^{2}}{4\tilde{\Omega}^{2}-q^{2}\Gamma^{2}}\right)^{2}\left[16\tilde{\Omega}^{4}-8\tilde{\Omega}^{2}\Gamma^{2}q^{2}\left(1+2q^{2}\right)+q^{4}\Gamma^{4}\right]},\quad\tilde{\Omega}\neq\frac{|q|\Gamma}{2} \tag{4}\] Observe now that choosing \(\Delta=\Delta^{s}\) results in a set of eigenvalues with real radicands. Note that Eq. (4) holds for \(\tilde{\Omega}\neq|q|\Gamma/2\), which is equivalent to \(\gamma\neq\Gamma\). For \(\tilde{\Omega}=|q|\Gamma/2\), i.e. \(\gamma=\Gamma\), the radicand is complex for every value of \(\Delta\). The details of the physical significance of \(\Delta^{s}\) for our system will become clear later. We should also note that the value of \(\Delta^{s}\) resulting in real radicands depends on the intensity of the driving field, which in turn determines the value of \(\tilde{\Omega}\). The relation between \(\Delta^{s}\) and \(\tilde{\Omega}\) is \(\frac{\Delta^{s}(\tilde{\Omega})}{\Gamma}=\frac{8q\left(\tilde{\Omega}/\Gamma\right)^{2}}{q^{2}-4\left(\tilde{\Omega}/\Gamma\right)^{2}}\), \(\tilde{\Omega}\neq|q|\Gamma/2\), which results upon substitution of \(\gamma=4\Omega^{2}/(q^{2}\Gamma)\) in the expression \(\Delta^{s}\equiv 2q\gamma\Gamma/(\Gamma-\gamma)\), \(\gamma\neq\Gamma\). We are interested in the values of the coupling \(\tilde{\Omega}\) that nullify the radicands of Eq. (4). The radicands become zero when \[16\tilde{\Omega}^{4}-8\tilde{\Omega}^{2}\Gamma^{2}q^{2}\left(1+2q^{2}\right)+q^{4}\Gamma^{4}=0, \tag{5}\] and the positive roots of the above equation are \[\frac{\tilde{\Omega}_{\pm}}{\Gamma}=\frac{1}{2}\left(|q|\sqrt{1+q^{2}}\pm q^{2}\right). \tag{6}\] It is easy to verify that for both \(\tilde{\Omega}=\tilde{\Omega}_{+}\) and \(\tilde{\Omega}=\tilde{\Omega}_{-}\), given that \(\Delta=\Delta^{s}\), the eigenvectors of \(\hat{\mathcal{H}}_{\text{eff}}\) coalesce, respectively, to the states \(\left|\psi_{+}\right\rangle=\left(-i\left|g\right\rangle+|a\rangle\right)/\sqrt{2}\) and \(\left|\psi_{-}\right\rangle=\left(i\left|g\right\rangle+|a\rangle\right)/\sqrt{2}\).
Therefore the points \(\left(\tilde{\Omega}_{\pm},\Delta_{\pm}^{s}\right)\) in parameter space, where \(\Delta_{\pm}^{s}\equiv\Delta^{s}(\tilde{\Omega}_{\pm})\), are EPs of \(\hat{\mathcal{H}}_{\text{eff}}\).

Figure 1: Schematic representation of the system at study. The ground state \(|g\rangle\) of an atom that is ionized with a rate \(\gamma\), is coupled to an AR \(|a\rangle\) via a linearly polarized field that drives the \(|g\rangle\longleftrightarrow|a\rangle\) transition with a generalized Rabi frequency \(\tilde{\Omega}\). The frequency of the driving field is detuned by \(\Delta\) from the energy separation of the two states and the AR decays into the continuum with an autoionization rate \(\Gamma\).

_Results & Discussion -_ Interestingly, the EPs of the system, measured in units of the autoionization width \(\Gamma\), depend solely on the asymmetry parameter \(q\), and there are two for any given value of the latter (Fig. 2). Note that the value of \(q\) for a given AR is fixed, as it depends solely upon the corresponding matrix elements of the transitions involved in the process. In particular, for the process involving the driving of the \(1s^{2}(^{1}S)\longleftrightarrow 2s2p(^{1}P)\) transition in Helium and the associated autoionization of the \(2s2p(^{1}P)\) AR, it is well established that \(q\approx-2.79\)[23; 50]. Focusing hereafter on that isolated AR, we note that for \(q=-2.79\), according to Eq. (6) and the relation between \(\Delta^{s}\) and \(\tilde{\Omega}\), the theory indicates the existence of two EPs at the positions \((\tilde{\Omega}_{-},\Delta^{s}_{-})=(0.2424\Gamma,-0.1738\Gamma)\) and \((\tilde{\Omega}_{+},\Delta^{s}_{+})=(8.0265\Gamma,5.7538\Gamma)\) in parameter space. In Fig. 3 we plot the real and imaginary parts of the eigenvalues as a function of \(\tilde{\Omega}\) and \(\Delta\) for \(q=-2.79\) and indeed confirm the coalescence of the eigenvalues at the above positions in parameter space. As noted above, tuning \(\Delta\) to \(\Delta^{s}\) is essential in order to ensure that the radicands appearing in the expressions of the eigenvalues become real. We can get a glimpse of the physical significance of \(\Delta^{s}\) in the vicinity of an EP by solving the time-dependent Schrodinger equation using the effective Hamiltonian \(\hat{\mathcal{H}}_{\text{eff}}\), and plotting the ionization probability of the atom (\(P(t)=1-\left|c_{g}(t)\right|^{2}-\left|c_{a}(t)\right|^{2}\)) as a function of the detuning for \(\tilde{\Omega}=\tilde{\Omega}_{-}\) (Fig. 4). Note that the ionization probability is calculated at \(t=T\), where \(T\) is the interaction time between the atom and the driving field. As expected, the ionization profile is asymmetric, transforming gradually to a "window" profile for sufficiently large interaction times, a phenomenon labelled "time saturation" in [11], reconfirmed most recently in [15]. Interestingly, the position of the maximum of the asymmetric profile, denoted by \(\Delta_{m}\), which is initially increasing as \(T\) increases, eventually stabilizes at \(\Delta^{s}_{-}\), as shown in the inset of Fig. 4. Therefore, for \(\tilde{\Omega}=\tilde{\Omega}_{-}\), \(\Delta^{s}(\tilde{\Omega}_{-})\equiv\Delta^{s}_{-}\) is the detuning which maximizes the ionization probability (to unity) for sufficiently large interaction times, which, for the field intensity considered, translates to \(T\approx 20\Gamma^{-1}\) or larger.
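The EP coordinates quoted above follow in closed form from Eq. (6) together with the relation \(\Delta^{s}(\tilde{\Omega})/\Gamma=8q(\tilde{\Omega}/\Gamma)^{2}/[q^{2}-4(\tilde{\Omega}/\Gamma)^{2}]\). The short script below is not part of the original analysis; it is a minimal sketch that simply evaluates these two expressions and, for \(q=-2.79\), should reproduce the values \((0.2424\Gamma,-0.1738\Gamma)\) and \((8.0265\Gamma,5.7538\Gamma)\).

```python
import numpy as np

def exceptional_points(q):
    """EP positions (Omega_tilde, Delta^s), both in units of Gamma, from Eq. (6) and
    Delta^s/Gamma = 8 q (Omega/Gamma)^2 / (q^2 - 4 (Omega/Gamma)^2)."""
    om_plus = 0.5 * (abs(q) * np.sqrt(1.0 + q**2) + q**2)
    om_minus = 0.5 * (abs(q) * np.sqrt(1.0 + q**2) - q**2)

    def delta_s(om):
        return 8.0 * q * om**2 / (q**2 - 4.0 * om**2)

    return (om_minus, delta_s(om_minus)), (om_plus, delta_s(om_plus))

ep_minus, ep_plus = exceptional_points(-2.79)
print("weak-field EP  (Omega_-/Gamma, Delta_-^s/Gamma):", ep_minus)   # ~ (0.2424, -0.1738)
print("strong-field EP (Omega_+/Gamma, Delta_+^s/Gamma):", ep_plus)   # ~ (8.0265,  5.7538)
```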
It is important to note that this occurs only by tuning the parameters of the system to the exceptional point \((\tilde{\Omega}_{-},\Delta^{s}_{-})\). For example, if we choose an intensity such that \(\tilde{\Omega}=0.1\tilde{\Omega}_{-}\), the position of the maximum of the asymmetric profile stabilizes to \(\Delta_{m}\approx-0.195\Gamma\), whereas \(\Delta^{s}(0.1\tilde{\Omega}_{-})=-0.0016\Gamma\). Although in most cases, the EPs of a system can be explored theoretically through diagonalization of the relevant effective Hamiltonian, the experimental determination of EPs most often is quite a challenging task, since in general the eigenenergies of a Hamiltonian are not amenable experimentally. Therefore one needs to identify EPs indirectly by studying their footprints on system observables. To that end, we employ a quantity widely used in the context of the Quantum Zeno effect in open quantum systems, namely, the effective decay rate of a state [51], defined as \(\Gamma_{\text{eff}}^{j}(t)\equiv-\frac{1}{t}\ln[P_{j}(t)]\), \(j=g,a\), where \(P_{j}(t)=\left|c_{j}(t)\right|^{2}\) is the population of state \(\left|j\right>\), \(j=g,a\). The effective decay rate provides information about how the couplings between a given state and a set of other states or a continuum, modify the time evolution of that state's population. It turns out that the effective decay rate of the ground state, which can be readily determined experimentally, is remarkably sensitive to the EPs of our system, pinpointing their positions in parameter space. In Fig. 5(a) we plot the effective decay rate of the ground state as a function of \(\tilde{\Omega}\) for \(\Delta=\Delta^{s}(\tilde{\Omega})\), which implies setting each time the detuning to a different value, depending on the value of \(\tilde{\Omega}\) considered. Note that the effective decay rate is calculated at an interaction time \(t=T\), which should be sufficiently large for the rate to be no longer modified with further increase of \(T\). For \(q=-2.79\), the effective decay rate is stabilized for \(T\approx 20\Gamma^{-1}\) or larger, which is the same time scale as the one discussed in the results of Fig. 4. At such time scales it is easy to show that the population of \(\left|a\right>\) is practically negligible. Therefore the effective decay rate of the ground state is directly related to the measurable ionization probability \(P(t)\), because \(\Gamma_{\text{eff}}^{q}(t)\equiv-\frac{1}{t}\ln[P_{g}(t)]\cong\frac{1}{t}\ln[1 -P(t)]\). Clearly, the effective decay rate of the ground sate provides direct evidence for the positions of the EPs of the system (Fig. 5(a)), in agreement with our theoretical predictions based on diagonalization of \(\hat{\mathcal{H}}_{\text{eff}}\). A short note regarding the experimental detection of the EPs related to the autoionization of the Helium \(2s2p(^{1}P)\) AR, is in place at this point. The EP at \((\tilde{\Omega},\Delta)=(\tilde{\Omega}_{-},\Delta_{-}^{s})=(0.2424\Gamma,-0.173 8\Gamma)\) lies in a parameter region that is well within the current capabilities of synchrotron sources and seeded Free-electron lasers [52; 53] of short wavelength radiation, sufficient intensity and small bandwidth that can excite the AR. However, the EP at \((\tilde{\Omega},\Delta)=(\tilde{\Omega}_{+},\Delta_{+}^{s})=(8.0265\Gamma,5.753 8\Gamma)\) would require a source of high intensity, as it lies in the strong field regime where \(\tilde{\Omega}>\Gamma\)[11]. 
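Since \(\hat{\mathcal{H}}_{\text{eff}}\) is time independent, quantities such as \(P(T)\) and \(\Gamma^{g}_{\text{eff}}(T)\) can be obtained by simple matrix exponentiation. The sketch below is not the authors' code: it propagates \((c_{g},c_{a})\) with Eq. (2), with \(S_{g}\) neglected and \(\gamma\) eliminated through the relation quoted in the text (here taken as \(\gamma=4\tilde{\Omega}^{2}/(q^{2}\Gamma)\)), at the rounded weak-field EP coordinates, working in units where \(\Gamma=1\).

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch: propagate (c_g, c_a) with the time-independent H_eff of Eq. (2),
# S_g neglected, gamma = 4*Omega^2/(q^2*Gamma), in units Gamma = 1.
q, Gamma = -2.79, 1.0
Omega, Delta = 0.2424, -0.1738        # rounded weak-field EP (Omega_-, Delta_-^s)
gamma = 4.0 * Omega**2 / (q**2 * Gamma)

H_eff = np.array([[-0.5j * gamma,         Omega * (1 - 1j / q)],
                  [Omega * (1 - 1j / q), -Delta - 0.5j * Gamma]])

T = 20.0 / Gamma                                    # interaction time where the profile has saturated
c = expm(-1j * H_eff * T) @ np.array([1.0, 0.0])    # |psi(0)> = |g>
P_ion = 1.0 - np.abs(c[0])**2 - np.abs(c[1])**2     # P(T) = 1 - |c_g|^2 - |c_a|^2
Gamma_eff_g = -np.log(np.abs(c[0])**2) / T          # effective ground-state decay rate
print(f"P(T) = {P_ion:.4f},  Gamma_eff^g = {Gamma_eff_g:.4f} (units of Gamma)")
```

Scanning `Delta` (and `Omega`) over a grid with the same few lines gives the ionization profiles and the decay-rate landscape discussed in Figs. 4 and 5.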
Although the required intensity, which is estimated to be around \(1.3\times 10^{16}\) W/cm\({}^{2}\), is available with current Free-electron laser sources, issues such as intensity fluctuations [54; 55] known to affect the excitation of ARs [22; 23; 24; 25] and large bandwidth need to be addressed. Their interplay with EPs pose interesting followup studies. Finally, in Fig. 5(b) we plot the effective decay rate of the ground state as a function of \(\tilde{\Omega}\) and \(\Delta\) at the vicinity of the EP that lies in the weak field regime. The effective decay rate maxima lie on the \(\Delta=\Delta^{s}(\tilde{\Omega})\) line (curved dashed line) over which the eigenvalues have real radicands. At the tip of this maxima curve we find the weak field EP at the position \((\tilde{\Omega},\Delta)=(\tilde{\Omega}_{-},\Delta_{-}^{s})=(0.2424\Gamma,-0.173 8\Gamma)\) in parameter space. _Concluding Remarks and Outlook \(-\)_ In summary, we have unveiled the existence of EPs in single-resonance autoionization and provided analytical expressions for their positions in parameter space, in terms of the Fano asymmetry parameter. We have further demonstrated the connection between EPs and the maxima of the asymmetric ionization profile, through a numerical study of the \(2s2p(^{1}P)\) resonance in Helium and proposed a reliable method for the observation of EPs, based solely on information about the ionization probability as a function of the parameters of the system, well within the capabilities of current radiation sources. Our results lead to further questions related to the role of pulse shape or field fluctuations in the observation of EPs in autoionization, as well as questions related to the influence of neighboring ARs, beyond the single-resonance autoionization. At the same time, the investigation of potentially impactful effects related to phase changes associated with the encircling of EPs in the parameter space of autoionization, based on the complex topology of the Riemann surfaces in the vicinity of the latter, is a further challenging issue. Overall, our results offer new insights into the interplay between autoionization and non-Hermitian \(\mathcal{PT}\) physics, opening up a novel and potentially fruitful territory for further exploration. _Acknowledgments \(-\)_ GM would like to acknowledge the Hellenic Foundation for Research and Innovation (HFRI) for financially supporting this work under the 3rd Call for HFRI PhD Fellowships (Fellowship Number: 5525). Figure 4: Ionization probability as a function of \(\Delta\) for various interaction times \(T\), \(q=-2.79\) and \(\tilde{\Omega}=\tilde{\Omega}_{-}=0.2424\Gamma\). The vertical dashed line marks the position of the detuning \(\Delta_{-}^{s}=-0.1738\Gamma\). Inset: Position of the peak of the asymmetric profile (\(\Delta_{m}\)) as a function of the interaction time \(T\) (logarithmic scale) for \(q=-2.79\) and \(\tilde{\Omega}=\tilde{\Omega}_{-}\). The horizontal dotted line marks the position of the detuning \(\Delta_{-}^{s}\). We are also grateful to D. Kaltsas for useful discussions concerning this work.
2310.20222
Cosmological singularities in $f(T,φ)$ gravity
The pursuit of understanding the mysteries surrounding dark energy has sparked significant interest within the field of cosmology. While conventional approaches, such as the cosmological constant, have been extensively explored, alternative theories incorporating scalar field-based models and modified gravity have emerged as intriguing avenues. Among these, teleparallel theories of gravity, specifically the $f(T,\phi)$ formulation, have gained prominence as a means to comprehend dark energy within the framework of teleparallelism. In this study, we investigate two well-studied models of teleparallel dark energy and examine the presence of cosmological singularities within these scenarios. Using the Goriely-Hyde procedure, we examine the dynamical systems governing the cosmological equations of these models. Our analysis reveals that both models exhibit Type IV singularities, but only for a limited range of initial conditions. These results could indicate a potential edge for teleparallel cosmological models over their other modified gravity counterparts, as the models we examine seem to be only allowing for weak singularities that too under non general conditions.
Oem Trivedi, Maxim Khlopov, Jackson Levi Said, Rafael C. Nunes
2023-10-31T06:48:58Z
http://arxiv.org/abs/2310.20222v2
# Cosmological singularities in \(f(T,\phi)\) gravity ###### Abstract The pursuit of understanding the mysteries surrounding dark energy has sparked significant interest within the field of cosmology. While conventional approaches, such as the cosmological constant, have been extensively explored, alternative theories incorporating scalar field-based models and modified gravity have emerged as intriguing avenues. Among these, teleparallel theories of gravity, specifically the \(f(T,\phi)\) formulation, have gained prominence as a means to comprehend dark energy within the framework of teleparallelism. In this study, we investigate two well-studied models of teleparallel dark energy and examine the presence of cosmological singularities within these scenarios. Using the Goriely-Hyde procedure, we examine the dynamical systems governing the cosmological equations of these models. Our analysis reveals that both models exhibit Type IV singularities, but only for a limited range of initial conditions. These results could indicate a potential edge for teleparallel cosmological models over their other modified gravity counterparts, as the models we examine seem to be only allowing for weak singularities that too under non general conditions. Introduction Observations of the late-time acceleration of the Universe came as a surprise to the cosmological community [1]. Since then, extensive efforts have been dedicated to explaining this expansion. Standard approaches, such as the Cosmological constant [1, 2, 3, 4, 5], as well as more exotic scenarios like Modified gravity theories [6, 7, 8], and recent proposals for the direct detection of dark energy [9], have been pursued. One fascinating avenue for understanding dark energy is through Quintessence, where a scalar field drives the late-time cosmic acceleration of the universe [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. Quintessence is particularly interesting as it represents the simplest scalar field dark energy scenario that avoids issues like ghosts or Laplacian instabilities. In quintessence models, the acceleration of the universe is driven by a slowly varying scalar field with a potential \(V(\phi)\), similar to the mechanism of slow-roll inflation. However, in this case, contributions from non-relativistic matter, such as baryons and dark matter, cannot be neglected. It is worth noting that simple models of Quintessence have been shown to be in conflict with the current H0 tension [21, 22, 23], suggesting that simple Quintessence models may perform worse than \(\Lambda\)-CDM models in light of the current H0 data [24].This leads one to consider other more exotic possibilities for scalar field dark energy models and one such possibility is to consider models in Teleparallel gravity.Teleparallel gravity, a theory based on torsion, provides an alternative description of gravity [25, 26, 27, 28, 29, 30, 31, 32, 33], where gravitation is mediated by torsion. In this approach, the Lagrangian density of Teleparallel Equivalent of General Relativity (TEGR) is proportional to the torsion scalar \(T\). In TEGR, the tetrad field and spin connection pair replace the metric tensor and Levi-Civita connection, respectively, while the teleparallel connection replaces the usual connection [30, 31]. Consequently, at the level of the dynamical equations, curvature-based gravitational theories are equivalent to tensor-based theories [30, 34]. 
By introducing an arbitrary function \(f(T)\) in place of the torsion scalar \(T\), a generalization of TEGR known as \(f(T)\) gravity is obtained [35, 36, 37, 38, 39, 40, 41], leading to new cosmological models. In this framework, the tetrad fields, which form the orthogonal basis for the tangent space, serve as the dynamical variables of teleparallel gravity. The torsion tensor is constructed using the first derivative of the tetrad product. The field equations are derived by varying the action with respect to the tetrad fields, while the spin connection preserves the local Lorentz invariance and contributes to the equations of motion. For further exploration of \(f(T)\) gravity, refer to [42, 43, 44, 45, 46, 47, 48]. The investigation of scalar-torsion theories with non-minimal coupling between the torsion scalar and a scalar field was carried out in the context of dark energy [49, 50], including studies with arbitrary non-minimal coupling functions and tachyon terms for the scalar field [51, 52]. Another extension of \(f(T)\) gravity is the generalized scalar-torsion \(f(T,\phi)\) gravity, where \(\phi\) represents the canonical scalar field, and the gravitational action incorporates a non-minimal coupling between the scalar field and torsion scalar [53]. Additionally, within the covariant teleparallel framework, a new class of theories has been proposed, where the action depends on the scalar field and an arbitrary function of the torsion scalar [54]. Recently, a significant amount of research has been dedicated to exploring the various types of cosmological singularities that may occur in the present and distant future of the Universe [55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68]. However, it is often challenging to classify and study cosmological singularities in highly unconventional cosmologies influenced by considerations of quantum gravity or phenomenology. Traditional methods may not be applicable in these cases. Therefore, alternative approaches are necessary to identify cosmological singularities in exotic cosmologies. In this context, the Goriely-Hyde procedure, a particular method in dynamical systems, can be extremely useful [69]. Understanding the singularity structure of dynamical systems is an intriguing aspect, especially when these systems describe significant physical phenomena. Although various approaches have been proposed to investigate the singularity structure of autonomous dynamical systems, the Goriely-Hyde procedure has proven particularly valuable for cosmological studies due to the abundance of interesting dynamical systems in cosmology [70]. Previous applications of the Goriely-Hyde method have explored finite and non-finite time singularities in specific quintessence models [71, 72, 73]. However, a comprehensive analysis of cosmological singularities teleparallel models of dark energy using this approach is still lacking, and our work aims to address this gap.The study of the cosmological dynamics and stability of \(f(t,\phi)\) dark energy was conducted in [74], while an analysis of scalar perturbations was performed in [75]. A recent full dynamical systems analysis of \(f(t,\phi)\) dark energy for two particular models was done in [76] and we intend to use the dynamical systems approach developed there to pursue our singularity analysis. 
In Section II, we provide a concise overview of the Goriely-Hyde method while in section III we demonstrate the diverse characteristics of singularities in \(f(T,\phi)\) models, including both finite and infinite-time occurrences which can occur for both \(f(t,\phi)\) models considered in [76]. Subsequently, in Section IV, we consider two well-motivated ansatz for the Hubble parameter and classify the types of cosmological singularities (Types I-IV) that can arise within these regimes. Finally, we conclude our work in Section V. ## 2 Teleparallel gravity General relativity (GR) can account for most observed phenomena with appropriate modifications considered in the matter sector. In this context, the widely accepted concordance model combines \(\Lambda\)-CDM cosmology with inflation. However, the enigmatic nature of certain particle species remains a puzzle despite significant progress in physics beyond the standard model of particle physics. It is also plausible that the standard model of particle physics might not require substantial restructuring to address these observational challenges. Instead, it could be the gravitational sector that requires further examination. This could involve extensions of GR or modifications beyond GR as alternatives to its original formulation. The scientific literature has witnessed numerous proposals for new theories of gravity, motivated by various phenomena, theoretical approaches, or even quantum physics. One intriguing possibility that has garnered increasing attention in recent decades is teleparallel gravity, where torsion replaces curvature as the mechanism responsible for generating gravitational fields. This theory replaces the traditional Levi-Civita connection, which is curvature-based, with a teleparallel connection based on torsion. Numerous publications on this topic have emerged in the literature. Among the theories arising from torsion-based approaches to gravity is the teleparallel equivalent of general relativity (TEGR), which is dynamically equivalent to GR and thus indistinguishable from it through classical experiments. In teleparallel gravity (TG), one typically assumes an action of the form: \[\mathcal{S}_{\mathrm{TG}}:=\mathcal{S}_{\mathrm{g}}[e,\omega]+\mathcal{S}_{ \mathrm{m}}[e,\chi]\,, \tag{1}\] Here, the gravitational part \(\mathcal{S}_{\mathrm{g}}\) of the action depends on the tetrad \(e^{A}{}_{\mu}\) and the spin connection \(\omega^{A}{}_{B\mu}\), while the matter part depends on the tetrad \(e^{A}{}_{\mu}\) and arbitrary matter fields \(\chi^{I}\), but not on the spin connection [77, 78]. This is because we assume that the hypermomentum vanishes, thereby preventing this coupling. Introducing a dependence on spin would effectively introduce a second matter tensor, resulting from the variation of the matter Lagrangian with respect to the spin connection. The variation of the matter part of the action, after integration by parts to eliminate derivatives acting on field variations, can be expressed as follows: \[\delta\mathcal{S}_{\mathrm{m}}=\int d^{4}xe(\Theta_{A}{}^{\mu}\delta e^{A}{}_ {\mu}+\Omega_{I}\delta\chi^{I}) \tag{2}\] Here, \(\Omega_{I}=0\) represents the matter field equations, and \(\Theta_{A}{}^{\mu}\) denotes the energy-momentum tensor. 
The corresponding variation of the gravitational action takes the form: \[\delta\mathcal{S}_{\mathrm{g}}=-\int d^{4}xe(W_{A}{}^{\mu}\delta e^{A}{}_{\mu} +Y_{A}{}^{B\mu}\delta\omega^{A}{}_{B\mu}) \tag{3}\] The tensors \(W_{A}{}^{\mu}\) and \(Y_{A}{}^{B\mu}\) arise from the variation and integration by parts, with their specific form depending on the particular theory under consideration. The explicit expression for \(W_{A}{}^{\mu}\) can be found for several theories in [79]. For brevity, we omit \(Y_{A}{}^{B\mu}\) here, as it turns out to be redundant in deriving the field equations, which can be entirely determined from \(W_{A}{}^{\mu}\) alone. Furthermore, by varying with respect to the tetrad, one can derive the field equations: \[W_{A}{}^{\mu}=\Theta_{A}{}^{\mu}\,. \tag{4}\] An alternative representation of the field equations, more commonly used, is obtained by transforming the first index into a spacetime index with the tetrad while lowering the second index: \[W_{\mu\nu}=e^{A}{}_{\mu}g_{\rho\nu}W_{A}{}^{\rho}\,,\quad\Theta_{\mu\nu}=e^{A} {}_{\mu}g_{\rho\nu}\Theta_{A}{}^{\rho}\,, \tag{5}\] This yields the field equations in the form: \[W_{\mu\nu}=\Theta_{\mu\nu}\,. \tag{6}\] However, deriving the field equations for the spin connection is more complex, as it must satisfy the conditions of being flat, \(R^{\alpha}{}_{\beta\mu\nu}=0\), and metric-compatible, \(\nabla_{\alpha}g_{\mu\nu}=0\), by definition. Various approaches exist to maintain these properties during the variation procedure [80, 81]. For considering cosmological scenarios in teleparallel theories, it is helpful to consider an FLRW metric of the form [79] \[ds^{2}=N(t)^{2}dt^{2}-a(t)^{2}\Big{[}\frac{dr^{2}}{1-kr^{2}}+r^{2}(d\vartheta^{ 2}+\sin^{2}\vartheta d\varphi^{2})\Big{]}\,, \tag{7}\] where \(N(t)\) and \(a(t)\) represent the lapse function and scale factor respectively. In the case of flat universes (\(k=0\)) one can write \[\mathrm{d}s^{2}=N(t)^{2}\mathrm{d}t^{2}-a^{2}(t)\left(\mathrm{d}x^{2}+\mathrm{ d}y^{2}+\mathrm{d}z^{2}\right)\,, \tag{8}\] and this results in the diagonal tetrad \[e^{A}_{\mu}=\mathrm{diag}\left(N(t),\,a(t),\,a(t),\,a(t)\right)\,, \tag{9}\] which turns out to be in the Weitzenbock gauge for the extensions to TEGR. An important remark here is that the above tetrad (with vanishing spin connection) is the only one that has the property that both the tetrad and the teleparallel connection obey cosmological symmetries for flat FLRW. One can also relax the condition that the teleparallel connection enjoys the symmetries of cosmology, but then, the corresponding cosmological equations would not respect the symmetries of cosmology. If we use the diagonal tetrad (9) in Cartesian coordinates one can, for example, find the modified FLRW equations for \(f(T)\) gravity as \[-6H^{2}f_{T}-\frac{1}{2}f =\kappa^{2}\rho\,, \tag{10a}\] \[-2f_{T}(3H^{2}+\dot{H})-2H\dot{f}_{T}-\frac{1}{2}f =-\kappa^{2}p\,, \tag{10b}\] where dots are derivatives with respect to time, so that \(\dot{f}_{T}=f_{TT}\dot{T}\). One can further obtain modified FLRW equations for other teleparallel appraoches, like \(f(T,B)\) gravity being \[3H\dot{f}_{B}-3H^{2}(3f_{B}+2f_{T})-3f_{B}\dot{H}-\frac{1}{2}f(T, B) =\kappa^{2}\rho\,, \tag{11a}\] \[-(3H^{2}+\dot{H})(2f_{T}+3f_{B})-2H\dot{f}_{T}+\ddot{f}_{B}-\frac {1}{2}f(T,B) =-\kappa^{2}p\,. 
\tag{11b}\] Or for the Teleparallel Gauss-Bonet models being (fixing the gauge such that \(N=1\)) \[-6f_{T}H^{2}-12H^{3}\dot{f}_{T_{G}}+12f_{T_{G}}H^{2}\left(\dot{H}+H^{2} \right)-\frac{1}{2}f(T,T_{G})=\kappa^{2}\rho\,, \tag{12a}\] \[-2H\dot{f}_{T}-2f_{T}\left(\dot{H}+3H^{2}\right)+12f_{T_{G}}H^{2} \left(\dot{H}+H^{2}\right)-8H\dot{f}_{T_{G}}\left(\dot{H}+H^{2}\right)\] \[-4H^{2}\ddot{f}_{T_{G}}-\frac{1}{2}f(T,T_{G})=-\kappa^{2}p\,, \tag{12b}\] where \(f_{T_{G}}=\frac{\partial f}{\partial T_{G}}T_{G}\). It is worth mentioning that \(B_{G}=0\) in flat FLRW and then the standard Gauss-Bonnet term is just \(\mathcal{G}=T_{G}\). While there are a multitude of approaches of dealing with such exotic cosmological systems like reconstruction methods or Noether-symmetries approaches, dynamical systems methods are also an efficient way to understand the dynamics of the models. Dynamical systems allow for the extraction of key features of the cosmology without solving the evolution equations directly (in an exact form). Thus, it then becomes possible to describe the overall nature of the gravitational theory and henceforth determine whether the model can generate a viable cosmological evolution. This therefore serves as a very useful tool especially in models where it is difficult to extract any cosmological solutions from directly solving the field equations such as in f(R) gravity. In the cases we have considered so far, one can for example write the equations (10) as an autonomous dynamical system using the variables \[\tilde{x}=-\frac{\ddot{f}}{H\dot{f}}\,,\quad\tilde{y}=\frac{f}{4H^{2}\dot{f}} \,,\quad\tilde{z}=\frac{3H^{2}+\dot{H}}{H^{2}}\,, \tag{13}\] These variables were considered in [82] and the authors considered the following scenarios: (i) absence of matter fluids and (ii) presence of dust and radiation components. Furthermore, the case when the parameter \(m=-\frac{\ddot{H}}{H^{3}}\) takes on constant values \(m=0\) (quasi-de Sitter evolution) and \(m=-\frac{9}{2}\) (matter dominated evolution) was explored. One can also write (11) in the dynamical systems method using the phase space variables \[\tilde{x}:=\frac{\dot{\phi}}{\sqrt{1+H^{2}}}\,,\quad\tilde{y}:=\frac{V(\phi)} {6H^{2}}\,,\quad\tilde{z}:=\frac{(-T)^{n}}{1+H^{2}}\,,\quad\eta:=\frac{H}{ \sqrt{1+H^{2}}}\,. \tag{14}\] The cosmological dynamics of \(f(T,T_{G})\) gravity was investigated in [83]. In particular, the model \(f(T,T_{G})=-T+\alpha_{1}\sqrt{T^{2}+\alpha_{2}T_{G}}\), where \(\alpha_{1,2}\neq 0\) are constants, was studied. In this case, the presence of a perfect dust fluid was assumed and the following dimensionless phase-space parameters were defined \[\tilde{x}=\sqrt{1+\frac{2\alpha_{2}}{3}\left(1+\frac{\dot{H}}{H^{2}}\right)} \,,\quad\Omega_{\rm m}=\frac{\kappa^{2}\rho_{\rm m}}{3H^{2}}\,. \tag{15}\] While we have briefly discussed several approaches to teleparallel gravity and their status quo in cosmology, what we are most interested in this work are the \(f(T,\phi)\) models and we shall now discuss them in more detail.The action of TEGR (Teleparallel Equivalent of General Relativity) can be generalized to \(f(T)\) gravity by introducing a scalar field \(\phi\). The action, including matter and radiation, can be expressed as [75, 76]: \[S=\int d^{4}x\,e[f(T,\phi)+P(\phi)X]+S_{m}+S_{r}\,, \tag{16}\] Here, \(e=\det[e_{\mu}^{A}]=\sqrt{-g}\) represents the determinant of the tetrad field. 
The tetrad field, \(e_{\mu}^{A}\), \(A=0,1,2,3\), is related to the metric tensor \(g_{\mu\nu}\) and the Minkowski tangent space metric \(\eta_{AB}\) as \(g_{\mu\nu}=\eta_{AB}e_{\mu}^{A}e_{\nu}^{B}\), where \(\eta_{AB}=(-1,1,1,1)\). The tetrad satisfies the orthogonality condition \(e_{A}^{\mu}e_{\mu}^{B}=\delta_{A}^{B}\), and the spin connection is denoted by \(\omega_{B\mu}^{A}\). The function \(f(T,\phi)\) represents an arbitrary function of the scalar field \(\phi\) and the torsion scalar \(T\), while \(X=-\partial_{\mu}\phi\partial^{\mu}\phi/2\) represents the kinetic term of the field. This general action includes non-minimally coupled scalar-torsion gravity models with the coupling function \(f(T,\phi)\), \(f(T)\) gravity, and minimally coupled scalar field. For a flat FLRW space-time background, the field equations derived from the action are [76]: \[f(T,\phi)-P(\phi)X-2Tf_{,T} = \rho_{m}+\rho_{r} \tag{17}\] \[f(T,\phi)+P(\phi)X-2Tf_{,T}-4\dot{H}f_{,T}-4H\dot{f}_{,T} = -p_{r}\] (18) \[-P_{,\phi}X-3P(\phi)H\dot{\phi}-P(\phi)\ddot{\phi}+f_{,\phi} = 0 \tag{19}\] In these equations, the Hubble parameter is denoted as \(H\equiv\frac{\dot{a}}{a}\), where an overdot represents a derivative with respect to cosmic time \(t\). The energy density for matter and radiation are denoted as \(\rho_{m}\) and \(\rho_{r}\) respectively, and the pressure at the radiation era is \(p_{r}\). The torsion scalar \(T\) is given by \(T=6H^{2}\). The non-minimal coupling function \(f(T,\phi)\) is defined as [54]: \[f(T,\phi)=-\frac{T}{2\kappa^{2}}-G(T)-V(\phi)\,, \tag{20}\] where \(V(\phi)\) is the scalar potential and \(G(T)\) is an arbitrary function of the torsion scalar. In the matter-dominated era, \(\omega_{m}=\frac{p_{m}}{\rho_{m}}=0\), and in the radiation era, \(\omega_{r}=\frac{p_{r}}{\rho_{r}}=1/3\). In this case, Eqs. (17)-(19) reduce to: \[\frac{3}{\kappa^{2}}H^{2}=P(\phi)X+V(\phi)-2TG_{,T}+G(T)+\rho_{m}+\rho_{r}\,, \tag{21}\] \[-\frac{2}{\kappa^{2}}\dot{H}=2P(\phi)X+4\dot{H}(G_{T}+2TG_{,TT})+\rho_{m}+\frac{4} {3}\rho_{r}\,, \tag{22}\] \[P(\phi)\ddot{\phi}+P_{,\phi}(\phi)X+3P(\phi)H\dot{\phi}+V_{,\phi}(\phi)=0\,. \tag{23}\] The modified Friedmann equations, taking into account dark energy, become: \[\frac{3}{\kappa^{2}}H^{2} = \rho_{m}+\rho_{r}+\rho_{de}\,, \tag{24}\] \[-\frac{2}{\kappa^{2}}\dot{H} = \rho_{m}+\frac{4}{3}\rho_{r}+\rho_{de}+p_{de}\,. \tag{25}\] Comparing Eq. (21) with Eq. (24), and Eq. (22) with Eq. (25), we can extract the energy density (\(\rho_{de}\)) and pressure (\(p_{de}\)) for the dark energy sector: \[\rho_{de}=P(\phi)X+V(\phi)-2TG_{,T}+G(T)\,, \tag{26}\] \[p_{de}=P(\phi)X-V(\phi)+2TG_{,T}-G(T)+4\dot{H}(G_{,T}+2TG_{,TT})\,. \tag{27}\] For simplicity we set \(P(\phi)=1\) and consider the well studied exponential potential, \(V(\phi)=V_{0}e^{-\lambda\phi}\), where \(\lambda\) is a constant. In order to proceed further and really carry out the analysis we want to, we need a form for \(G(T)\) and here we will be considering two forms which were studied in [76] and will be carrying out the Goriely-Hyde analysis on both of these models 1. Footnote 1: It is worth discussing any effects of the separation of T from \(\phi\) here in the action (20). If one, for example, considers actions like those in [50] where terms like \(T\phi^{2}\) come into play then one could potentially expect that results on singularities could be affected in some ways, as one would not be wrong to think that such a coupling between the torsion scalar and field terms could have some significant outcomes. 
Although a definitive answer on this aspect would need one to do a proper analysis similar to the one we have done for the models considered in our paper, we do feel very interesting results may await an endeavour like this. ## 3 The Goriely-Hyde Procedure The Goriely-Hyde technique [69] offers an elegant method for identifying finite-time singularities in dynamical systems. The procedure can be summarized as follows: * We begin by considering a dynamical system governed by \(n\) differential equations given by: \[\dot{x}i=fi(x),\] (28) where \(i=1,2,...,n\). Here, \(t\) represents time, but in quintessence models, it can be better represented as the number of e-foldings, denoted by \(N\). We identify the parts of the equation \(f_{i}\) that become significant as the system approaches the singularity. These significant parts are referred to as "dominant parts" [69]. Each dominant part represents a mathematically consistent truncation of the system, denoted as \(\hat{f}i\). Consequently, the system can be expressed as: \[\dot{x}i=\hat{f}_{i}(x).\] (29) * Without loss of generality, the variables \(x_{i}\) near the singularity can be represented as: \[x_{i}=a_{i}\tau^{p_{i}},\] (30) where \(\tau=t-t_{c}\), and \(t_{c}\) is an integration constant. By substituting equation (4) into equation (3) and equating the exponents, we can determine the values of \(p_{i}\) for different \(i\), which collectively form the vector \(\mathbf{p}=(p_{1},p_{2},...,p_{n})\). Similarly, we calculate the values of \(a_{i}\) to form the vector \(\vec{a}=(a_{1},a_{2},...,a_{n})\). It is worth noting that if \(\vec{a}\) comprises solely real entries, it corresponds to finite-time singularities. On the other hand, if \(\vec{a}\) contains at least one complex entry, it may lead to non-finite-time singularities. Each \((a_{i},p_{i})\) set is referred to as a dominant balance of the system. * Next, we compute the Kovalevskaya matrix defined as: \[R=\begin{pmatrix}\frac{\partial f_{1}}{\partial x_{1}}&\frac{\partial f_{1}}{ \partial x_{2}}&.&.&\frac{\partial f_{1}}{\partial x_{n}}\\ \frac{\partial f_{2}}{\partial x_{1}}&\frac{\partial f_{2}}{\partial x_{2}} &.&.&\frac{\partial f_{2}}{\partial x_{n}}\\.&.&.&.&.\\.&.&.&.&.\\ \frac{\partial f_{n}}{\partial x_{1}}&\frac{\partial f_{n}}{\partial x_{2}} &.&.&\frac{\partial f_{n}}{\partial x_{n}}\end{pmatrix}-\begin{pmatrix}p_{1} &0&.&.&0\\ 0&p_{2}&.&.&0\\.&.&.&.&.\\.&.&.&.&.\\ 0&0&.&.&p_{n}\end{pmatrix}.\] (31) After obtaining the Kovalevskaya matrix, we evaluate it for different dominant balances and determine the eigenvalues. If the eigenvalues take the form \((-1,r_{2},r_{3},...,r_{n})\), where \(r_{2},r_{3},...>0\), then the singularity is regarded as general and will occur regardless of the initial conditions of the system. Conversely, if any of the eigenvalues \(r_{2},r_{3},...\) are negative, the singularity is considered local and will only occur for certain sets of initial conditions. ## 4 Goriely-Hyde analysis of \(f(T,\phi)\) models ### Model I In the first model, we consider a specific form for \(G(T)\) as given in [76, 84]: \[G(T)=\beta T\ln\left(\frac{T}{T_{0}}\right)\,, \tag{32}\] where \(\beta\) is a constant and \(T_{0}\) represents the value of \(T\) at the initial epoch. This model, which has been investigated in [84], exhibits physically favorable critical points and offers an interesting approach for modeling the evolution of the Universe. By substituting this expression into Eqs. 
(26)-(27), the effective dark energy density and pressure terms are reduced to: \[\rho_{de}=\frac{\dot{\phi}^{2}}{2}+V(\phi)-6\beta H^{2}\ln\left(\frac{6H^{2}} {T_{0}}\right)-12H^{2}\beta\,, \tag{33}\] \[p_{de}=\frac{\dot{\phi}^{2}}{2}-V(\phi)+6\beta H^{2}\ln\left(\frac{6H^{2}}{T_{0}} \right)+12H^{2}\beta+4\dot{H}\left(\beta\ln\left(\frac{6H^{2}}{T_{0}}\right)+3 \beta\right)\,. \tag{34}\] In order to analyze the dynamics of the scalar-torsion \(f(T,\phi)\) gravity model, [76] introduced a set of dimensionless phase space variables to represent the system in an autonomous form. These variables are defined as 2 follows: Footnote 2: Note that it is not necessary that the variables will be defined in this same way for all cosmological paradigms, as we shall see later in the paper too. In fact, one can use different variables for the same paradigm too if required or wished for. See, for example, [85, 86, 87] for extended discussions on the same \[x=\frac{\kappa\dot{\phi}}{\sqrt{6}H}\,,\qquad\quad y=\frac{ \kappa\sqrt{V}}{\sqrt{3}H}\,,\qquad\quad z=-4\beta\kappa^{2}\,,\qquad\quad u= -2\beta\ln\left(\frac{T}{T_{0}}\right)\kappa^{2}\,, \tag{35}\] \[\rho=\frac{\kappa\sqrt{\rho_{r}}}{\sqrt{3}H}\,,\qquad\quad\lambda =-\frac{V_{,\phi}(\phi)}{\kappa V(\phi)}\,,\qquad\quad\Theta=\frac{V(\phi) \,,V_{,\phi\phi}}{V_{,\phi}(\phi)^{2}}\,. \tag{36}\] These dimensionless variables allow for a simplified representation of the system's dynamics and facilitate the analysis of the scalar-torsion \(f(T,\phi)\) gravity model. Using these variables, one can finally write the cosmological equations of this model as a dynamical system as follows : \[\frac{dx}{dN} =-\frac{x\rho^{2}-3x\left(u-x^{2}+y^{2}+z-1\right)}{2u+3z-2}-3x+ \sqrt{\frac{3}{2}}\lambda y^{2}\,,\] \[\frac{dy}{dN} =\frac{-y\rho^{2}+3y\left(u-x^{2}+y^{2}+z-1\right)}{2u+3z-2}- \sqrt{\frac{3}{2}}\lambda yx\,,\] \[\frac{du}{dN} =\frac{z\rho^{2}-3z\left(u-x^{2}+y^{2}+z-1\right)}{2u+3z-2}\,,\] \[\frac{d\rho}{dN} =-\frac{\rho\left(\rho^{2}+u+3x^{2}-3y^{2}+3z-1\right)}{2u+3z-2}\,,\] \[\frac{dz}{dN} =0\,,\] \[\frac{d\lambda}{dN} =-\sqrt{6}(\Theta-1)x\lambda^{2}\,.\] For our analysis, we would be considering \(\lambda\) to be a constant which is not equal to zero (which would again mean that we are considering an exponential potential form as we remarked earlier ). Furthermore, we consider that \(2u>>3z-2\) ( which can be justified considering the forms of z and u we have described earlier ). This would allow us to write the dynamical equations as \[\frac{dx}{dN} =-\frac{x\rho^{2}-3x\left(u-x^{2}+y^{2}+z-1\right)}{2u}-3x+\sqrt{ \frac{3}{2}}\lambda y^{2}\,, \tag{37}\] \[\frac{dy}{dN} =\frac{-y\rho^{2}+3y\left(u-x^{2}+y^{2}+z-1\right)}{2u}-\sqrt{ \frac{3}{2}}\lambda yx\,,\] (38) \[\frac{du}{dN} =\frac{z\rho^{2}-3z\left(u-x^{2}+y^{2}+z-1\right)}{2u}\,,\] (39) \[\frac{d\rho}{dN} =-\frac{\rho\left(\rho^{2}+u+3x^{2}-3y^{2}+3z-1\right)}{2u}\,,\] (40) \[\frac{dz}{dN} =0\,,\] (41) \[\frac{d\lambda}{dN} =0,. \tag{42}\] Now we are in the right position to start off our singularity analysis. 
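Before truncating the system it can be instructive to integrate the reduced equations (37)-(42) numerically. The following is an illustrative sketch only (it is not taken from [76]); the values of \(\lambda\), \(z\) and the initial conditions are hypothetical and chosen merely to exercise the equations, since trajectories approaching \(u\to 0\) can become stiff.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch: integrate the reduced Model I system (37)-(42);
# lambda and z are constants of motion here (Eqs. (41)-(42)).
lam, z = 1.0, 0.1                                   # hypothetical parameter values

def rhs(N, v):
    x, y, u, rho = v
    s = u - x**2 + y**2 + z - 1.0                   # recurring combination
    dx = -(x * rho**2 - 3.0 * x * s) / (2.0 * u) - 3.0 * x + np.sqrt(1.5) * lam * y**2
    dy = (-y * rho**2 + 3.0 * y * s) / (2.0 * u) - np.sqrt(1.5) * lam * x * y
    du = (z * rho**2 - 3.0 * z * s) / (2.0 * u)
    drho = -rho * (rho**2 + u + 3.0 * x**2 - 3.0 * y**2 + 3.0 * z - 1.0) / (2.0 * u)
    return [dx, dy, du, drho]

v0 = [0.05, 0.1, 0.5, 0.3]                          # hypothetical initial conditions at N = 0
sol = solve_ivp(rhs, (0.0, 5.0), v0, rtol=1e-8)
print(sol.y[:, -1])                                 # state (x, y, u, rho) after a few e-folds
```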
The first truncation that we consider is given by \[\hat{f}=\begin{pmatrix}\sqrt{\frac{3}{2}}\lambda y^{2}\\ 3y^{3}/2u\\ 3z/2u\\ \rho^{3}/2u\end{pmatrix} \tag{43}\] Using the ansatz of the Goriely-Hyde method, we get the exponents to be \((1/2,-1/4,1/2,-1/4)\) from which we can get the dominant balances to be \[a_{1}=\left(-\frac{\lambda z}{\sqrt{2}},\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3}},\sqrt{3z},\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{2}=\left(-\frac{\lambda z}{\sqrt{2}},-\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3}},\sqrt{3z},\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{3}=\left(-\frac{\lambda z}{\sqrt{2}},\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3}},-\sqrt{3z},\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{4}=\left(-\frac{\lambda z}{\sqrt{2}},\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3}},\sqrt{3z},-\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{5}=\left(-\frac{\lambda z}{\sqrt{2}},\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3}},-\sqrt{3z},-\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{6}=\left(-\frac{\lambda z}{\sqrt{2}},-\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3}},-\sqrt{3z},\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{7}=\left(-\frac{\lambda z}{\sqrt{2}},-\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3}},\sqrt{3z},-\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{8}=\left(-\frac{\lambda z}{\sqrt{2}},-\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3}},-\sqrt{3z},-\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right) \tag{44}\] We can now write the Kovalevskaya matrix to be \[R=\left(\begin{array}{cccc}-\frac{1}{2}&\sqrt{6}\lambda y&0&0\\ 0&\frac{9y^{2}}{2u}+\frac{1}{4}&-\frac{3y^{3}}{2u^{2}}&0\\ 0&0&-\frac{3z^{2}}{2u^{2}}-\frac{1}{2}&0\\ 0&0&\frac{\rho^{3}}{2u^{2}}&\frac{1}{4}-\frac{3\rho^{2}}{2u}\end{array}\right) \tag{45}\] Using the dominant balances we introduced in (44), we can now plug them into the Kovalevskaya matrix (45) to get the eigenvalues to be \[r=(-1,-1/2,-1/2,-1/2) \tag{46}\] We note that all the other eigenvalues besides the initial -1 are also negative, which means that according to the Goriely-Hyde method the singularities of this system with regards to this truncation can occur only for a limited set of initial conditions. Coupled with the fact that the dominant balances (44) have complex entries, this would mean that this truncation tells us that the singularities for this system may occur in non-finite time.
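
The steps leading from the truncation (43) and the quoted exponents to the matrix (45) and the eigenvalues (46) are purely mechanical, so they lend themselves to symbolic cross-checking. The sketch below is such a minimal aid (not the derivation used in the text): it forms the Kovalevskaya matrix (31) for the truncation (43), substitutes the first candidate balance of (44), and evaluates the eigenvalues numerically for the illustrative substitution \(z=\lambda=1\); its output is meant only to be compared against (45)-(46).

```python
import numpy as np
import sympy as sp

x, y, u, rho, z, lam = sp.symbols('x y u rho z lambda')
X = sp.Matrix([x, y, u, rho])

# truncation (43); z enters only as a parameter since dz/dN = 0
fhat = sp.Matrix([sp.sqrt(sp.Rational(3, 2)) * lam * y**2,
                  3 * y**3 / (2 * u),
                  3 * z / (2 * u),
                  rho**3 / (2 * u)])

# exponents p_i of the ansatz x_i = a_i tau^{p_i} quoted in the text
p = [sp.Rational(1, 2), -sp.Rational(1, 4), sp.Rational(1, 2), -sp.Rational(1, 4)]

# Kovalevskaya matrix (31): Jacobian of the truncation minus diag(p)
R = fhat.jacobian(X) - sp.diag(*p)

# evaluate R on the first candidate balance of (44) for the sample values z = lambda = 1
a1 = {x: -lam * z / sp.sqrt(2),
      y: sp.I * z**sp.Rational(1, 4) / (sp.sqrt(2) * 3**sp.Rational(1, 4)),
      u: sp.sqrt(3 * z),
      rho: 3**sp.Rational(1, 4) * z**sp.Rational(1, 4) / sp.sqrt(2)}
R_num = np.array(R.subs(a1).subs({z: 1, lam: 1}).evalf().tolist(), dtype=complex)

print(np.round(np.linalg.eigvals(R_num), 3))   # to be compared with the eigenvalues quoted in (46)
```
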
The second truncation that we consider is given by \[\hat{f}=\begin{pmatrix}3x^{3}/2u\\ -\sqrt{\frac{3}{2}}\lambda xy\\ \rho^{2}z/2u\\ 3\rho y^{2}/2u\end{pmatrix} \tag{47}\] Using the ansatz of the Goriely-Hyde method, we get the exponents to be \(p=(-1,-1,-1,-3/2)\) from which we can get the dominant balances to be \[a_{1}=\left(\frac{\sqrt{\frac{2}{3}}}{\lambda},\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{2}=\left(-\frac{\sqrt{\frac{2}{3}}}{\lambda},\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{3}=\left(\frac{\sqrt{\frac{2}{3}}}{\lambda},-\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{4}=\left(\frac{\sqrt{\frac{2}{3}}}{\lambda},\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},-\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{5}=\left(-\frac{\sqrt{\frac{2}{3}}}{\lambda},-\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{6}=\left(-\frac{\sqrt{\frac{2}{3}}}{\lambda},\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},-\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{7}=\left(\frac{\sqrt{\frac{2}{3}}}{\lambda},-\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},-\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{8}=\left(-\frac{\sqrt{\frac{2}{3}}}{\lambda},-\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},-\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] We again see that the balances have complex entries 3 while the the Kovalevskaya matrix can be written as Footnote 3: At this point we would like to highlight that complex entries in \(\mathbf{\hat{a}}\) observed for the previous truncation and this one ( and which will be observed for Model II as well for a few truncations) are completely consistent with the fact that the system consists of expansion normalized variables which are real. As mentioned in section 2, complex entries for various \(\mathbf{a}\) suggest that the singularities will be non-finite time in nature and hence these quantities taking up complex values is consistent with the analysis as shown in [69]. Similar case has been for various cosmological systems (for example, see [71, 72]) \[.R=\left(\begin{array}{ccc}\frac{9x^{2}}{2u}+1&0&-\frac{3x^{3}}{2u^{2}}&0\\ -\sqrt{\frac{3}{2}}\lambda y&1-\sqrt{\frac{3}{2}}\lambda x&0&0\\ 0&0&1-\frac{\rho^{2}z}{2u^{2}}&\frac{\rho z}{u}\\ 0&\frac{3\rho y}{u}&-\frac{3\rho y^{2}}{2u^{2}}&\frac{3\rho^{2}}{2u}+\frac{3}{ 2}\end{array}\right) \tag{49}\] Using the dominant balances we introduced in (48), we can now plug them into the Kovalevskaya matrix (49) to get the eigenvalues to be \[r\sim(-1,1.27,-1.5,1.27) \tag{50}\] We note that as one of the other eigenvalues (-1.5) besides the initial -1 is also negative, according to the Goriely-Hyde method the singularities of this system with regards to this truncation can occur only for a limited set of initial conditions. Coupled with the fact that the dominant balances (48) have complex entries, this would mean that this truncation tells us that the singularities for this system may occur in non-finite time. 
The third truncation that we consider is given by \[\hat{f}=\begin{pmatrix}-\rho^{2}x/2u\\ -\rho^{2}y/2u\\ 3x^{2}z/2u\\ -3\rho z/2u\end{pmatrix} \tag{51}\] Using the ansatz of the Goriely-Hyde method, we get the exponents to be \((1/2,1/2,1,1/4)\) from which we can get the dominant balances to be \[\begin{array}{c}a_{1}=\left(2\sqrt{6z},2\sqrt{6z},-6z,2\sqrt{3z}\right)\\ \\ a_{2}=\left(-2\sqrt{6z},2\sqrt{6z},-6z,2\sqrt{3z}\right)\\ \\ a_{3}=\left(2\sqrt{6z},-2\sqrt{6z},-6z,2\sqrt{3z}\right)\\ \\ a_{4}=\left(2\sqrt{6z},2\sqrt{6z},-6z,-2\sqrt{3z}\right)\\ \\ a_{5}=\left(-2\sqrt{6z},-2\sqrt{6z},-6z,2\sqrt{3z}\right)\\ \\ a_{6}=\left(-2\sqrt{6z},2\sqrt{6z},-6z,-2\sqrt{3z}\right)\\ \\ a_{7}=\left(2\sqrt{6z},-2\sqrt{6z},-6z,-2\sqrt{3z}\right)\\ \\ a_{8}=\left(-2\sqrt{6z},-2\sqrt{6z},-6z,-2\sqrt{3z}\right)\end{array} \tag{52}\] We can now write the Kovalevskaya matrix to be \[.R=\left(\begin{array}{cccc}-\frac{\rho^{2}}{2u}-\frac{1}{2}&0&\frac{\rho^{ 2}x}{2u^{2}}&-\frac{\rho x}{u}\\ 0&-\frac{\rho^{2}}{2u}-\frac{1}{2}&\frac{\rho^{2}y}{2u^{2}}&-\frac{\rho y}{u} \\ \frac{3xz}{u}&0&-\frac{3x^{2}z}{2u^{2}}-1&0\\ 0&0&\frac{3\rho z}{2u^{2}}&-\frac{3z}{2u}-\frac{1}{4}\end{array}\right) \tag{53}\] Using the dominant balances we introduced in (52), we can now plug them into the Kovalevskaya matrix (53) to get the eigenvalues to be \[r\sim(-1,1/2,-0.1,-0.1) \tag{54}\] We note that as two of the other eigenvalues besides the initial -1 are also negative, according to the Goriely-Hyde method the singularities of this system with regards to this truncation can occur only for a limited set of initial conditions. But in this case we see something which we didn't in the previous two truncations ; the dominant balances (52) do not have complex entries. This means that this truncation tells us that it is definitely possible to have singularities occuring in finite-time for this particular model. While we can go on and evaluate more truncations, we find that in no other truncation would we see something which we have not observed already in these three truncations. Namely that there is no truncation for this system for which the eigenvalues besides -1 are all positive and so it does seem like for this model the singularities will not be general and can only happen for a limited set of initial conditions. ### Model II In this scenario, we consider the function \(G(T)\) to be of the form \(G(T)=T+\alpha T^{2}\), where \(\alpha\) is a constant [88]. This represents a slight extension beyond the Teleparallel Equivalent of General Relativity (TEGR), as \(\alpha=0\) corresponds to the TEGR model. For this particular \(G(T)\), Eqs. (36)-(37) can be expressed as follows: \[\rho_{de} = \frac{\dot{\phi}^{2}}{2}+V(\phi)-T(1+3T\alpha)\,, \tag{55}\] \[p_{de} = \frac{\dot{\phi}^{2}}{2}-V(\phi)+T(1+3T\alpha)+4\dot{H}(1+6T\alpha )\,. \tag{56}\] To establish an independent dynamical system, we introduce dimensionless variables defined as: \[x=\frac{\kappa\dot{\phi}}{\sqrt{6}H}\,,\qquad\quad y=\frac{ \kappa\sqrt{V}}{\sqrt{3}H}\,,\qquad\quad z=-2\kappa^{2}\,,\qquad\quad u=-36H^{ 2}\alpha\kappa^{2}\,, \tag{57}\] \[\rho=\frac{\kappa\sqrt{\rho_{r}}}{\sqrt{3}H}\,,\qquad\quad\lambda =-\frac{V_{,\phi}(\phi)}{\kappa V(\phi)}\,,\qquad\quad\Theta=\frac{V(\phi)V _{,\phi\phi}}{V_{,\phi}(\phi)^{2}}\,. 
\tag{58}\] Consequently, the corresponding dynamical system can be obtained as, \[\frac{dx}{dN}=-\frac{x\left(\rho^{2}-3\left(u-x^{2}+y^{2}+z-1 \right)\right)}{2(2u+z-1)}-3x+\sqrt{\frac{3}{2}}\lambda y^{2}\,,\] \[\frac{dy}{dN}=-\frac{1}{2}y\left(\frac{\rho^{2}-3\left(u-x^{2}+y ^{2}+z-1\right)}{2u+z-1}+\sqrt{6}\lambda x\right)\,,\] \[\frac{du}{dN}=\frac{u\left(\rho^{2}-3\left(u-x^{2}+y^{2}+z-1 \right)\right)}{2(2u+z-1)}\,,\] \[\frac{d\rho}{dN}=-\frac{\rho\left(\rho^{2}+5u+3x^{2}-3y^{2}+z-1 \right)}{2(2u+z-1)}\,,\] \[\frac{dz}{dN}=0\,,\] \[\frac{d\lambda}{dN}=-\sqrt{6}(\Theta-1)x\lambda^{2}\,.\] We again consider \(\lambda\) to be a constant here, which would mean that we are interested in exponential potentials. Furthermore, we assume that \(2u>>z-1\) which is again not hard to justify considering the definitions of these quantities in (57). By taking these considerations into account, the dynamical system takes the form \[\frac{dx}{dN} =-\frac{x\left(\rho^{2}-3\left(u-x^{2}+y^{2}+z-1\right)\right)}{4u}-3 x+\sqrt{\frac{3}{2}}\lambda y^{2}\,, \tag{59}\] \[\frac{dy}{dN} =-\frac{1}{2}y\left(\frac{\rho^{2}-3\left(u-x^{2}+y^{2}+z-1\right) }{2u}+\sqrt{6}\lambda x\right)\,,\] (60) \[\frac{du}{dN} =\frac{u\left(\rho^{2}-3\left(u-x^{2}+y^{2}+z-1\right)\right)}{4u }\,,\] (61) \[\frac{d\rho}{dN} =-\frac{\rho\left(\rho^{2}+5u+3x^{2}-3y^{2}+z-1\right)}{4u}\,,\] (62) \[\frac{dz}{dN} =0\,,\] (63) \[\frac{d\lambda}{dN} =0\,. \tag{64}\] We can now start with the Goriely-Hyde analysis of this system, with the first truncation that we consider being \[\hat{f}=\begin{pmatrix}\sqrt{\frac{3}{2}}\lambda y^{2}\\ -y\rho^{2}/4u\\ 3y^{2}\\ -3\rho x^{2}/4u\end{pmatrix} \tag{65}\] Using the ansatz of the Goriely-Hyde method, we get the exponents to be \((-1,-1,-1,-1)\) from which we can get the dominant balances to be \[a_{1}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},\frac{1}{\lambda}\sqrt{\frac{2}{3} },\frac{1}{2\lambda^{2}},\frac{\sqrt{2}}{\lambda}\right)\] \[a_{2}=\left(-\frac{1}{\lambda}\sqrt{\frac{2}{3}},\frac{1}{\lambda}\sqrt{\frac{2 }{3}},\frac{1}{2\lambda^{2}},\frac{\sqrt{2}}{\lambda}\right)\] \[a_{3}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},\frac{\sqrt{2}}{\lambda}\right)\] \[a_{4}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}}{\lambda}\right)\] \[. 
\tag{66}\] \[a_{5}=\left(-\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},\frac{\sqrt{2}}{\lambda}\right)\] \[a_{6}=\left(-\frac{1}{\lambda}\sqrt{\frac{2}{3}},\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}}{\lambda}\right)\] \[a_{7}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}}{\lambda}\right)\] \[a_{8}=\left(-\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}}{\lambda}\right)\] We can now write the Kovalevskaya matrix to be \[.R=\left(\begin{array}{cccc}1&\sqrt{6}\lambda y&0&0\\ 0&1-\frac{\rho^{2}}{4u}&\frac{\rho^{2}y}{4u^{2}}&-\frac{\rho y}{2u}\\ 0&6y&1&0\\ -\frac{3\rho x}{2u}&0&\frac{3\rho x^{2}}{4u^{2}}&1-\frac{3x^{2}}{4u}\end{array}\right) \tag{67}\] Using the dominant balances we introduced in (66), we can now plug them into the Kovalevskaya matrix (67) to get the eigenvalues to be \[r\sim(-1,1,-2\sqrt{2},2\sqrt{2}) \tag{68}\] As we have one of the eigenvalues besides -1 also being negative, this truncation tells us that the singularities that could appear for this model would also only be occuring for a limited set of initial conditions for the variables. Furthermore given that the dominant balances (66) all have real entries then this would mean that the singularities only appear in finite time. The second truncation that we would be considering is given by \[\hat{f}=\begin{pmatrix}\sqrt{\frac{3}{2}}\lambda y^{2}\\ -y\rho^{2}/4u\\ 3y^{2}\\ -3\rho x^{2}/4u\end{pmatrix} \tag{69}\] Using the ansatz of the Goriely-Hyde method, we get the exponents to be \(p=(-1,-1,-1,-1)\) from which we can get the dominant balances to be \[\begin{split} a_{1}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}}, \frac{\sqrt{2}i}{\lambda},-\frac{1}{2\lambda^{2}},\frac{\sqrt{2}i}{\lambda} \right)\\ \\ a_{2}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{\sqrt{2}i}{ \lambda},-\frac{1}{2\lambda^{2}},\frac{\sqrt{2}i}{\lambda}\right)\\ \\.\\ a_{3}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},\frac{\sqrt{2}i}{ \lambda},-\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}i}{\lambda}\right)\\ \\ a_{4}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{\sqrt{2}i}{ \lambda},-\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}i}{\lambda}\right)\\ \end{split} \tag{70}\] We can now write the Kovalevskaya matrix to be \[.R=\left(\begin{array}{cccc}1-\frac{\rho^{2}}{4u}&0&\frac{\rho^{2}x}{4u^{2 }}&-\frac{\rho x}{2u}\\ -\sqrt{\frac{3}{2}}\lambda y&1-\sqrt{\frac{3}{2}}\lambda x&0&0\\ \frac{3x}{2}&0&1&0\\ 0&\frac{3\rho y}{2u}&-\frac{3\rho y^{2}}{4u^{2}}&\frac{3y^{2}}{4u}+1\end{array}\right) \tag{71}\] Using the dominant balances we introduced in (70), we can now plug them into the Kovalevskaya matrix (71) to get the eigenvalues to be \[r\sim(-1,1.6,3.7,-0.2) \tag{72}\] We again see that there are eigenvalues besides -1 which are negative, which again suggests that the model may not have general singularities. Furthermore this truncation also suggests that singularities could take place in non-finite time as shown by the complex entries in the dominant balance (70). While we can again go on for more truncations, what we have found out is that the other truncations do not offer anything new other than what we have seen so far. Namely, no truncation suggests that the model can allow for general singularities and so we are not going to be evaluating for more truncations here. 
## 5 Physical classification of the singularities

Until now, we have discussed the singularity structure within the dark energy scenario from a dynamical perspective. However, it is insufficient to merely acknowledge the existence of singularities in this system from a physical standpoint. Thus, it becomes necessary to appropriately classify the potential types of singularities that could occur in this model. Various types of physical singularities for cosmology at a specific time \(t=t_{s}\), where \(t_{s}\) represents the occurrence of the singularities, can be classified as follows [57, 89]:

* Type I ("Big Rip"): In this case, the scale factor \(a\), effective energy density \(\rho_{\rm eff}\), and effective pressure density \(p_{\rm eff}\) diverge.
* Type II ("Sudden/Quiescent singularity"): In this case, \(p_{\rm eff}\) diverges, as well as the derivatives of the scale factor beyond the second derivative.
* Type III ("Big Freeze"): In this case, the derivative of the scale factor from the first derivative onwards diverges.
* Type IV ("Generalized sudden singularities"): In this case, the derivative of the scale factor diverges from a derivative higher than the second.

Among these classifications, Type I singularities are considered strong singularities since they have the ability to distort finite objects, while singularities of Type II, Type III, and Type IV are regarded as weak singularities as they cannot be perceived as either the beginning or the end of the universe. Although there are other minor types of singularities, such as Type V singularities or "w" singularities, we will focus solely on Type I to Type IV singularities here. The most general form of the Hubble parameter for investigating singularities within the aforementioned classified types is expressed as [72]: \[H(t)=f_{1}(t)+f_{2}(t)(t-t_{s})^{\epsilon} \tag{73}\] Here, \(f_{1}(t)\) and \(f_{2}(t)\) are assumed to be nonzero regular functions at the time of the singularity, and similar conditions apply to their derivatives up to the second order. Additionally, \(\epsilon\) is a real number. It is not mandatory for the Hubble parameter (73) to be a solution to the field equations; however, we will consider this case and explore the implications of this assumption on the singularity structure based on our dynamical analysis. First, we observe that none of the variables \(x\), \(y\), or \(z\) as defined in (35) and (57) can ever become singular for any cosmic time value. The singularities that can occur considering the Hubble parameter as defined in (73) are as follows:

* For \(\epsilon<-1\), a Type I (Big Rip) singularity occurs.
* For \(-1<\epsilon<0\), a Type III singularity occurs.
* For \(0<\epsilon<1\), a Type II singularity occurs.
* For \(\epsilon>1\), a Type IV singularity occurs.

Another ansatz useful for classifying singularities was introduced in [61] whereby the scale factor was written as: \[a(t)=g(t)(t-t_{s})^{\alpha}+f(t) \tag{74}\] where \(g(t)\) and \(f(t)\) and all their higher-order derivatives with respect to cosmic time are smooth functions of the cosmic time. For this ansatz, according to the values of the exponent \(\alpha\), one can have the following singularities (both sets of exponent ranges are also collected in the small helper sketched after this list):

* For \(\alpha<0\), a Type I singularity occurs.
* For \(0<\alpha<1\), a Type III singularity develops.
* For \(1<\alpha<2\), a Type II singularity occurs.
* For \(\alpha>2\), a Type IV singularity occurs.
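
For convenience, the two exponent-range classifications above (for the Hubble ansatz (73) and for the scale-factor ansatz (74)) are collected in the trivial Python helper below; it encodes nothing beyond the lists themselves, and the boundary values of the exponents, which the lists do not address, are flagged explicitly.

```python
def singularity_type_hubble(eps):
    # classification for the Hubble ansatz (73): H(t) = f1(t) + f2(t) (t - t_s)^eps
    if eps < -1:
        return "Type I (Big Rip)"
    if -1 < eps < 0:
        return "Type III (Big Freeze)"
    if 0 < eps < 1:
        return "Type II (Sudden/Quiescent)"
    if eps > 1:
        return "Type IV (Generalized sudden)"
    return "boundary value of eps, not covered by the classification above"

def singularity_type_scale_factor(alpha):
    # classification for the scale-factor ansatz (74): a(t) = g(t) (t - t_s)^alpha + f(t)
    if alpha < 0:
        return "Type I (Big Rip)"
    if 0 < alpha < 1:
        return "Type III (Big Freeze)"
    if 1 < alpha < 2:
        return "Type II (Sudden/Quiescent)"
    if alpha > 2:
        return "Type IV (Generalized sudden)"
    return "boundary value of alpha, not covered by the classification above"

print(singularity_type_hubble(2.5), "|", singularity_type_scale_factor(2.5))
```
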
Again, it is not mandatory for the scale factor in equation (74) to necessarily be a solution to the field equations, but we would like to consider this and equation (73) in order to gain a well-motivated understanding of the types of cosmological singularities we can encounter in the various models we have discussed so far. To proceed further, we need to express the expansion normalized variables that we defined for both models in terms of the Hubble parameter alone. To do this, we realize that we need to express the potential and the derivative of the field parameter in each case in terms of the Hubble parameter as these are the quantities on which the expansion normalized variables really depend in both the scenario ( in this scenario we are talking about representing the x and y variables in both cases in terms of the Hubble parameter). For the model \(G(T)=\beta T\ln\left(\frac{T}{T_{0}}\right)\) (32), we have \[\dot{\phi}_{\beta}^{2}=\frac{2\dot{H}}{\kappa^{2}}-\rho_{m}-\frac{4}{3}\rho_{ r}-4\dot{H}\Biggl{[}\beta\ln\left(\frac{6H^{2}}{T_{0}}\right)+3\beta\Biggr{]} \tag{75}\] While the potential for this case be written as \[V_{\beta}=\frac{\dot{H}\left(6\beta+2\beta\ln\left(\frac{6H^{2}}{T_{0}} \right)+\frac{1}{\kappa^{2}}\right)-\frac{\rho_{m}}{2}+\rho_{r}}{H^{2}}+3 \left(4\beta+2\beta\ln\left(\frac{6H^{2}}{T_{0}}\right)+\frac{1}{\kappa^{2}}\right) \tag{76}\] For the model \(G(T)=T+\alpha T^{2}\), we have the same quantities to be \[\dot{\phi}_{\alpha}^{2}=\dot{H}\left(-24\alpha H^{2}-\frac{2}{\kappa^{2}}-4 \right)-\rho_{m}-4\rho_{r} \tag{77}\] \[V_{\alpha}=H^{2}\left(12\alpha\dot{H}+\frac{3}{\kappa^{2}}+1\right)+\left(\frac{1} {\kappa^{2}}+2\right)\dot{H}+3\alpha H^{4}-\frac{\rho_{m}}{2}+\rho_{r} \tag{78}\] Using these one can express the dynamical variables used in the Goriely-Hyde analysis of both the models ((35),(36),(57),(58)) completely in terms of the Hubble parameter ( we will not write the variables out explicitly here as they have quite long expressions) and now we can use both the ansatz (73)-(74) to see under what conditions will the variables blow up. Remember that we do not want the dynamical variables to blow up and the values of the exponents of the ansatz for which they do not blow up will tell us the singularities which will be possible for these models. The interesting conclusion that actually comes out when one puts both the ansatz into the dynamical variables is that only Type IV singularities are possible for both models. None of Type I, Type II or Type III singularities can occur for any of the models for any of the ansatz (73)-(74) while Type IV singularities do take place for both the models, for any of the ansatz'. This is quite an interesting behaviour which to the best of our knowledge has only been shown in \(f(T,\phi)\) theories, in that one is only observing Type IV singularities for both of the models considered. This leads one to speculate the possibility that \(f(T,\phi)\) gravity may be better suited for cosmology than some of their other modified gravity counterparts as the theory is only admitting weak singularities. Furthermore, given the analysis from the Goriely-Hyde procedure, one is lead to conclude that such singularities can only occur for a limited set of initial conditions and may occur in finite or even non-finite time. ## 6 Concluding remarks In this paper, we have considered a well studied formulation of teleparallel dark energy in the form of \(f(T,\phi)\) gravity, where the scalar field drives the expansion of the universe. 
We considered two particular well-studied models of this theory and probed cosmological singularities for both scenarios. For this endeavor, we used a method pioneered by the works of Odintsov in recent years, in which we applied the Goriely-Hyde procedure to the various dynamical systems by which the cosmological equations of these models could be described. This allowed us to make predictions about whether singularities in these scenarios would be strongly dependent on initial physical conditions and whether they could happen in finite or nonfinite times. After this, we employed two very well-motivated ansätze for the Hubble parameter and the scale factor to reach the conclusion that one can only have Type IV singularities for both of the models considered in our work, and then only for a limited set of initial conditions. This work suggests that \(f(T,\phi)\) theories may only allow for weak cosmological singularities, which may make them better placed than some of the other modified-gravity-based dark energy regimes which allow for more singularities, and also for those of the stronger types.

## Acknowledgements

The authors would like to thank Sergei Odintsov for very helpful discussions. The research by M.K. was carried out in Southern Federal University with financial support of the Ministry of Science and Higher Education of the Russian Federation (State contract GZ0110/23-10-IF). RCN thanks the CNPq for partial financial support under the project No. 304306/2022-3. This article is based upon work from COST Action CA21136 Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse) supported by COST (European Cooperation in Science and Technology).
2309.14782
Perturbative computation of thermal characteristics of the Stoner phase transition
We apply the thermal (imaginary time) perturbative expansion to the relevant effective field theory to compute characteristics of the phase transition to the ordered state which can occur at low temperatures in the gas of (nonrelativistic) spin 1/2 fermions interacting through a short-range spin independent repulsive binary interaction potential. We show how to obtain a systematic expansion of the system's free energy depending on the densities $n_+$ and $n_-$ of spin-up and spin-down fermions. In this paper we truncate this expansion at the second order and determine, by numerically minimizing the free energy, the equilibrium proportions of $n_+$ and $n_-$ (that is, the system's polarization) as functions of the temperature, the system's overall density $n = n_+ + n_-$ and the strength of the interaction.
Oskar Grocholski, Piotr H. Chankowski
2023-09-26T09:31:02Z
http://arxiv.org/abs/2309.14782v1
# Perturbative computation of thermal characteristics of the Stoner phase transition ###### Abstract We apply the thermal (imaginary time) perturbative expansion to the relevant effective field theory to compute characteristics of the phase transition to the ordered state which can occur at low temperatures in the gas of (nonrelativistic) spin 1/2 fermions interacting through a short range spin independent repulsive binary interaction potential. We show how to obtain a systematic expansion of the system's free energy depending on the densities \(n_{+}\) and \(n_{-}\) of spin up and spin down fermions. In this paper we truncate this expansion at the second order and determine, by numerically minimizing the free energy, the equilibrium proportions of \(n_{+}\) and \(n_{-}\) (that is, the system's polarization) as functions of the temperature, the system's overall density \(n=n_{+}+n_{-}\) and the strength of the interaction. _Keywords_: Diluted gas of interacting fermions, phase transitions, thermodynamic potentials, effective field theory, scattering lengths Introduction There is a qualitative argument that in the gas of spin 1/2 fermions interacting through a short range repulsive spin-independent binary potential a phase transition to the ordered state should occur, if the interaction strength and/or the system's density is sufficiently large. Indeed, at zero temperature, when the entropy factor does not intervene, the configuration of the system in which there are more fermions in one spin state than in the other one may be energetically favoured because, due to the Pauli exclusion principle the \(s\)-wave interaction of fermions in the same spin state is impossible and the resulting decrease of the interaction energy may be greater than the associated increase of the kinetic energy (increase of the Fermi energy of the more populated spin state). Theoretical investigation of this phenomenon, called the Stoner transition, taking into account its temperature dependence, requires the full machinery of statistical mechanics. The standard textbook treatment of the problem [1, 2], equivalent to the so-called mean field approach or the Hartree-Fock approximation, employs the pseudo-potential method which allows to determine in the first order approximation the Hamiltonian spectrum and to compute the Canonical Ensemble partition function of the system. In this approximation the phase transition is continuous (with the divergent magnetic susceptibility characterized by the critical exponent \(\gamma=1\) and a finite discontinuity of the heat capacity) and at low temperatures (where the Sommerfeld expansion can be used to obtain analytical expression for the relevant chemical potentials) it occurs when [3, 1, 2] \[k_{\rm F}a_{0}\geq\frac{\pi}{2}\left[1+\frac{\pi^{2}}{12}\left(\frac{k_{\rm B }T}{\varepsilon_{\rm F}}\right)^{2}+\ldots\right],\] where the (overall) Fermi wave vector and energy \[k_{\rm F}=\left(3\pi^{2}\,\frac{N}{V}\right)^{1/3},\qquad\varepsilon_{\rm F}= \frac{\hbar^{2}k_{\rm F}^{2}}{2m_{f}}\,, \tag{1}\] characterize the density of the system and \(a_{0}>0\) is the \(s\)-wave scattering length characterizing the strength of the (repulsive) interaction. The continuous character of the Stoner transition obtained in this approximation is, however, accidental - it is due to a numerical coincidence specific for a (three-dimensional) system of spin \(s=1/2\) fermions only (in the same approximation the transition is of first order if \(s>1/2\) and/or \(D\neq 3\)). 
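
To get a feeling for the numbers involved, the quantities (1) and the low-temperature form of the mean-field criterion quoted above are trivial to evaluate; the short Python snippet below does so for a purely illustrative choice of the density and of the fermion mass (the electron mass is used merely as a stand-in and is not assumed anywhere in this work).

```python
import numpy as np
from scipy.constants import hbar, k as k_B, m_e

n = 1.0e28          # illustrative overall density N/V in m^-3 (assumed value)
m_f = m_e           # illustrative fermion mass (assumed value)

k_F = (3.0 * np.pi**2 * n) ** (1.0 / 3.0)      # Fermi wave vector, eq. (1)
eps_F = hbar**2 * k_F**2 / (2.0 * m_f)         # Fermi energy, eq. (1)
T_F = eps_F / k_B                              # corresponding Fermi temperature

def critical_kFa0(T):
    # low-temperature expansion of the mean-field criterion quoted above
    return 0.5 * np.pi * (1.0 + (np.pi**2 / 12.0) * (k_B * T / eps_F) ** 2)

print(f"k_F = {k_F:.3e} m^-1,  T_F = {T_F:.1f} K,  "
      f"critical k_F a_0 at T = 0.1 T_F: {critical_kFa0(0.1 * T_F):.4f}")
```
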
In fact, computing the system's free energy beyond the mean field approximation, using the ordinary second order perturbative expansion, it was found [4] that at low temperatures it is of the first order, just as had been suggested in [5] on the basis of the generic presence of non-analytic terms (resulting from the coupling of the order parameter to the gap-less modes) in the free energy which cause the transition to have the first order character. The character of the considered transition (its dependence on the parameter \(k_{\rm F}a_{0}\)) can be most easily investigated at zero temperature because then the problem reduces to the computation of the ground-state energy density \(E_{\Omega}/V\) of the system of fermions interacting through a binary spin-independent repulsive potential as a function of the system's density \(n=N/V\) and its polarization \[P=(N_{+}-N_{-})/N\,. \tag{2}\] Such a computation is most easily performed using the modern effective field theory approach, the application of which to this problem has been pioneered in [6]. In this approach the underlying spatially nonlocal field-theory interaction (see e.g. [7] for the exposition of the relevant formalism of the second quantization), resulting from the ordinary potential two-body interaction, is replaced by an infinite set of local (contact) effective interactions \[\hat{V}_{\rm int}=C_{0}\!\int\!d^{3}{\bf x}\,\psi_{+}^{\dagger}\psi_{+}\psi_{- }^{\dagger}\psi_{-}+\hat{V}_{\rm int}^{(C_{2})}+V_{\rm int}^{(C_{2}^{\prime})}+\ldots \tag{3}\] \(\psi_{\pm}({\bf x})\) are here the usual field operators of spin up and down fermions; the terms \(V_{\rm int}^{(C_{2})}\), \(V_{\rm int}^{(C_{2}^{\prime})}\) (which will be not needed in this work) represent local operators of lower length dimension with four fermionic fields and two spatial derivatives and the ellipsis stands for other local operators (with more derivatives and/or field operators) of yet lower length dimension (see [6]). The amount of work needed to obtain the systematic expansion of the ground state energy in powers (which can be modified by logarithms) of \(k_{\rm F}R\), where \(R\) is the characteristic length scale of the underlying two-body spin independent interaction potential, is in this way greatly reduced. This is because in this approach the coupling constants, like \(C_{0}\) in (3), of the effective local interactions are directly determined in terms of the scattering lengths \(a_{\ell}\) and effective radii \(r_{\ell}\), \(\ell=0,1,\ldots\), (which are assumed to be of order \(\sim R\)) parametrizing the low energy expansion in powers of the relative momentum \(\hbar|{\bf k}|\) of the elastic scattering amplitude of two fermions. The simplifications brought in by the effective field theory method allowed to easily reproduce [8] and generalize to arbitrary repulsive potentials and arbitrary spins \(s\)[9] the old result of Kanno [10] who computed the order \((k_{\rm F}R)^{2}\) correction to the energy density using the specific hard sphere interaction of spin \(s=1/2\) fermions. The first order character of the phase transition at \(T=0\) is then clearly seen in the form of the energy density obtained in this approximation plotted as a function of the order parameter \(P\): starting from some value of \(k_{\rm F}a_{0}\) the energy density develops the second minimum well away from the one at \(P=0\) and at \(k_{\rm F}a_{0}=1.054\) (for \(s=1/2\)) this second minimum becomes deeper than that at \(P=0\). 
However the analysis of the dependence on the order parameter of the system's energy density which includes the complete order \((k_{\rm F}R)^{3}\) corrections obtained recently in [11, 12] using the same effective field theory approach shows that, independently of the value \(s\) of the fermion spin, they have the effect of erasing the first order character of the Stoner transition, making it almost indistinguishable from the continuous one. This is reflected in the fact that the height of the hill separating the minimum at \(P\neq 0\) from the one at \(P=0\) is greatly reduced (for higher spins also the position of the nontrivial minimum of \(E_{\Omega}/V\) as a function of the relevant order parameter is strongly shifted towards its small values) compared to the situation without these corrections. Moreover, there are claims [13] based on a resummation of an infinite subclass of Feynman diagrams contributing to the ground-state energy density that the transition (at \(T=0\)) is indeed continuous. Although it is not obvious that the contributions taken into account in this resummation are really the dominant ones [12], the results it leads to seem to agree well, as far as the critical value of \(k_{\rm F}a_{0}\) is concerned, with the numerical quantum Monte Carlo simulations [14]. In view of this situation it is desirable to investigate how the higher order corrections influence the character of the Stoner phase transition at nonzero temperatures. With this goal in mind in this paper we formulate a systematic perturbative expansion of the thermodynamic potentials of the system in question applying the standard imaginary time formalism [7] within the effective field theory. We show that the expansion of the free energy is in this approach particularly simple being given by the same connected vacuum Feynman diagrams which give nonzero contributions to the energy density expressed in terms of the chemical potentials of the noninteracting system. In the numerical analysis we restrict ourselves in this paper only to the second order contributions reproducing the results obtained in [4], but with more labour the computations can be extended to higher orders as well. ## 2 Perturbative expansion of the thermodynamic potential \(\Omega(T,V,\mu_{+},\mu_{-})\) The natural equilibrium statistical physics formalism in which to treat the problem of the gas of fermions the interactions of which preserve their spins, and therefore the numbers \(N_{\sigma}\) of particles with the spin projection \(\sigma\), is the Grand Canonical Ensemble with separate chemical potentials \(\mu_{\sigma}\) associated with the individual spin projections. One is therefore interested in the statistical operator (as usually, \(\beta\equiv 1/k_{\rm B}T\)) \[\hat{\rho}=\frac{1}{\Xi_{\rm stat}}\,e^{-\beta\hat{K}}\,,\ \ \ \ \ {\rm in\ which}\ \ \ \ \hat{K}=\hat{H}_{0}-\sum_{\sigma}\mu_{\sigma}\hat{N}_{\sigma}+\hat{V}_{\rm int }\equiv\hat{K}_{0}+\hat{V}_{\rm int}\,, \tag{4}\] and in computing the statistical sum (we specify the notation to the case of spin \(1/2\) fermions, so that \(\sigma=+,-\)) \[\Xi_{\rm stat}(T,V,\mu_{+},\mu_{-})={\rm Tr}\Big{(}e^{-\beta\hat{K}}\Big{)}\,, \tag{5}\] from which all the necessary thermodynamic potentials can in principle be obtained by performing the standard steps. 
The free part \(\hat{K}_{0}\) of the operator \(\hat{K}=\hat{K}_{0}+\hat{V}_{\rm int}\), where \(\hat{V}_{\rm int}\) will be taken in the form (3), is \[\hat{K}_{0}=\sum_{{\bf p},\sigma}\left(\varepsilon_{\bf p}-\mu_{\sigma} \right)a^{\dagger}_{{\bf p},\sigma}a_{{\bf p},\sigma}=\sum_{\sigma}\int\! \frac{d^{3}{\bf p}}{(2\pi)^{3}}\left(\varepsilon_{\bf p}-\mu_{\sigma}\right) a^{\dagger}_{\sigma}({\bf p})\,a_{\sigma}({\bf p})\,, \tag{6}\] with \(\varepsilon_{\bf p}\equiv\hbar^{2}{\bf p}^{2}/2m_{f}\), in the normalizations in the finite volume \(V\) and in an infinite space, respectively. To compute perturbatively the statistical sum \(\Xi_{\rm stat}(T,V,\mu_{+},\mu_{-})\) one introduces [7] the (imaginary time) interaction picture evolution operator \[{\cal U}_{I}(\tau_{2},\tau_{1})=e^{\tau_{2}\hat{K}_{0}}\,e^{-(\tau_{2}-\tau_{ 1})\hat{K}}\,e^{-\tau_{1}\hat{K}_{0}}\,, \tag{7}\] which satisfies the differential equation \[\frac{d}{d\tau_{2}}\,{\cal U}_{I}(\tau_{2},\tau_{1})=-V^{I}_{\rm int}(\tau_{2})\,{ \cal U}_{I}(\tau_{2},\tau_{1})\,,\] (\(V^{I}_{\rm int}(\tau_{2})=e^{\tau_{2}\hat{K}_{0}}V_{\rm int}e^{-\tau_{2}\hat{K}_{ 0}}\)) with the "initial" condition \({\cal U}_{I}(\tau,\tau)=\hat{1}\) and which formally can be written in the form \[{\cal U}_{I}(\tau_{2},\tau_{1})={\rm T}_{\tau}\exp\biggl{\{}-\int_{\tau_{1}}^{ \tau_{2}}\!d\tau\,V^{I}_{\rm int}(\tau)\biggr{\}}\,,\] in which \({\rm T}_{\tau}\) is the symbol of the "chronological" ordering. Since \(e^{-\beta\hat{K}}=e^{-\beta\hat{K}_{0}}{\cal U}_{I}(\beta,0)\), the statistical sum can be represented as \[\Xi_{\rm stat}={\rm Tr}\Bigl{(}e^{-\beta\hat{K}_{0}}\,{\cal U}_{I}(\beta,0) \Bigr{)}\equiv\Xi_{\rm stat}^{(0)}\,{\rm Tr}\Bigl{(}\hat{\rho}^{(0)}\,{\cal U} _{I}(\beta,0)\Bigr{)}\,, \tag{8}\] where \(\hat{\rho}^{(0)}\) and \(\Xi_{\rm stat}^{(0)}\) are the statistical operator and the statistical sum of the noninteracting system, respectively. The perturbative expansion of \(\Xi_{\rm stat}\) is then given by the series \[\Xi_{\rm stat}=\Xi_{\rm stat}^{(0)}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\! \int_{0}^{\beta}\!d\tau_{n}\ldots\int_{0}^{\beta}\!d\tau_{1}{\rm Tr}\left(\hat {\rho}^{(0)}\,{\rm T}_{\tau}[V^{I}_{\rm int}(\tau_{n})\ldots V^{I}_{\rm int}( \tau_{1})]\right). \tag{9}\] The corresponding expansion of the potential \(\Omega(T,V,\mu_{+},\mu_{-})=-\frac{1}{\beta}\ln\Xi_{\rm stat}(T,V,\mu_{+},\mu_ {-})\) is \[\Omega=\Omega^{(0)}-\frac{1}{\beta}\ln\Biggl{\{}\sum_{n=0}^{\infty}\frac{(-1) ^{n}}{n!}\!\int_{0}^{\beta}\!d\tau_{n}\ldots\int_{0}^{\beta}\!d\tau_{1}\,{\rm Tr }\Bigl{(}\hat{\rho}^{(0)}\,{\rm T}_{\tau}[V^{I}_{\rm int}(\tau_{n})\ldots V^{ I}_{\rm int}(\tau_{1})]\Bigr{)}\Biggr{\}}\,, \tag{10}\] its first term \(\Omega^{(0)}\) being the textbook expression [1] (\(\varepsilon_{\bf k}=\hbar^{2}{\bf k}^{2}/2m_{f}\)) \[\Omega^{(0)}(T,V,\mu_{\sigma})=-\frac{1}{\beta}\,\sum_{\sigma}V\!\int\!\frac{d ^{3}{\bf k}}{(2\pi)^{3}}\ln\Bigl{(}1+e^{-\beta(\varepsilon_{\bf k}-\mu_{ \sigma})}\Bigr{)}\,. \tag{11}\] Or, since the logarithm picks up connected contributions only, \[\Omega=\Omega^{(0)}-\frac{1}{\beta}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\! \int_{0}^{\beta}\!d\tau_{n}\ldots\int_{0}^{\beta}\!d\tau_{1}{\rm Tr}\Bigl{(} \hat{\rho}^{(0)}\,{\rm T}_{\tau}[V^{I}_{\rm int}(\tau_{n})\ldots V^{I}_{\rm int }(\tau_{1})]\Bigr{)}^{\rm con}\,. \tag{12}\] In this form the expression for \(\Omega\) is just the thermal analog of the expansion of the formula1 Footnote 1: Here \(T\) denotes time and not the temperature. 
\[E_{\Omega}=E_{\Omega_{0}}-\lim_{T\to\infty}\frac{\hbar}{iT}\,\langle\Omega_{0 }|{\rm T}_{t}\exp\!\left(-\frac{i}{\hbar}\!\int_{-T/2}^{T/2}\!dt\,V^{I}_{\rm int }(t)\right)|\Omega_{0}\rangle^{\rm con}\,, \tag{13}\] used in [6, 8, 11, 12] for computing the ground state energy \(E_{\Omega}\) of the system. It is clear that the correspondence between the two formalisms is \(\beta\leftrightarrow iT/\hbar\) (it transforms the \(K\)-picture operators into the Heisenberg picture ones and vice versa). The formula (13) for the ground state energy is thus obtained from the thermal expansion (12) by taking the limit \(\beta\to\infty\) and simultaneously adjusting the chemical potential \(\mu_{\sigma}\) so that there are \(N_{\sigma}\) particles with the spin projection \(\sigma\) (see below). Evaluation of the successive terms of the expansion (12) reduces, owing to the thermal analog of the Wick formula (see [7]), to drawing all possible connected Feynman diagrams with a given numbers of different interaction vertices arising from \(\tilde{V}_{\rm int}\) joined by the oriented lines and integrating over the positions \({\bf x}\) and "times" \(\tau\) ascribed to these vertices the corresponding products of free thermal propagators \[-{\cal G}^{(0)}_{\sigma_{2},\sigma_{1}}(\tau_{2}-\tau_{1};{\bf x}_{2}-{\bf x}_ {1})=\frac{1}{\beta}\sum_{n}\int\!\frac{d^{3}{\bf k}}{(2\pi)^{3}}\,e^{-i\omega _{n}^{F}(\tau_{2}-\tau_{1})}\,e^{i{\bf k}\cdot({\bf x}_{2}-{\bf x}_{1})}\left( -\tilde{\cal G}^{(0)}_{\sigma_{2},\sigma_{1}}(\omega_{n}^{F},{\bf k})\right),\] the Fourier transforms \(-\tilde{\cal G}^{(0)}_{\sigma_{2},\sigma_{1}}\) of which have the form [7] (the definition of \(\omega_{n}^{F}\) as well as of \(\omega_{n}^{B}\) are given in (A.1) and (A.2)) \[-\tilde{\cal G}^{(0)}_{\sigma_{2},\sigma_{1}}(\omega_{n}^{F},{\bf k})=\frac{- \delta_{\sigma_{2},\sigma_{1}}}{i\omega_{n}^{F}-(\varepsilon_{\bf k}-\mu_{ \sigma_{1}})}\,,\] associated with the (oriented) lines connecting the vertices of the diagram. The resulting Feynman rules in the "momentum" space are almost identical with the ordinary ones except that the integrations over frequencies are replaced by summations over the (fermionic) Matsubara frequencies \(\omega_{n}^{F}=(\pi/\beta)(2n+1)\), \(n\in{\mathbb{Z}}\). In this way one obtains the expansion of the potential \(\Omega(T,V,\mu_{+},\mu_{-})\) the successive terms of which depend on the chemical potentials \(\mu_{+}\) and \(\mu_{-}\) which must be adjusted in successive orders of the expansion to yield through the relations \[N_{\pm}=-\left(\partial\Omega/\partial\mu_{\pm}\right)_{T,V}\,, \tag{14}\] the prescribed densities \(n_{+}=N_{+}/V\) and \(n_{-}=N_{-}/V\) of particles with the spin projections up and down. It will be instructive to recover first, using this formalism, the textbook results [1, 2] of the mean field approximation. The first correction \(\Omega^{(1)}\) to the Grand potential is given by the single diagram shown in Figure 1. The corresponding expression reads \[\Omega^{(1)}=\frac{1}{\beta}\,C_{0}\!\int_{0}^{\beta}\!d\tau\!\int\!d^{3}{\bf x }\,{\rm Tr}\Big{(}\hat{\rho}^{(0)}{\rm T}_{\tau}[\hat{\psi}_{+}^{\dagger I} \hat{\psi}_{+}^{I}\hat{\psi}_{-}^{\dagger I}\hat{\psi}_{-}^{I}]\Big{)}=C_{0}\, V\,{\cal G}^{(0)}_{++}(0,{\bf 0})\,{\cal G}^{(0)}_{--}(0,{\bf 0})\,. \tag{15}\] Using the summation formula (A.1) one obtains \[{\cal G}^{(0)}_{\pm\pm}(0,{\bf 0})=\int\!\frac{d^{3}{\bf k}}{(2\pi)^{3}}\, \Big{[}1+e^{\beta(\varepsilon_{\bf k}-\mu_{\pm})}\Big{]}^{-1}\,. 
\tag{16}\] Figure 1: The first order correction \(\Omega^{(1)}\) to the thermodynamic potential \(\Omega(T,V,\mu_{+},\mu_{-})\). Solid and dashed lines represent fermions with opposite spin projections. As will be shown in the next section, to the first order in the coupling \(C_{0}\) the free energy \(F(T,V,N_{+},N_{-})\) is given by \[F(T,V,N_{+},N_{-}) = \Omega^{(0)}(T,V,\mu_{+}^{(0)},\mu_{-}^{(0)})+N_{+}\mu_{+}^{(0)}+N _{-}\mu_{-}^{(0)} \tag{17}\] \[+ \Omega^{(1)}(T,V,\mu_{+}^{(0)},\mu_{-}^{(0)})+\dots,\] where \(\mu_{\pm}^{(0)}\) are the zeroth order chemical potentials determined by the conditions analogous to (14) but with \(\Omega\) replaced by \(\Omega^{(0)}\) given by (11). It is convenient to define the function \[f(\nu)\equiv\frac{3}{2}\!\int_{0}^{\infty}\!d\xi\,\frac{\xi^{1/2}}{1+e^{\xi- \nu}}\equiv\frac{3\sqrt{\pi}}{4}\,f_{3/2}(\nu)\,, \tag{18}\] and to rewrite these conditions in the form \[f(\nu_{\pm})=\left(\frac{\varepsilon_{\rm F}^{(0)}(n_{\pm})}{k_{\rm B}T} \right)^{3/2}\,, \tag{19}\] in which \(\nu_{\pm}\equiv\mu_{\pm}/k_{\rm B}T\) and \(\varepsilon_{\rm F}^{(0)}(n)=(6\pi^{2}n)^{2/3}\hbar^{2}/2m_{f}\) is the Fermi energy of the system of \(N=nV\) spin 0 noninteracting fermions enclosed in the volume \(V\). The function \(f(\nu)\), which is a decent monotonically growing function of \(\nu\) mapping \(\mathbb{R}\) onto \(\mathbb{R}_{+}\), has the inverse, so after writing \(n_{\pm}\) as \((n/2)(1\pm P)\) the solutions take the form2 Footnote 2: Inverting the appropriate expansions of the integral (18) given e.g. in [1] it is straightforward to find that asymptotically \[f^{-1}(x)=\left\{\begin{array}{c}\ln\!\left(\sqrt{2}-\sqrt{2-(4x/3)\sqrt{8/ \pi}}\right)+\dots,\quad x\ll 1\\ x^{2/3}\left[1-(\pi^{2}/12)\,x^{-4/3}-(\pi^{4}/80)\,x^{-8/3}-(1511\pi^{6}/20736 0)\,x^{-4}+\dots\right],\quad x\gg 1\end{array}\right.\] \[\frac{\mu_{\pm}^{(0)}}{k_{\rm B}T}=f^{-1}\!\left((1\pm P)\left(\frac{ \varepsilon_{\rm F}(n)}{k_{\rm B}T}\right)^{3/2}\right), \tag{20}\] in which \(\varepsilon_{\rm F}(n)\) the system's overall Fermi energy (1). Expressed in terms of the zeroth order chemical potentials \(\mu_{\pm}^{(0)}\), the first order correction (15) can be simply written as \(\Omega^{(1)}(T,V,\mu_{+}^{(0)},\mu_{-}^{(0)})=C_{0}V(N_{+}/V)(N_{-}/V)\), i.e. it is independent (when expressed in terms of the particle densities) of the temperature. Minimization with respect to \(N_{+}\) of \(F(T,V,N_{+},N-N_{+})\) truncated to the first order in the coupling \(C_{0}\) (at fixed \(N\)) then leads to the equilibrium condition \[\mu_{+}^{(0)}(N_{+})-\mu_{-}^{(0)}(N-N_{+})+\frac{C_{0}}{V}\,(N-2N_{+})=0\,,\] which, because \(N-2N_{+}=-NP\) and (to this order) \(C_{0}=(4\pi\hbar^{2}/m_{f})a_{0}\), can be rewritten in the familiar form [1] \[\mu_{+}^{(0)}(N_{+})-\mu_{-}^{(0)}(N_{-})=\frac{8}{3\pi}\,\varepsilon_{\rm F} \left(k_{\rm F}a_{0}\right)P\,. \tag{21}\] This leads to the continuous phase transition. The effect of the external magnetic field \({\cal H}\) can be also taken into account by simply including the interaction with it in the free part of the Hamiltonian, i.e. by replacing \(\mu_{\pm}\) in (6) by \(\tilde{\mu}_{\pm}=\mu_{\pm}\pm{\cal H}\) (the magnetic moment has been here included in \({\cal H}\) which has therefore the dimension of energy). 
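
The inversion of the conditions (19)-(20), which is needed repeatedly below, is easily done numerically. The following sketch (illustrative only; the bracketing interval and the sample values of \(t\) and \(P\) are arbitrary choices) evaluates the integral (18) with scipy and inverts it by root bracketing, returning \(\nu_{\pm}=\mu_{\pm}^{(0)}/k_{\rm B}T\) for a given reduced temperature \(t=k_{\rm B}T/\varepsilon_{\rm F}\) and polarization \(P\); the leading \(x^{2/3}\) term of the expansion quoted in the footnote is printed alongside as a rough cross-check. In the presence of the external field the same routine directly gives \(\tilde{\nu}_{\pm}\) of (22).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import expit

def f(nu):
    # eq. (18): f(nu) = (3/2) * Int_0^infinity dxi sqrt(xi) / (1 + exp(xi - nu))
    return 1.5 * quad(lambda xi: np.sqrt(xi) * expit(nu - xi), 0.0, np.inf)[0]

def nu0_pm(t, P):
    # eq. (20): nu_plus/minus = f^{-1}((1 +/- P) / t^(3/2)), with t = k_B T / eps_F
    inv = lambda rhs: brentq(lambda nu: f(nu) - rhs, -60.0, 2.0e3)
    return inv((1.0 + P) / t**1.5), inv((1.0 - P) / t**1.5)

t, P = 0.2, 0.3                       # illustrative values
nu_p, nu_m = nu0_pm(t, P)
print(nu_p, nu_m)
# leading x^(2/3) terms of the footnote expansion, as a rough consistency check
print(((1 + P) / t**1.5) ** (2 / 3), ((1 - P) / t**1.5) ** (2 / 3))
```
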
Since ultimately the free energy will be cast in the form in which its dependence on \(N_{\pm}\) and \({\cal H}\) enters only through \(\tilde{\mu}_{\pm}^{(0)}\) which should be determined from the conditions \[\frac{\tilde{\mu}_{\pm}^{(0)}}{k_{\rm B}T}=\frac{\mu_{\pm}^{(0)}\pm{\cal H}}{k_{\rm B}T}=f^{-1}\Bigg{(}(1\pm P)\left(\frac{\varepsilon_{\rm F}(n)}{k_{\rm B}T}\right)^{3/2}\Bigg{)}\,, \tag{22}\] this prescription remains valid to all orders of the expansion. In particular, in the first order approximation the equilibrium condition, written in the convenient dimensionless variables \[t\equiv\frac{T}{T_{\rm F}}\equiv\frac{k_{\rm B}T}{\varepsilon_{\rm F}}\,,\ \ \ \ \ h\equiv\frac{{\cal H}}{\varepsilon_{\rm F}}\,,\ \ \ \ \delta_{\pm}\equiv\frac{\mu_{\pm}^{(0)}}{\varepsilon_{\rm F}}\,, \tag{23}\] takes the form \[\frac{8}{3\pi}\,(k_{\rm F}a_{0})\,P+2h=t\left[f^{-1}\bigg{(}\frac{1+P}{t^{3/2}}\bigg{)}-f^{-1}\bigg{(}\frac{1-P}{t^{3/2}}\bigg{)}\right]. \tag{24}\] If the asymptotic expansion of \(f^{-1}(x)\) for \(x\gg 1\) is used, this reproduces the equilibrium condition derived in [1]. For further applications it will be convenient to write down explicitly the formula (17) (including the external magnetic field \({\cal H}\)) expressing it through the introduced dimensionless variables (23) and the polarization (2): \[\frac{6\pi^{2}}{k_{\rm F}^{3}}\,\frac{F}{\varepsilon_{\rm F}V}=-\frac{3\sqrt{\pi}}{4}\,t^{5/2}\left[f_{5/2}(\tilde{\nu}_{+})+f_{5/2}(\tilde{\nu}_{-})\right]\] \[\ \ \ \ \ \ \ \ \ \ \ +(1+P)(\tilde{\delta}_{+}-h)+(1-P)(\tilde{\delta}_{-}+h)+(k_{\rm F}a_{0})\,\frac{4}{3\pi}\,(1-P^{2})+\dots \tag{25}\] Here3 Footnote 3: By the appropriate change of variables and the integration by parts \(\Omega^{(0)}\) given by (11) is written in terms of the standard integral (26) with \(p=5/2\) [1]. \[f_{p}(\nu)=\frac{1}{\Gamma(p)}\int_{0}^{\infty}\frac{d\xi\,\xi^{p-1}}{1+e^{\xi-\nu}}\,, \tag{26}\] and \(\tilde{\nu}_{\pm}\) (and \(\tilde{\delta}_{\pm}\equiv t\tilde{\nu}_{\pm}\)) are given by (22). In the limit \(T\to 0\) (\(t\to 0\)) in which \(\tilde{\nu}_{\pm}\gg 1\), \(f_{5/2}(\nu)=(4/3\sqrt{\pi})(2/5)\nu^{5/2}+\dots\), while (cf. the expansion of the function \(f^{-1}(x)\) given in the footnote) \(\tilde{\delta}_{\pm}=(1\pm P)^{2/3}+\dots\) and the right hand side of (25) tends to \[-\frac{2}{5}\left[(1+P)^{5/3}+(1-P)^{5/3}\right]+(1+P)\left[(1+P)^{2/3}-h\right]\] \[+(1-P)\left[(1-P)^{2/3}+h\right]+(k_{\rm F}a_{0})\,\frac{4}{3\pi}\,(1-P^{2})+\dots\,.\]

## 3 Expansion of the free energy

From the thermodynamic point of view much more convenient to work with than the potential \(\Omega\) is the free energy \(F=\Omega+\mu_{+}N_{+}+\mu_{-}N_{-}\) which canonically depends on \(T\), \(V\) and the particle numbers \(N_{\pm}\). It turns out that the expansion of this potential is also simpler. We will derive it here up to the third order following the method outlined in [15]. To make the notation more transparent we will denote the chemical potentials as \[\mu_{+}\equiv x=x_{0}+x_{1}+x_{2}+\ldots,\ \ \ \ \ \mu_{-}\equiv y=y_{0}+y_{1}+y_{2}+\ldots, \tag{27}\] where the successive terms \(x_{n}\), \(y_{n}\) correspond to the successive terms \(\Omega^{(n)}\) of the expansion of the potential \(\Omega\). Introducing the notation \(\Omega_{x}^{(n)}\), \(\Omega_{y}^{(n)}\), \(\Omega_{xx}^{(n)}\), etc. for the first, second, etc.
derivatives of \(\Omega^{(n)}\) with respect to their chemical potential arguments and expanding the right hand side of the relation (\(N_{x}\equiv N_{+}\), \(N_{y}\equiv N_{-}\)) \[F =\Omega^{(0)}(x_{0}+x_{1}+x_{2}+x_{3}+\ldots,\,y_{0}+y_{1}+y_{2}+ y_{3}+\ldots)\] \[+\Omega^{(1)}(x_{0}+x_{1}+x_{2}+\ldots,\,y_{0}+y_{1}+y_{2}+\ldots)\] \[+\Omega^{(2)}(x_{0}+x_{1}+\ldots,\,y_{0}+y_{1}+\ldots)+\Omega^{(3 )}(x_{0}+\ldots,y_{0}+\ldots)+\ldots\] \[+(x_{0}+x_{1}+x_{2}+x_{3}+\ldots)N_{x}+(y_{0}+y_{1}+y_{2}+y_{3}+ \ldots)N_{y}\,,\] one obtains, using the zeroth order relations \(\Omega_{x}^{(0)}=-N_{x}\) and \(\Omega_{y}^{(0)}=-N_{y}\) and the fact that \(\Omega^{(0)}(x_{0},y_{0})=\Omega^{\rm free}(x_{0})+\Omega^{\rm free}(y_{0})\) (cf. the formula (11)), i.e. that \(\Omega_{xy}^{(0)}=0\), \[F =\left(\Omega^{(0)}+x_{0}N_{x}+y_{0}N_{y}\right)+\left(\Omega^{(1 )}\right)\] \[+\left(\Omega^{(2)}+x_{1}\,\Omega_{x}^{(1)}+y_{1}\,\Omega_{y}^{(1 )}+\frac{1}{2}\,x_{1}^{2}\,\Omega_{xx}^{(0)}+\frac{1}{2}\,y_{1}^{2}\,\Omega_{yy }^{(0)}\right)\] \[+\left(\Omega^{(3)}+x_{1}\,\Omega_{x}^{(2)}+y_{1}\,\Omega_{y}^{(2 )}+\frac{1}{2}\,x_{1}^{2}\,\Omega_{xx}^{(1)}+\frac{1}{2}\,y_{1}^{2}\,\Omega_{yy }^{(1)}+x_{1}\,y_{1}\,\Omega_{xy}^{(1)}\right. \tag{28}\] \[\qquad+x_{2}\,\Omega_{x}^{(1)}+y_{2}\,\Omega_{y}^{(1)}+x_{1}\,x_ {2}\,\Omega_{xx}^{(0)}+y_{1}\,y_{2}\,\Omega_{yy}^{(0)}+\frac{1}{6}\,x_{1}^{3} \,\Omega_{xxx}^{(0)}+\frac{1}{6}\,y_{1}^{3}\,\Omega_{yyy}^{(0)}\bigg{)}+\ldots,\] all functions being now evaluated at \(x_{0}\) and \(y_{0}\) (at \(\tilde{x}_{0}=\mu_{+}^{(0)}+{\cal H}\) and \(\tilde{y}_{0}=\mu_{-}^{(0)}-{\cal H}\) if there is an external magnetic field). The terms in the successive brackets are the successive terms of the expansion of the free energy. The first order correction \(F^{(1)}\) used in the preceding section is indeed given by \(\Omega^{(1)}(x_{0},y_{0})\) (by \(\Omega^{(1)}(\tilde{x}_{0},\tilde{y}_{0})\)). Furthermore, expanding around \(x_{0}\) and \(y_{0}\) (or \(\tilde{x}_{0}\) and \(\tilde{y}_{0}\)) the right hand side of the relation (14) which determines the chemical potential \(x\) \[-N_{x}=\Omega_{x}^{(0)}+(x_{1}+x_{2})\,\Omega_{xx}^{(0)}+\frac{1}{2}\,x_{1}^{2 }\,\Omega_{xxx}^{(0)}+\Omega_{x}^{(1)}+x_{1}\,\Omega_{xx}^{(1)}+y_{1}\,\Omega_ {xy}^{(1)}+\Omega_{x}^{(2)}+\ldots,\] and the other similar relation for \(y\), and taking into account that \(x_{0}\) and \(y_{0}\) are such that \(-N_{x}=\Omega_{x}^{(0)}\), \(-N_{y}=\Omega_{y}^{(0)}\), one obtains \[x_{1} =-\frac{\Omega_{x}^{(1)}}{\Omega_{xx}^{(0)}}\,,\] \[x_{2} =-\frac{\Omega_{x}^{(2)}}{\Omega_{xx}^{(0)}}+\frac{\Omega_{xx}^{( 1)}\Omega_{x}^{(1)}}{[\Omega_{xx}^{(0)}]^{2}}+\frac{\Omega_{xy}^{(1)}\Omega_ {y}^{(1)}}{\Omega_{xx}^{(0)}\Omega_{yy}^{(0)}}-\frac{\Omega_{xxx}^{(0)}[ \Omega_{x}^{(1)}]^{2}}{2[\Omega_{xx}^{(0)}]^{3}}\,. \tag{29}\] \(y_{1}\) and \(y_{2}\) are given by the analogous formulae. Inserting the corrections to the chemical potentials determined in this way into the formulae for \(F^{(2)}\) and \(F^{(3)}\) one finds that (again all functions are evaluated at \(x_{0}\) and \(y_{0}\) or at \(\tilde{x}_{0}\) and \(\tilde{y}_{0}\)) \[F^{(2)}=\Omega^{(2)}-\frac{[\Omega_{x}^{(1)}]^{2}}{2\,\Omega_{xx}^{(0)}}-\frac{ [\Omega_{y}^{(1)}]^{2}}{2\,\Omega_{yy}^{(0)}}\,. 
\tag{30}\] and (the formulae for \(x_{1}\) and \(y_{1}\) immediately imply that the first four terms in the last line of the formula (28) sum up to zero) that \[F^{(3)}=\Omega^{(3)}-\frac{\Omega_{x}^{(2)}\Omega_{x}^{(1)}}{ \Omega_{xx}^{(0)}}-\frac{\Omega_{y}^{(2)}\Omega_{y}^{(1)}}{\Omega_{yy}^{(0)}} \tag{31}\] \[\qquad\qquad+\frac{\Omega_{xx}^{(1)}[\Omega_{x}^{(1)}]^{2}}{2\,[ \Omega_{xx}^{(0)}]^{2}}+\frac{\Omega_{yy}^{(1)}[\Omega_{y}^{(1)}]^{2}}{2\,[ \Omega_{yy}^{(0)}]^{2}}+\frac{\Omega_{xy}^{(1)}\Omega_{x}^{(1)}\Omega_{y}^{( 1)}}{\Omega_{xx}^{(0)}\Omega_{yy}^{(0)}}-\frac{\Omega_{xxx}^{(0)}[\Omega_{x}^ {(1)}]^{3}}{6\,[\Omega_{xx}^{(0)}]^{3}}-\frac{\Omega_{yyy}^{(0)}[\Omega_{y}^{( 1)}]^{3}}{6\,[\Omega_{yy}^{(0)}]^{3}}\,.\] It will be seen that the extra terms in (30) precisely cancel the contributions to \(\Omega^{(2)}\) of those diagrams which do not contribute to the expansion of the formula (13) for the ground state energy density. The analogous cancellation of the extra terms in (31) and the in \(\Omega^{(3)}\) is demonstrated in Appendix B. ## 4 Computation of \(F^{(2)}\) Diagrams contributing to \(\Omega^{(2)}\) are shown in Figures 2 and 3 (the left one). It is straightforward to check that the contributions \(\Omega^{(2)b}\) and \(\Omega^{(2)c}\) of the ones of Figure 2 cancel against the last two terms in the formula (30). Indeed, with the help of the summation rules collected in Appendix A and taking into account that these contributions are evaluated at \(x_{0}\) and \(y_{0}\) one easily obtains (\(\Omega^{(2)c}\) is given by an analogous formula) \[\Omega^{(2)b}=\frac{C_{0}^{2}V}{2}\left(\frac{N_{-}}{V}\right)^{2}\int\!\frac {d^{3}{\bf p}}{(2\pi)^{3}}\left[\frac{d}{da}\,\frac{1}{1+e^{\beta a}}\right]_ {a=\varepsilon_{\bf p}-x_{0}}=-\frac{1}{2}\,C_{0}^{2}V\beta\,(n_{x}-n_{xx})n_ {y}^{2}\,, \tag{32}\] where the second form of \(\Omega^{(2)b}\) is given in the notation introduced in Appendix B. With the help of the formulae (B.1), (B.2) it is immediately seen that it is canceled by the second term of (30). Thus \[\Omega^{(2)b}+\Omega^{(2)c}-\frac{[\Omega_{x}^{(1)}]^{2}}{2\,\Omega_{xx}^{(0)} }-\frac{[\Omega_{y}^{(1)}]^{2}}{2\,\Omega_{yy}^{(0)}}=0\,.\] Hence, \(F^{(2)}=\Omega^{(2)a}\) evaluated at \(x_{0}\) and \(y_{0}\) (or at \(\tilde{x}_{0}\) and \(\tilde{y}_{0}\)). The integrals and sums corresponding to the left diagram of Figure 3 giving \(\Omega^{(2)a}\) can be written in three different forms (corresponding to three different routings of the internal momenta and frequencies) of which two can be composed of two "elementary" blocks \(A\) and \(B\) shown in Figure 3 right: \[\Omega^{(2)a}=-\frac{1}{2}\,C_{0}^{2}V\,\frac{1}{\beta}\sum_{l\in\mathbb{Z}} \!\int\!\frac{d^{3}{\bf q}}{(2\pi)^{3}}\,[A(\omega_{l}^{B},{\bf q})]^{2}=-\frac {1}{2}\,C_{0}^{2}V\,\frac{1}{\beta}\sum_{l\in\mathbb{Z}}\!\int\!\frac{d^{3}{\bf q }}{(2\pi)^{3}}\,[B(\omega_{l}^{B},{\bf q})]^{2}.\] where (here \(n_{\pm}({\bf p})\equiv[1+\exp\{\beta(\varepsilon_{\bf p}-\mu_{\pm}^{(0)})\}]^ {-1}\), \(\mu_{+}^{(0)}\equiv x_{0}\), \(\mu_{-}^{(0)}\equiv y_{0}\)) \[A(\omega_{l+1}^{B},{\bf q}) =\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\!\int\!\frac{d^{3}{\bf k}}{ (2\pi)^{3}}\,\frac{1}{i\omega_{n}^{F}-(\varepsilon_{\bf k}-x_{0})}\,\frac{1}{ i\omega_{l-n}^{F}-(\varepsilon_{\bf q-k}-y_{0})}\] \[=\int\!\frac{d^{3}{\bf k}}{(2\pi)^{3}}\,\frac{n_{+}({\bf k})+n_{ -}({\bf q}-{\bf k})-1}{i\omega_{l+1}^{B}-(\varepsilon_{\bf k}-x_{0}+ \varepsilon_{\bf q-k}-y_{0})}\,. 
\tag{33}\] and \[B(\omega_{l}^{B},{\bf q}) =\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\!\int\!\frac{d^{3}{\bf k}}{ (2\pi)^{3}}\,\frac{1}{i\omega_{n}^{F}-(\varepsilon_{\bf k+q}-x_{0})}\,\frac{1 }{i\omega_{n+l}^{F}-(\varepsilon_{\bf k}-y_{0})}\] \[=\int\!\frac{d^{3}{\bf k}}{(2\pi)^{3}}\,\frac{n_{+}({\bf k}+{\bf q })-n_{-}({\bf k})}{i\omega_{l}^{B}-(\varepsilon_{\bf k}-y_{0}-\varepsilon_{\bf k +q}+x_{0})}\,. \tag{34}\] (The contributions \(\Omega^{(3)a}\) and \(\Omega^{(3)b}\) of the left and right diagrams shown in Figure 7 can be written analogously with \([A(\omega_{l}^{B},{\bf q})]^{3}\) and \([B(\omega_{l}^{B},{\bf q})]^{3}\), respectively [11, 12]). With the help of the sum rule (A.5) the sum over \(l\) of two \(A\)-blocks can be done and gives (the symbol \(\int_{\bf k}\) stands for the integral over the measure \(d^{3}{\bf k}/(2\pi)^{3}\)) \[\int_{\bf k}\!\int_{\bf p}\frac{[n_{+}({\bf k})+n_{-}({\bf q}-{ \bf k})-1][n_{+}({\bf p})+n_{-}({\bf q}-{\bf p})-1]}{\varepsilon_{\bf k}+ \varepsilon_{\bf q-k}-\varepsilon_{\bf p}-\varepsilon_{\bf q-p}}\] \[\times\left(\frac{1}{1-e^{\beta(\varepsilon_{\bf k}-x_{0})}e^{ \beta(\varepsilon_{\bf q-k}-y_{0})}}-\frac{1}{1-e^{\beta(\varepsilon_{\bf p }-x_{0})}e^{\beta(\varepsilon_{\bf q-p}-y_{0})}}\right).\] The identity \[n_{+}({\bf k})+n_{-}({\bf q}-{\bf k})-1=n_{+}({\bf k})\,n_{-}({\bf q}-{\bf k}) \left[1-e^{\beta(\varepsilon_{\bf k}-x_{0})}e^{\beta(\varepsilon_{\bf q-k}-y _{0})}\right]. \tag{35}\] Figure 3: The order \(C_{0}^{2}\) diagram contributing to the correction \(\Omega^{(2)}\) and two “elementary” one-loop diagrams out of which the second order and the third order corrections with the \(C_{0}\) couplings can be constructed. Solid and dashed lines denote propagators of fermions with the spin projections \(+\) and \(-\), respectively. and the fact that the two terms in the bracket above gives equal contributions allows then to write \[\Omega^{(2)a}=C_{0}^{2}V{\int_{\bf q}}{\int_{\bf p}}{\int_{\bf k}}\frac{n_{+}({\bf k })\,n_{-}({\bf q}-{\bf k})[1-n_{+}({\bf p})-n_{-}({\bf q}-{\bf p})]}{\varepsilon _{\bf k}+\varepsilon_{{\bf q}-{\bf k}}-\varepsilon_{\bf p}-\varepsilon_{{\bf q }-{\bf p}}}\,.\] (36) It is interesting to notice that because the integral of the quartic product \(n_{+}({\bf k})\,n_{-}({\bf q}-{\bf k})n_{+}({\bf p})\,n_{-}({\bf q}-{\bf p})\) vanishes (the numerator is even with respect to the interchange \({\bf k}\leftrightarrow{\bf p}\) while the denominator is odd), the expression for \(\Omega^{(2)a}\) can be written (after the change \({\bf p}=-{\bf u}+{\bf s}\), \({\bf k}=-{\bf t}+{\bf s}\), \({\bf q}=2{\bf s}\) of the integration variables) in the form completely analogous to the expression giving \(E_{\Omega}/V\) (see [8]), the only modification being the change in the prefactor and the replacement of \(\theta(k-|{\bf v}|)\) and \(\theta(|{\bf v}|-k)\) by \(n(|{\bf v}|)\) and \(1-n(|{\bf v}|)\), respectively. (Curiously enough, we have found that this simple analogy does not work for the diagrams of Figure 7). It is straightforward to see that the expression (36) is divergent, the divergence arising from the unity in the square bracket in the numerator. 
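
The frequency summation that produces the second form of (33) (and, analogously, of (34)) is a standard Matsubara sum and can be checked numerically by brute force. The snippet below (a self-contained check; the values of \(\beta\), of the two single-particle energies and of the bosonic index \(l\) are arbitrary illustrative choices) truncates the sum over \(n\) at a large cutoff and compares it with the closed form; the two agree up to the truncation error of the sum.

```python
import numpy as np

beta, E1, E2, l = 1.7, 0.4, -0.9, 3      # illustrative: E1 = eps_k - x0, E2 = eps_{q-k} - y0
n = np.arange(-200000, 200000)

w_n = np.pi * (2 * n + 1) / beta         # fermionic Matsubara frequencies omega_n^F
w_ln = np.pi * (2 * (l - n) + 1) / beta  # omega_{l-n}^F
lhs = np.sum(1.0 / ((1j * w_n - E1) * (1j * w_ln - E2))) / beta

n_F = lambda E: 1.0 / (np.exp(beta * E) + 1.0)
w_b = 2.0 * np.pi * (l + 1) / beta       # bosonic omega_{l+1}^B
rhs = (n_F(E1) + n_F(E2) - 1.0) / (1j * w_b - E1 - E2)

print(lhs, rhs)                          # equal up to the truncation error of the sum over n
```
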
In the variables \({\bf s}\), \({\bf t}\) and \({\bf u}\) the integral over \({\bf u}\) is the one evaluated with the cutoff \(\Lambda\) in [8]. Using this result and changing once more the variables to \({\bf k}={\bf t}-{\bf s}\), \({\bf p}={\bf t}+{\bf s}\), after adding the contribution \(\Omega^{(1)}\) and expressing \(C_{0}\) in terms of the scattering length \(a_{0}\), \[C_{0}(\Lambda)=\frac{4\pi\hbar^{2}}{m_{f}}\,a_{0}\left(1+\frac{2}{\pi}\,a_{0}\Lambda+\ldots\right), \tag{37}\] [16, 11, 12], one arrives at the finite (to the second order) result \[\Omega^{(1)}+\Omega^{(2)a}=\frac{4\pi\hbar^{2}}{m_{f}}\,a_{0}\left(1+\frac{2}{\pi}\,\Lambda a_{0}+\ldots\right)V\,\frac{N_{-}}{V}\,\frac{N_{+}}{V}\] \[-\frac{\Lambda}{2\pi^{2}}\left(\frac{4\pi\hbar^{2}}{m_{f}}\,a_{0}\right)^{2}\frac{m_{f}}{\hbar^{2}}\,V\,\frac{N_{-}}{V}\,\frac{N_{+}}{V}+\Omega^{(2)a}_{\rm finite}=\frac{4\pi\hbar^{2}}{m_{f}}\,a_{0}\,V\,\frac{N_{-}}{V}\,\frac{N_{+}}{V}+\Omega^{(2)a}_{\rm finite}\,.\] The finite part of \(\Omega^{(2)a}\), \[\Omega^{(2)a}_{\rm fin}=-C_{0}^{2}V\int_{\bf q}\!\int_{\bf k}n_{+}({\bf k})\,n_{-}({\bf q}-{\bf k})\int_{\bf p}\frac{n_{+}({\bf p})+n_{-}({\bf q}-{\bf p})}{\varepsilon_{\bf k}+\varepsilon_{{\bf q}-{\bf k}}-\varepsilon_{\bf p}-\varepsilon_{{\bf q}-{\bf p}}}\,,\] upon setting first \({\bf k}={\bf k}_{1}\), \({\bf q}-{\bf k}={\bf k}_{2}\) and then replacing in the term with \(n_{-}({\bf k}_{1}+{\bf k}_{2}-{\bf p})\) the variable \({\bf k}_{1}+{\bf k}_{2}-{\bf p}\) by \({\bf p}^{\prime}\) (upon which \(\varepsilon_{{\bf k}_{1}+{\bf k}_{2}-{\bf p}}\to\varepsilon_{{\bf p}^{\prime}}\) but at the same time \(\varepsilon_{\bf p}\to\varepsilon_{{\bf k}_{1}+{\bf k}_{2}-{\bf p}^{\prime}}\)) can be cast in the convenient symmetric form \[\Omega^{(2)a}_{\rm fin}=F^{(2)}=-C_{0}^{2}V\int_{{\bf k}_{1}}\!\int_{{\bf k}_{2}}n_{+}({\bf k}_{1})\,n_{-}({\bf k}_{2})\int_{\bf p}\frac{n_{+}({\bf p})+n_{-}({\bf p})}{\varepsilon_{{\bf k}_{1}}+\varepsilon_{{\bf k}_{2}}-\varepsilon_{\bf p}-\varepsilon_{{\bf k}_{1}+{\bf k}_{2}-{\bf p}}}\,.\] (38) The expression (38) is very similar4 to the formula (5) used in [4] as the second order contribution to the system's internal energy density \(u\), except that the latter has an extra factor of 2. The foundation of the formula for \(f=u-Ts\) (which, apart from this factor of 2, is equivalent to our one) used in [4] is, however, somewhat unclear: to obtain their second order correction to the energy density \(u\) these authors took the expression (15) given in Section 11.4 of [2] which is obtained by simply using the finite temperature distributions in place of the zero temperature ones in the ordinary second order correction to the ground state energy of the system and have taken the entropy density \(s\) as given by the zeroth order textbook formula. In contrast, our expression (38) results from a systematic, well-founded expansion and the coefficient in (38) is unambiguously fixed by the cancellation of the divergence.
After integrating over the cosine of the angle between \({\bf p}\) and \({\bf k}_{1}+{\bf k}_{2}\) one can write the resulting expression in the form \[F^{(2)}=-V\,\frac{C_{0}^{2}m_{f}}{(2\pi)^{2}\hbar^{2}}\!\int_{{\bf k}_{1}}\int_{{\bf k}_{2}}\frac{n_{+}({\bf k}_{1})\,n_{-}({\bf k}_{2})}{|{\bf k}_{1}+{\bf k}_{2}|}\int_{0}^{\infty}\!dp\,p\,[n_{+}({\bf p})+n_{-}({\bf p})]\] \[\times\ln\!\left|\frac{(p-\Delta_{+})(p-\Delta_{-})}{(p+\Delta_{+})(p+\Delta_{-})}\right|, \tag{39}\] in which \[\Delta_{\pm}\equiv\frac{1}{2}\,|{\bf k}_{1}+{\bf k}_{2}|\pm\frac{1}{2}\,|{\bf k}_{1}-{\bf k}_{2}|\,.\] It is clear that the singularity at \(|{\bf k}_{1}+{\bf k}_{2}|=0\) in (39) is spurious: if \({\bf k}_{1}+{\bf k}_{2}={\bf 0}\) then \(\Delta_{-}=-\Delta_{+}\) and the innermost integral vanishes. ## 5 Numerical evaluation The most difficult part of the computation is the accurate and efficient numerical evaluation of the multiple integrals in the expression (39). Rescaling the momentum integration variables \({\bf k}_{1}=k_{\rm F}{\bf v}_{1}\), etc. and inserting \(C_{0}=(4\pi\hbar^{2}/m_{f})a_{0}\) one can write the second order contribution to the right hand side of (25) as \[\frac{6\pi^{2}}{k_{\rm F}^{3}}\,\frac{F^{(2)}}{\varepsilon_{\rm F}V}=-(k_{\rm F}a_{0})^{2}\,\frac{6}{\pi^{2}}\int_{0}^{\infty}\!dv_{1}\,v_{1}^{2}\,n(v_{1},\nu_{+},t)\!\int_{0}^{\infty}\!dv_{2}\,v_{2}^{2}\,n(v_{2},\nu_{-},t)\] \[\times\sum_{\sigma=\pm}\sum_{\sigma^{\prime}=\pm}\!\int_{-1}^{1}\!d\xi\,\frac{I(\Delta_{\sigma},\nu_{\sigma^{\prime}},t)}{\sqrt{v_{1}^{2}+v_{2}^{2}+2\xi v_{1}v_{2}}}\,, \tag{40}\] where \(\nu_{\pm}=\mu_{\pm}^{(0)}/k_{\rm B}T\equiv\delta_{\pm}/t\) (\(\delta_{\pm}=\mu_{\pm}^{(0)}/\varepsilon_{\rm F}\)), \[n(v,\nu,t)=\left[1+\exp\!\left(\frac{v^{2}}{t}-\nu\right)\right]^{-1},\] \[I(\Delta,\nu,t)=\int_{0}^{\infty}\!du\,u\,n(u,\nu,t)\,\ln\!\left|\frac{u-\Delta}{u+\Delta}\right|,\] and \[\Delta_{\pm}(v_{1},v_{2},\xi)=\frac{1}{2}\sqrt{v_{1}^{2}+v_{2}^{2}+2\xi v_{1}v_{2}}\pm\frac{1}{2}\sqrt{v_{1}^{2}+v_{2}^{2}-2\xi v_{1}v_{2}}\,.\] The trick that makes the numerical computation feasible is to construct first, for fixed values of \(t\) (temperature) and \(P\) (the system's polarization), which together, through (20), determine \(\nu_{+}\) and \(\nu_{-}\), an interpolation of the functions \(I(|\Delta|,\nu_{+},t)\) and \(I(|\Delta|,\nu_{-},t)\) (because, obviously, \(I(-|\Delta|,\nu_{\pm},t)=-I(|\Delta|,\nu_{\pm},t)\)) in the variable \(w=1/(1+|\Delta|)\) (to interpolate on the compact interval \([0,1]\)) and then to perform numerically the integrations over \(v_{1}\), \(v_{2}\) and \(\xi\) using these interpolations. In the actual code written in the _Python_ programming language the functions \(I(|\Delta|,\nu_{\pm},t)\) are evaluated with the help of the adaptive integration routine (scipy.integrate.quad; the integration domain is split into three subdomains to accurately handle the logarithmic singularity - in the relevant regions near \(w=1/(1+\Delta)\equiv w_{0}\) we substitute \(r^{3}=|w-w_{0}|\) so that the integrand behaves like \(r^{2}\ln(r)\) and can be treated using the quadrature methods - and its sharp falloff, especially for small temperatures \(t\), near \(u^{2}=t\nu\) of the distribution \(n(u,\nu,t))\) and then interpolated using the cubic spline interpolation routines of _Python_.
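As an illustration of this scheme (a minimal sketch, not the code used to produce the results reported below), the inner integral \(I(\Delta,\nu,t)\) can be evaluated with scipy.integrate.quad by splitting the domain at the logarithmic singularity \(u=\Delta\) and at the sharp falloff near \(u\simeq\sqrt{t\nu}\), and then interpolated with a cubic spline in the compact variable \(w=1/(1+|\Delta|)\); the \(r^{3}\) substitution described above is omitted here for brevity, and all function names are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import CubicSpline

def n_dist(u, nu, t):
    # n(u, nu, t) = [1 + exp(u^2/t - nu)]^{-1}
    return 1.0 / (1.0 + np.exp(u**2 / t - nu))

def I_exact(Delta, nu, t):
    # I(Delta, nu, t) = int_0^inf du u n(u, nu, t) ln|(u - Delta)/(u + Delta)|
    f = lambda u: u * n_dist(u, nu, t) * np.log(abs((u - Delta) / (u + Delta)))
    # split the domain at the log singularity u = Delta and at the sharp edge u = sqrt(t*nu)
    breaks = sorted({Delta} | ({np.sqrt(t * nu)} if nu > 0 else set()))
    pts = [0.0] + breaks + [np.inf]
    return sum(quad(f, a, b, limit=200)[0] for a, b in zip(pts[:-1], pts[1:]))

def I_interpolated(nu, t, n_grid=200):
    # interpolate I in the compact variable w = 1/(1 + |Delta|) on (0, 1);
    # I(-|Delta|) = -I(|Delta|), so only Delta >= 0 needs to be tabulated
    w = np.linspace(1e-4, 1.0 - 1e-4, n_grid)
    vals = np.array([I_exact(1.0 / wi - 1.0, nu, t) for wi in w])
    spline = CubicSpline(w, vals)
    return lambda Delta: np.sign(Delta) * spline(1.0 / (1.0 + abs(Delta)))
```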
The remaining triple integral over \(v_{1}\), \(v_{2}\) and \(\xi\) is again performed with the help of the Clenshaw-Curtis quadrature in the variables \(w_{1,2}=1/(1+v_{1,2})\) (again to have a compact integration domain and again splitting it into subdomains to better handle the regions \(v_{1}^{2}\approx t\nu_{+}\) and \(v_{2}^{2}\approx t\nu_{-}\)); the spurious singularity at \(|{\bf v}_{1}+{\bf v}_{2}|=0\) is taken care of by simply taking somewhat different numbers of points for the \(v_{1}\) and \(v_{2}\) grids. To check the correctness of the code we have first compared its results for \(t\to 0\) (replacing the distributions \(n(u,\nu,t)\) by the Heaviside theta functions) with the second order correction \(E_{\Omega}^{(2)}\) to the system's ground-state energy which, as a function of \(P\), is known analytically [10, 9] (the function \(J_{K}(x,y)\) is given e.g. by the formula (4) in [12]): \[\frac{6\pi^{2}}{k_{\rm F}^{3}}\,\frac{E_{\Omega}^{(2)}}{\varepsilon_{\rm F}V}=(k_{\rm F}a_{0})^{2}\,\frac{6}{5\pi^{2}}\,J_{K}((1+P)^{1/3},\,(1-P)^{1/3})\,.\] At \(P=0\) (equal densities of spin up and spin down fermions) \(J_{K}=4(11-\ln 4)/21\) and the right hand side of the above formula (setting in all these comparisons \(k_{\rm F}a_{0}=1\)) equals \(0.22264482\) while the _Python_ code for the right hand side of (40) gives the value \(0.22264522\). For \(P=0.5\) the code gives \(0.17184256\) to be compared with \(0.17184207\) while at \(P=0.9\) the numbers to be compared are \(0.046470057\) and \(0.046470077\) (at \(P=1\) both are zero, reflecting the impossibility of the \(s\)-wave interactions of two fermions in the same spin state). For nonzero temperatures the results obtained using the Clenshaw-Curtis quadrature have been compared with the ones obtained using the more accurate (but more time consuming) adaptive integration routine. The comparison shows that the relative uncertainty \(\Delta_{F}\) (the difference of the results of the two methods divided by their mean) is typically of order \(10^{-5}\), varying rather irregularly with \(P\) and increasing somewhat with \(t\); in our further estimates we set \(\Delta_{F}=10^{-5}\) for \(t\stackrel{{<}}{{{}_{\sim}}}0.1\), \(\Delta_{F}=1.5\times 10^{-5}\) for \(0.1<t\leq 0.2\) and \(\Delta_{F}=2\times 10^{-5}\) for \(0.2<t\). While this accuracy superficially looks quite satisfactory, it is, nevertheless, barely sufficient: for values of the parameters (\(t\) and/or \(k_{\rm F}a_{0}\)) at which spontaneous ordering appears there is a very delicate cancellation between different contributions to \(F\) and the (relative) error of order \(10^{-5}\) in \(F^{(2)}\) can, and in some cases indeed does, lead to the appearance of a very shallow fake minimum near \(P=0\). ## 6 Results For a fixed value of the temperature, the system's free energy \(F\) as a function of the polarization (and of the parameter \(k_{\rm F}a_{0}\) in which, in the approximation to which our analysis is restricted, it is a polynomial of the second order) can be efficiently obtained by evaluating numerically the integrals in (40) for several values of \(P\) and constructing the cubic spline interpolation. The resulting free energy differences, \(F(P)-F(0)\), are plotted in Figure 4 as functions of the polarization \(P\) for two temperatures: \(t=0.1\) and \(0.15\) and several values of \(k_{\rm F}a_{0}\) (obtained by constructing the interpolation based on 11 points in \(P\) only).
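The decision step of the procedure described in the following paragraphs (spline-interpolating \(F(P)\) computed on a coarse grid of \(P\) values and asking whether a minimum away from \(P=0\) is deeper than the numerical tolerance) can be sketched as follows. This is an illustrative condensation, not the actual code: the tolerance tol stands for \(\Delta_{F}F^{(2)}(0)\), and the multi-stage coarse/fine grids of the full procedure are replaced by a single dense evaluation of the spline.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def polarization_from_curve(P_grid, F_values, tol, fine_step=1e-4):
    """Decide whether a spline-interpolated F(P) curve has a genuine minimum away
    from P = 0, i.e. one lying at P >= 0.02 and deeper than F(0) by more than tol.
    P_grid is assumed to start at P = 0 and to be increasing."""
    spline = CubicSpline(P_grid, F_values)
    P_fine = np.arange(P_grid[0], P_grid[-1] + fine_step, fine_step)
    F_fine = spline(P_fine)
    j_min = int(np.argmin(F_fine))
    P_min, F_min = P_fine[j_min], F_fine[j_min]
    if P_min < 0.02 or abs(F_min - spline(0.0)) <= tol:
        return 0.0     # indistinguishable from P = 0, or a numerical artifact
    return P_min       # genuinely nonvanishing polarization
```

Applied for successively increasing values of \(k_{\rm F}a_{0}\), the first value for which a nonzero result is returned would play the role of the critical one.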
In view of the mentioned uncertainty in the computation of \(F^{(2)}\) the critical value of \(k_{\rm F}a_{0}\) and the value of the polarization \(P\) at the transition must be determined by requiring that the value of \(F\) at a minimum developing away from \(P=0\) differs from the one at \(P=0\) by at least \(\Delta_{F}F^{(2)}(0)\). In this way one can properly handle the mentioned fake minima close to \(P=0\), one of which can be observed in the right panel of Fig. 4 (for \(t=0\) the fact that such a minimum is indeed produced by the inaccuracies of the numerical code can be substantiated by comparing with the analytically known dependence of the ground-state energy on \(P\)). Figure 4: Plots of the differences \(F(P)-F(0)\) in units of \((k_{\rm F}^{3}/6\pi^{2})(\hbar^{2}k_{\rm F}^{2}/2m_{f})\) of the system of spin \(1/2\) fermions as a function of the order parameter \(P\) for two representative values of the temperature \(t\equiv T/T_{\rm F}\) as obtained in the second order of the perturbative expansion. Left: \(t=0.1\); the successive curves (from below) correspond to \(k_{\rm F}a_{0}=1.0718\) (the lowest, blue, line), \(k_{\rm F}a_{0}=1.0723\) (yellow), \(k_{\rm F}a_{0}=1.0728\) (green), \(1.0733\) (red) and \(1.0738\) (the highest, blue, line). Right: \(t=0.15\); the successive curves (from below) correspond to \(k_{\rm F}a_{0}=1.0978\) (the lowest, blue, line), \(k_{\rm F}a_{0}=1.0983\) (yellow), \(k_{\rm F}a_{0}=1.0988\) (green), \(1.0993\) (red) and \(1.0998\) (the highest, blue, line). The actual procedure which has been adopted to determine the polarization and its uncertainty is as follows. For a fixed value of the parameter \(k_{\rm F}a_{0}\) (which is successively increased from 0 in steps \(\Delta(k_{\rm F}a_{0})=0.001\)) the values of \(F\) on a preliminary grid of \(P\)-values \(P_{n}=n\Delta P\) with \(n=0,\ldots,n_{\rm max}=32\) are obtained. If the minimal value of \(F\) occurs for \(n_{\rm min}=n_{\rm max}\) the polarization is taken as the maximal (\(P=1\)); if \(n_{\rm min}=0\) or \(|F(P_{n_{\rm min}})-F(0)|\leq\Delta_{F}F^{(2)}(0)\) the polarization is taken as vanishing (\(P=0\)). If \(n_{\rm min}\neq 0,\,n_{\rm max}\) and \(|F(P_{n_{\rm min}})-F(0)|>\Delta_{F}F^{(2)}(0)\), the polarization is taken as truly nonvanishing. If it is nonvanishing for the first time (as far as the increasing values of \(k_{\rm F}a_{0}\) are concerned), one determines \(n_{\rm down}\) and \(n_{\rm up}\) such that \(|F(P_{n_{\rm min}})-F(P_{n_{\rm down/up}})|<2\Delta_{F}F^{(2)}(0)\) (of course, \(n_{\rm down}=0\) and/or \(n_{\rm up}=n_{\rm max}\) if these criteria cannot be fulfilled for intermediate values of \(n\)) and finds the values of \(F\) on a finer grid of \(P\)-values with \(|P_{j+1}-P_{j}|=0.0001\) and \(P_{n_{\rm down}}\leq P_{j}\leq P_{n_{\rm up}}\). If the minimum of \(F\) found on the finer grid occurs for \(P_{j_{\rm min}}<0.02\), it is assumed that it is a numerical artifact and the polarization is taken as vanishing. In the opposite case the polarization is taken to be nonvanishing and that value of \(k_{\rm F}a_{0}\) is recorded as the critical one (for the considered temperature).
In this case on the finer grid one seeks a range (\(P_{j_{\rm down}}\), \(P_{j_{\rm up}}\)) of \(P\) around \(P_{j_{\rm min}}\) in which \(|F(P_{j_{\rm min}})-F(P_{j})|>\Delta_{F}F^{(2)}(0)\) for \(P_{j_{\rm down}}\leq P_{j}\leq P_{j_{\rm up}}\); if such a range cannot be found the transition is classified as continuous (the polarization at the considered temperature is assumed to increase continuously from zero as \(k_{\rm F}a_{0}\) is increased) while if a nontrivial range is obtained, the transition is classified as first order and \(P_{j_{\rm min}}-P_{j_{\rm down}}\) and \(P_{j_{\rm up}}-P_{j_{\rm min}}\) are taken as the uncertainties of the determination of the polarization right at the transition. For values of \(k_{\rm F}a_{0}\) higher than the critical one (determined as described above for the considered temperature) \(F\) is evaluated on a finer grid of points \(P_{j}\) with \(P_{n_{\rm min}-1}\leq P_{j}\leq P_{n_{\rm min}+1}\) and \(P_{j+1}-P_{j}=0.001\) and the corresponding polarization is determined as the position of the minimum of \(F\) on this finer grid. In this way one finds that \((k_{\rm F}a_{0})_{\rm cr}=1.05409\) at \(t=0\) (which perfectly agrees with the known value obtained by computing the system's ground-state energy [11] and with [4]), \((k_{\rm F}a_{0})_{\rm cr}=1.05858\) at \(t=0.05\), \((k_{\rm F}a_{0})_{\rm cr}=1.07282\) at \(t=0.1\), \((k_{\rm F}a_{0})_{\rm cr}=1.09881\) at \(t=0.15\) and \((k_{\rm F}a_{0})_{\rm cr}=1.13845\) at \(t=0.2\). The corresponding values of the polarization right at the transition point are \(P_{\rm cr}=0.575^{+0.017}_{-0.019}\) (again in agreement with the value found in [11]), \(0.558^{+0.017}_{-0.017}\), \(0.477^{+0.019}_{-0.021}\), \(0.325^{+0.035}_{-0.048}\) and \(0.197^{+0.045}_{-0.096}\). The dependence of the polarization on the "gas parameter" \(k_{\rm F}a_{0}\) is shown, for a few values of the temperature \(t\), in the left panel of Figure 5. This is essentially the same plot as the one presented in [4] (the agreement with the critical values of the gas parameters at successive temperatures that can be read off from the plot there seems to be quite good) except that in Figure 5 marked are also the uncertainties in the determination (following from the procedure just described) of the polarization right at the transition. Owing to the efficiency of our numerical code (stemming basically from the trick with the interpolations) the procedure of finding the polarization of the system described above can be applied also at fixed values of \(k_{\rm F}a_{0}\) (replacing the grid in \(k_{\rm F}a_{0}\) by one in \(t\)). The resulting polarization of the system as a function of the temperature for several fixed values of the gas parameter is shown in the right panel of Figure 5. Knowing the polarization as a function of the other parameters it is possible to construct the free energy \(F(T,V,N)\equiv F(T,V,N,P(T,N/V))\) for several values of \(k_{\rm F}a_{0}\) and to determine also other thermodynamic characteristics of the system. For example, using the grid in \(t\) the second derivative of the free energy \(F(T,V,N)\) with respect to the temperature can in principle be obtained yielding the system's heat capacity. The result of such an exercise is shown in Figure 6 for two values of the "gas parameter". It shows that the discontinuity of the heat capacity at the transition point grows with the value of \(k_{\rm F}a_{0}\) (i.e. also with the increasing temperature, if \(k_{\rm F}a_{0}\) is varied).
However, for higher values of \(k_{\rm F}a_{0}\) the numerical inaccuracies do not allow for a reliable computation. Indeed, as the transition at higher temperatures becomes continuous, a divergence of the heat capacity probably starts to build up, making the numerical computation of the second derivative of the free energy unstable for \(t\stackrel{{>}}{{{}_{\sim}}}0.12\). Similarly, it is in principle possible to determine the system's polarization taking into account an infinitesimally weak external magnetic field (this, as explained, influences only the determination of the zeroth order chemical potentials \(\tilde{\mu}_{\pm}^{(0)}\) from the conditions (22)) and to compute the system's magnetic susceptibility \(\chi_{T}\) by constructing the derivative of the polarization with respect to \({\cal H}\). While such a computation seems to indicate that at least at low temperatures, at which the transition (in the approximation to which our computation is restricted) is first order, the susceptibility also has a finite discontinuity at the transition point, it is not sufficiently stable numerically to yield reliable values of \(\chi_{T}\) and it is probably more practical to obtain it by computing the (connected) two point correlation function from the formula \(\chi_{T}=(\beta/V)\tilde{G}_{\rm con}^{(2)}({\bf 0})\). We do not attempt this here. Figure 5: Polarization \(P=(N_{+}-N_{-})/N\) of the system of spin \(1/2\) fermions with a short range repulsive interaction obtained from the free energy \(F\) computed up to the second order of the perturbative expansion. In the left panel as a function of the “gas parameter” \(k_{\rm F}a_{0}\) for several values of the temperature (counting from the left): \(t\equiv T/T_{F}=0\), \(0.1\), \(0.15\) and \(0.2\) (\(T_{\rm F}\equiv\varepsilon_{\rm F}/k_{\rm B}\)). Marked are also uncertainties of the value of \(P\) right at the transition points. In the right panel as a function of the temperature for several fixed values of \(k_{\rm F}a_{0}\). ## 7 Conclusions We have developed a systematic perturbative expansion of the grand thermodynamic potential \(\Omega\) and of the free energy \(F\) of the system of (nonrelativistic) interacting spin \(1/2\) fermions. We have applied this expansion within the effective field theory in which the underlying repulsive spin-independent binary interaction of fermions is replaced by an infinite number of contact interaction terms and which allows one to directly express computed quantities in terms of the scattering lengths and effective radii which characterize the underlying interaction potential. We have shown (up to the third order but the result seems to be valid in general) that to the expansion of the free energy effectively contribute only those Feynman diagrams which give nonvanishing contributions to the ground-state energy of the system evaluated at zeroth order chemical potentials (associated with spin up and spin down fermions). Our numerical analysis has been restricted here to the first nontrivial order of the perturbative expansion (i.e. the first one going beyond the textbook mean field approximation) in which the results are still universal, i.e. they depend on the form of the underlying interaction only through the \(s\)-wave scattering length \(a_{0}\) (in the next order the results start to depend also on the \(p\)-wave scattering length \(a_{1}\) and the effective radius \(r_{0}\)).
We have devised a method for efficient numerical evaluation of the requisite nested integrals and used it to compute the system's polarization and its value right at the transition point, paying attention to the uncertainty of the determination of the latter quantity, which is crucial in assessing the character of the transition. For low temperatures, \(T\stackrel{{<}}{{\sim}}0.1\,T_{\rm F}\), we have also managed to determine the system's heat capacity encountering, however, some problems with the accuracy of numerical evaluation of the derivatives of the free energy which seem to prevent obtaining (at least without substantial improvements in the method) reliable values of the heat capacity for higher temperatures as well as determining the system's magnetic susceptibility. Of course, since the perturbative computation of the system's ground-state energy agrees with the results obtained (for specific forms of the underlying interaction) using the Quantum Monte Carlo approach only for \(k_{\rm F}a_{0}\stackrel{{<}}{{\sim}}0.5\), the results presented here cannot be taken very seriously. Moreover, it is now known that already the inclusion of the third order corrections to the system's ground-state energy (free energy at zero temperature) significantly weakens the first order character of the transition (at zero temperature) to the ordered state. For these reasons our effort summarized here should be treated rather as a preliminary step taken towards extending the computation to a higher order and towards a possible implementation of a resummation of some class of the contributions to the free energy in the spirit of the approach of [13]. Such a resummation can probably also allow one to overcome the limitation, inherent in the effective field theory approach, to sufficiently small temperatures only: as this approach relies on the clean separation of the scales (\(R\ll k_{\mathrm{F}}^{-1}\), where \(R\) is the characteristic length of the underlying interaction) it cannot be applied, at least if restricted to a finite order of the perturbative expansion, when \(k_{\mathrm{B}}T\) becomes comparable with the energy scale set by \(\varepsilon_{\mathrm{F}}\). We plan to return to these issues in the forthcoming paper. Figure 6: Heat capacity (in units of \(Nk_{\rm B}\)) of the system of spin 1/2 fermions with a short range repulsive interaction as a function of the temperature for two different fixed values of the parameter \(k_{\rm F}a_{0}\) obtained from the free energy \(F\) computed up to the second order.
## Appendix A The following summation formulae hold [7] (the limit \(\eta\to 0^{+}\) is implicit): \[\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{e^{i\eta\omega_{n}^{F}}}{i\omega_{n}^{F}-x} =\frac{1}{1+e^{\beta x}}\,, \omega_{n}^{F} \equiv\frac{\pi}{\beta}\left(2n+1\right),\] (A.1) \[\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{e^{i\eta\omega_{n}^{B}}}{i\omega_{n}^{B}-x} =\frac{1}{1-e^{\beta x}}\,, \omega_{n}^{B} \equiv\frac{2\pi}{\beta}\,n\,,\] (A.2) and, by decomposing into simple fractions, \[\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{1}{i\omega_{n}^{F}-x}\,\frac{1}{i\omega_{n+l}^{F}-y} =\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{1}{i\omega_{n}^{F}-x}\,\frac{1}{i\omega_{n}^{F}-(y-i\omega_{l}^{B})}\] \[=\frac{1}{i\omega_{l}^{B}-(y-x)}\left(\frac{1}{1+e^{\beta x}}-\frac{1}{1+e^{\beta y}}\right),\] (A.3) \[\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{1}{i\omega_{n}^{F}-x}\,\frac{1}{i\omega_{l-n}^{F}-y} =-\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{1}{i\omega_{n}^{F}-x}\,\frac{1}{i\omega_{n}^{F}-(i\omega_{l+1}^{B}-y)}\] \[=\frac{1}{i\omega_{l+1}^{B}-(y+x)}\left(\frac{1}{1+e^{\beta x}}-\frac{1}{1+e^{-\beta y}}\right).\] (A.4) Similarly, \[\frac{1}{\beta}\sum_{l\in\mathbb{Z}}\frac{1}{i\omega_{l}^{B}-x}\,\frac{1}{i\omega_{l}^{B}-y} =\frac{1}{x-y}\left(\frac{1}{1-e^{\beta x}}-\frac{1}{1-e^{\beta y}}\right).\] (A.5) Also useful is the formula \[\frac{1}{x-a_{1}}\,\frac{1}{x-a_{2}}\,\dots\,\frac{1}{x-a_{n}} =\sum_{l=1}^{n}\left(\prod_{k\neq l}^{n}\frac{1}{a_{l}-a_{k}}\right)\frac{1}{x-a_{l}}\,.\] (A.6) ## Appendix B Here we demonstrate the cancellation of the additional terms in the formula (31) for \(F^{(3)}\) against the contributions of diagrams which vanish at zero temperature. Analogous cancellation of the contribution of the diagrams shown in Figure 2 against the additional terms in (30) has been checked in the main text. It will be convenient to introduce first the following notation: \[n_{x} \equiv\int_{\bf k}\frac{1}{1+e^{\beta(\varepsilon_{\bf k}-x_{0})}}\,,\] \[n_{xx} \equiv\int_{\bf k}\frac{1}{[1+e^{\beta(\varepsilon_{\bf k}-x_{0})}]^{2}}\,,\] \[n_{xxx} \equiv\int_{\bf k}\frac{1}{[1+e^{\beta(\varepsilon_{\bf k}-x_{0})}]^{3}}\,,\] etc. From (11) one immediately obtains (all functions are evaluated at \(x_{0}\) and \(y_{0}\)) \[\Omega^{(0)}_{x} =-V\,n_{x}\,,\] \[\Omega^{(0)}_{xx} =-V\beta\,(n_{x}-n_{xx})\,,\] (B.1) \[\Omega^{(0)}_{xxx} =-V\beta^{2}\,(n_{x}-3n_{xx}+2n_{xxx})\,.\] Analogously can be written the derivatives of \(\Omega^{(0)}\) with respect to \(y\). The necessary derivatives of \(\Omega^{(1)}=C_{0}V\,n_{x}\,n_{y}\) take the form \[\Omega^{(1)}_{x} =C_{0}V\beta\,(n_{x}-n_{xx})\,n_{y}\,,\] \[\Omega^{(1)}_{y} =C_{0}V\beta\,n_{x}\,(n_{y}-n_{yy})\,,\] \[\Omega^{(1)}_{xx} =C_{0}V\beta^{2}\,(n_{x}-3n_{xx}+2n_{xxx})\,n_{y}\,,\] (B.2) \[\Omega^{(1)}_{yy} =C_{0}V\beta^{2}\,n_{x}\,(n_{y}-3n_{yy}+2n_{yyy})\,,\] \[\Omega^{(1)}_{xy} =C_{0}V\beta^{2}\,(n_{x}-n_{xx})(n_{y}-n_{yy})\,.\] To \(\Omega^{(3)}\), in addition to the two "mercedes-type" diagrams shown in Figure 7 (the contributions \(\Omega^{(3)a}\) and \(\Omega^{(3)b}\)), contribute also the "mitsubishi-type" diagrams of Figure 8 (the contributions \(\Omega^{(3)c}\) and \(\Omega^{(3)d}\)), the two diagrams of Figure 9 (the contributions \(\Omega^{(3)e}\) and \(\Omega^{(3)f}\)) and the single "audi-type" diagram of Figure 9 (\(\Omega^{(3)g}\)).
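The derivative formulae (B.1) and (B.2) follow from the pointwise identity \(\partial_{x_{0}}[1+e^{\beta(\varepsilon_{\bf k}-x_{0})}]^{-1}=\beta(n-n^{2})\) integrated over \({\bf k}\). A quick numerical sanity check of, e.g., \(\partial n_{x}/\partial x_{0}=\beta(n_{x}-n_{xx})\) can be sketched as follows (an illustrative sketch with a quadratic dispersion, the angular integration absorbed into the measure, and arbitrarily chosen values of \(\beta\) and \(x_{0}\)):

```python
import numpy as np
from scipy.integrate import quad

beta, x0 = 2.0, 1.0                        # illustrative values of the inverse temperature and x_0

def occ(k, x, power=1):
    # [1 + exp(beta*(eps_k - x))]^{-power} with eps_k = k^2, written via tanh to avoid overflow
    n = 0.5 * (1.0 - np.tanh(0.5 * beta * (k**2 - x)))
    return n**power

def n_p(x, power):
    # n_x, n_xx, ... : integrals over the measure k^2 dk / (2 pi^2)
    return quad(lambda k: k**2 / (2 * np.pi**2) * occ(k, x, power), 0, np.inf)[0]

h = 1e-5
dn_dx = (n_p(x0 + h, 1) - n_p(x0 - h, 1)) / (2 * h)      # numerical derivative of n_x
print(dn_dx, beta * (n_p(x0, 1) - n_p(x0, 2)))           # the two numbers should agree
```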
The computation of \(\Omega^{(3)c}\) and \(\Omega^{(3)d}\) is straightforward (it is analogous to that of \(\Omega^{(2)b}\) and \(\Omega^{(2)c}\)) and yields \[\Omega^{(3)c}+\Omega^{(3)d}=\frac{1}{6}\,C_{0}^{3}V\beta^{2}\left[(n_{x}-3n_{xx}+2n_{xxx})\,n_{y}^{3}+n_{x}^{3}\,(n_{y}-3n_{yy}+2n_{yyy})\right].\] Figure 7: The particle-particle and the particle-hole diagrams (the “mercedes-like” diagrams) contributing in the order \(C_{0}^{3}\) to \(\Omega^{(3)}\). One then readily sees that in (31) this is cancelled by the last two terms: \[\Omega^{(3)c}+\Omega^{(3)d}-\frac{\Omega^{(0)}_{xxx}[\Omega^{(1)}_{x}]^{3}}{6\,[\Omega^{(0)}_{xx}]^{3}}-\frac{\Omega^{(0)}_{yyy}[\Omega^{(1)}_{y}]^{3}}{6\,[\Omega^{(0)}_{yy}]^{3}}=0\,.\] One has now to consider the terms \(-\Omega^{(2)}_{x}\Omega^{(1)}_{x}/\Omega^{(0)}_{xx}\) and \(-\Omega^{(2)}_{y}\Omega^{(1)}_{y}/\Omega^{(0)}_{yy}\) in (31). \(\Omega^{(2)}\) is given by the three diagrams shown in Figure 3 (left) and in Figure 2. It is convenient to write the contribution of the first one, \(\Omega^{(2)a}\), in the form \[\Omega^{(2)a}=-\frac{C_{0}^{2}V}{2}\,\frac{1}{\beta}\sum_{l\in\mathbb{Z}}\!\int_{\mathbf{q}}\!\int_{\mathbf{k}}\!\int_{\mathbf{p}}\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{1}{i\omega^{F}_{n}-(\varepsilon_{\mathbf{k}}-x)}\,\frac{1}{i\omega^{F}_{n+l}-(\varepsilon_{\mathbf{k}+\mathbf{q}}-x)}\] \[\times\frac{1}{\beta}\sum_{m\in\mathbb{Z}}\frac{1}{i\omega^{F}_{m}-(\varepsilon_{\mathbf{p}}-y)}\frac{1}{i\omega^{F}_{m-l}-(\varepsilon_{\mathbf{p}-\mathbf{q}}-y)}\,.\] (B.3) Differentiating it with respect to \(x\) one obtains the expression which is a sum of the two terms \[\Omega^{(2)a}_{x}=\frac{C_{0}^{2}V}{2}\,\frac{1}{\beta}\sum_{l\in\mathbb{Z}}\!\int_{\mathbf{q}}\!\int_{\mathbf{k}}\!\int_{\mathbf{p}}\frac{1}{\beta}\sum_{m\in\mathbb{Z}}\frac{1}{i\omega^{F}_{m}-(\varepsilon_{\mathbf{p}}-y)}\frac{1}{i\omega^{F}_{m-l}-(\varepsilon_{\mathbf{p}-\mathbf{q}}-y)}\] \[\times\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\left\{\frac{1}{[i\omega^{F}_{n}-(\varepsilon_{\mathbf{k}}-x)]^{2}}\,\frac{1}{i\omega^{F}_{n+l}-(\varepsilon_{\mathbf{k}+\mathbf{q}}-x)}\right.\] \[+\left.\frac{1}{i\omega^{F}_{n}-(\varepsilon_{\mathbf{k}}-x)}\,\frac{1}{[i\omega^{F}_{n+l}-(\varepsilon_{\mathbf{k}+\mathbf{q}}-x)]^{2}}\right\}.\] These two terms are equal - to see this, it suffices to set in the second term \(\mathbf{q}=-\mathbf{q}^{\prime}\), \(\mathbf{k}=\mathbf{k}^{\prime}+\mathbf{q}^{\prime}\), \(\mathbf{p}=\mathbf{p}^{\prime}-\mathbf{q}^{\prime}\) and \(l=-l^{\prime}\), \(n=n^{\prime}+l^{\prime}\), \(m=m^{\prime}-l^{\prime}\). Thus, multiplying this by \(-\Omega^{(1)}_{x}/\Omega^{(0)}_{xx}\) which simply equals \(C_{0}\,n_{y}\) one readily sees (taking into account that \(\mathcal{G}^{(0)}_{--}(0,\mathbf{0})=n_{y}\)) that this precisely cancels \(\Omega^{(3)e}\).
Thus, in \(F^{(3)}\) \[\Omega^{(3)e}+\Omega^{(3)f}-\frac{\Omega^{(1)}_{x}}{\Omega^{(0)}_{xx}}\,\Omega^{(2)a}_{x}-\frac{\Omega^{(1)}_{y}}{\Omega^{(0)}_{yy}}\,\Omega^{(2)a}_{y}=0\,.\] After these cancellations one is left with \[F^{(3)}=\Omega^{(3)a}+\Omega^{(3)b}+\Omega^{(3)g}+C_{0}\,n_{y}\left(\Omega^{(2)b}_{x}+\Omega^{(2)c}_{x}\right)+C_{0}\,n_{x}\left(\Omega^{(2)b}_{y}+\Omega^{(2)c}_{y}\right)\] \[+\frac{\Omega^{(1)}_{xx}[\Omega^{(1)}_{x}]^{2}}{2\,[\Omega^{(0)}_{xx}]^{2}}+\frac{\Omega^{(1)}_{yy}[\Omega^{(1)}_{y}]^{2}}{2\,[\Omega^{(0)}_{yy}]^{2}}+\frac{\Omega^{(1)}_{xy}\Omega^{(1)}_{x}\Omega^{(1)}_{y}}{\Omega^{(0)}_{xx}\Omega^{(0)}_{yy}}\,.\] Explicitly, the last line of \(F^{(3)}\) reads \[\frac{1}{2}\,C_{0}^{3}V\beta^{2}\left\{\left(n_{x}-3n_{xx}+2n_{xxx}\right)n_{y}^{3}\right.\] \[\left.+n_{x}^{3}\left(n_{y}-3n_{yy}+2n_{yyy}\right)+2\,n_{x}\left(n_{x}-n_{xx}\right)\left(n_{y}-n_{yy}\right)n_{y}\right\},\] while the contribution \(\Omega^{(3)g}\) of the "audi-type" diagram of Figure 8 can be written as \[\Omega^{(3)g}=C_{0}^{3}V\beta^{2}\,n_{x}\,(n_{x}-n_{xx})(n_{y}-n_{yy})\,n_{y}\,.\] Finally, \[\Omega_{x}^{(2)b}+\Omega_{x}^{(2)c}=-\frac{1}{2}\,C_{0}^{2}V\beta^{2}\left[\left(n_{x}-3n_{xx}+2n_{xxx}\right)n_{y}^{2}+2\,n_{x}\left(n_{x}-n_{xx}\right)(n_{y}-n_{yy})\right],\] (the sum \(\Omega_{y}^{(2)b}+\Omega_{y}^{(2)c}\) is given by an analogous expression) and after a straightforward algebra all the extra terms cancel out, so that eventually \(F^{(3)}=\Omega^{(3)a}+\Omega^{(3)b}\), that is, it is given solely by the "mercedes-like" diagrams evaluated at the zeroth order chemical potentials \(x_{0}\), \(y_{0}\) (or \(\tilde{x}_{0}\), \(\tilde{y}_{0}\), if there is an external magnetic field). The diagrams canceled by the extra terms in the formulae (30) and (31) are precisely those (see e.g. [6]) which vanish at zero temperature, that is, do not contribute to the expansion of the formula (13) for the ground state energy. One can also simplify the formula (29) for the second order correction \(x_{2}\) to the \(\mu_{+}\) chemical potential (and the analogous formula for \(y_{2}\)). After a straightforward algebra one obtains \[x_{2}=-\frac{\Omega_{x}^{(2)a}}{\Omega_{xx}^{(0)}}\,,\] all other terms neatly canceling. Of course, \(x_{1}=C_{0}\,n_{-}\), \(y_{1}=C_{0}\,n_{+}\) but it is perhaps more instructive to write5 Footnote 5: This form clearly shows, since the cancellation of the divergences in the sum \(\Omega^{(1)}+\Omega^{(2)a}\) has already been demonstrated, that the perturbatively computed chemical potentials \(x\) and \(y\) are to this order finite, after the cutoff dependence of the coupling \(C_{0}\) is taken into account. The argument obviously generalizes to all orders: if the free energy \(F\) is made finite by the renormalization of the couplings, so must be the chemical potentials. \[x=x_{0}-\frac{1}{\Omega_{xx}^{(0)}}\left(\Omega_{x}^{(1)}+\Omega_{x}^{(2)a}+\ldots\right).\] These cancellations are also relevant for determining the system's polarization.
Indeed, only if this cancellation holds is the minimization of the free energy written in the form \[F=\Omega^{(0)}(x_{0},y_{0})+(x_{0}\,n_{x}+y_{0}\,n_{y})\,V+\Omega^{(1)}(x_{0},y_{0})+\Omega^{(2)a}(x_{0},y_{0})+\ldots,\] with respect to \(n_{x}\) (keeping \(n_{y}=n-n_{x}\)), which (taking into account that \(\Omega^{(0)}_{x}+n_{x}V=0\), \(\Omega^{(0)}_{y}+n_{y}V=0\)) amounts to \[(x_{0}-y_{0})\,V=-\left[\Omega^{(1)}_{x}+\Omega^{(2)a}_{x}+\ldots\right]\frac{\partial x_{0}}{\partial n_{x}}+\left[\Omega^{(1)}_{y}+\Omega^{(2)a}_{y}+\ldots\right]\frac{\partial y_{0}}{\partial n_{y}}\,,\] equivalent to the condition \(\mu_{+}=\mu_{-}\) written in the form \(x_{0}+x_{1}+x_{2}+\ldots=y_{0}+y_{1}+y_{2}+\ldots\), that is \[x_{0}-y_{0}=\frac{\Omega^{(1)}_{x}+\Omega^{(2)a}_{x}+\ldots}{\Omega^{(0)}_{xx}}-\frac{\Omega^{(1)}_{y}+\Omega^{(2)a}_{y}+\ldots}{\Omega^{(0)}_{yy}}\,.\] The equivalence follows from noticing that since \(n_{x}=-\Omega^{(0)}_{x}/V\), \(n_{y}=-\Omega^{(0)}_{y}/V\), the derivatives of \(x_{0}\) and \(y_{0}\) are precisely equal to \[\frac{\partial x_{0}}{\partial n_{x}}=-\frac{V}{\Omega^{(0)}_{xx}}\,,\ \ \ \ \ \frac{\partial y_{0}}{\partial n_{y}}=-\frac{V}{\Omega^{(0)}_{yy}}\,.\] From this argument it immediately follows that \(x_{3}=-(\Omega^{(3)a}_{x}+\Omega^{(3)b}_{x})/\Omega^{(0)}_{xx}\) and \(y_{3}=-(\Omega^{(3)a}_{y}+\Omega^{(3)b}_{y})/\Omega^{(0)}_{yy}\). Restricted to the first order, the left hand side of the equality \(x_{0}+x_{1}=y_{0}+y_{1}\) reads \[x_{0}+x_{1}+\ldots=k_{\rm B}T\,f^{-1}\!\left(\left(\frac{\varepsilon^{(0)}_{\rm F}(n_{+})}{k_{\rm B}T}\right)^{3/2}\right)+C_{0}\,n_{-}+\ldots\] The right hand side is given by the analogous formula. If one sets here \(n_{\pm}=(N/2V)(1\pm P)\), this reproduces the condition (21).
2309.04591
An adaptive Bayesian approach to gradient-free global optimization
Many problems in science and technology require finding global minima or maxima of various objective functions. The functions are typically high-dimensional; each function evaluation may entail a significant computational cost. The importance of global optimization has inspired development of numerous heuristic algorithms based on analogies with physical, chemical or biological systems. Here we present a novel algorithm, SmartRunner, which employs a Bayesian probabilistic model informed by the history of accepted and rejected moves to make a decision about the next random trial. Thus, SmartRunner intelligently adapts its search strategy to a given objective function and moveset, with the goal of maximizing fitness gain (or energy loss) per function evaluation. Our approach can be viewed as adding a simple adaptive penalty to the original objective function, with SmartRunner performing hill ascent or descent on the modified landscape. This penalty can be added to many other global optimization algorithms. We explored SmartRunner's performance on a standard set of test functions, finding that it compares favorably against several widely-used alternatives: simulated annealing, stochastic hill climbing, evolutionary algorithm, and taboo search. Interestingly, adding the adaptive penalty to the first three of these algorithms considerably enhances their performance. We have also employed SmartRunner to study the Sherrington-Kirkpatrick (SK) spin glass model and Kauffman's NK fitness model - two NP-hard problems characterized by numerous local optima. In systems with quenched disorder, SmartRunner performs well compared to the other global optimizers. Moreover, in finite SK systems it finds close-to-optimal ground-state energies averaged over disorder.
Jianneng Yu, Alexandre V. Morozov
2023-09-08T20:54:57Z
http://arxiv.org/abs/2309.04591v1
# An adaptive Bayesian approach to gradient-free global optimization ###### Abstract Many problems in science and technology require finding global minima or maxima of various objective functions. The functions are typically high-dimensional; each function evaluation may entail a significant computational cost. The importance of global optimization has inspired development of numerous heuristic algorithms based on analogies with physical, chemical or biological systems. Here we present a novel algorithm, SmartRunner, which employs a Bayesian probabilistic model informed by the history of accepted and rejected moves to make a decision about the next random trial. Thus, SmartRunner intelligently adapts its search strategy to a given objective function and moveset, with the goal of maximizing fitness gain (or energy loss) per function evaluation. Our approach can be viewed as adding a simple adaptive penalty to the original objective function, with SmartRunner performing hill ascent or descent on the modified landscape. This penalty can be added to many other global optimization algorithms. We explored SmartRunner's performance on a standard set of test functions, finding that it compares favorably against several widely-used alternatives: simulated annealing, stochastic hill climbing, evolutionary algorithm, and taboo search. Interestingly, adding the adaptive penalty to the first three of these algorithms considerably enhances their performance. We have also employed SmartRunner to study the Sherrington-Kirkpatrick (SK) spin glass model and Kauffman's NK fitness model - two NP-hard problems characterized by numerous local optima. In systems with quenched disorder, SmartRunner performs well compared to the other global optimizers. Moreover, in finite SK systems it finds close-to-optimal ground-state energies averaged over disorder. ## Introduction Many models in fields of enquiry as diverse as natural and social sciences, engineering, machine learning, and quantitative medicine are described by complex non-linear functions of many variables. Often, the task is to find globally optimal solutions of these models, which is equivalent to finding global minima or maxima of the corresponding model functions. The global optimization problem arises in engineering design, economic and financial forecasting, biological data analysis, potential energy models in physics and chemistry, robot design and manipulations, and numerous other settings. Notable examples include finding the minimum of protein free energy in computer simulations of protein folding [1, 2], finding high-fitness solutions in evolving populations subject to mutation, selection, recombination, and genetic drift [3, 4, 5] (biological fitness quantifies the degree of reproductive success of an organism in an evolving population), and minimizing the error function in deep-learning neural network models [6, 7]. Mathematically, the global optimization problem is defined as finding the maximum (or the minimum) of a real-valued function \(\mathcal{F}(X)\), where \(X\) denotes a collection of discrete or continuous variables that describe the state of the system. The states of the system may be subject to nonlinear constraints. Here we focus on maximizing \(\mathcal{F}(X)\), which we will refer to as the fitness function; with \(\mathcal{F}(X)=-E(X)\), this is equivalent to minimizing an energy or error function \(E(X)\). 
In the energy function case, \(E(X)\) may signify the energy of a microstate or a free energy of a coarse-grained/mesoscopic state. The number of variables in \(X\) may be large in real-world applications and \(\mathcal{F}(X)\) may be costly to evaluate, making it highly desirable to develop efficient global optimization algorithms which require as few fitness function evaluations as possible to reach high-quality solutions. The set of fitness values assigned to all states of the system forms a fitness landscape - a high-dimensional surface which global optimization algorithms must traverse on their way to the mountain peaks that correspond to high-scoring solutions. If the fitness function is concave everywhere, the fitness landscape consists of a single peak and the global maximum is easy to find. However, in most problems of interest fitness landscapes contain multiple local maxima and saddle points which can trap the optimizer. There is no guarantee of finding the global maximum in this case unless all system states can be examined, which is usually not feasible because their number is exponentially large. A well-known worst-case scenario is a "golf-course" landscape which is flat everywhere apart from a few states that form a basin of attraction for an isolated deep hole, or a tall peak. In protein folding, this scenario is known as Levinthal's paradox [8] - proteins cannot fold on biologically reasonable time scales if they need to sample a sizable fraction of their microscopic configurations. While Levinthal's paradox has been resolved by introducing the concept of a protein folding funnel [1, 2, 9, 10], generally there is no guarantee of finding the global maximum in a reasonable number of steps, and global optimization is demonstrably an NP-hard problem [11]. If the gradient of the fitness function can be computed efficiently, it should be used to guide the search because the gradient vector indicates the direction of the steepest ascent. Here, we focus on systems with discrete or discretized states and assume that the gradient is not available. Namely, we consider an undirected graph with \(N\) nodes or vertices, where \(N\) is the total number of system states which may be astronomically large or even unknown. Each node \(i=1\ldots N\) is assigned a state \(X_{i}\) and a corresponding fitness value \(\mathcal{F}(X_{i})\). This definition describes a vast number of systems that are either naturally discrete (e.g., spin glasses [12]) or discretized by superimposing a lattice on a continuous landscape. Besides the fitness function, a global optimization algorithm requires a move set - a deterministic or stochastic rule for moving between states on the fitness landscape. A move set defines state neighborhoods - a set of states reachable from a given state in a single jump. The size of the neighborhood is typically fixed but may also change in complex ways, e.g. with recombination moves described below. Numerous empirical approaches have been developed over the years to tackle the problem of gradient-free optimization. Usually, these algorithms are based on an analogy with a physical, chemical or biological process in which some kind of optimization is known to occur. For example, the celebrated simulated annealing algorithm [13] is a Monte Carlo technique based on an analogy with a physical annealing process in which the material starts at a high temperature to enable constituent molecules or atoms to move around. 
The temperature is gradually decreased, allowing the material to relax into low-energy crystalline states. The rate of temperature decrease is a key parameter of the simulated annealing algorithm [14]. Numerous modifications of the basic simulated annealing approach have been developed over the years: parallel tempering Monte Carlo [15], replica Monte Carlo [16], population annealing [17], simulated tempering [18], and many others. Generally speaking, the idea of these algorithms is to overcome free energy barriers by simulating a broad range of temperatures. Besides estimating various thermodynamic quantities by Monte Carlo sampling, some of these algorithms have also been applied to combinatorial optimization problems such as the search for the ground states of Ising spin glasses [19]. Genetic or evolutionary algorithms [20, 21, 22] are based on an analogy with the evolution of a biological population: a population of candidate solutions is subjected to multiple rounds of recombination, mutation, and selection, enabling "the survival of the fittest". Harmony search is a music-inspired algorithm, applying such concepts as playing a piece of music from memory, pitch adjustment, and composing new notes to an evolving population of harmonies [23, 24]. Particle swarm algorithms draw their inspiration from the collective behavior of bird flocks and schools of fish [25, 26]. Taboo search is a deterministic strategy in which all nearest neighbors of the current state are examined and the best move is accepted [27]. To avoid returning to previously examined states via deterministic cycles, a fixed-length "taboo" list is kept of the recently visited states that are temporarily excluded from the search. Stochastic hill climbing employs a procedure in which the moves are accepted or rejected using a sigmoid (two-state) function with a fixed temperature \(T\)[28]. As in simulated annealing, this strategy allows for deleterious moves whose frequency depends on the value of \(T\). Many other heuristic algorithms and variations of the above algorithms are available in the literature [29, 30, 31, 32, 33, 34, 35]. Here we propose a novel global optimization algorithm which we call SmartRunner. SmartRunner is not based on an analogy with a physical, chemical or biological system. Instead, the algorithm uses previously accumulated statistics on rejected and accepted moves to make a decision about its next move. Thus, SmartRunner adapts its search strategy intelligently as a function of both local and global landscape statistics collected earlier in the run, with the goal of maximizing the overall fitness gain. Generally speaking, SmartRunner can be viewed as a stochastic extension of the Taboo search policy. However, unlike the Taboo algorithm, it does not need to evaluate fitness values of every neighbor of the current state, which may be computationally expensive. Moreover, it replaces infinite penalties assigned to the states in the "taboo" list by node-dependent penalties which only become infinite when all the nearest neighbors of the node in question have already been explored. We benchmark SmartRunner on a set of challenging global optimization problems and show that it consistently outperforms several other state-of-the-art algorithms. Moreover, we demonstrate that the SmartRunner approach amounts to hill climbing on a dynamically redefined fitness landscape.
This redefinition can be used to enhance the performance of many other global search approaches such as simulated annealing or evolutionary algorithms. ## Materials and Methods **Bayesian estimation of the probability to find a novel beneficial move.** _Unweighted moves._ Consider a fitness landscape with a move set that defines \(\mathcal{N}\) nearest neighbors for each discrete system state \(X_{i}\) (\(i=1\ldots N\)). We divide all neighbors of the state \(X_{i}\) into two disjoint subsets: one set \(S_{p}^{i}\) of size \(U_{p}^{i}\geq 0\) contains all states with fitness \(\leq\mathcal{F}_{i}\), while the other set \(S^{i}\) of size \(U^{i}=\mathcal{N}-U_{p}^{i}\geq 0\) contains all states with fitness \(>\mathcal{F}_{i}\). Moves between \(X_{i}\) and any state in the set \(S_{p}^{i}\) are deleterious or neutral, while moves to any state in the set \(S^{i}\) are beneficial. Generally, we expect the size of \(S^{i}\) to be small: \(U^{i}\ll\mathcal{N}\simeq U_{p}^{i}\) because as a rule it is more difficult to find a beneficial move than a deleterious or neutral one. We assign the system state \(X_{i}\) to the node \(i\) on a network, with nodes representing system states and edges representing nearest-neighbor jumps. We consider a single random walker that explores the network. At each step, the walker is equally likely to initiate a jump to any of the \(\mathcal{N}\) neighbors of the current node. Let us say that the random walker is currently at node \(i\) and has made \(n\) unsuccessful attempts to make a move \(i\to j\in\mathrm{nnb}(i)\), where \(\mathrm{nnb}(i)=S_{p}^{i}\cup S^{i}\) is a set that contains all the nearest neighbors of node \(i\) (for simplicity, let us assume for the moment that all deleterious and neutral moves are rejected while a beneficial move, once found, is immediately accepted). After \(n\) trials, we have data \(\mathcal{D}=\{K_{p},m_{p},K,m\}\), where \(K_{p}\) is the total number of visits to the nodes in \(S_{p}^{i}\) and \(K=n-K_{p}\) is the total number of visits to the nodes in \(S^{i}\). Furthermore, \(m_{p}\leq K_{p}\) and \(m\leq K\) are the number of _unique_ visited nodes in \(S_{p}^{i}\) and \(S^{i}\), respectively. The probability of observing \(\mathcal{D}\) is given by \[P(\mathcal{D}|U^{i})=\binom{n}{K}\left(\frac{U^{i}}{\mathcal{N}}\right)^{K} \left(1-\frac{U^{i}}{\mathcal{N}}\right)^{n-K}. \tag{1}\] Correspondingly, the probability of \(U^{i}\) given the data is \[P(U^{i}|\mathcal{D})=\frac{P(\mathcal{D}|U^{i})P(U^{i})}{\sum_{U^{\prime}=0}^ {\mathcal{N}-m_{p}}P(\mathcal{D}|U^{\prime})P(U^{\prime})}, \tag{2}\] where \(P(U)\) is the prior probability that there are \(U\) nearest neighbors of node \(i\) whose fitness is higher than \(\mathcal{F}_{i}\). Choosing an uninformative prior, we obtain: \[P(U)=\frac{1}{\mathcal{N}+1}. \tag{3}\] Note that \(\sum_{U=0}^{\mathcal{N}}P(U)=1\). Then Eq. (2) yields \[P(U^{i}|\mathcal{D})=\frac{1}{Z}\left(\frac{U^{i}}{\mathcal{N}}\right)^{K} \left(1-\frac{U^{i}}{\mathcal{N}}\right)^{n-K}, \tag{4}\] where \(Z=\sum_{U^{\prime}=0}^{\mathcal{N}-m_{p}}\left(\frac{U^{\prime}}{\mathcal{N}} \right)^{K}\left(1-\frac{U^{\prime}}{\mathcal{N}}\right)^{n-K}\). Focusing on the \(K=0\), \(m=0\) limit (that is, on the case where no beneficial moves have yet been found) and assuming \(U^{i}\ll\mathcal{N}\), we obtain \[P(\mathcal{D}|U^{i})=\left(1-\frac{U^{i}}{\mathcal{N}}\right)^{n}\simeq e^{- \gamma U^{i}}, \tag{5}\] where \(\gamma=n/\mathcal{N}\). 
Furthermore, \[Z\simeq\sum_{U^{\prime}=0}^{\mathcal{N}-m_{p}}e^{-\gamma U^{\prime}}=\frac{1- e^{-\gamma\widetilde{\mathcal{N}}}}{1-e^{-\gamma}}, \tag{6}\] where \(\widetilde{\mathcal{N}}=\mathcal{N}-m_{p}+1\). Note that the exponential substitution becomes inaccurate for the terms in the sum in which \(U^{\prime}\) approaches \(\mathcal{N}-m_{p}\); however, since \(m_{p}\leq\mathcal{N}\), these terms are suppressed in the \(n\gg 1\) limit compared to the accurately approximated terms with \(U^{\prime}\ll\mathcal{N}-m_{p}\). We observe that \(m_{p}\) is a stochastic variable whose expectation value can be shown to be \[E[m_{p}]=\mathcal{N}\left[1-(1-\frac{1}{\mathcal{N}})^{n}\right]\simeq\mathcal{ N}\left[1-e^{-\gamma}\right], \tag{7}\] where the last approximation requires \(\mathcal{N}\gg 1\). Finally, \[P(U^{i}|\mathcal{D})=\frac{1}{Z}e^{-\gamma U^{i}}=e^{-\gamma U^{i}}\frac{1-e^ {-\gamma}}{1-e^{-\gamma\widetilde{\mathcal{N}}}}. \tag{8}\] If \(\mathcal{N}\gg 1\) and \(\mathcal{N}\gg m_{p}\), Eq. (8) yields \[P(U^{i}=0|\mathcal{D})\simeq\frac{1-e^{-\frac{n}{\mathcal{N}}}}{1-e^{-n}} \simeq 1-e^{-\frac{n}{\mathcal{N}}}, \tag{9}\] where the last approximation is valid for \(n\gg 1\). Thus, the probability to find beneficial moves, \(P(U^{i}>0|\mathcal{D})\simeq e^{-n/\mathcal{N}}\), decreases exponentially with \(n\). Note that if \(n=0\) (no random trials have been made), Eq. (8) yields \(P(U^{i}>0|\mathcal{D})=\mathcal{N}/(\mathcal{N}+1)\), consistent with the prior probability in Eq. (3) which assigns equal weights to all \(\mathcal{N}+1\) values of \(U^{i}\). Thus, to begin with the system is very optimistic that a beneficial move will be found. However, if \(m_{p}=\mathcal{N}\) (that is, all moves have been tried and none are beneficial), Eq. (8) yields \(P(U^{i}>0|\mathcal{D})=0\), as expected. Thus, the system gradually loses its optimism about finding a beneficial move as it makes more and more unsuccessful trials. Finally, we compute the probability of finding a higher-fitness target in the next step: \[p_{f}=\sum_{U^{i}=0}^{\mathcal{N}-m_{p}}\frac{U^{i}}{\mathcal{N}}P(U^{i}| \mathcal{D})=\frac{1}{\mathcal{N}}\frac{e^{-\gamma}-\widetilde{\mathcal{N}}e ^{-\gamma\widetilde{\mathcal{N}}}+e^{-\gamma(\widetilde{\mathcal{N}}+1)}( \widetilde{\mathcal{N}}-1)}{(1-e^{-\gamma})(1-e^{-\gamma\widetilde{\mathcal{N }}})}. \tag{10}\] In the beginning of the search, \(n\ll\mathcal{N}\) and, correspondingly, \(m_{p}\ll\mathcal{N}\). If, in addition, \(n\gg 1\) (which implies \(\mathcal{N}\gg 1\)), Eq. (10) simplifies considerably: \[p_{f}\simeq\frac{1}{\mathcal{N}}\frac{1-n/\mathcal{N}}{n/\mathcal{N}}=\frac{1 }{n}\left[1+\mathcal{O}(\frac{n}{\mathcal{N}})\right]. \tag{11}\] Note that in this limit \(p_{f}\) is independent of \(\mathcal{N}\) to the leading order. If \(m_{p}=\mathcal{N}\), \(\widetilde{\mathcal{N}}=1\) and \(p_{f}=0\) in Eq. (10), as expected. Note that if \(n=m_{p}=0\), \(\widetilde{\mathcal{N}}=\mathcal{N}+1\) and \(\gamma=0\). Then \(Z=\mathcal{N}+1\) from Eq. (6), leading to the following simplification of Eq. (10): \[p_{f}=\frac{1}{\mathcal{N}(\mathcal{N}+1)}\sum_{U^{i}=0}^{\mathcal{N}}U^{i}= \frac{1}{2}. \tag{12}\] Thus, not surprisingly, the probability of finding a higher-fitness target before making any moves is \(1/2\). After making a single move and not finding a higher-fitness target (\(n=1\), \(m_{p}=1\)), \(\widetilde{\mathcal{N}}=\mathcal{N}\) and \(\gamma=1/\mathcal{N}\). 
With the additional assumption that \(\mathcal{N}\gg 1\), we obtain: \[p_{f}\simeq\frac{1-2e^{-1}}{1-e^{-1}}+\mathcal{O}(\frac{1}{\mathcal{N}})\simeq 0.42. \tag{13}\] In summary, the probability of finding a beneficial move, \(p_{f}\), starts out at \(0.5\) and decreases with the number of trials until either a beneficial move is found (in which case Eq. (10) is no longer applicable) or there are no more novel moves to find (in which case \(p_{f}=0\)). The asymptotic \(\simeq 1/n\) behavior of \(p_{f}\) is universal in the \(n\gg 1\) limit (Eq. (11)). Finally, we observe that the above formalism can be extended to any subsets \(S_{p}^{i}\) and \(S^{i}\) since the initial division into deleterious/neutral moves in \(S_{p}^{i}\) and beneficial moves in \(S^{i}\) was arbitrary. Thus, even if a beneficial move is found, we can add it to \(S_{p}^{i}\) and regard \(S^{i}\) as the set of _remaining_, or _novel_ beneficial moves. _Weighted moves._ The probability to find a novel beneficial move (Eq. (10)) was derived under the assumption that the total number of neighbors \(\mathcal{N}\) is known and that the move set is unweighted - each new move is chosen with equal probability \(1/\mathcal{N}\). However, move sets may be intrinsically weighted: for example, in systems with recombination relative weights of recombination moves depend on the genotype frequencies in the population. In addition, it may be of interest to assign separate weights to classes of moves, such as one- and two-point mutations in sequence systems, or one- and two-spin flips in spin systems. In this section, we relax the assumption of unweighted moves, while still treating \(\mathcal{N}\) as a known constant. Specifically, we consider a set of weights \(\{w_{j}\}_{j=1}^{\mathcal{N}}\) associated with \(i\to j\in\mathrm{nnb}(i)\) moves. The probability of a \(i\to j\) jump is then given by \(p(i\to j)=w_{j}/W\), where \(W=\sum_{j=1}^{\mathcal{N}}w_{j}=\sum_{j=1}^{U_{p}^{i}}w_{j}+\sum_{j=1}^{U^{i} }w_{j}\equiv W_{U^{i}_{p}}+W_{U^{i}}\) is the sum over all nearest-neighbor weights, and \(W_{U^{i}_{p}}\) and \(W_{U^{i}}\) are partial sums over the weights in \(S_{p}^{i}\) and \(S^{i}\), respectively. Consequently, \[P(\mathcal{D}|U^{i},\{w_{j}\})=\binom{n}{K}\left(\frac{W_{U^{i}}}{W}\right)^{K }\left(1-\frac{W_{U^{i}}}{W}\right)^{n-K}, \tag{14}\] which in the \(K=0\) case reduces to \[P(\mathcal{D}|U^{i},\{w_{j}\})=\left(1-\frac{W_{U^{i}}}{W}\right)^{n}\simeq e^ {-\frac{W_{U^{i}}}{W}n}. \tag{15}\] Next, we integrate the likelihood over the edge weights: \[P(\mathcal{D}|U^{i})=\int_{0}^{\infty}dw_{1}\ldots dw_{\mathcal{N}}P(w_{1}) \ldots P(w_{\mathcal{N}})e^{-\frac{n}{W}(w_{1}+\cdots+w_{U^{i}})}. \tag{16}\] We represent the probability distribution of edge weights by a Gaussian mixture model, which can be used to describe multimodal distributions of arbitrary complexity:[36] \[P(w)=\frac{1}{\Omega}\sum_{k=1}^{\mathcal{P}}\frac{p_{k}}{\sqrt{2\pi}\sigma_{ k}}e^{-\frac{(w-\bar{w}_{k})^{2}}{2\sigma_{k}^{2}}}, \tag{17}\] where \(\Omega\) is the normalization constant, \(\mathcal{P}\) is the number of Gaussian components and \(p_{k}\) is the relative weight of component \(k\): \(\sum_{k=1}^{\mathcal{P}}p_{k}=1\). In the \(\mathcal{N}\gg 1\) limit, we expect \(W\simeq\langle W\rangle=\mathcal{N}\sum_{k}p_{k}\bar{w}_{k}\equiv\mathcal{N} \bar{w}\), such that Eq. 
(16) simplifies to \[P(\mathcal{D}|U^{i})\simeq\prod_{j=1}^{U^{i}}\int_{0}^{\infty}dw_{j}P(w_{j})e^ {-\frac{w_{j}}{\langle W\rangle}n}=e^{-\beta U^{i}}, \tag{18}\] where \[e^{-\beta}=\frac{1}{2\Omega}\sum_{k=1}^{\mathcal{P}}p_{k}\mathrm{erfc}\left(\frac {c_{k}-\bar{w}_{k}}{\sqrt{2}\sigma_{k}}\right)e^{-\alpha_{k}}. \tag{19}\] Here, \(\alpha_{k}=\frac{n\bar{w}_{k}}{\langle W\rangle}-\frac{n^{2}\sigma_{k}^{2}}{2 \langle W\rangle^{2}}=\gamma\frac{\bar{w}_{k}}{\bar{w}}-\gamma^{2}\frac{\sigma_ {k}^{2}}{2\bar{w}^{2}}\), \(c_{k}=\frac{\sigma_{k}^{2}n}{\langle W\rangle}=\gamma\frac{\sigma_{k}^{2}}{\bar {w}}\) and \(\mathrm{erfc}(x)=\frac{2}{\sqrt{\pi}}\int_{x}^{\infty}dte^{-t^{2}}\) is the complementary error function. The normalization constant is given by \[\Omega=\frac{1}{2}\sum_{k=1}^{\mathcal{P}}p_{k}\mathrm{erfc}\left(-\frac{\bar{ w}_{k}}{\sqrt{2}\sigma_{k}}\right). \tag{20}\] Note that if all the Gaussians are narrow (\(\sigma_{k}\ll\bar{w}_{k}\), \(\forall k\)), \(\mathrm{erfc}\left(-\frac{\bar{w}_{k}}{\sqrt{2}\sigma_{k}}\right)\to 2\) and thus \(\Omega\to 1\), as expected. If the edge weights are Gaussian distributed with mean \(\bar{w}\) and standard deviation \(\sigma\) (i.e., \(\mathcal{P}=1\)), Eq. (19) becomes \[e^{-\beta}=\frac{\mathrm{erfc}\left(\frac{c-\bar{w}}{\sqrt{2}\sigma}\right)}{ \mathrm{erfc}\left(-\frac{\bar{w}}{\sqrt{2}\sigma}\right)}e^{-\alpha}, \tag{21}\] where \(\alpha=\gamma-\gamma^{2}\frac{\sigma^{2}}{2\bar{w}^{2}}\) and \(c=\gamma\frac{\sigma^{2}}{\bar{w}}\). If in addition all weights are equal, \(\frac{\sigma}{\bar{w}}\to 0\) and \(\beta\to\gamma\) in Eq. (21), such that Eq. (18) for the likelihood reduces to Eq. (5). Thus, the difference between \(\beta\) and \(\gamma\) is due to fluctuation corrections. The model evidence \(Z\), the posterior probability \(P(U^{i}|\mathcal{D})\) and \(p_{f}\), the probability of finding a higher-fitness target in the next step, are found by substituting \(\gamma\to\beta\) into Eqs. (6), (8) and (10), respectively. Note that if \(n\to 0\), \(\beta\to 0\) as well and therefore \(p_{f}\to 1/2\) since the argument leading to Eq. (12) still holds. Moreover, Eq. (10) still yields \(p_{f}=0\) when all the neighbors have been explored (\(m_{p}=\mathcal{N}\)). Finally, if \(n,m_{p}\ll\mathcal{N}\) and \(n\gg 1\), \(\alpha\simeq\gamma\) and the ratio of complementary error functions in Eq. (21) is \(\simeq 1\). Then the argument leading to Eq. (11) also holds, yielding \(p_{f}\simeq 1/n\) asymptotically even in the weighted move case. Thus, introducing weighted moves does not lead to qualitative differences in the \(p_{f}\) dependence on \(n\) - any substantial differences are localized to the intermediate region: \(1\leq n\leq 30\) or so, and in many systems the \(p_{f}\) curves for weighted and unweighted moves overlap almost completely (cf. red and blue curves in Fig. S1A-D). Next, we consider the exponential probability distribution of edge weights - an archetypical distribution often found in natural and artificial networks:[37] \[P(w)=\frac{1}{\bar{w}}e^{-\frac{w}{\bar{w}}}, \tag{22}\] where \(\bar{w}\) denotes the mean of the exponential distribution, such that \(\langle W\rangle=\mathcal{N}\bar{w}\). It is easy to show that the likelihood \(P(\mathcal{D}|U^{i})\) is given by Eq. (18) with \(\beta_{\mathrm{exp}}=\log\left(1+\frac{n}{N}\right)\). Consequently, as in the case of the Gaussian mixture model, the model evidence \(Z\), the posterior probability \(P(U^{i}|\mathcal{D})\) and \(p_{f}\) are given by Eqs. 
(6), (8) and (10), respectively, but with \(\beta_{\mathrm{exp}}\) instead of \(\gamma\). Clearly, \(\beta_{\mathrm{exp}}\to 0\) as \(n\to 0\) and therefore \(p_{f}\to 1/2\) as in the Gaussian mixture case. Similarly, \(p_{f}=0\) once \(m_{p}=\mathcal{N}\). Lastly, the \(n,m_{p}\ll\mathcal{N}\) limit yields \(\alpha\simeq\gamma\), which in turn leads to \(p_{f}\simeq 1/n\) under the additional assumption of \(n\gg 1\). Thus, the dependence of \(p_{f}\) on \(n\) for exponentially distributed weights is again qualitatively similar to the \(p_{f}\) functions in the corresponding unweighted cases (cf. red and blue curves in Fig. S1E,F). _Simplified treatment of \(p_{f}\)._ The computation of \(p_{f}\) for the unweighted and the exponentially distributed cases requires the knowledge of \(m_{p}\) and \(\mathcal{N}\) besides the number of trials \(n\). For weighted move sets in the Gaussian mixture model, one would additionally require \(p_{k}\), \(\bar{w}_{k}\) and \(\sigma_{k}\) for each Gaussian component. Unless these parameters are known _a priori_, they would have to be estimated from a sample of edge weights, increasing the computational burden. Moreover, this extra effort may not be justified, in the view of the nearly universal dependence of \(p_{f}\) on \(n\) in all three cases considered above. Even keeping track of \(\mathcal{N}\) may be complicated for some move sets, e.g. a recombination+mutation move set employed in genetic algorithms.[20, 21, 22] With recombination, \(\mathcal{N}\) depends on the current state of the population and therefore generally changes with time. Hence, computing \(\mathcal{N}\) at each step would increase the complexity of the algorithm. We propose to capitalize on the approximate universality of \(p_{f}\) by creating a minimal model for it which depends only on the number of trials \(n\). Specifically, we define \[p_{f}(n)=\begin{cases}\frac{n^{2}}{250}-\frac{2n}{25}+\frac{1}{2}&\text{if }0 \leq n\leq 5,\\ \frac{1}{n}&\text{if }n>5.\end{cases} \tag{23}\] This model has the right asymptotics at \(n=0\) and in the \(n,m_{p}\ll\mathcal{N}\), \(n\gg 1\) limit, but does not go to \(0\) identically when \(m_{p}=\mathcal{N}\) because enforcing this condition requires the knowledge of \(\mathcal{N}\). However, if \(\mathcal{N}\gg 1\), as can be expected with complex move sets, \(n\gg 1\) at \(m_{p}=\mathcal{N}\) and the difference between the small but non-zero value of \(p_{f}\) in Eq. (23) and zero will be immaterial (cf. green curves in Fig. S1 for \(p_{f}(n)\) in several model systems). **Implementation of the optimal search policy: the SmartRunner algorithm.** Given Bayesian probabilistic estimates of finding novel moves between a given node and its nearest neighbors, we need to formulate an optimal search policy in order to maximize the expected fitness gain over the course of the run with \(l_{\text{tot}}\) random trials. Assuming that the walker is currently at node \(i\), there are two options after each random trial: stay at node \(i\) (thereby rejecting the move) or jump to a neighboring node \(j\). If the walker stays at node \(i\), we expect it to search for \(l_{i}\) steps before finding a higher-fitness node which has not been detected before. 
Then the value of the policy of staying at \(i\) can be evaluated as \[\mathcal{S}_{i}=\overline{\Delta\mathcal{F}_{b}}+\overline{R}(l_{\text{rem}}-l_{i})=-\overline{R}l_{i}+\mathcal{C}, \tag{24}\] where \(\overline{\Delta\mathcal{F}_{b}}=\overline{\mathcal{F}_{k}-\mathcal{F}_{i}}\) is the expected fitness gain of the newly found beneficial move to a node \(k\in\text{nnb}(i)\), and \(\overline{R}\) is the expected rate of fitness gain per step, so that the second term is this rate times the number of steps remaining once the beneficial move is found. Furthermore, \(l_{\text{rem}}\leq l_{\text{tot}}\) is the number of steps remaining in the simulation, and \(l_{i}\) is the expected number of steps needed to find \(k\): \[l_{i}=\text{rnd}[\frac{1}{p_{f}^{i}}], \tag{25}\] where \(p_{f}^{i}\) is given by Eq. (10) or Eq. (23) (with the dependence on the node index \(i\) made explicit for clarity) and \(\text{rnd}[\ ]\) is the rounding operator. Finally, \(\mathcal{C}=\overline{\Delta\mathcal{F}_{b}}+\overline{R}\,l_{\text{rem}}\) denotes a constant contribution independent of the node index, under the assumption that \(\overline{\Delta\mathcal{F}_{b}}\) is the same for all nodes. The value of the policy of jumping to a downstream node \(j\) from \(i\) is given by \[\mathcal{L}_{i\to j}=(\mathcal{F}_{j}-\mathcal{F}_{i})-\overline{R}(l_{j}+1)+\mathcal{C}, \tag{26}\] where the extra factor of \(-\overline{R}\) accounts for the jump between the current node \(i\) and the new node \(j\), which reduces the total number of remaining steps by \(1\). We represent each visited state as a node and each rejected or accepted move as an edge on a directed graph \(\mathcal{G}\), implemented using the DiGraph class from NetworkX\({}^{1}\). Thus, node \(i\) is part of the directed graph \(\mathcal{G}\) which contains information about all previously attempted moves. Depending on which previous moves have been explored, node \(i\) may be connected to multiple downstream nodes by directed edges; in general, these edges may form directed cycles. To see if the jump to one of the nodes downstream of node \(i\) will yield a more beneficial policy, we traverse \(\mathcal{G}\) recursively starting from the node \(i\), for up to \(l_{\max}\) steps. For computational convenience, \(\mathcal{G}\) is implemented with two types of nodes: 'regular' nodes \(i,j,k,\dots\), which denote states on the fitness landscape and are therefore assigned fitness values \(\mathcal{F}_{i},\mathcal{F}_{j},\mathcal{F}_{k},\dots\) (black circles in Fig. 1), and 'terminal' nodes \(i_{t},j_{t},k_{t},\dots\), which are assigned fitness values \(-\overline{R}l_{i},-\overline{R}l_{j},-\overline{R}l_{k},\dots\) (green circles in Fig. 1). Note that we have omitted the node-independent contribution \(\mathcal{C}\) in Eqs. (24) and (26). The edges of \(\mathcal{G}\) connecting two regular nodes (solid black arrows in Fig. 1): \(m\to n\) are assigned a weight of \(\mathcal{F}_{n}-\mathcal{F}_{m}\). The edges of \(\mathcal{G}\) connecting a regular node to a terminal node (dashed green arrows in Fig. 1): \(m\to m_{t}\) are assigned a weight of \(-\overline{R}\,l_{m}\). By construction, terminal nodes do not have descendants and each regular node has exactly one terminal descendant.

Footnote 1: [https://networkx.org/documentation/stable/reference/classes/digraph.html](https://networkx.org/documentation/stable/reference/classes/digraph.html)

The policy values are computed using a set of recursive paths on the directed graph \(\mathcal{G}\).
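Before turning to the path search itself, a minimal Python sketch may help to show how Eqs. (23), (25), (24) and (26) combine into a single stay-or-jump comparison for one node and its already-explored neighbors. This is an illustration of the formulas, not the released SmartRunner code: the trial counts and fitness values are assumed to be supplied by the caller, and the node-independent constant \(\mathcal{C}\) is dropped, as in Fig. 1.

```python
def p_f(n):
    """Probability of finding a novel beneficial move after n rejected
    trials at the current node (minimal model of Eq. (23))."""
    if n <= 5:
        return n**2 / 250.0 - 2.0 * n / 25.0 + 0.5
    return 1.0 / n

def l_expected(n):
    """Expected number of further trials before a novel beneficial move
    is found, Eq. (25): l = rnd[1 / p_f]."""
    return round(1.0 / p_f(n))

def stay_value(n_i, R_bar):
    """Value of staying at node i, Eq. (24) with the constant C dropped."""
    return -R_bar * l_expected(n_i)

def jump_value(F_i, F_j, n_j, R_bar):
    """Value of jumping to an explored neighbor j, Eq. (26) with C dropped:
    fitness gain minus the occupancy penalty at j, minus one extra R_bar
    for the step spent on the jump itself."""
    return (F_j - F_i) - R_bar * (l_expected(n_j) + 1)

# Toy decision: node i (fitness 1.0, 12 rejected trials) versus two explored
# neighbors with fitnesses 0.8 and 1.1 that have 3 and 7 trials, respectively.
R_bar = 0.05
options = {"stay at i": stay_value(12, R_bar),
           "jump to a": jump_value(1.0, 0.8, 3, R_bar),
           "jump to b": jump_value(1.0, 1.1, 7, R_bar)}
print(max(options, key=options.get), options)
```

In this toy example the jump to the fitter neighbor has the largest value; as the number of rejected trials at the current node grows, the occupancy term \(-\overline{R}l_{i}\) makes staying progressively less attractive, which is how deleterious moves eventually become acceptable.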
Figure 1: **A schematic representation of the directed graph \(\mathcal{G}\) that represents the search process.** Regular nodes (system states) are represented by black circles with the corresponding fitness values; terminal nodes are shown as green circles. Directed edges connecting regular and terminal nodes are shown as dashed green lines and assigned a value of \(-\overline{R}l_{m}\), where \(l_{m}\) is the expected number of steps to find a novel beneficial move starting from regular node \(m\), and \(\overline{R}\) is the expected rate of fitness gain per step. Directed edges connecting two regular nodes are shown as solid black lines and assigned a value of the fitness difference between the two nodes. Note that the set of children of a given regular node always has one terminal node and \(m_{p}=(0\dots\mathcal{N})\) regular nodes depending on how much exploration has been done. In general, \(\mathcal{G}\) may contain directed cycles.

All valid paths must start from node \(i\) and end at one of the terminal nodes reachable with \(\leq l_{\max}\) steps. The goal is to identify a path which has the maximum weight among all paths. Note that with a single step, the only valid path is \(i\to i_{t}\) and its weight is given by \(\mathcal{S}_{i}\) from Eq. (24). The minimum allowed value of \(l_{\max}\) is thus equal to 2 because this enables computations of the path weight as \(\mathcal{L}_{i\to j}\) in Eq. (26) for \(j\in\operatorname{nnb}(i)\). Larger values of \(l_{\max}\) will enable longer jumps if any are available; longer jumps entail repeated application of Eq. (26) to compute the total path weight. If the winning path is \(i\to i_{t}\), the walker stays at the node \(i\) and makes another random trial, updating its \(p_{f}^{i}\) accordingly. If the winning path is \(i\to j_{t}\) (where \(j\) may be several steps away depending on the value of \(l_{\max}\)), the walker jumps to the node \(j\) and makes a new random trial from that node. The node \(j\) statistics such as \(n\) and \(m_{p}\) are initialized if the node has not been visited before in the run, and updated otherwise. Note that if Eq. (10) is used to compute \(p_{f}^{i}\), it is possible to obtain \(p_{f}^{i}=0\) and therefore \(l_{i}=\infty\) in Eq. (25), which is represented computationally by a large positive constant. The case in which both node \(i\) and all its neighbors \(j\) reachable in \(\leq l_{\max}\) steps are in this category requires special treatment because the \(\overline{R}\,l_{i}\) and \(\overline{R}\,l_{j}\) penalties cancel out and SmartRunner essentially becomes a local optimizer driven solely by fitness differences. To avoid being trapped in local fitness maxima in this special case, SmartRunner employs two alternative strategies. In the first strategy, a random path is chosen in the ensemble of all paths with \(\leq l_{\max}\) steps, instead of the path with the maximum sum of edge weights. In the second strategy, a longer random path to the boundary of the \(p_{f}=0\) region is constructed explicitly; the random path can have up to \(10^{3}\) steps. In both strategies, if the boundary of the \(p_{f}=0\) region is not reached, the procedure is repeated at subsequent steps, resulting in an overall random walk to the boundary of the "maximally-explored" region. The SmartRunner algorithm depends on \(\overline{R}\), the expected rate of fitness gain per step.
Larger positive values of \(\overline{R}\) will encourage jumps to less-explored regular nodes even if those have slightly lower fitness values and will therefore promote landscape exploration. Smaller positive values of \(\overline{R}\) will encourage more thorough exploration of the current node but will not fully prevent deleterious moves. Negative values of \(\overline{R}\), however, will prevent all further exploration. We adjust the value of \(\overline{R}\) adaptively as follows. The algorithm starts out with a user-provided initial value \(\overline{R}_{\text{init}}\). For each move, either accepted or rejected, the corresponding fitness value is recorded in a fitness trajectory array. Once \(M\) values are accumulated in the array, a linear model is fit to the fitness trajectory, yielding the fitted slope \(R^{\text{fit}}\). Finally, \(\overline{R}\) is computed as \[\overline{R}=\begin{cases}\alpha R^{\text{fit}}&\text{if }R^{\text{fit}}\geq \epsilon,\\ \alpha\epsilon\exp(R^{\text{fit}}-\epsilon)&\text{if }R^{\text{fit}}<\epsilon, \end{cases} \tag{27}\] where \(\epsilon\) is a small positive constant. Note that the second line serves to 'rectify' the values of \(R^{\text{fit}}\) that fall below the threshold \(\epsilon\), preventing \(\overline{R}\) from ever reaching negative values. The value of \(\overline{R}\) is recomputed every \(M\) steps using Eq. (27), providing adaptive feedback throughout the run. The positive hyperparameter \(\alpha\) is the level of 'optimism' - how much more optimistic the system is about its future success compared to past performance. As discussed above, larger values of \(\alpha\) will promote landscape exploration. The SmartRunner algorithm can be summarized as the following sequence of steps:

**SmartRunner Algorithm**

**INPUT:**

- Initial state: \(X_{0}\)
- Fitness landscape function: \(X\to\mathcal{F}\)
- Move set function: \(X^{\mathrm{old}}\to X^{\mathrm{new}}\)
- Total number of iterations: \(l_{\mathrm{tot}}\)
- Maximum length of paths explored from each state: \(l_{\mathrm{max}}\)
- Initial guess of the fitness rate: \(\overline{R}_{0}\)
- Optimism level: \(\alpha\)
- Length of sub-trajectory for recomputing \(\overline{R}\): \(M\)

1. Initialize directed graph \(\mathcal{G}\).
2. Initialize regular node \(X_{0}\) with \(\mathcal{F}(X_{0})\).
3. Initialize terminal node \(X_{0,t}\).
4. Initialize \(\overline{R}=\overline{R}_{0}\).
5. Initialize \(l=0\).
6. Add an edge \(X_{0}\to X_{0,t}\) with a weight \(-\overline{R}l_{X_{0}}\).

**do:**

1. Generate a random move: \(X\to X^{\prime}\).
2. If \(X^{\prime}\notin\mathcal{G}\): add \(X^{\prime}\) to \(\mathcal{G}\) with \(\mathcal{F}(X^{\prime})\); add a terminal node \(X^{\prime}_{t}\); add an edge \(X^{\prime}\to X^{\prime}_{t}\) with a weight \(-\overline{R}l_{X^{\prime}}\).
3. If \(X\to X^{\prime}\notin\mathcal{G}\): add an edge \(X\to X^{\prime}\) with a weight \(\mathcal{F}(X^{\prime})-\mathcal{F}(X)\).
4. Update statistics for \(X\), recompute \(l_{X}\) and update the \(X\to X_{t}\) edge weight.
5. Recursively compute sums of edge weights for all paths of length \(\leq l_{\mathrm{max}}\) starting at \(X\) and ending at downstream terminal nodes. If \(l_{X}=\infty\) for the \(X\to X_{t}\) path and \(l_{Y_{k}}=\infty\) for all other paths \(X\ldots Y_{k}Y_{k,t}\) in the ensemble, initiate a random walk; otherwise, stay at \(X\) or jump to \(Y_{k}\) according to the path with the maximum sum of edge weights.
6. If \(l=M,2M,3M,\ldots\): recompute \(\overline{R}\) using Eq. (27).
**while** \(l\leq l_{\mathrm{tot}}\)

**OUTPUT:**

- Globally best state: \(X^{\mathrm{best}}\), \(\mathcal{F}(X^{\mathrm{best}})\)
- Fitness trajectory: \(\{\mathcal{F}\}_{l=1}^{l_{\mathrm{tot}}}\)
- Total number of fitness function evaluations: \(f_{\mathrm{eval}}\)

**Adaptive fitness landscape.** The stay or leave policy defined by Eqs. (24) and (26) amounts to an adaptive redefinition of the fitness landscape: \[\mathcal{F}_{i}\to\widetilde{\mathcal{F}}_{i}=\mathcal{F}_{i}-\overline{R}l_{i}, \tag{28}\] where \(\overline{R}\,l_{i}\) is a positive occupancy penalty whose overall magnitude is controlled by the hyperparameter \(\overline{R}\). The penalty increases as the node \(i\) is explored more and more, resulting in progressively larger values of \(l_{i}\). Note that if Eq. (23) is used to estimate \(p_{f}\), the only additional piece of information required to compute \(\widetilde{\mathcal{F}}_{i}\) from \(\mathcal{F}_{i}\) is the total number of trials \(n_{i}\) at the node \(i\), which is easy to keep track of. Thus, \(\widetilde{\mathcal{F}}_{i}\) can serve as input not only to SmartRunner, which in this view amounts to hill climbing on the \(\widetilde{\mathcal{F}}\) landscape, but to any global optimization algorithm. In algorithms where individual moves are accepted or rejected sequentially (e.g., Simulated Annealing, Stochastic Hill Climbing), we compare \(\widetilde{\mathcal{F}}_{i}\) with \(\widetilde{\mathcal{F}}_{j}-\overline{R}\) to account for the fact that jumping from node \(i\) to node \(j\) decreases the total number of remaining steps by 1 (cf. Eq. (26)). In algorithms which involve non-sequential scenarios (e.g., Evolutionary Algorithm), modified fitnesses \(\widetilde{\mathcal{F}}\) from Eq. (28) are used directly instead of \(\mathcal{F}\).

## Results

**SmartRunner can climb out of deep local maxima.** To demonstrate the ability of SmartRunner to traverse local basins of attraction leading to suboptimal solutions, we have constructed a 2D fitness landscape defined by a weighted sum of two Gaussians (Fig. 2). The left basin of attraction leads to a local maximum (\(\mathcal{F}^{\star}=50.17\)) which is much smaller than the global maximum on the right (\(\mathcal{F}^{\star}=78.48\)). The two basins of attraction are separated by a steep barrier. We start the SmartRunner runs from the left of the local basin of attraction, making sure that the walker rapidly falls there first, reaching the local maximum in a few thousand steps (Fig. 2A). Afterwards, the walker explores the local basin of attraction more and more extensively (Fig. 2B,C) until the barrier is finally overcome and the global maximum is found (Fig. 2D). The exploration strategy is automatically adapted to the fitness landscape features rather than being driven by external parameters such as the simulated annealing temperature.

**SmartRunner performance on 4D test functions.** Next, we have explored SmartRunner performance on three standard 4D test functions often used to benchmark global optimization algorithms:[29] Rastrigin, Ackley and Griewank (see SI Methods for function definitions). The test functions are defined in standard hypercube ranges and supplemented with periodic boundary conditions. The resulting fitness landscapes are discretized using the same step size \(\Delta x\) in all 4 directions, resulting in \(1.63\times 10^{9}\), \(1.17\times 10^{10}\) and \(2.08\times 10^{12}\) distinct fitness states for Rastrigin, Ackley and Griewank functions, respectively.
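As a concrete illustration of this setup, the following minimal sketch implements the 4D Rastrigin fitness of SI Eq. (1) on such a discretized landscape, together with the _nnb_ moveset (a \(\pm\Delta x\) shift of one randomly chosen coordinate, with periodic boundaries). The integer-index encoding of grid states and the exact endpoint convention are choices made here for illustration, not necessarily those of the released code.

```python
import math
import random

def rastrigin_fitness(x):
    """4D Rastrigin fitness, SI Eq. (1): F(x) = -4 - sum_i (x_i^2 - cos(18 x_i)).
    The unique global maximum F = 0 is at x = (0, 0, 0, 0)."""
    return -4.0 - sum(xi**2 - math.cos(18.0 * xi) for xi in x)

DX, LO = 0.05, -5.0      # grid step and lower edge of the [-5, 5] hypercube
NPTS = 200               # grid points per axis (endpoint convention assumed here)

def to_coords(state):
    """Convert integer grid indices to real coordinates."""
    return tuple(LO + k * DX for k in state)

def nnb_move(state, rng):
    """nnb moveset: shift one randomly chosen coordinate by +/- dx,
    wrapping around the boundary (periodic boundary conditions)."""
    i = rng.randrange(len(state))
    new = list(state)
    new[i] = (new[i] + rng.choice((-1, 1))) % NPTS
    return tuple(new)

# One random trial from a random starting state.
rng = random.Random(0)
state = tuple(rng.randrange(NPTS) for _ in range(4))
trial = nnb_move(state, rng)
print(rastrigin_fitness(to_coords(state)), rastrigin_fitness(to_coords(trial)))
```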
All three test functions are characterized by multiple local maxima; the unique global maximum is located at \(\mathfrak{X}=(0,0,0,0)\) and corresponds to \(\mathcal{F}=0\). The landscapes are explored by randomly choosing one of the directions and then increasing or decreasing the corresponding coordinate by \(\Delta x\) (the _nnb_ moveset). Fig. 3 shows the performance of SmartRunner on the Rastrigin test function: Fig. 3A is a hyperparameter scan which shows no consistent trend in the dependence of the average best fitness values \(\langle\mathcal{F}_{\text{best}}\rangle\) on \(\overline{R}_{\text{init}}\), the initial rate of fitness gain per step. This is expected because the value of \(\overline{R}\) is reset adaptively during the run (cf. Eq. (27)). In contrast, there is a slight preference for lower values of optimism \(\alpha\). Fig. 3B shows the corresponding average of function evaluations - unique fitness function calls which can be used as a measure of algorithm performance, especially in cases where fitness function calls are expensive, making it advisable to focus on maximizing the average fitness gain per function evaluation. As expected, the optimal values of \(\alpha\) correspond to the lower number of function evaluations since lower values of \(\alpha\) tend to favor exploitation (i.e., a more thorough search of the neighbors of the current state) over exploration (which favors more frequent jumps between landscape states). Figs. S2A,B and S2C,D show the corresponding results for Ackley and Griewank test functions, respectively. Lower values of \(\alpha\) work better for Ackley, while \(\alpha\geq 5\) are preferable for Griewank, indicating that in general a scan over several values of \(\alpha\) may be required. Since the Griewank landscape is considerably larger and the global maximum is not always found, we also show the maximum best-fitness value found over 50 independent runs, and the corresponding number of function evaluations (Fig. S2E,F). For lower values of \(\alpha\), the global maximum is not always found but rather another high-fitness solution. With reasonable hyperparameter settings, all 50 SmartRunner runs find the global maximum of the Rastrigin landscape (Fig. 3C), requiring \(\simeq 15500\) function evaluations on average. Fig. 3E shows three representative fitness trajectories - rapid convergence to the vicinity of the global maximum is observed in \(\leq 4\times 10^{4}\) steps, regardless of the starting state.
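Because performance is judged here by the number of unique fitness calls, it is convenient in practice to wrap the fitness function in a memoizing counter; the sketch below is one simple way to collect this statistic and is a bookkeeping convenience assumed for illustration, not a description of how the released code counts evaluations.

```python
class CountingFitness:
    """Wraps a fitness function, caches results by state, and reports how
    many distinct states have actually been evaluated."""

    def __init__(self, fitness_fn):
        self.fitness_fn = fitness_fn
        self.cache = {}

    def __call__(self, state):
        if state not in self.cache:      # novel state: a genuine evaluation
            self.cache[state] = self.fitness_fn(state)
        return self.cache[state]

    @property
    def n_unique_calls(self):
        return len(self.cache)

# Usage: wrap any fitness function whose states are hashable.
f = CountingFitness(lambda s: -sum(x * x for x in s))
for s in [(1, 2), (0, 0), (1, 2)]:       # the repeated state hits the cache
    f(s)
print(f.n_unique_calls)                  # -> 2
```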
The dynamics of global optimization strongly depend on the moveset type. To explore whether SmartRunner can adapt to movesets with non-local moves, we have considered the _spmut_ moveset in which a randomly chosen coordinate is changed to an arbitrary new value on the discretized landscape. Thus, instead of updating a given coordinate in \(\pm\Delta x\) increments, most moves change the coordinate by many multiples of \(\Delta x\), creating a densely connected landscape: for example, the number of nearest neighbors is \(200\times 4=800\) for the Rastrigin function, instead of just 8 with the _nnb_ moveset (the abbreviation _spmut_ stands for single-point mutations, since a given \(x_{i}\) can 'mutate' into any other \(x_{j}\), \(j\neq i\), from a discrete set). Fig. 4 shows that SmartRunner reliably finds the global maximum with the _spmut_ moveset. The dependence on \(\overline{R}_{\rm init}\) is weak and the lower values of \(\alpha\) are preferable (Fig. 4A). The number of fitness function calls is much higher for the same total number of steps (\(10^{5}\)) as with the _nnb_ moveset (Fig. 4B,D). All 50 runs find the global maximum with optimal or nearly-optimal hyperparameter settings (Fig. 4C), and fitness trajectories quickly converge to high-quality solutions (Fig. 4E). Similar behavior is observed with Ackley and Griewank functions: lower values of \(\alpha\) work better and the number of function evaluations is several times larger compared to the _nnb_ moveset (Fig. S3). Thus, using the _nnb_ moveset is preferable for all three landscapes.

Figure 3: **SmartRunner exploration of the Rastrigin test function: _nnb_ moveset.** (A) A scan over SmartRunner hyperparameters (\(l_{\rm max}=2\)): the initial value of the expected rate of fitness gain per step \(\overline{R}_{\rm init}\) and the level of optimism \(\alpha\). Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\rm tot}=10^{5}\) steps each and randomly chosen starting states. (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each run. (C) A histogram of best fitness values for the heatmap cell with \(\overline{R}_{\rm init}=0.1\), \(\alpha=1.0\). (D) A histogram of the number of unique fitness function calls for the heatmap cell with \(\overline{R}_{\rm init}=0.1\), \(\alpha=1.0\). (E) Plots of 3 representative SmartRunner trajectories (\(\overline{R}_{\rm init}=0.1\), \(\alpha=1.0\)).

Next, we have asked whether the observed differences in SmartRunner performance at different hyperparameter values are statistically significant. Using the Rastrigin function as an example, we have employed one-sided Kolmogorov-Smirnov (KS) tests for the best-fitness distributions (Fig. S4). The distribution with the highest average of best-fitness values in Figs. 3A and 4A was compared with all the other distributions. We find that the differences between the distributions are not statistically significant with the _nnb_ moveset (Fig. S4A).
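For readers who want to run an analogous comparison on their own results, the two-sample one-sided KS test is available in SciPy; the sketch below only demonstrates the call, with synthetic numbers standing in for actual best-fitness samples, and is not the analysis script used for Fig. S4.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for best-fitness values from two hyperparameter settings.
best_fitness_a = rng.normal(loc=-0.02, scale=0.01, size=50)
best_fitness_b = rng.normal(loc=-0.05, scale=0.01, size=50)

# One-sided two-sample KS test. With alternative='less', the alternative
# hypothesis is that the empirical CDF of the first sample lies below that of
# the second, i.e., that best_fitness_a tends to take larger (better) values.
result = stats.ks_2samp(best_fitness_a, best_fitness_b, alternative='less')
print(result.statistic, result.pvalue)   # a small p-value favors setting A
```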
In contrast, using \(\alpha\geq 6\) with the _spmut_ moveset leads to statistically significant degradation of SmartRunner performance (Fig. S4B). We have also used KS tests to investigate the effects of the \(l_{\max}\) hyperparameter (Fig. S5). Since for all three test functions best-fitness distributions with \(l_{\max}=3\) are not significantly better than the corresponding \(l_{\max}=2\) distributions with the same \(\alpha\) and \(\overline{R}_{\mathrm{init}}\) hyperparameter settings, we typically use \(l_{\max}=2\) as it is less expensive computationally.

**The effects of the occupancy penalty on other global optimization algorithms.** As mentioned above, the SmartRunner algorithm can be viewed as hill climbing on a fitness landscape \(\widetilde{\mathcal{F}}\) modified with the occupancy penalties (Eq. (28)). However, the modified fitness landscape can also be explored using other empirical global optimization approaches. Here, we focus on three widely used algorithms: Simulated Annealing (SA),[13] Stochastic Hill Climbing (SHC),[28] and Evolutionary Algorithm (EA)[20, 21, 22] (see SI Methods for implementation details). SA is based on an analogy with a metallurgy technique involving heating followed by controlled cooling of a material to alter its physical properties.[13] The algorithm is implemented as a series of Metropolis Monte Carlo move trials[38] with a slowly decreasing temperature. SA's hyperparameters are the initial temperature \(T_{i}\) and the final temperature \(T_{f}\), plus the expected rate of fitness gain \(\overline{R}\) when the occupancy penalty is included. We use a linear cooling schedule in this work. SHC is a version of hill climbing which accepts downhill moves with the probability \(p=1/(1+\exp\left[(\mathcal{F}_{\text{current}}-\mathcal{F}_{\text{new}})/T\right])\).[28] Thus, \(p\simeq 0\) in the \(\mathcal{F}_{\text{new}}/T\ll\mathcal{F}_{\text{current}}/T\) limit, and \(p\simeq 1\) in the opposite limit. SHC's search strategy is controlled by the temperature \(T\), along with \(\overline{R}\) in the case of modified landscapes. Finally, EA is inspired by the process of biological evolution.[21, 22] It involves creating a population of \(N_{\text{pop}}\) 'organisms' (i.e., putative solutions; we use \(N_{\text{pop}}=50\) in this work). The population is initialized randomly and subjected to repeated rounds of recombination, mutation and selection. Besides the population size, EA's hyperparameters are the crossover (recombination) rate \(r_{x}\), the mutation rate \(\mu\) and, for modified landscapes, the expected rate of fitness gain \(\overline{R}\). The original algorithm names (SA, SHC, EA) are reserved for runs with \(\overline{R}=0\); runs with modified landscapes are referred to as 'enhanced' (ESA, ESHC, EEA).

Figure 4: **SmartRunner exploration of the Rastrigin test function: _spmut_ moveset.** Same as Fig. 3 (including SmartRunner settings and hyperparameter value settings in panels C-E), but with the _spmut_ moveset.

Fig. S6 shows the performance of ESA as a function of the initial temperature \(T_{i}\) and the expected rate of fitness gain \(\overline{R}\) for our three test functions, with the _nnb_ moveset (although we have also performed a scan over the final temperature \(T_{f}\), the dependence is weak and the results are not shown). We observe that \(T_{i}\simeq 1\) values are preferable and, as expected, are accompanied by a lower number of function evaluations.
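A hedged sketch of what such an 'enhanced' run can look like is given below: standard Metropolis acceptance with a linear cooling schedule, applied to fitness values modified by the occupancy penalty of Eq. (28) and with the \(\widetilde{\mathcal{F}}_{j}-\overline{R}\) versus \(\widetilde{\mathcal{F}}_{i}\) comparison described above. It is not the Solid-based implementation used in the paper; the toy one-dimensional landscape, the fixed \(\overline{R}\), and the per-state trial bookkeeping are assumptions made for illustration.

```python
import math
import random
from collections import defaultdict

def p_f(n):
    """Minimal p_f(n) model of Eq. (23)."""
    return n**2 / 250.0 - 2.0 * n / 25.0 + 0.5 if n <= 5 else 1.0 / n

def penalized(F, n, R_bar):
    """Occupancy-penalized fitness of Eq. (28): F_tilde = F - R_bar * l(n)."""
    return F - R_bar * round(1.0 / p_f(n))

def esa(fitness, move, x0, n_steps=20000, T_i=1.0, T_f=1e-3, R_bar=0.1, seed=0):
    rng = random.Random(seed)
    trials = defaultdict(int)        # trials spent at each visited state
    x, best = x0, x0
    for step in range(n_steps):
        T = T_i + (T_f - T_i) * step / (n_steps - 1)      # linear cooling
        y = move(x, rng)
        # Compare penalized fitnesses; the extra -R_bar charges the jump itself.
        dF = (penalized(fitness(y), trials[y], R_bar) - R_bar) \
            - penalized(fitness(x), trials[x], R_bar)
        trials[x] += 1
        if dF >= 0 or rng.random() < math.exp(dF / T):
            x = y
        if fitness(x) > fitness(best):
            best = x
    return best

# Toy 1D landscape with a ripple of local maxima superimposed on a broad peak.
fit = lambda k: -0.001 * (k - 300) ** 2 + math.cos(0.3 * k)
mv = lambda k, rng: k + rng.choice((-1, 1))
print(esa(fit, mv, x0=0), "(the broad peak is near k = 300)")
```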
Strikingly, the hyperparameter settings with the best average performance always have non-zero \(\overline{R}\): \(T_{i}=1.0\), \(T_{f}=0.002\), \(\overline{R}=0.1\) for the Rastrigin function (the corresponding \(\langle\mathcal{F}_{\text{best}}\rangle=-0.017\)). For the Ackley function, \(T_{i}=1.0\), \(T_{f}=0.001\), \(\overline{R}=0.15\) (the corresponding \(\langle\mathcal{F}_{\text{best}}\rangle=-2.078\)). For the Griewank function, \(T_{i}=1.0\), \(T_{f}=0.001\), \(\overline{R}=0.2\) (the corresponding \(\langle\mathcal{F}_{\text{best}}\rangle=-0.067\)). Thus, ESA outperforms SA - when using simulated annealing, the best global optimization strategy is to augment the original fitness values with the occupancy penalties. Fig. S7 shows that these observations are statistically significant. Occupancy penalties dramatically improve SA's performance when it is run with the suboptimal values of the initial temperature (Fig. 5A, Fig. S8). In fact, the results are better than those with the higher, SA-optimal values of \(T_{i}\): \(\langle\mathcal{F}_{\text{best}}\rangle=-0.005\) for Rastrigin (\(T_{i}=0.02\), \(T_{f}=0.003\), \(\overline{R}=0.2\)), \(\langle\mathcal{F}_{\text{best}}\rangle=0.000\) for Ackley (\(T_{i}=0.01\), \(T_{f}=0.001\), \(\overline{R}=0.25\)), \(\langle\mathcal{F}_{\text{best}}\rangle=-0.015\) for Griewank (\(T_{i}=0.01\), \(T_{f}=0.003\), \(\overline{R}=0.25\)). Thus, the best overall strategy is to run SA at very low temperatures (where it reduces to simple hill climbing), but on the modified fitness landscape. This is precisely the strategy implemented in SmartRunner. Qualitatively similar results are obtained with SHC: non-zero values of \(\overline{R}\) are preferable at higher, SHC-optimal values of \(T\) (Fig. S9); the effect is statistically significant (Fig. S10). However, as Fig. 5B and Fig. S11 demonstrate, the enhancement is especially dramatic when the values of \(T\) become very low, much lower than the SHC-optimal values explored in Fig. S9. Similar to SA, low-\(T\) runs with \(\overline{R}\neq 0\) yield the highest-quality solutions, again indicating that in the presence of occupancy penalties the best strategy is straightforward hill ascent. Finally, occupancy penalties can rescue EA from being stuck in the local maxima (Fig. 5C, Fig. S12A,C) - with the _nnb_ moveset, the population tends to condense onto a local maximum and become monomorphic. Local mutations of population members in such locally optimal states are mostly deleterious and therefore tend to get eliminated from the population. The population as a whole is therefore unable to keep exploring new states, as evidenced by the low number of function evaluations in Fig. S12B,D,F compared to the other algorithms. This drawback is fixed by making the fitness landscape adaptive with the help of the occupancy penalty. SmartRunner can be viewed as stochastic generalization of the Taboo Search (TS) - a deterministic policy in which all nearest neighbors of the current state are explored one by one and the move to the neighbor state with the best fitness is accepted [27]. To prevent backtracking to already-explored states, a list of 'taboo' states is kept to which jumps are forbidden; the length of this list, \(L_{\rm tabu}\), is a hyperparameter. 
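A minimal version of this deterministic policy, written for a generic discrete landscape, is sketched below; the fixed-length tabu list is held in a deque, and the neighbor enumeration is assumed to be supplied by the caller. This is the textbook scheme for illustration, not the Solid implementation benchmarked here.

```python
import math
from collections import deque

def taboo_search(fitness, neighbors, x0, n_steps=1000, L_tabu=500):
    """Textbook Taboo Search: always move to the best non-tabu neighbor,
    keeping the most recently visited states on a fixed-length tabu list."""
    tabu = deque([x0], maxlen=L_tabu)
    x, best = x0, x0
    for _ in range(n_steps):
        candidates = [y for y in neighbors(x) if y not in tabu]
        if not candidates:                 # every neighbor is tabu: stop
            break
        x = max(candidates, key=fitness)   # best neighbor, even if downhill
        tabu.append(x)
        if fitness(x) > fitness(best):
            best = x
    return best

# Toy example: a 1D multimodal fitness on the integers in [0, 400].
fit = lambda k: -0.001 * (k - 300) ** 2 + math.cos(0.3 * k)
nbrs = lambda k: [max(0, k - 1), min(400, k + 1)]
print(taboo_search(fit, nbrs, x0=0))
```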
By construction, TS avoids visiting neighbor states more than once and is always guaranteed to find the best neighboring state to jump into; however, we expect it to lose efficiency in systems characterized by very large numbers of neighbors, since all of these neighbors have to be tried and most of them do not correspond to good solutions. In contrast, SmartRunner can make a decision to accept a move before all neighbors are explored, based on the move/neighbor statistics collected up to that point. In any event, with \(L_{\rm tabu}\geq 400\) TS demonstrates high performance on all three test functions, requiring a relatively low number of function evaluations to achieve this result (Fig. S13). Interestingly, the situation is reversed with the _spmut_ moveset - with SA and SHC, better performance is achieved when \(\overline{R}=0\) (Fig. S14). This observation is not surprising given the somewhat special nature of the test functions we consider. As it turns out, with Rastrigin and Ackley functions it is possible to use TS to reach the global maximum in exactly 4 steps, regardless of the initial state. Each step sets one of the coordinates to 0.0 until the global maximum is attained (see Fig. S15A,B for representative trajectories). With the Griewank function, optimization depends on the initial conditions and, as a rule, additional steps are required since the first 4 steps only bring the system to the vicinity of the global maximum (Fig. S15C). Thus, with this landscape structure it is not beneficial to jump to a new node before all neighbors of the current node are explored. In other words, premature jumping between nodes simply resets the search. In this case, \(\overline{R}=0\) is indeed preferable and correctly identified by our methods; however, this is a special case which we do not expect to hold true in general.

Figure 5: **Occupancy penalties enhance performance of global optimization algorithms: _nnb_ moveset.** (A) A scan over ESA hyperparameters (linear cooling schedule) for the Rastrigin function: the expected rate of fitness gain per step \(\overline{R}\) and the initial temperature \(T_{i}\) (the final temperature is set to \(T_{f}=0.001\)). See Fig. S8 for details and for the Ackley and Griewank functions. (B) A scan over ESHC hyperparameters for the Ackley function: the expected rate of fitness gain per step \(\overline{R}\) and the temperature \(T\). See Fig. S11 for details and for the Rastrigin and Griewank functions. (C) A scan over EEA hyperparameters for the Griewank function: the expected rate of fitness gain per step \(\overline{R}\) and the mutation rate \(\mu\) (the crossover rate is set to \(r_{x}=0.2\) and the population size to \(N_{\rm pop}=50\)). See Fig. S12 for details and for the Rastrigin and Ackley functions. Each heatmap cell represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\rm tot}=10^{5}\) steps each and randomly chosen starting states.

**Comparison of global optimization algorithms.** Different global optimization algorithms use different notions of a single step. While SA, SHC and SmartRunner define a single stochastic trial as an elementary step, a TS step involves querying all nearest neighbors, and an EA step involves rebuilding a population subjected to crossover and mutation.
To ensure a fair comparison, we have allocated a fixed number of _novel_ fitness function evaluations to each algorithm and observed the resulting performance (SA had to be left out because its performance depends on the cooling schedule, such that stopping SA at \(T>T_{f}\) puts it at an unfair disadvantage). We note that with the _nnb_ moveset, SmartRunner consistently shows the best performance (Fig. 6). As expected, the worst performance with this moveset is exhibited by EA as it is unable to utilize more and more function evaluations to find states with better fitness - in fact, EA often uses fewer function evaluations than was allocated to it, terminating instead when the maximum number of steps is exceeded.

**SmartRunner tests on SK spin glass and Kauffman's NK models: quenched disorder.** Next, we have turned to two challenging discrete-state systems with complex fitness landscapes. One is the Sherrington-Kirkpatrick (SK) spin glass model[12], with \(N\) binary (\(\pm 1\)) spins coupled by random interactions that are independently sampled from the standard Gaussian distribution (SI Methods). The other is Kauffman's NK model used in evolutionary theory[39, 40], in which each of the \(N\) binary \((0,1)\) sites interacts with \(0\leq K\leq N-1\) other sites chosen by random sampling. The fitness function for a given binary sequence is a sum over \(N\) single-site contributions; each single-site contribution is obtained by sampling from the standard uniform distribution (SI Methods). The model parameter \(K\) serves to tune the degree of landscape ruggedness: the number of local maxima increases rapidly as \(K\) goes up. In both systems, the moveset consists of changing the binary state at a single site. Thus, each of the \(2^{N}\) states has \(N\) nearest neighbors. First, we focus on systems with quenched disorder, where random parameters of the system are generated only once and subsequently used in all comparisons of global optimization algorithms. We have carried out a SmartRunner hyperparameter search for the SK model with \(N=200\) spins (Fig. S16). We find that among all of the values tried, \(\alpha=0.1\) is clearly preferable (Fig. S16A,C), with a statistically significant improvement in performance (Fig. S16E). On the other hand, the dependence on \(\overline{R}_{\rm init}\) is very weak. As expected, the number of function evaluations increases with \(\alpha\) as novel states are explored more frequently (Fig. S16B,D). The same conclusions are reached with the NK model with \(N=200\) sites and \(K=8\) couplings per site, with \(\alpha=0.1\) identified again as the optimal value (Fig. S17). We have also explored the \(\alpha\) settings in models with 200, 500, and 1000 spins/sites (Fig. S18). While \(\alpha=0.1\) is confirmed as the optimal choice for \(N=200\) models, \(\alpha=0.01\) is preferable for \(N=500,1000\). Finally, we carry out a side-by-side comparison of the performance of all 5 algorithms: SR, TS, SA, SHC, and EA on the SK models (Table 1) and the NK models (Table 2). To mimic a realistic situation in which computer resources are a limiting factor and the fitness landscapes are exceedingly large, we have chosen a single set of hyperparameter settings for each algorithm. Thus, SR was run with \(\alpha=0.01\), even though the above analysis shows that \(\alpha=0.1\) is in fact a better choice for \(N=200\). The only exception to this rule is SHC, where we carried out a mini-scan over the values of \(T\) to optimize performance.
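For concreteness, the sketch below builds one quenched realization of each landscape following the SI definitions (Eqs. (4) and (6)): SK fitness per spin with Gaussian couplings, NK fitness per site with \(K\) random interaction partners, and the single spin/site flip moveset. The array layout, the sign convention (fitness taken as minus the energy, consistent with the positive values in Table 1), and the seed handling are choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- SK spin glass, SI Eq. (4): H = -(1/sqrt(N)) * sum_{i<j} J_ij s_i s_j ---
N_sk = 200
J = np.triu(rng.standard_normal((N_sk, N_sk)), k=1)   # couplings for i < j only

def sk_fitness(s):
    """Fitness taken as minus the SK energy per spin (higher is better), s_i = +/-1."""
    return (s @ J @ s) / (np.sqrt(N_sk) * N_sk)

def flip_spin(s, i):
    """Single-spin-flip move for the SK model."""
    t = s.copy()
    t[i] = -t[i]
    return t

# --- Kauffman's NK model, SI Eq. (6): K random partners per site ---
N_nk, K = 200, 8
partners = np.array([rng.choice(np.delete(np.arange(N_nk), mu), size=K, replace=False)
                     for mu in range(N_nk)])
tables = rng.random((N_nk, 2 ** (K + 1)))   # one uniform lookup table per site

def nk_fitness(s):
    """NK fitness per site, s_mu in {0, 1}."""
    total = 0.0
    for mu in range(N_nk):
        idx = 0
        for b in np.concatenate(([s[mu]], s[partners[mu]])):
            idx = 2 * idx + int(b)          # binary index into the lookup table
        total += tables[mu, idx]
    return total / N_nk

def flip_site(s, i):
    """Single-site move for the NK model (0 <-> 1)."""
    t = s.copy()
    t[i] = 1 - t[i]
    return t

s_spin = rng.choice(np.array([-1, 1]), size=N_sk)
s_seq = rng.integers(0, 2, size=N_nk)
print(sk_fitness(s_spin), nk_fitness(s_seq))
```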
All algorithms except for SmartRunner were run on the original landscapes without occupancy penalties. For SA, \(T_{f}\simeq 0\) should be reasonable, while \(T_{i}=1.0\) is dictated by the overall scale of the landscape. For EA, a 3D scan over \(\mu\), \(r_{x}\), \(N_{\rm pop}\) is not feasible, so that we had to settle for 'typical' values. Thus, more complex algorithms with several hyperparameters are implicitly penalized, as they are likely to be in a Figure 6: **Comparison of the algorithms conditioned on the number of unique fitness function calls: _nnb_ moveset. The maximum number of allowed function calls was set to \(\{6250,12500,25000,37500,50000\}\) for all algorithms. In panels A-C we show the best fitness values found in each run averaged over 100 independent runs with randomly chosen starting states. (A) Rastrigin function. (B) Ackley function. (C) Griewank function. (D) Same as C but for the globally best fitness values obtained over all 100 runs instead of the averages. EA – Evolutionary Algorithm (\(\mu=0.1\), \(r_{x}=0.1\), \(N_{\rm pop}=50\)), SHC – Stochastic Hill Climbing (\(T=0.5\)), SR – SmartRunner (\(l_{\rm max}=2\), \(\overline{R}^{\rm init}=0.1\), \(\alpha=1.0\) for Rastrigin and Ackley, \(\alpha=10.0\) for Griewank), TS – Taboo Search (\(L_{\rm tabu}=500\)).** realistic research setting. We find that SmartRunner ranks the highest overall in this competition. For the SK models, it is in the second place for \(N=200\) and the first place for \(N=500,1000\) if judged by the average of all solutions (Table 1). If judged by the globally best solution, the SmartRunner shares the first place with SHC for \(N=200\) and again takes the first place for \(N=500,1000\). Similar results are seen with the NK model (Table 2): by both the average and the globally \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \multicolumn{5}{c}{\(\langle\mathcal{F}_{best}\rangle\pm\sigma_{\mathcal{F}_{best}}\)} \\ \hline \(\mathbf{N_{s}}\) & **SR** & **TS** & **SA** & **SHC** & **EA** \\ \hline **200** & \(0.715\pm 0.012\) & \(0.711\pm 0.009\) & \(\mathbf{0.723\pm 0.003}\) & \(0.719\pm 0.009\) & \(0.688\pm 0.020\) \\ \hline **500** & \(\mathbf{0.746\pm 0.008}\) & \(0.722\pm 0.009\) & \(0.710\pm 0.010\) & \(0.735\pm 0.008\) & \(0.668\pm 0.010\) \\ \hline **1000** & \(\mathbf{0.733\pm 0.005}\) & \(0.684\pm 0.010\) & \(0.492\pm 0.015\) & \(0.557\pm 0.006\) & \(0.598\pm 0.009\) \\ \hline \multicolumn{5}{c}{\(\max(\mathcal{F}_{best})\)} \\ \hline \(\mathbf{N_{s}}\) & **SR** & **TS** & **SA** & **SHC** & **EA** \\ \hline **200** & \(0.727\) & \(0.725\) & \(\mathbf{0.727}\) & \(\mathbf{0.727}\) & \(0.717\) \\ \hline **500** & \(\mathbf{0.757}\) & \(0.734\) & \(0.729\) & \(0.744\) & \(0.684\) \\ \hline **1000** & \(\mathbf{0.740}\) & \(0.704\) & \(0.517\) & \(0.566\) & \(0.614\) \\ \hline \end{tabular} \end{table} Table 1: **Comparison of the algorithm performance on the SK model.**\(\langle\mathcal{F}_{best}\rangle\) is the average of the best-fitness values found in each run, averaged over 10 independent runs with randomly chosen starting states; \(\sigma_{\mathcal{F}_{best}}\) is the corresponding standard deviation; \(\max(\mathcal{F}_{best})\) is the largest of the best-fitness values; \(\mathrm{N_{s}}\) is the number of spins. 
SR – SmartRunner (\(l_{\max}=2\), \(\overline{R}^{\mathrm{init}}=0.01\), \(\alpha=0.01\)), TS – Taboo Search (\(L_{\mathrm{tabu}}=5000\)), SA – Simulated Annealing (\(T_{i}=0.01\), \(T_{f}=0.001\), linear cooling schedule), SHC – Stochastic Hill Climbing (\(T=10^{-3}\)), EA – Evolutionary Algorithm (\(\mu=0.2\), \(r_{x}=0.5\), \(N_{\mathrm{pop}}=100\)). In SR, SA and SHC the total number of steps \(l_{\mathrm{tot}}=1.5\times 10^{6},10^{6},5\times 10^{5}\) for the models with 200, 500 and 1000 spins, respectively. In TS, the total number of steps is rescaled by the number of nearest neighbors (\(l_{\mathrm{tot}}=7.5\times 10^{3},2\times 10^{3},5\times 10^{2}\)); in EA, the total number of steps is rescaled by the population size (\(l_{\mathrm{tot}}=1.5\times 10^{4},10^{4},5\times 10^{3}\)). The best result in each row is highlighted in boldface. For consistency, all runs employed a single random realization of the SK model (quenched disorder). \begin{tabular}{|l|l|l|l|l|l|} \multicolumn{5}{c}{\(\langle\mathcal{F}_{best}\rangle\pm\sigma_{\mathcal{F}_{best}}\)} \\ \hline \(\mathbf{N_{s}}\) & **SR** & **TS** & **SA** & **SHC** & **EA** \\ \hline **200** & \(0.778\pm 0.005\) & \(0.772\pm 0.005\) & \(0.771\pm 0.005\) & \(\mathbf{0.785\pm 0.005}\) & \(0.751\pm 0.008\) \\ \hline **500** & \(\mathbf{0.776\pm 0.003}\) & \(0.748\pm 0.007\) & \(0.675\pm 0.005\) & \(0.702\pm 0.003\) & \(0.727\pm 0.007\) \\ \hline **1000** & \(\mathbf{0.770\pm 0.001}\) & \(0.732\pm 0.004\) & \(0.601\pm 0.004\) & \(0.616\pm 0.002\) & \(0.700\pm 0.005\) \\ \hline \end{tabular} \begin{tabular}{|l|l|l|l|l|l|} \multicolumn{5}{c}{\(\max(\mathcal{F}_{best})\)} \\ \hline \(\mathbf{N_{s}}\) & **SR** & **TS** & **SA** & **SHC** & **EA** \\ \hline **200** & \(0.789\) & \(0.781\) & \(0.782\) & \(\mathbf{0.795}\) & \(0.761\) \\ \hline **500** & \(\mathbf{0.784}\) & \(0.758\) & \(0.683\) & \(0.706\) & \(0.739\) \\ \hline **1000** & \(\mathbf{0.772}\) & \(0.738\) & \(0.606\) & \(0.620\) & \(0.708\) \\ \hline \end{tabular} \end{table} Table 2: **Comparison of the algorithm performance on the NK model.**\(\mathrm{N_{s}}\) is the number of sites (each site has 8 randomly chosen intra-sequence couplings per site); all other quantities and parameter settings are as in Table 1. For consistency, all runs employed a single random realization of the NK model (quenched disorder). best measures, SmartRunner is second for the \(N=200\) model and first for the larger models with \(500\) and \(1000\) sites. The somewhat weaker performance of SmartRunner on the \(N=200\) systems could be improved by switching to \(\alpha=0.1\) (Figs. S16,S17). However, this would give SmartRunner an unfair advantage in the context of this competition, in which every algorithm was run with a single reasonable set of hyperparameters. **Prediction of the SK ground state energies averaged over disorder.** Next, we have investigated the ability of SmartRunner to reproduce finite-size corrections to average ground state energies of the SK model (Fig. 7). The ground state energy per spin averaged over random spin couplings is known theoretically to be \(-0.7633\) in the \(N\rightarrow\infty\) limit of the SK model,[42] with the \(2/3\) scaling exponent for finite-size corrections (i.e., \(\langle E_{best}(N)\rangle\sim N^{-2/3}\)) available from both theoretical[43] and numerical[41] investigations. This provides a baseline against which SmartRunner's ability to find the global minima of the SK energy can be judged. 
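The extrapolation described here amounts to a straight-line fit in the variable \(N^{-2/3}\); a minimal sketch is shown below, using numpy.polyfit and synthetic placeholder energies lying exactly on the quoted line (\(m=0.7047\), \(\langle E_{best}(\infty)\rangle=-0.7633\)), since the actual per-\(N\) averages are shown in Fig. 7 rather than listed in the text.

```python
import numpy as np

# Placeholder data: system sizes and mean best energies per spin. The real
# values are plotted in Fig. 7; these stand-ins lie exactly on the fitted line.
N = np.array([50, 100, 150, 200, 250, 300, 350, 400])
E_mean = -0.7633 + 0.7047 * N ** (-2.0 / 3.0)

# Fit <E_best(N)> = E_inf + m * N^(-2/3) as a straight line in x = N^(-2/3).
x = N ** (-2.0 / 3.0)
m, E_inf = np.polyfit(x, E_mean, deg=1)
print(f"slope m = {m:.4f}, extrapolated E(inf) = {E_inf:.4f}")
```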
We find that the average ground-state energy per spin predicted by SmartRunner is reasonably close to the expected straight line in Fig. 7, although there are statistically significant deviations for the three largest systems (\(N=300,350,400\)), indicating that SmartRunner does not quite reach the true ground states in these cases. Overall, SmartRunner's performance on these systems is less reliable than that of Extremal Optimization, a heuristic algorithm specifically adapted to the SK model and requiring a simplified probabilistic model for spin couplings [44, 41].

Figure 7: **Finite-size corrections to the ground state energy per spin in the SK model.** Cyan dots: average of the best-energy values found by SmartRunner on SK landscapes with \(N=50,100,150,200,250,300,350,400\) spins. For each value of \(N\), \(18\) to \(23\) independent runs with \(l_{\rm max}=2,\overline{R}^{\rm init}=0.01\), \(\alpha=0.1\) were carried out, each one with randomly generated spin couplings and starting from a random spin configuration. The total number of steps \(l_{\rm tot}\) ranged from \(4.5\times 10^{6}\) to \(3.0\times 10^{7}\) depending on \(N\). Error bars represent the errors of the mean. Blue dots: numerical results for the finite-size corrections to the SK ground state energy reported by S. Boettcher using the Extremal Optimization algorithm (Table 1 in Ref.[41]). Dashed green line: a linear fit to Boettcher’s ground state energies yielding \(\langle E_{best}(N)\rangle=\langle E_{best}(\infty)\rangle+mN^{-2/3}\), where \(m=0.7047\) is the slope and \(\langle E_{best}(\infty)\rangle=-0.7633\) is the asymptotic Parisi energy for the infinite system.[42] All energies are divided by the number of spins \(N\) to produce intensive quantities.

## Discussion and Conclusion

In this work, we have developed a novel approach to global optimization called SmartRunner. Instead of relying on qualitative similarities with physical, chemical or biological systems, SmartRunner employs an explicit probabilistic model for accepting or rejecting a move on the basis of the immediate previous history of the optimization process. The key quantity guiding SmartRunner decisions is \(p_{f}\), the probability of finding a higher-fitness target in the next random trial. This probability has nearly universal asymptotics and can be effectively represented by a function that depends only on \(n\), the number of previously rejected attempts to change the current state of the system. In other words, the dependence of SmartRunner's behavior on such details of the system as the number of nearest neighbors and the transition rates is fairly weak, making our approach applicable to a wide range of objective functions and movesets. Overall, SmartRunner can be viewed as an adaptive search policy designed to maximize fitness gain per step. Interestingly, SmartRunner's global optimization policy amounts to hill ascent on a fitness landscape modified with an easily computed adaptive occupancy penalty. The occupancy penalty makes rejecting moves less and less favorable as the number of unsuccessful attempts to change the current state grows. Ultimately, one of the nearest neighbors is accepted even if the step is deleterious on the original fitness landscape. This behavior allows SmartRunner to climb out of local basins of attraction (Fig. 2). In principle, the adaptive fitness landscape given by Eq. (28) can be explored using any global optimization algorithm.
We have tested SmartRunner's performance on a standard set of functions routinely used to evaluate the performance of global optimization algorithms [29]. These 4D functions are characterized by numerous local maxima that make it challenging to find a single global maximum. We find that SmartRunner exhibits the highest fitness gain per novel fitness function evaluation compared to three other state-of-the-art gradient-free algorithms (Fig. 6). This is especially important in situations where fitness function calls are computationally expensive. Interestingly, when adaptive fitness landscapes were given as input to other global optimization algorithms, the best results were obtained when the other algorithms' policy for accepting and rejecting moves closely resembled the SmartRunner policy of hill climbing on the modified fitness landscape (Fig. 5). For example, with simulated annealing the globally best strategy was to set the initial temperature to a very low value, essentially reducing simulated annealing to hill ascent. Finally, we observe that the SmartRunner approach is flexible enough to adapt to substantial changes in the moveset, from \(\mathcal{O}(10^{0})\) local moves to \(\mathcal{O}(10^{2}-10^{3})\) random updates of a single randomly chosen coordinate (Figs. 3,4). We have also tested SmartRunner on two challenging models with long-range couplings and multiple local minima or maxima: the Sherrington-Kirkpatrick spin glass model [12] and the Kauffman's NK model of fitness [39, 40]. In systems with quenched disorder, SmartRunner performs very well compared with four other general-purpose global optimization algorithms (Tables 1,2). It is also fairly reliable in locating ground-state energies averaged over disorder in the SK model, although the results are inferior to those obtained by Extremal Optimization, a heuristic algorithm specifically adapted to finding the ground states in the SK model [41, 44] (Fig. 7). In summary, SmartRunner implements a novel global optimization paradigm which offers a viable alternative to current algorithms. The SmartRunner approach described here works on discrete or discretized fitness landscapes and does not make use of the gradient of the objective function in implementing its stochastic policy. In the future, we intend to adapt SmartRunner to carry out global optimization on continuous landscapes where the gradient of the objective function can be computed efficiently. Such optimization will be of great interest in modern machine learning. For example, training artificial neural networks relies on the differentiability of objective functions and optimization methods based on stochastic gradient descent [6, 7], which may get trapped in local minima. ## Software Availability The Python3 code implementing SmartRunner and four other gradient-free global optimization algorithms discussed here is available at [https://github.com/morozov22/SmartRunner](https://github.com/morozov22/SmartRunner). ## Acknowledgements We gratefully acknowledge illuminating discussions with Stefan Boettcher. JY and AVM were supported by a grant from the National Science Foundation (NSF MCB1920914). ## References * [1] Onuchic, J. N. and Wolynes, P. G. (2004) _Curr. Op. Struct. Biol._**14**, 70-75. * [2] Dill, K. A., Ozkan, S. B., Shell, M. S., and Weikl, T. R. (2008) _Ann. Rev. Biophys._**37**, 289-316. * [3] Crow, J. F. and Kimura, M. (1970) An Introduction to Population Genetics Theory, Harper and Row, New York. * [4] Kimura, M. 
(1983) The Neutral Theory of Molecular Evolution, Cambridge University Press, Cambridge, UK. * [5] Gillespie, J. (2004) Population Genetics: A Concise Guide, The Johns Hopkins University Press, Baltimore, USA. * [6] Goodfellow, I., Bengio, Y., and Courville, A. (2016) Deep Learning, MIT Press, Cambridge, MA. * [7] Mehta, P., Bukov, M., Wang, C.-H., Day, A. G., Richardson, C., Fisher, C. K., and Schwab, D. J. (2019) _Physics Reports_**810**, 1-124. * [8] Zwanzig, R., Szabo, A., and Bagchi, B. (1992) _Proc. Natl. Acad. Sci. USA_**89**, 20-22. * [9] Bryngelson, J., Onuchic, J., Socci, N., and Wolynes, P. (1995) _Proteins: Struc. Func. Genet._**21**, 167-195. * [10] Dill, K. A. and Chan, H. (1997) _Nat. Struct. Mol. Biol._**4**, 10-19. * [11] Danilova, M., Dvurechensky, P., Gasnikov, A., Gorbunov, E., Guminov, S., Kamzolov, D., and Shibaev, I. Recent Theoretical Advances in Non-Convex Optimization pp. 79-163 Springer International Publishing Cham, Switzerland (2022). * [12] Sherrington, D. and Kirkpatrick, S. (1975) _Phys. Rev. Lett._**35**, 1792-1796. * [13] Kirkpatrick, S., Gelatt, Jr., C., and Vecchi, M. (1983) _Science_**220**, 671-680. * [14] Cohn, H. and Fielding, M. (1999) _SIAM J. Optim._**9**, 779-802. * [15] Hukushima, K. and Nemoto, K. (1996) _J. Phys. Soc. Jpn._**65**, 1604-1608. * [16] Swendsen, R. H. and Wang, J.-S. (1986) _Phys. Rev. Lett._**57**, 2607-2609. * [17] Wang, W., Machta, J., and Katzgraber, H. G. (2015) _Phys. Rev. E_**92**, 063307. * [18] Marinari, E. and Parisi, G. (1992) _Europhys. Lett._**19**, 451-458. * [19] Wang, W., Machta, J., and Katzgraber, H. G. (2015) _Phys. Rev. E_**92**, 013303. * [20] Goldberg, D. (1989) Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, Reading, MA. * [21] Vikhar, P. A. (2016) In 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC) : pp. 261-265. * [22] Slowik, A. and Kwasnicka, H. (2020) _Neur. Comp. Appl._**32**, 12363-12379. * [23] Geem, Z. W., Kim, J. H., and Loganathan, G. V. (2001) _Simulation_**76(2)**, 60-68. * [24] Lee, K. S. and Geem, Z. W. (2005) _Comp. Meth. Appl. Mech. Eng._**194**, 3902-3933. * [25] Kennedy, J. and Eberhart, R. (1995) _Proc. IEEE Intern. Conf. Neur. Netw._**4**, 1942-1948. * [26] Eberhart, R. and Kennedy, J. (1995) In MHS'95. Proceedings of the Sixth International Symposium on Micro Machine and Human Science : pp. 39-43. * [27] Cvijovic, D. and Klinowski, J. (1995) _Science_**267**, 664-666. * [28] Juels, A. and Wattenberg, M. (1995) In D. Touretzky, M.C. Mozer, and M. Hasselmo, (ed.), Advances in Neural Information Processing Systems, volume **8**, Cambridge, MA: MIT Press. pp. 430-436. * [29] Torn, A. and Zilinskas, A. (1989) Global Optimization, Springer-Verlag, Berlin, Germany. * [30] Berg, B. (1993) _Nature_**361**, 708-710. * [31] Hesselbo, B. and Stinchcombe, R. (1995) _Phys. Rev. Lett._**74**, 2151-2155. * [32] Dittes, F.-M. (1996) _Phys. Rev. Lett._**76**, 4651-4655. * [33] Barhen, J., Protopopescu, V., and Reister, D. (1997) _Science_**276**, 1094-1097. * [34] Wenzel, W. and Hamacher, K. (1999) _Phys. Rev. Lett._**82**, 3003-3007. * [35] Hamacher, K. (2006) _Europhys. Lett._**74**, 944-950. * [36] Bishop, C. M. (2006) Pattern Recognition and Machine Learning, Springer, New York, NY. * [37] Kion-Crosby, W. B. and Morozov, A. V. (2018) _Phys Rev Lett_**121**, 038301. * [38] Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A., and Teller, E. (1953) _J. Chem. Phys._**21**, 1087-1092. 
* [39] Kauffman, S. A. and Weinberger, E. D. (1989) _J. Theor. Biol._**141**, 211-245. * [40] Kauffman, S. (1993) The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press, New York. * [41] Boettcher, S. (2005) _Eur. Phys. J. B_**46**, 501-505. * [42] Parisi, G. (1980) _J. Phys. A: Math. Gen._**13**, L115-L121. * [43] Parisi, G., Ritort, F., and Slanina, F. (1993) _J. Phys. A: Math. Gen._**26**, 3775-3789. * [44] Boettcher, S. (2010) _J. Stat. Mech._**2010**, P07002. # Supplementary Materials: An adaptive Bayesian approach to gradient-free global optimization Jianneng Yu\({}^{1,2}\) and Alexandre V. Morozov\({}^{1,2,}\) \({}^{1}\) Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854, USA \({}^{2}\) Center for Quantitative Biology, Rutgers University, Piscataway, NJ 08854, USA Corresponding author: [email protected] ## Supplementary Methods ### Standard test functions We employ 3 standard test functions [1], including the 4D Rastrigin function: \[\mathcal{F}(\vec{\mathbf{x}})=-4-\sum_{i=1}^{4}\big{(}x_{i}^{2}-\cos(18x_{i}) \big{)},\ x_{i}\in[-5,5],\forall i, \tag{1}\] the 4D Ackley function: \[\mathcal{F}(\vec{\mathbf{x}})=-20-e+20\exp\Big{(}-0.2\sqrt{\frac{1}{4}\sum_{i =1}^{4}x_{i}^{2}}\Big{)}+\exp\Big{(}\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi x_{i}) \Big{)},\ x_{i}\in[-32.8,32.8],\forall i, \tag{2}\] and the 4D Griewank function: \[\mathcal{F}(\vec{\mathbf{x}})=-1-\frac{1}{4000}\sum_{i=1}^{4}x_{i}^{2}+\prod_ {i=1}^{4}\cos\big{(}\frac{x_{i}}{\sqrt{i}}\big{)},\ x_{i}\in[-600,600],\forall i \tag{3}\] All three functions have multiple local maxima and a unique global maximum located at \(\vec{\mathbf{x}}=\vec{\mathbf{0}}\) (\(\mathcal{F}(\vec{\mathbf{0}})=0\)). The fitness landscapes are discretized, with periodic boundary conditions and \(\Delta x=(0.05,0.2,1.0)\) steps in all four directions for Rastrigin, Ackley and Griewank functions, respectively. ### Sherrington-Kirkpatrick spin glass model Consider a system of \(N\) spins which can point either up or down: \(s_{i}=\pm 1\), \(i=1\dots N\). The Sherrington-Kirkpatrick (SK) model is defined by the Hamiltonian [2]: \[\mathcal{H}_{\text{SK}}(s)=-\frac{1}{\sqrt{N}}\sum_{1\leq i<j\leq N}J_{ij}s_{ i}s_{j}, \tag{4}\] where \(s=(s_{1},s_{2},\dots,s_{N})\) and \(J_{ij}\) are random spin couplings independently sampled from a standard Gaussian distribution: \[P(J_{ij})=\frac{1}{\sqrt{2\pi}}e^{-\frac{J_{ij}}{2}}. \tag{5}\] The SK model is characterized by a large number of local minima and therefore presents a challenging problem to global optimization algorithms. The ground state energy per spin of the SK model (the global minimum) asymptotically approaches \(\mathcal{H}_{\text{SK}}/N\to-0.7633\) in the \(N\to\infty\) limit [3]. All SK energies are divided by the number of spins \(N\) to produce intensive quantities. ### Kauffman's NK model In Kauffman's NK model [4, 5], each of the \(N\) sites in the gene (or genes in the genome) interacts with \(K\) other sites chosen by random sampling. The fitness of the genotype \(s=(s_{1},s_{2},\dots,s_{N})\) (\(s_{i}=\{0,1\}\), \(i=1\dots N\)) is given by \[\mathcal{F}_{\text{NK}}(s)=\sum_{\mu=1}^{N}\mathcal{F}_{\mu}(s_{\mu},s_{n_{1} (\mu)},\dots,s_{n_{K}(\mu)}), \tag{6}\] where \(n_{1}(\mu),\ldots,n_{K}(\mu)\) are \(K\) randomly-chosen interaction partners of the site \(s_{\mu}\). 
The single-site fitnesses \({\cal F}_{\mu}\) are obtained by sampling from a standard uniform distribution (\(U(0,1)\)); each combination of the \(2^{K+1}\) possible states of the argument corresponds to an independent sample from this distribution. When \(K=0\), the NK landscape becomes fully additive. Because in this limit the landscape is smooth and has a single maximum, it is sometimes called the "Mount Fuji" model.[6] The amount of landscape ruggedness can be tuned by increasing \(K\) to the maximum value of \(N-1\). With \(K=N-1\), the fitnesses of different sequences are uncorrelated; this model is called the "House of Cards"[7] due to the unpredictable fitness effects of mutations. Numerous results are available for the statistical mechanics of NK landscapes.[8, 9, 10, 11, 4] In particular, in the \(K=N-1\) limit the average number of local maxima is \(2^{L}/(L+1)\) for the binary alphabet considered here.[12] As with the SK model, we use NK models with quenched disorder - for given values of \(N\) and \(K\), all interaction partners and single-site fitnesses are generated once and then used in all subsequent comparisons of global optimization algorithms. All fitnesses are divided by the number of sites \(N\) to produce intensive quantities. ### Alternative global optimization algorithms We have implemented four alternative global optimization algorithms: Simulated Annealing (SA), Stochastic Hill Climbing (SHC), Evolutionary Algorithm (EA), and Taboo Search (TS) using Solid, a gradient-free Python optimization package ([https://github.com/100/Solid](https://github.com/100/Solid)). The modified Solid code for these four algorithms, implementing fitness functions augmented with the occupancy penalties, is available as part of the SmartRunner GitHub distribution package ([https://github.com/morozov22/SmartRunner](https://github.com/morozov22/SmartRunner)). SA, SHC and TS search strategies were left exactly as implemented in Solid, while in EA fitnesses of all population members were made positive by subtracting the minimum fitness in the current population; these fitnesses were then used to compute selection probabilities. Apart from this change, Solid's EA crossover and mutation strategies remained the same as in the original package. ## Supplementary Figures Figure S2: **SmartRunner exploration of the Ackley and Griewank test functions: _nnb_ moveset.** (A) A scan over SmartRunner hyperparameters (\(l_{\text{max}}=2\), \(nnb\) moveset) for the Ackley function: the initial value of the expected rate of fitness gain per step \(\overline{R}_{\text{init}}\) and the level of optimism \(\alpha\). Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\text{tot}}=10^{5}\) steps each and randomly chosen starting states. (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each run. (C) Same as A but for the Griewank function. (D) Same as B but for the Griewank function. (E) Same as C but with the globally best fitness value obtained over all 50 runs shown instead of the average. (F) Same as D but with the number of fitness function evaluations corresponding to the globally best run from E shown instead of the average over 50 runs. Figure S3: **SmartRunner exploration of the Ackley and Griewank test functions: _spmut_ moveset.** Same as Fig. S2A-D (including SmartRunner settings) but with the _spmut_ moveset. 
Figure S4: **Kolmogorov-Smirnov (KS) p-value analysis of the SmartRunner hyperparameter settings: Rastrigin test function.** (A) One-sided KS tests were used to investigate the significance of the differences between two distributions of best-fitness values: the heatmap cell with the highest average of best-fitness values (over 50 independent SmartRunner runs) vs. every other cell. Low p-values (\(<0.05\)) indicate that the cell with the highest average has a significantly better distribution of best-fitness values than the other cell. High p-values (\(\geq 0.05\)) indicate that the cell with the highest average has a distribution of best-fitness values that is either indistinguishable from, or worse than the distribution in the other cell. SmartRunner was used with the \(nnb\) moveset. (B) Same as A but with the \(spmut\) moveset. Figure S5: **Kolmogorov-Smirnov (KS) p-value analysis of the SmartRunner \(l_{\max}\) hyperparameter setting.** (A) Rastrigin test function. One-sided KS tests were used to investigate the significance of the differences between distributions of best-fitness values yielded by SmartRunner runs with \(l_{\max}=3\) and \(l_{\max}=2\). Each KS test compared two best-fitness distributions with the same values of \(\alpha\) and \(\bar{R}_{\mathrm{init}}\). Low p-values (\(<0.05\)) indicate that \(l_{\max}=3\) runs have yielded a significantly better distribution of best-fitness values than the runs with \(l_{\max}=2\). High p-values (\(\geq 0.05\)) indicate that \(l_{\max}=3\) runs have yielded a distribution of best-fitness values that is either indistinguishable from, or worse than the distribution with \(l_{\max}=2\). (B) Same as A but for the Ackley test function. (C) Same as A but for the Griewank test function. All runs employed the \(nnb\) moveset. Figure S6: **Enhanced simulated annealing (ESA) hyperparameter scan:**_nnb_ **moveset.** (A) A scan over ESA hyperparameters (\(nnb\) moveset, linear cooling schedule) for the Rastrigin function: the expected rate of fitness gain per step \(\overline{R}\) and the initial temperature \(T_{i}\) (the final temperature is set to \(T_{f}=0.001\) in all plots). Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\text{tot}}=10^{5}\) steps each and randomly chosen starting states. (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each ESA run. (C) Same as A but for the Ackley function. (D) Same as B but for the Ackley function. (E) Same as A but for the Griewank function. (F) Same as B but for the Griewank function. Figure S7: **Kolmogorov-Smirnov (KS) p-value analysis of the ESA hyperparameter settings: _nnb_ moveset.** (A) One-sided KS tests were used to investigate the significance of the differences between two distributions of best-fitness values: the heatmap cell with the highest average of best-fitness values (over 50 independent ESA runs) vs. every other cell. Low p-values (\(<0.05\)) indicate that the cell with the highest average has a significantly better distribution of best-fitness values than the other cell. High p-values (\(\geq 0.05\)) indicate that the cell with the highest average has a distribution of best-fitness values that is either indistinguishable from, or worse than the distribution in the other cell. ESA was run on the Rastrigin function with the \(nnb\) moveset and a linear cooling schedule (Fig. S6A). (B) Same as A but for the Ackley function (Fig. S6C). 
(C) Same as A but for the Griewank function (Fig. S6E). Figure S8: **Enhanced simulated annealing (ESA) hyperparameter scan: low \(T_{i}\), _nnb_, **moveset.** Same as Fig. S6 but with suboptimally low initial temperatures \(T_{i}\) employed with a linear cooling schedule. Figure S9: **Enhanced stochastic hill climbing (ESHC) hyperparameter scan: _nnb_** **moveset.** (A) A scan over ESHC hyperparameters (\(nnb\) moveset) for the Rastrigin function: the expected rate of fitness gain per step \(\overline{R}\) and the temperature \(T\). Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\text{tot}}=10^{5}\) steps each and randomly chosen starting states. (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each ESHC run. (C) Same as A but for the Ackley function. (D) Same as B but for the Ackley function. (E) Same as A but for the Griewank function. (F) Same as B but for the Griewank function. Figure S10: **Kolmogorov-Smirnov (KS) p-value analysis of the ESHC hyperparameter settings:**_nnb_ **moveset.** (A) One-sided KS tests were used to investigate the significance of the differences between two distributions of best-fitness values: the heatmap cell with the highest average of best-fitness values (over 50 independent ESHC runs) vs. every other cell. Low p-values (\(<0.05\)) indicate that the cell with the highest average has a significantly better distribution of best-fitness values than the other cell. High p-values (\(\geq 0.05\)) indicate that the cell with the highest average has a distribution of best-fitness values that is either indistinguishable from, or worse than the distribution in the other cell. ESHC was run on the Rastrigin function with the \(nnb\) moveset (Fig. S9A). (B) Same as A but for the Ackley function (Fig. S9C). (C) Same as A but for the Griewank function (Fig. S9E). Figure S11: **Enhanced stochastic hill climbing (ESHC) hyperparameter scan: low \(T\), \(nnb\) moveset.** Same as Fig. S9 but with suboptimally low temperatures \(T\). Figure S12: **Enhanced evolutionary algorithm (EEA) hyperparameter scan: _nnb_moveset.** (A) A scan over EEA hyperparameters (\(nnb\) moveset) for the Rastrigin function: the expected rate of fitness gain per step \(\overline{R}\) and the mutation rate \(\mu\) (the crossover rate is set to \(r_{x}=0.2\) and the population size to \(N_{\rm pop}=50\) in all plots). Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\rm tot}=2000\) steps each and randomly chosen starting states (each EEA step involves evaluating fitness functions for all 50 population members). (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each EEA run. (C) Same as A but for the Ackley function. (D) Same as B but for the Ackley function. (E) Same as A but for the Griewank function. (F) Same as B but for the Griewank function. Figure S13: **Taboo search (TS) hyperparameter scan: _nnb_ moveset. (A) A scan over the length of the taboo list \(L_{\rm tabu}\) (\(nnb\) moveset) for the Rastrigin function. Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\rm tot}=12500\) steps each and randomly chosen starting states (each TS step involves evaluating fitness functions for all 8 nearest neighbors of the current node). 
(B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each TS run. (C) Same as A but for the Ackley function. (D) Same as B but for the Ackley function. (E) Same as A but for the Griewank function. (F) Same as B but for the Griewank function.** ## Appendix A Simulated Annealing Figure S15: **Representative TS fitness trajectories: _spmut_ moveset.** Shown is the \(L_{2}\) distance \(D\) between the current 4D state and the global maximum as a function of the number of steps \(\ell\) for 3 randomly chosen TS trajectories. TS was run with \(L_{\rm tabu}=100\). (A) Rastrigin function. (B) Ackley function. (C) Griewank function. Figure S16: **SmartRunner (SR) hyperparameter scan: SK model with \(\mathbf{N=200}\) spins.** (A) A scan over SmartRunner hyperparameters (\(l_{\max}=2\)): the initial value of the expected rate of fitness gain per step \(\overline{R}_{\text{init}}\) and the level of optimism \(\alpha\). Each cell in the heatmap represents the best fitness values found in each run, averaged over 100 independent runs with \(l_{\text{tot}}=1.5\times 10^{6}\) steps each and randomly chosen starting states. (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each run. (C) Same as A but with the globally best fitness value obtained over all 100 runs shown instead of the average. (D) Same as B but with the number of fitness function evaluations corresponding to the globally best run from C shown instead of the average over 100 runs. (E) One-sided KS tests used to investigate the significance of the differences between two distributions of best-fitness values: the heatmap cell with the highest average of best-fitness values in panel A vs. every other cell. Low p-values (\(<0.05\)) indicate that the cell with the highest average has a significantly better distribution of best-fitness values than the other cell. High p-values (\(\geq 0.05\)) indicate that the cell with the highest average has a distribution of best-fitness values that is either indistinguishable from, or worse than the distribution in the other cell. Figure S17: **SmartRunner (SR) hyperparameter scan: NK model with \(\mathbf{N=200}\) sites and \(\mathbf{K=8}\) nearest neighbors.** Same as Fig. S16 but for the NK model. Figure S18: **SmartRunner (SR) hyperparameter scan: SK and NK models.** The SK models have \(N=200\), \(500\) and \(1000\) spins, while the NK models have \(N=200\), \(500\) and \(1000\) sites, with \(K=8\) randomly chosen intra-sequence couplings per site. All SR runs were carried out with \(l_{\text{max}}=2\) and \(\overrightarrow{R}^{\text{init}}=0.01\), and with the total number of steps \(l_{\text{tot}}=1.5\times 10^{6},10^{6},5\times 10^{5}\) for the models with \(200\), \(500\) and \(1000\) spins/sites, respectively. (A) SK model: shown are the best fitness values found in each run, averaged over \(10\) independent runs with randomly chosen starting states. Error bars show standard deviations. (B) SK model: shown are the globally best fitness values obtained over all \(10\) runs. (C) Same as A but for the NK model. (D) Same as B but for the NK model. The dashed horizontal line is drawn at \(0.7\) to guide the eye.
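For completeness, the two disordered fitness landscapes used throughout the figures above, the SK energy of Eq. (4) and the NK fitness of Eq. (6), can be evaluated with a short NumPy sketch. The instance-generation helpers, array layouts and the bit-string encoding of neighborhood states below are our own assumptions, not code from the SmartRunner distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- SK model (Eq. 4): quenched Gaussian couplings, spins s_i = ±1 ---
def sk_couplings(N):
    J = rng.standard_normal((N, N))
    return np.triu(J, k=1)                       # keep couplings with i < j only

def sk_energy_per_spin(s, J):
    N = len(s)
    return -(s @ J @ s) / (np.sqrt(N) * N)       # H_SK / N (intensive quantity)

# --- NK model (Eq. 6): K random partners per site, single-site fitnesses ~ U(0,1) ---
def nk_instance(N, K):
    partners = np.array([rng.choice(np.delete(np.arange(N), i), K, replace=False)
                         for i in range(N)])
    tables = rng.random((N, 2 ** (K + 1)))       # one U(0,1) value per neighborhood state
    return partners, tables

def nk_fitness_per_site(s, partners, tables):
    N, K = partners.shape
    total = 0.0
    for mu in range(N):
        bits = np.concatenate(([s[mu]], s[partners[mu]]))
        idx = int("".join(map(str, bits)), 2)    # encode the 2^(K+1) neighborhood states
        total += tables[mu, idx]
    return total / N                             # intensive fitness

# Example usage with quenched disorder, as in the text
N, K = 200, 8
s_spin = rng.choice([-1, 1], size=N)
print(sk_energy_per_spin(s_spin, sk_couplings(N)))
s_bin = rng.integers(0, 2, size=N)
partners, tables = nk_instance(N, K)
print(nk_fitness_per_site(s_bin, partners, tables))
```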
2301.13817
Patch Gradient Descent: Training Neural Networks on Very Large Images
Traditional CNN models are trained and tested on relatively low resolution images (<300 px), and cannot be directly operated on large-scale images due to compute and memory constraints. We propose Patch Gradient Descent (PatchGD), an effective learning strategy that allows to train the existing CNN architectures on large-scale images in an end-to-end manner. PatchGD is based on the hypothesis that instead of performing gradient-based updates on an entire image at once, it should be possible to achieve a good solution by performing model updates on only small parts of the image at a time, ensuring that the majority of it is covered over the course of iterations. PatchGD thus extensively enjoys better memory and compute efficiency when training models on large scale images. PatchGD is thoroughly evaluated on two datasets - PANDA and UltraMNIST with ResNet50 and MobileNetV2 models under different memory constraints. Our evaluation clearly shows that PatchGD is much more stable and efficient than the standard gradient-descent method in handling large images, and especially when the compute memory is limited.
Deepak K. Gupta, Gowreesh Mago, Arnav Chavan, Dilip K. Prasad
2023-01-31T18:04:35Z
http://arxiv.org/abs/2301.13817v1
# Patch Gradient Descent: Training Neural Networks ###### Abstract Traditional CNN models are trained and tested on relatively low resolution images (\(<300\) px), and cannot be directly operated on large-scale images due to compute and memory constraints. We propose Patch Gradient Descent (PatchGD), an effective learning strategy that allows to train the existing CNN architectures on large-scale images in an end-to-end manner. PatchGD is based on the hypothesis that instead of performing gradient-based updates on an entire image at once, it should be possible to achieve a good solution by performing model updates on only small parts of the image at a time, ensuring that the majority of it is covered over the course of iterations. PatchGD thus extensively enjoys better memory and compute efficiency when training models on large scale images. PatchGD is thoroughly evaluated on two datasets - PANDA and UltraMNIST with ResNet50 and MobileNetV2 models under different memory constraints. Our evaluation clearly shows that PatchGD is much more stable and efficient than the standard gradient-descent method in handling large images, and especially when the compute memory is limited. ## 1 Introduction Convolutional neural networks (CNNs) are considered among the most vital ingredients for the rapid developments in the field of computer vision. This can be attributed to their capability of extracting very complex information far beyond what can be obtained from the standard computer vision methods. For more information, we refer the reader to the recently published comprehensive reviews (Khan et al., 2020; Li et al., 2021; Alzubaidi et al., 2021). With the recent technological developments, very large images are obtained from data acquisition in the fields of microscopy (Khater et al., 2020; Schermelleh et al., 2019), medical imaging (Aggarwal et al., 2021), and earth sciences (Huang et al., 2018; Amani et al., 2020), among others. Recently, there has been a drive to use deep learning methods in these fields as well. In particular, several deep learning methods have been proposed to handle the images from the microscopy domain (Orth et al., 2017; Dankovich and Rizzoli, 2021; Sekh et al., 2020, 2021), however, the big data challenge of applying CNNs to analyze such images is immense, as we demonstrate in Figure 1. High content nanoscopy involves taking nanoscopy images of several adjacent fields-of-view and stitching them side-by-side to have a full perspective of the biological sample, such as a patient's tissue biopsy, put under the microscope. There is information at multiple scales embedded in these microscopy images (Villegas-Hernandez et al., 2022), with the smallest scale of features being only a few pixels in size. Indeed, such dimensions of images and levels of details are a challenge for the existing CNNs. Figure 1: Example nanoscopy image (left) of a mouse kidney cryosection approximately 1/12th of the area of a single field-of-view of the microscope, chosen to illustrate the level of details at different scales. The bottom right images show that the smallest features in the image of relevance can be as small as a few pixels (here 5-8 pixels for the holes)(Villegas-Hernández et al., 2022). Existing deep learning models using CNNs are predominantly trained and tested on relatively low resolution regime (less than \(300\times 300\) pixels). 
This is partly because the widely used image benchmarking datasets such as ILSVRC(ImageNet dataset) [10] for classification and PASCAL VOC [1] for object detection/segmentation consist of low-resolution images in a similar range, and most of the existing research has been towards achieving state-of-the-art (SOTA) results on these or similar datasets. Using these models on high-resolution images leads to quadratic growth of the associated activation size, and this in turn leads to massive increase in the training compute as well as the memory footprint. Further, when the available GPU memory is limited, such large images cannot be processed by CNNs. There exist very limited works that address the issue of handling very large images using CNNs. The most common approach among these is to reduce the resolution of the images through downscaling. However, this can lead to a significant loss of information associated with the small-scale features, and it can adversely affect the semantic context associated with the image. An alternate strategy is to divide the image into overlapping or non-overlapping tiles and process the tiles in a sequential manner. However, this approach does not assure that the semantic link across the tiles will be preserved and it can hinder the learning process. Several similar strategies exist that attempt to learn the information contained in the large images, however, their failure to capture the global context limits their use. In this paper, we present a novel CNN training pipeline that is capable of handling very large images. We point out here that 'large images' should not be plainly interpreted in terms of the number of pixels that they comprise, rather an image should be considered too large to be trained with CNNs if the respective computational memory budget available for it is small. For example, while training a a ResNet50 classification model with images of size \(10,000\times 10,000\) might be hardly possible on a GPU card of 48 GB memory, a GPU memory of 12 GB could be good enough to train the same model on \(512\times 512\) size images. Further, when the same \(512\times 512\) size images are trained on a GPU memory limit of 4 GB, these might be looked at as too large. Figure 2 presents a better understanding of the problem outlined above. We consider here the task of classification of the UltraMNIST digits [11] into one of the 10 predefined classes labelled from 0-9. UltraMNIST images used here comprise 3-5 MNIST digits of extremely varying scale and the sum of the digits ranges between 0-9. The label class of each image corresponds to the sum of the contained digits. More details related to the UltraMNIST classification problem are presented in Appendix B.2. We consider here images of size \(512\times 512\) pixels and pose the problem to be solved at two different computational memory budgets. We consider the two cases of GPU memory limits of 4 GB and 16 GB. For the base CNN model, we use ResNet50 [12] architecture and employ the standard training approach. We refer to this approach as Gradient descent (GD). We further present results obtained using the proposed training pipeline, referred as _PatchGD_. Abbreviated for Patch Gradient Descent, it is a scalable training method designed to build neural networks with either very large images, or very low memory compute or a combination of both. The efficacy of PatchGD is evident from the results in Figure 2 where PatchGD outperforms the conventional GD method for 16 GB as well as 4 GB memory limit. 
While the difference in performance is 4% at 16 GB, it grows to a remarkable margin of 13% difference in the accuracy measure at 4 GB. The classification problem at 4 GB memory compute is intended to replicate the real-world challenges when dealing with large images. With only 4 GB in hand, the image size of \(512\times 512\) is already too large to be used for training a ResNet50 model, and this leads to the inferior performance shown in Figure 2. However, PatchGD is stable even at this low memory regime, and this can be attributed to its design that makes it invariant to image size to a large extent. We describe the details of the method later in the paper as well as demonstrate through experimental results on a variety of image sizes that PatchGD is capable of adapting the existing CNN models to work with very large images even if the available GPU memory is limited. **Contributions.** To summarize, the contributions of this paper can be listed as follows. * We present _Patch Gradient Descent (PatchGD)_, a novel strategy to train neural networks on very large images in an end-to-end manner. Figure 2: Performance comparison of standard CNN and PatchGD (ours) for the task of classification of UltraMNIST digits of size \(512\times 512\) pixels using ResNet50 model. Two different computational memory budgets of 16 GB and 4GB are used, and it is demonstrated that PatchGD is relatively stable for the chosen image size, even for very low memory compute. * Due to its inherent ability to work with small fractions of a given image, PatchGD is scalable on small GPUs, where training the original full-scale images may not even be possible. * PatchGD reinvents the existing CNN training pipeline in a very simplified manner and this makes it compatible with any existing CNN architecture. Moreover, its simple design allows it to benefit from the pre-training of the standard CNNs on the low-resolution data. ## 2 Related Work This paper aims at improving the capability of CNNs in handling large-scale images in general. To our knowledge there is only very limited research work in this direction and we discuss them in this section. Most works that exist focus on histopathological datasets since these are popular sources of large images. The majority of existing works employ pixel-level segmentation masks, which are not always available. For example, Iizuka et al. (2020); Liu et al. (2017) perform patch-level classification based on labels created from patchwise segmentation masks available for the whole slide images (WSI), and then feed it to a RNN to obtain the final WSI label. Braatz et al. (2022) use goblet cell segmentation masks to perform patch-level feature extraction. However, these approaches require labelled segmentation data, are computationally expensive, feature learning is very limited, and the error propagation is higher. Another set of methods focus on building a compressed latent representation of the large input images using existing pretrained models or unsupervised learning approaches. For example, Lai et al. (2022) use U-Net autoencoder and stack them into a cube, which is then fed to another module to obtain slide-level predictions. Tellez et al. (2018) explore the use of different encoding strategies including reconstruction error minimization, contrastive learning and adversarial feature learning to map high-resolution patches to a lower-dimensional vector. Tellez et al. 
(2020) extend this work and use multi-task learning to get better representations of patches than their unsupervised counterparts. One important limitation of this class of methods is that the encoding network created from unsupervised learning is not always the strong representative of the target task. There exist several methods that use pretrained models derived from other other tasks as feature extractors and the output is then fed to a classifier. Example methods include using Cancer-Texture Network (CAT-Net) and Google Brain (GB) models as feature extractors (Kosaraju et al., 2022), or additionally using similar datasets for fine-tuning (Brancati et al., 2021). Although these methods gain advantage from transfer learning, such two-stage decoupled pipelines propagate errors through under-represented features and the performance of the model on the target task is hampered. In this paper, we propose a single step approach that can be trained in an end-to-end manner on the target task. Several research works have focused on identifying the right patches from the large images and use them in a compute-effective manner to classify the whole image. Naik et al. (2020) propose to construct the latent space using randomly selected tiles, however, this approach does not preserve the semantic coherence across the tiles and fails to extract features that are spread across multiple tiles. Campanella et al. (2019) consider this as a multi-instance learning approach, assigning labels to top-K probability patches for classification. Pinckaers et al. (2022); Huang et al. (2022) propose a patch-based training, but make use of streaming convolution networks. Sharma et al. (2021) cluster similar patches and performs cluster-aware sampling to perform WSI and patch classification. Cordonnier et al. (2021) use a patch scoring mechanism and patch aggregator network for final prediction, however they perform downsampling for patch scoring which may cause loss of patch-specific feature important for WSI. Papadopoulos et al. (2021) progressively increases the resolution and localize the regions of interest dropping the rest equivalent to performing hard adaptive attention. DiPalma et al. (2021) train a teacher model at high-resolution and performs knowledge distillation for the same model at lower resolution. Katharopoulos and Fleuret (2019) perform attention sampling on downsampled image and derive an unbiased estimator for the gradient update. However their method involves downsampling for attention which may loose out some vital information. It is important to note that all such methods which employ patch selection and knowledge distillation are orthogonal to our work and can be easily combined with our work. However, this is beyond the scope of this paper. With the recent popularity of Transformer-based methods for vision-based tasks, Chen et al. (2022) proposed a self-supervised learning objective for pre-training large-scale vision transformer at varying scale. Their method involves a hierarchical vision transformer which leverages the natural hierarchical structure inherent in WSI. However their method requires a massive pre-training stage which is not always feasible. Also their method is specific to WSI rather than more general image classification and involves training multiple large-scale transformers. Our method on the other hand, targets more general image classification task and does not involve large scale pre-training, rather it directly works over any existing CNN model. 
## 3 Approach

### General description

_Patch Gradient Descent (PatchGD)_ is a novel CNN training strategy that can train networks with high-resolution images. It is based on the hypothesis that, rather than performing gradient-based updates on an entire image at once, it should be possible to achieve a good solution by performing model updates on only small parts of the image at a time, ensuring that the majority of it is covered over the course of iterations. However, even if only a portion of the image is used, the model is still trainable end-to-end with PatchGD. Figure 3 presents a schematic explanation of the PatchGD method. At the core of PatchGD lies the construction or filling of the \(\mathbf{Z}\) block, a deep latent representation of the full input image. Irrespective of which parts of the input are used to perform model updates, \(\mathbf{Z}\) builds an encoding of the full image based on information acquired for different parts of it from the previous few update steps. We further explain the use of the \(\mathbf{Z}\) block using the diagram shown in Figure 3(a). As can be seen, \(\mathbf{Z}\) is primarily an encoding of an input image \(\mathbf{X}\) obtained using any given model parameterized with weights \(\mathbf{\theta}_{1}\). The input image is divided into \(m\times n\) patches and each patch is processed as an independent image using \(\mathbf{\theta}_{1}\). The size of \(\mathbf{Z}\) is always enforced to be \(m\times n\times s\), such that patch \(\mathbf{x}_{ij}\) in the input space corresponds to the respective \(1\times 1\times s\) segment in the \(\mathbf{Z}\) block. The process of \(\mathbf{Z}\)-filling spans multiple steps, where every step involves sampling \(k\) patches and their respective positions from \(\mathbf{X}\) and passing them as a batch to the model for processing. The output of the model, combined with the positions, is then used to fill the respective parts of \(\mathbf{Z}\). Once all the \(m\times n\) patches of \(\mathbf{X}\) are sampled, the filled form of \(\mathbf{Z}\) is obtained. The concept of filling \(\mathbf{Z}\) is employed by PatchGD during the model training as well as inference stages. To build an end-to-end CNN model, we add a small subnetwork comprising convolutional and fully-connected layers that processes the information contained in \(\mathbf{Z}\) and transforms it into a vector of \(c\) probabilities as desired for the task of classification. It is important to note that the cost of adding this small sub-network is generally negligible. The pipelines for model training and inference are shown in Figure 3(b). During training, model components \(\mathbf{\theta}_{1}\) as well as \(\mathbf{\theta}_{2}\) are updated. Based on a fraction of patches sampled from the input image, the respective encodings are computed using the latest state of \(\mathbf{\theta}_{1}\), and the output is used to update the corresponding entries in the already filled \(\mathbf{Z}\). The partially updated \(\mathbf{Z}\) is then used to further compute the loss function value and the model parameters are updated through back-propagation.

Figure 3: Schematic representations of the pipelines demonstrating the working of different components of the PatchGD process.

### Mathematical formulation

In this section, we present a detailed mathematical formulation of the proposed PatchGD approach and describe its implementation for the model training and inference steps.
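Before the formal treatment, the \(\mathbf{Z}\)-filling step described in the general description above can be sketched in PyTorch. This is an illustrative reading of the text, not the authors' released code; the function name, the channel-first \((s,m,n)\) layout of \(\mathbf{Z}\) and the grid arithmetic are our own assumptions.

```python
import torch

@torch.no_grad()  # during training, freshly sampled patches are instead re-encoded with gradients
def fill_z(x, f_theta1, patch_size, k=16):
    """Illustrative Z-filling: encode an image patch-by-patch into an (s, m, n) grid.

    x          : input image tensor of shape (C, M, N)
    f_theta1   : CNN mapping a (B, C, p, p) patch batch to (B, s) feature vectors
    patch_size : spatial size p of the square patches
    k          : number of patches processed per filling step
    """
    C, M, N = x.shape
    p = patch_size
    m, n = M // p, N // p
    # infer the feature dimension s from a single dummy patch
    s = f_theta1(x[:, :p, :p].unsqueeze(0)).shape[1]
    z = torch.zeros(s, m, n)
    positions = [(i, j) for i in range(m) for j in range(n)]
    for start in range(0, len(positions), k):
        batch_pos = positions[start:start + k]
        patches = torch.stack([x[:, i*p:(i+1)*p, j*p:(j+1)*p] for i, j in batch_pos])
        feats = f_theta1(patches)            # (k, s) encodings of the sampled patches
        for (i, j), f in zip(batch_pos, feats):
            z[:, i, j] = f                   # the 1x1xs column of Z for patch (i, j)
    return z
```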
For the sake of simplicity, we tailor the discussion towards training of a CNN model for the task of classification. Let \(f_{\mathbf{\theta}}:\mathbb{R}^{M\times N\times C}\to\mathbb{R}^{c}\) denote a CNN-based model parameterized by \(\mathbf{\theta}\) that takes an input image \(\mathbf{X}\) of spatial size \(M\times N\) and \(C\) channels, and computes the probability of it to belong to each of the \(c\) pre-defined classes. To train this model, the following optimization problem is solved. \[\underset{\mathbf{\theta}}{\text{min}}\ \ \mathcal{L}(f(\mathbf{\theta};\mathbf{X}), \mathbf{y}), \tag{1}\] where \(\{\mathbf{X},\mathbf{y}\}\in\mathcal{D}\) refers to the data samples used to train the network and \(\mathcal{L}(\cdot)\) denotes the loss function associated with the training. Traditionally, this problem is solved in deep learning using the popular mini-batch gradient descent approach where updates are performed at every step using only a fraction of the data samples. We present below the formulation of standard gradient descent followed by the formulation our PatchGD method. **Gradient Descent (GD).** Gradient descent in deep learning involves performing model updates using the gradients computed for the loss function over one or more image samples. With updates performed over one sample at a time, referred as stochastic gradient descent method, the model update at the \(i^{\text{th}}\)step can be mathematically stated as \[\mathbf{\theta}^{(i)}=\mathbf{\theta}^{(i-1)}-\alpha\frac{\mathrm{d}\mathcal{L}}{ \mathrm{d}\mathbf{\theta}^{(i-1)}}, \tag{2}\] where \(\alpha\) denotes the learning rate. However, performing model updates over one sample at a time leads to very slow convergence, especially because of the noise induced by the continuously changing descent direction. This issue is alleviated in mini-batch gradient descent method where at every step, the model weights are updated using the average of gradients computed over a batch of samples, denoted here as \(\mathcal{S}\). Based on this, the update can be expressed as \[\mathbf{\theta}^{(i)}=\mathbf{\theta}^{(i-1)}-\frac{\alpha}{N(\mathcal{S})}\sum_{ \mathbf{X}\in\mathcal{S}}\frac{\mathrm{d}\mathcal{L}^{(\mathbf{X})}}{\mathrm{d }\mathbf{\theta}^{(i-1)}} \tag{3}\] and \(N(S)\) here denotes the size of the batch used. As can be seen in Eq. 3, if the size of image samples \(s\in\mathcal{S}\) is very large, it will lead to large memory requirements for the respective activations, and under limited compute availability, only small values of \(N(\mathcal{S})\), sometimes even just 1 fits into the GPU memory. This should clearly demonstrate the limitation of the gradient descent method, when handling large images. This issue is alleviated by our PatchGD approach and we describe it next. **PatchGD.** As described in Section 3.1, PatchGD avoids model updates on an entire image sample in one go, rather it computes gradients using only part of the image and updates the model parameters. In this regard, the model update step of PatchGD can be stated as \[\mathbf{\theta}^{(i,j)}=\mathbf{\theta}^{(i,j-1)}-\frac{\alpha}{k\cdot N(\mathcal{S}_{ i})}\sum_{\mathbf{X}\in\mathcal{S}_{i}}\sum_{p\in\mathcal{P}_{\mathbf{X},j}} \frac{\mathrm{d}\mathcal{L}^{(\mathbf{X},p)}}{\mathrm{d}\mathbf{\theta}^{(i,j-1)}}. \tag{4}\] In the context of deep learning, \(i\) here refers to the index of the mini-batch iteration within a certain epoch. 
Further, \(j\) denotes the inner iterations, where at every inner iteration, \(k\) patches are sampled from the input image \(\mathbf{X}\) (denoted as \(\mathcal{P}_{\mathbf{X},j}\)) and the gradient-based updates are performed as stated in Eq. 4. Note that for any iteration \(i\), multiple inner iterations are run, ensuring that the majority of samples from the full set of patches obtained from the tiling of \(\mathbf{X}\) are explored. In Eq. 4, \(\mathbf{\theta}^{(i,0)}\) denotes the initial model to be used to start running the inner iterations on \(\mathcal{S}_{i}\) and is equal to \(\mathbf{\theta}^{(i-1,\zeta)}\), the final model state after \(\zeta\) inner iterations of patch-level updates using \(\mathcal{S}_{i-1}\). For a more detailed understanding of the step-by-step model update process, please see Algorithm 1. As described earlier, PatchGD uses an additional sub-network that looks at the full latent encoding \(\mathbf{Z}\) for any input image \(\mathbf{X}\). Thus the parameter set \(\mathbf{\theta}\) is extended as \(\mathbf{\theta}=[\mathbf{\theta}_{1},\mathbf{\theta}_{2}]^{\intercal}\), where the base CNN model is \(f_{\mathbf{\theta}_{1}}\) and the additional sub-network is denoted as \(g_{\mathbf{\theta}_{2}}\). While the frequency of parameter updates affects the convergence process, we have observed that a gradient update per inner-iteration sometimes leads to poor convergence. Thus, we introduce gradient accumulation over \(\epsilon\) steps and update the model accordingly. Note that gradients are allowed to backpropagate only through those parts of \(\mathbf{Z}\) that are active at the \(j^{\text{th}}\) inner-iteration. During the inference phase, \(\mathbf{Z}\) is filled using the optimized \(f_{\mathbf{\theta}_{1}^{*}}\) as stated in Algorithm 2, and then the filled version of \(\mathbf{Z}\) is used to compute the class probabilities for input \(\mathbf{X}\) using \(g_{\mathbf{\theta}_{2}^{*}}\).

```
Input: Batch of input images 𝒳 ∈ R^{B×M×N×C}, pre-trained feature extractor f_θ1,
       classifier head g_θ2, patch size p, inner iterations ζ, patches per inner
       iteration k, batch size B, learning rate α, gradient accumulation steps ε
Initialize: Z = 0^{B×m×n×c}; U1 = 0, U2 = 0
Z ← Z-filling(X, f_θ1, p) for X ∈ 𝒳
f_θ1 ← start_gradient(f_θ1)
for j : 1 to ζ do
    for X in 𝒳 do
        {P_{X,j}, v} = patch_sampler(X, k),  P_{X,j} ∈ R^{p×p×C×k}
        z = f_θ1(P_{X,j})
        Z[v] = z                    // update the positional embeddings
        y_pred = g_θ2(Z)
        L = calculate_loss(y, y_pred)
        U1 = U1 + dL/dθ1,  U2 = U2 + dL/dθ2
    end for
    if j % ε = 0 then
        U1 = U1/ε, U2 = U2/ε
        θ1 = θ1 − α U1
        θ2 = θ2 − α U2
        U1 = 0, U2 = 0
    end if
end for
```
**Algorithm 1** Model Training for 1 iteration

## 4 Experiments

We demonstrate here the efficacy of PatchGD through multiple numerical experiments on two benchmark datasets comprising large images with features at multiple scales.

### Experimental setup

**Datasets.** For the experiments presented in this paper, we consider two datasets: UltraMNIST (Gupta et al., 2022) and Prostate cANcer graDe Assessment (PANDA) (Bulten et al., 2022). UltraMNIST is a classification dataset and each sample comprises 3-5 MNIST digits of varying scales placed at random locations in the image such that the sum of the digits lies between 0-9. The PANDA dataset comprises high-resolution histopathological images, and for this study, we consider a maximum image resolution of \(4096\times 4096\) pixels. Note that unlike the aforementioned approaches, we do not make use of any segmentation masks for PANDA. Therefore, the complete task boils down to taking an input high-resolution image and classifying it into 6 categories based on the International Society of Urological Pathology (ISUP) grade groups. More details related to the datasets can be found in Appendix B.

**CNN models.** We consider two popular CNN architectures: ResNet50 (He et al., 2016) and MobileNetV2 (Sandler et al., 2018). ResNet50 is a popular network from the residual networks family and forms the backbone of several models used in a variety of computer vision tasks (such as object detection and tracking). Thus, we demonstrate the working of PatchGD primarily on this model. MobileNetV2 is a light-weight architecture which is commonly employed for edge devices, and it would be of interest to see how it performs with large images under limited memory scenarios.

**Implementation details.** We follow the same hyperparameters across our experiments for a fair comparison. Exact details are stated in Appendix C. We report classification accuracy and quadratic weighted kappa (QWK) for the PANDA dataset. PyTorch is the choice of framework to implement both the baselines and PatchGD. We follow 4 GB, 16 GB and 24 GB memory constraints to mimic the popular deep learning GPU memory limits. Latency is calculated on a 40 GB A100 GPU, completely filling the GPU memory.
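To connect Algorithm 1 with the PyTorch setup mentioned in the implementation details, the inner loop of one PatchGD iteration might look roughly as follows. This is a simplified sketch under our own assumptions (a single image per batch, plain SGD, and an externally provided pre-filled \(\mathbf{Z}\)); it is not the authors' released implementation.

```python
import torch

def patchgd_iteration(x, y, f_theta1, g_theta2, z, patch_size, zeta, k, eps, lr,
                      loss_fn=torch.nn.functional.cross_entropy):
    """One PatchGD iteration (cf. Algorithm 1) for a single image x with label y.

    z is the pre-filled latent block of shape (s, m, n) for x; only the entries
    refreshed in an inner step carry gradients, mirroring the paper's description.
    """
    params = list(f_theta1.parameters()) + list(g_theta2.parameters())
    opt = torch.optim.SGD(params, lr=lr)
    p = patch_size
    m, n = z.shape[1], z.shape[2]
    positions = [(i, j) for i in range(m) for j in range(n)]

    opt.zero_grad()
    for j in range(1, zeta + 1):
        # sample k patch positions for this inner iteration
        idx = torch.randperm(len(positions))[:k].tolist()
        batch_pos = [positions[t] for t in idx]
        patches = torch.stack([x[:, i*p:(i+1)*p, jj*p:(jj+1)*p] for i, jj in batch_pos])
        feats = f_theta1(patches)                  # (k, s), with gradients
        z_step = z.detach().clone()                # older entries carry no gradient
        for (i, jj), f in zip(batch_pos, feats):
            z_step[:, i, jj] = f                   # refreshed entries carry gradients
        logits = g_theta2(z_step.unsqueeze(0))     # classifier head sees the full Z
        loss = loss_fn(logits, y.unsqueeze(0))
        (loss / eps).backward()                    # accumulate gradients over eps steps
        z = z_step.detach()                        # keep Z up to date for later steps
        if j % eps == 0:
            opt.step()
            opt.zero_grad()
    return z
```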
### Results **UltraMNIST classification.** The performance of PatchGD for UltraMNIST has already been shown in Figure 2. More detailed results are presented in Tables 1 and 2. For both the architectures, we see that PatchGD outperforms the standard gradient descent method (abbreviated as GD) by large margins. Our approach employs an additional sub-network \(g_{\mathbf{\theta}_{2}}\), and it can be argued that the gains reported in the paper are due to it. For this purpose, we extend the base CNN architectures used in GD and report the respective performance scores in Tables 1 and 2 as GD-extended. For both the architectures, we see that PatchGD outperforms GD as well as GD-extended by large margins. For ResNet50, the performance difference is even higher when we have a low memory constraint. At 4 GB, while GD seems unstable with a performance dip of more than 11% compared to the 16 GB case, our PatchGD approach seems to be significantly more stable. For MobileNetV2, the difference between PatchGD and GD is even higher at 16GB case, thereby clearly showing that PatchGD blends well with even light-weight models such as MobileNetV2. For MobileNetV2, we see that going from 16 GB to 4 GB, there is no drop in model performance, which demonstrates that MobileNetV2 can work well with GD even at low memory conditions. Nevertheless, PatchGD still performs significantly better. The underlying reason for this gain can partly be attributed to the fact that since PatchGD facilitates operating with partial images, the activations are small and more images per batch are permitted. We also observe that the performance scores of GD-extended are inferior compared to even GD. ResNet50 and MobilenetV2 are optimized architectures and we speculate that addition of plain convolutional layers in the head of the network is not suited due to which the overall performance is adversely affected. **Prostate Cancer Classification (PANDA).** Table 3 presents the results obtained on PANDA dataset for three different image resolutions. For all experiments, we maximize the number of images used per batch while also ensuring that the memory constraint is not violated. For images of \(512\times 512\), we see that GD as well as PatchGD deliver approximately similar performance scores (for both accuracy as well as QWK) at 16 GB memory limit. However, for the similar memory constraint, when images of size \(2048\times 2048\) (2K) pixels are used, the performance of GD drops by approximately 10% while our PatchGD shows a boost of 9% in accuracy. There are two factors that play role in creating such a big gap in the performance of GD and PatchGD. First, due to significantly increased activation size for higher-resolution images, GD faces the bottleneck of batch size and only 1 image per batch is permitted. Note that to stabilize it, we also experimented with gradient-accumulation across batches, however, it did not help. Alternatively, we performed hierarchical training, where the model trained on the lower resolution case was used as the initial model for the higher-resolution. To alleviate the issue of using only 1 image per batch, we considered a higher memory limit. Another reason for the low performance is that for higher-resolution images, the optimized receptive field of ResNet50 is not suited which leads to non-optimal performance. For increased batch size at 2K resolution, we also considered running quantized networks at half-precision and increased memory (see Table 3). 
At half-precision, the performance of GD improves, however, it is still significantly lower than PatchGD. Similar observation is made for 4K images that PatchGD performs better. The performance improves further when a patch size of 256 is used. Clearly, from the results reported on PANDA dataset, it is evident that PatchGD is significantly better than GD in terms of accuracy as well as QWK when it comes to handle large images in an end-to-end manner. We also report the latency of both the methods during inference time, and it can be seen that PatchGD performs almost at par with GD. The reason is that unlike GD, the activations produced by PatchGD are smaller and the gain in terms of speed from this aspect balance the slowness induced by patchwise processing of the images. Clearly for applications demanding to handle large images but also aiming to achieve real-time inference, PatchGD could be an interesting direction to explore further. **Additional study.** We demonstrated in the earlier experiments that PatchGD performs significantly better than its counterpart. We present here a brief study related to some of the hyperparameters involved in PatchGD. Table 4 presents the influence of patch sampling on the overall performance of PatchGD. We vary the sampling fraction per inner-iteration as well as the fraction of samples considered in total for an image in a certain iteration. We observe that keeping the sampling fraction per inner-iteration small helps to achieve better accuracy. This is counter-intuitive since smaller fractions provide a lesser context of the image in one go. We speculate that similar to mini-batch gradient \begin{table} \begin{tabular}{l c c c} \hline \hline Method & Patch size & Memory (in GB) & Accuracy \\ \hline GD & - & 16 & 65.2 \\ GD-extended & - & 16 & 50.5 \\ PatchGD & 256 & 16 & **69.2** \\ GD & - & 4 & 53.6 \\ GD-extended & - & 4 & 52.5 \\ PatchGD & 256 & 4 & **63.1** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance scores for standard Gradient Descent and our PatchGD method obtained using ResNet50 architectures on the task of UltraMNIST classification with images of size \(512\times 512\). \begin{table} \begin{tabular}{l c c c} \hline \hline Method & Patch size & Memory (in GB) & Accuracy \% \\ \hline GD & - & 16 & 67.3 \\ GD-extended & - & 16 & 64.3 \\ PatchGD & 256 & 16 & **83.7** \\ GD & - & 4 & 67.7 \\ GD-extended & - & 4 & 60.0 \\ PatchGD & 256 & 4 & **74.8** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance scores for standard Gradient Descent and our PatchGD method on the task of UltraMNIST classification with images of size \(512\times 512\) obtained using MobileNetV2 architecture. descent, not using too large patch batch size induces regularization noise, which in turn improves the convergence process. However, this aspect needs to be studied in more details for a better understanding. We also observed that the fraction of the image seen in one overall pass of the image in PatchGD does not generally affect the performance, unless it is low. For lower fractions, it is hard for the model to build the global context and the convergene is sub-optimal. We have also briefly studied the influence of gradient accumulation length parameter for PatchGD and the results are reported in Table 6 of the appendices. We observed that performing gradient-based model update per inner iteration leads to superior performance for the chosen experiment. However, the choice of \(\epsilon\) depends on the number of inner steps \(\zeta\). 
For large values of \(\zeta\), values greater than 1 are favored. For example, for the case of processing 2K resolution images with patch size of \(128\times 128\), \(\epsilon=\zeta\) worked well. However, an empirical relation between \(\zeta\) and \(\epsilon\) is still to be identified, and this is a part of our future research work. ## 5 Discussion **Hyperparameter optimization and fine-tuning.** PatchGD involves several hyperparameters and their optimized combination is still to be identified. While we have demonstrated the influence through a few experiments, more clarity needs to be gained on the best values of number of inner-update steps to be combined in gradient accumulation (\(\epsilon\)), striking the right balance between patch size and the number of inner iterations for a given compute memory limit as well choosing the right pretraining strategy. We have observed that using the models trained with GD as the initial models in PatchGD can improve the overall performance. However, there are instances when model training on GD is not possbile. In such scenarios, one could use low-resolution models trained on GD or even the conventional pretrained models. Nevertheless, the effect of each of these choices needs to be thoroughly studied. **Application to other tasks.** In this paper, we have focused on demonstrating the working of PatchGD on tasks of image classification, and in particular those where features exist at varying scales. However, this does not limit the applicability of our method to other problems. PatchGD can also be used on the conventional classification problems, and we speculate that it could help to refine the receptive field of the existing models. We discuss this in more details later in this paper. Beyond classification, it is also straightforward to adapt this method for other tasks such as segmentation, object detection, among others, and we intend to cover them in an extended version of this study later. **Limitations.** This paper presented the foundational concept of PatchGD. Although we have demonstrated the efficacy of PatchGD through multiple numerical experiments, the overall investigation is still limited in terms of understanding the generalization and stability of the method. Another minor limitation is that since our approach looks only at a fraction of an image in one step, it is relatively slower than the standard gradient descent method. However, since the inference speed is almost the same, this issue creates a bottleneck only when real-time training is a priority. **Conclusions.** In this paper, we have demonstrated that it is possible to handle large images with CNN even when the available GPU memory is very limited. We presented Patch Gradient Descent (PatchGD), a novel CNN training strategy that performs model updates using only fractions of the image at a time while also ensuring that it sees almost the full context over a course of multiple steps. We have demonstrated through multiple experiments the efficacy of \begin{table} \begin{tabular}{c c c c} \hline \hline Sampling & Max Sampled & Accuracy & QWK \\ \hline 50 & 100 & 42.3 & 0.538 \\ 30 & 100 & 49.9 & 0.613 \\ 10 & 100 & 53.9 & 0.627 \\ 10 & 70 & 53.1 & 0.624 \\ 10 & 50 & 53.9 & 0.622 \\ 10 & 30 & 51.1 & 0.610 \\ \hline \hline \end{tabular} \end{table} Table 4: Sampling ablation on PANDA dataset. 
Memory limit is 16 GB, Image size and patch size are 2048 and 128 respectively \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Method & Resolution & Patch Size & Sampling \% & Mem. Constraint & \# Parameters (M) (G) & Latency (imgs/sec) & Accuracy \% & QWK \\ \hline GD & 512 & - & - & 16 & 23.52 & 618.05 & 44.4 & 0.558 \\ PatchGD & 512 & 128 & 30* & 16 & 26.39 & 521.42 & 44.9 & 0.576 \\ GD & 2048 & - & - & 16 & 23.52 & 39.04 & 34.8 & 0.452 \\ PatchGD & 2048 & 128 & 10 & 16 & 26.40 & 32.52 & 53.9 & 0.627 \\ GD-fp16 & 2048 & - & - & 24 & 23.52 & 39.04 & 50.6 & 0.658 \\ PatchGD-fp16 & 2048 & 128 & 10 & 24 & 26.40 & 32.52 & 56.1 & 0.662 \\ GD-fp16 & 4096 & - & - & 24 & 23.52 & 9.23 & 50.1 & 0.611 \\ PatchGD-fp16 & 4096 & 128 & 10 & 24 & 26.41 & 8.09 & 53.5 & 0.667 \\ PatchGD-fp16 & 4096 & 256 & 10 & 24 & 26.40 & 9.62 & 55.6 & 0.672 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance scores obtained using Resnet50 on PANDA dataset for Gradient Descent (GD) and Patch Gradient Descent (PatchGD). In case of 512 image size, 10% sampling leads to only one patch, hence 30% patches are chosen. PatchGD in handling large images as well as operating under low memory conditions, and in all scenarios, our approach outperforms the standard gradient descent by significant margins. We hope that the details of the method as well as the experimental evidence presented in the paper sufficiently justify the significance of PatchGD in making existing CNN models work with large images without facing the bottleneck of compute memory. **Future work.** This paper has established the foundational concept of patch gradient descent to enable training CNNs using very large images. The results as well as insights presented in the paper open doors to several novel secondary research directions that could be interesting in terms of improving the efficacy as well as the acceptance of the presented method in a broader scientific community. Examples of these include extending PatchGD to work on gigapixel images at small compute memory, using PatchGD for enhanced receptive field on standard computer vision tasks, and lastly to couple PatchGD with transformers. Details on the associated challenges and possible modifications are further discussed in Appendix A. **Acknowledgement** We would like to thank Texmin Foundation for the financial support provided through grant PSF-IH-1Y-022 to support this work.
2309.16195
Reconstructing microstructures from statistical descriptors using neural cellular automata
The problem of generating microstructures of complex materials in silico has been approached from various directions including simulation, Markov, deep learning and descriptor-based approaches. This work presents a hybrid method that is inspired by all four categories and has interesting scalability properties. A neural cellular automaton is trained to evolve microstructures based on local information. Unlike most machine learning-based approaches, it does not directly require a data set of reference micrographs, but is trained from statistical microstructure descriptors that can stem from a single reference. This means that the training cost scales only with the complexity of the structure and associated descriptors. Since the size of the reconstructed structures can be set during inference, even extremely large structures can be efficiently generated. Similarly, the method is very efficient if many structures are to be reconstructed from the same descriptor for statistical evaluations. The method is formulated and discussed in detail by means of various numerical experiments, demonstrating its utility and scalability.
Paul Seibert, Alexander Raßloff, Yichi Zhang, Karl Kalina, Paul Reck, Daniel Peterseim, Markus Kästner
2023-09-28T06:35:07Z
http://arxiv.org/abs/2309.16195v1
# Reconstructing microstructures from statistical descriptors using neural cellular automata ###### Abstract The problem of generating microstructures of complex materials in silico has been approached from various directions including simulation, Markov, deep learning and descriptor-based approaches. This work presents a hybrid method that is inspired by all four categories and has interesting scalability properties. A neural cellular automaton is trained to evolve microstructures based on local information. Unlike most machine learning-based approaches, it does not directly require a data set of reference micrographs, but is trained from statistical microstructure descriptors that can stem from a single reference. This means that the training cost scales only with the complexity of the structure and associated descriptors. Since the size of the reconstructed structures can be set during inference, even extremely large structures can be efficiently generated. Similarly, the method is very efficient if many structures are to be reconstructed from the same descriptor for statistical evaluations. The method is formulated and discussed in detail by means of various numerical experiments, demonstrating its utility and scalability. Keywords: Microstructure - Reconstruction - Descriptor - Neural cellular automata ## 1 Introduction The generation and analysis of random heterogeneous composite materials is a recently emerging research topic that aims at accelerating materials engineering by enabling digital workflows such as numerical simulation and inverse design [1]. Specifically, microstructure characterization and reconstruction allows to _(i)_ generate many microstructure realizations from a single example, _(ii)_ explore hypothetic materials by interpolating between microstructures in a morphologically meaningful manner, and _(iii)_ create 3D models from 2D observations. A multitude of approaches has been developed in the last decades that is summarized in different review articles [2, 3, 4]. For the purpose of this work, the existing approaches can be broadly divided in four categories1 - simulation, Markov random field, deep learning and descriptor-based approaches. Naturally, some algorithms in the literature can be identified as hybrid methods that fall into two or more of these categories. After discussing the main ideas of these categories and approaches in subsection 1.1, this work presents an algorithm that bridges all four categories and exhibits some very interesting properties as described in subsection 1.2. Footnote 1: Besides hybrid methods that fall into multiple categories, some exceptions like Wang tiles [5] do not clearly fall into any of the categories. ### Existing approaches for microstructure reconstruction Simulation-based approachesSimulating the microstructure evolution might be the most direct way. This requires to identify and to solve the physical (partial differential) equations (PDEs) that govern the process. An excellent overview is given in [2]. As an example, the Cahn-Hilliard equation describing phase separation [6] has been studied extensively [7, 8, 9]. Similarly, for granular structures, given a representative set of particles, realistic and dense packing can be achieved by simulating gravitational forces [10, 11, 12]. As a final, more complex example, grain formation in polystraline structures has been studied in depth. 
Simplified approaches reduce the description to vertices [13] or grain boundaries [14], whereas Monte Carlo methods [15] or cellular automata [16, 17, 18] are used to model the evolution of an entire 2D pixel field. Recently, neural cellular automata have been applied to solidification microstructure modeling [19]. Approaches based on the phase field method are probably the most developed. Thereby, the evolution of a diffuse indicator function is modeled by an additional differential equation [20, 21, 22] that can be solved, for example, in _OpenPhase_[23]. These approaches are often applied to simulate the complex microstructure morphologies that arise in additive manufacturing [24, 25, 26]. This non-exhaustive list indicates that a variety of physical processes are responsible for the formation of different material classes. Even if the relevant set of physical equations is selected, it can be challenging to perform the simulations due to numerical issues or difficulties in parameterizing the underlying constitutive models [27, 25]. This motivates the purely image-based approaches that are presented in the following. Markov-based reconstructionAs a first purely image-based method, this subsection discusses a class of reconstruction algorithms originally developed for computer graphics applica tions which are herein referred to as Markov-based approaches. For this purpose, it is worth noting that a microstructure can be modeled as a stationary Markov random field if the probability of finding a certain phase at a given location does not depend directly on the location, but only on the phase distribution in the local finite-size neighborhood. This assumption of locality and stationarity motivates reconstruction algorithms that directly rely on this conditional probability to update individual pixels based on their neighbor's values. A very simple implementation inspired by texture synthesis [28] might determine individual pixel updates by scanning the reference data for the given neighborhood in order to compute a probability [29; 30]. It is worth noting that this approach is akin to the multi-point statistics method that has been developed in the Geosciences literature [31] and has been applied and improved substantially by Tahmasebi [32; 33; 34; 35]. For a better scalability, improved algorithms precompute the probabilities for all neighborhoods and store them in efficient data structures for access during reconstruction [31; 36]. Direct sampling methods [37] as well as data structure-based alternatives are implemented in _MPSLIB_[38]. Despite a good local prediction quality, MRF-based approaches often fail to accurately reproduce long-range correlations. This behavior is related to the neighborhood size in the Markovian assumption: Capturing long-range correlations requires large neighborhood sizes, which are often unfeasible because of a disproportionately increased need for training data. Multigrid approaches [39; 35] have been shown to alleviate this issue to a certain extent. Furthermore, to condense the information to a compact model that is also able to interpolate missing neighborhood patterns from similar examples, supervised models have been trained to predict a pixel's phase given its neighborhood. In particular, decision trees [40] and neural networks [41; 42; 39] have been used for 2D and 3D [43] reconstruction. This motivates the discussion of purely deep learning-based approaches in the following subsection. 
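To make the neighborhood-based update just described more tangible, the following is a minimal NumPy sketch of a raster-scan, Efros-Leung-style synthesis loop. It is not taken from any of the cited implementations: the function name, window size, causal-neighborhood choice and random border initialization are illustrative assumptions, and a practical code would store the reference neighborhoods in the efficient data structures mentioned above.

```python
import numpy as np

def synthesize_markov(ref, out_shape, win=7, k=5, seed=None):
    """Grow a structure pixel by pixel by matching the causal neighborhood
    (pixels already generated, above and to the left) against every window of
    the reference image -- a brute-force sketch of the Markov random field idea."""
    rng = np.random.default_rng(seed)
    h, w = out_shape
    r = win // 2
    # pad with random reference values so border pixels also have a neighborhood
    out = rng.choice(ref.ravel(), size=(h + 2 * r, w + 2 * r))
    mask = np.zeros((win, win), dtype=bool)
    mask.reshape(-1)[: (win * win) // 2] = True        # causal half of the window
    H, W = ref.shape
    windows = np.array([ref[i:i + win, j:j + win]      # all reference neighborhoods
                        for i in range(H - win + 1)
                        for j in range(W - win + 1)])
    centres = windows[:, r, r]
    windows = windows[:, mask]
    for i in range(h):
        for j in range(w):
            nb = out[i:i + win, j:j + win][mask]
            d = ((windows - nb) ** 2).sum(axis=1)      # distance to every reference neighborhood
            best = np.argsort(d)[:k]                   # k closest matches
            out[i + r, j + r] = rng.choice(centres[best])
    return out[r:r + h, r:r + w]
```

As discussed above, such a small, purely local window cannot capture long-range correlations, which is the main limitation this brute-force sketch inherits.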
Deep learning-based reconstructionIn deep learning-based methods, a generative model is fitted or trained on a sufficiently large data set of microstructures and is then used to sample new realizations of the same structure. Autoencoders [44; 42; 45] and generative adversarial networks (GANs) are typical examples that have been applied to MCR [46; 47]. For the latter, the merits of modifications like conditional GANs [48; 49], SytleGAN [50], and gradient penalty [51] have also been discussed in the context of microstructure generation. Applications to steels [52] and earth materials [53] show high image quality. Although GANs usually operate on 2D data, 3D-to-3D reconstruction can be achieved by using 3D convolutions [54; 55]. For reconstructing 3D data from 2D examples, a 3D generator has been combined with a 2D discriminator [56; 57]. As an alternative, the third dimension can be regarded as time by combining the GAN with a recurrent neural network [58]. To harness the advantage of both, autoencoders and GANs, they are sometimes combined by using the decoder simultaneously as a generator. This has proven advantageous for 2D-to-3D reconstruction [59; 60; 61] and for extremely small data sets [62]. As an alternative, machine learning methods like Bayesian approaches [63] and attention-based models [64; 65; 66] are equally applicable. Diffusion models, which have recently replaced GANs as state-of-the-art in general-purpose image generation, have also been applied to microstructure reconstruction [67; 68] and optimization [69; 70]. Much research is focused on identifying suitable model types and adapting them to microstructure reconstruction by enabling 2D-to-3D reconstruction [61; 57] making them applicable to small data sets [62] or ensuring that certain descriptor requirements are met [71; 72]. A major challenge lies in defining models with high accuracy that at the same time do not require large data sets to be trained on. These challenges motivate training-free models such as descriptor-based reconstruction, as presented in the next subsection. Descriptor-based reconstructionThe central idea behind descriptor-based reconstruction methods is to statistically quantify the microstructure morphology by means of descriptors like volume fractions and spatial \(n\)-point correlations [73]. Reconstructing a microstructure from a given set of descriptors can then be formulated as an optimization problem directly in the space of possible microstructures. Here, the desired microstructure descriptors can be computed from a single microstructure example, making these methods very data-efficient. One of the most well-known descriptor-based reconstruction methods is the Yeong-Torquato algorithm [73], which iteratively swaps individual pixels in the microstructure to solve the optimization problem. A detailed discussion is given in [74; 75]. This enables high flexibility, as descriptors can be replaced by new alternatives [76; 77] or higher-fidelity versions of the same descriptor [78; 79]. However, even with computationally inexpensive descriptors, the Yeong-Torquato algorithm becomes computationally challenging at high resolutions and in 3D, where billions of iterations are sometimes required for convergence [80]. A common solution is to use a multigrid scheme [81; 82; 83; 84; 85]. Further ideas include different-phase neighbor sampling rules [86], efficient descriptor updates [87; 80] and optimized directional weighing of correlation functions [78]. More information is given in [3]. 
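Because the Yeong-Torquato algorithm is the workhorse of this category, a compact sketch of its pixel-swap annealing loop is given below; it uses a periodic two-point correlation computed via FFT and recomputes the full descriptor after every swap, which is exactly the inefficiency that the multigrid schemes and efficient descriptor updates cited above avoid. The function names, the linear cooling schedule and the stopping rule are illustrative assumptions, not the original algorithm's parameters.

```python
import numpy as np

def two_point(img):
    """Two-point probability S2 of the white phase for every periodic shift,
    obtained at once from the autocorrelation computed via FFT."""
    f = np.fft.fft2(img)
    return np.real(np.fft.ifft2(f * np.conj(f))) / img.size

def yeong_torquato(ref, steps=20000, t0=1e-4, seed=None):
    """Sketch of the Yeong-Torquato algorithm: start from a random binary image
    with the reference volume fraction and anneal pixel swaps until the
    two-point correlation approaches that of the reference (a 0/1 array `ref`)."""
    rng = np.random.default_rng(seed)
    target = two_point(ref)
    img = (rng.random(ref.shape) < ref.mean()).astype(float)
    err = ((two_point(img) - target) ** 2).mean()
    for step in range(steps):
        temp = t0 * (1.0 - step / steps)                    # linear cooling
        ones, zeros = np.argwhere(img == 1), np.argwhere(img == 0)
        p = tuple(ones[rng.integers(len(ones))])            # one white pixel ...
        q = tuple(zeros[rng.integers(len(zeros))])          # ... and one black pixel
        img[p], img[q] = 0.0, 1.0                           # trial swap, volume fraction preserved
        new = ((two_point(img) - target) ** 2).mean()
        if new < err or rng.random() < np.exp((err - new) / max(temp, 1e-12)):
            err = new                                       # accept the swap
        else:
            img[p], img[q] = 1.0, 0.0                       # revert it
    return img
```

The sketch makes the scaling problem noted below visible: the cost per iteration grows with the image size, and many millions of swaps may be needed for convergence.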
As an alternative to the pixel-based Yeong-Torquato algorithm, the optimization problem can be formulated in a much lower-dimensional space. For this purpose, the microstructure is approximated by geometric objects that can be described by a few parameters, e.g., ellipsoidal inclusions [88; 89; 90] or Voronoi or Laguerre cells [91; 92; 93]. Independently from the microstructure representation [90], differentiable descriptors allow solving the optimization problem using a gradient-based optimizer. This idea is formulated as differentiable microstructure characterization and reconstruction (DMCR) [94; 95] and several approaches can be identified as special cases [71; 96; 97]. The Yeong-Torquato algorithm and improved versions of it, such as DMCR, have been successfully validated and applied to alloys and anisotropic metamaterials [85] sandstone [98], rock [99], chalk [100], various soils [101] and more. Some versions are publicly available in the open-source _MCRpy_ package [102]. While descriptor-based approaches are very accurate and data-efficient since no training data set is required, they are computationally intensive. More specifically, since the optimization is directly carried out in the microstructure space, the memory and computational requirements grow quickly as the microstructure size increases, especially in 3D. Hybrid reconstruction approachesThe specific and unique advantages and disadvantages of all four categories of MCR approaches motivate hybrid methods that fall into multiple of these categories. Naturally, there is no sharp boundary between Markov-based and deep learning methods if a machine learning model like a neural network is used to predict individual pixels based on their neighborhood as in [40; 41; 42; 39; 43]. Furthermore, simulation by discretized (partial) differential equations and cellular automata resemble Markov-based methods in their locality, but are derived from physical principles and sometimes incorporate various physical quantities (e.g. temperature) beyond phase indicator functions. At the boundary between machine learning and descriptor-based methods, multiple sequential approaches use Gaussian random field-based methods [103] to initialize simulated annealing2[100; 104] and diffusion models [72]. Furthermore, the volume fractions [105; 72; 58; 106; 49], histograms [107] and Gram matrices [10; 108] are sometimes added to the loss function of deep learning-based methods as microstructure descriptors. _DRAGen_[109] combines an automaton-like growth process with a nucleation point optimization based on classical descriptors and allows to use machine learning models for generating input data. At the interface between machine learning and physical simulation, autoencoders [110] and diffusion models [12] have been used as particle generators followed by a gravity simulation for aggregate structures. Besides that, the literature comprises a large number of physics-informed neural network approaches that are not discussed herein. Footnote 2: This is technically a hybrid method between two descriptor-based approaches. ### Objectives and contribution of this work This work presents a hybrid approach that is inspired by all these categories. Like in a simulation-based approach, a partial differential equation models the temporal evolution of the microstructure. It is, however, not derived from physics but learned by a neural network. 
Similar to the Markov-based methods, this network operates based on local information and is therefore called neural cellular automaton (NCA). This constraint of locality is relaxed not by increasing the neighborhood beyond a one pixel distance, but by introducing further hidden channels to the microstructure function that the NCA can use to encode relevant information. Finally, unlike common machine learning or Markov-based approaches, the NCA is not trained directly on image data or on a set of neighborhoods, but on a statistical descriptor. This requires the NCA to be retrained whenever the statistical descriptor changes, however, it reduces the amount of required data to a bare minimum. The input image only needs to enable the computation of a statistical descriptor; hence the NCA is applicable whenever classical training-free approaches like the Yeong-Torquato algorithm and DMCR can be used. Furthermore, the size of the training data is independent of the image size during training, which is again independent from the size of the reconstructed structure. Hence, microstructures of massive resolutions or numbers can be reconstructed with very limited additional computational effort. Furthermore, due to the nature of NCA, the algorithm is inherently distributed, parallel and robust with respect to perturbations. In summary, the central idea lies in modeling the differential equation governing the structure evolution by training neural cellular automata (NCA) on statistical descriptors. A detailed formulation is given in section 2 and validated by various numerical experiments in section 3. A conclusion is drawn in section 4. ## 2 Neural cellular automata for descriptor-based microstructure reconstruction Based on the work of Mordvintsev et al. [111], the formulation of general neural cellular automata (NCA) is summarized in subsection 2.1. The main idea of the present work to train NCA by arbitrary descriptors is described in subsection 2.2. Finally, the implementation is discussed in subsection 2.3. ### Formulation of neural cellular automata The general idea behind a cellular automaton is to iteratively update individual pixels based on the direct neighbors. In the work of Mordvintsev et al. [111], this information source is further restricted. The neighboring pixel values are not passed directly to the cellular automaton. Instead, they are used to compute a discrete approximation to the gradient and curvature, which are then passed to the cellular automaton. Denoting \(\mathbf{x}\in\mathcal{D}\) as a position vector in the microstructure domain \(\mathcal{D}\subset\mathbb{R}^{2}\) and \(t\in\mathcal{T}=\{t\in\mathbb{R}\,|\,0\leq t\leq t^{\text{end}}\}\) as time, the evolution of the microstructure \(m(\mathbf{x},t)\) can be written as a partial differential equation \[\frac{\partial m(\mathbf{x},t)}{\partial t}=f_{\mathbf{\theta}}\left(m(\mathbf{x},t), \nabla_{\mathbf{x}}m(\mathbf{x},t),\nabla_{\mathbf{x}}^{2}m(\mathbf{x},t)\right)\,, \tag{1}\] where \(f_{\mathbf{\theta}}\) is the cellular automaton which maps the value, gradient and curvature of the microstructure function to its temporal derivative. To be more specific, \(\nabla_{\mathbf{x}}(\bullet)\) and \(\nabla_{\mathbf{x}}^{2}(\bullet)\) denote the gradient and Laplace operator, respectively. Furthermore, \(m\) takes real values within the arbitrarily chosen bounds \[0\leq m(\mathbf{x},t)\leq 1\quad\forall\,\mathbf{x}\in\mathcal{D},\,t\in\mathcal{T}\,\,. 
\tag{2}\] In a neural cellular automaton specifically, a neural network is chosen as \(f_{\mathbf{\theta}}\), where \(\mathbf{\theta}\) denotes the parameter vector. In other words, the NCA defines partial differential equation (PDE)3 that needs to be discretized and solved in order to generate a microstructure. An explicit Euler scheme is chosen as a time stepping scheme Footnote 3: To be precise, the NCA defines a PDE _system_, as explained later in the document. \[\frac{m_{n_{t}+1}-m_{n_{t}}}{\Delta t}=f_{\theta}\left(m_{n_{t}},\nabla_{x}m_{n _{t}},\nabla_{x}^{2}m_{n_{t}}\right)\,, \tag{3}\] where the current solution at time step \(n_{t}\) defines the update for the next time step \(n_{t}+1\). The dependence on \(\mathbf{x}\) and \(t\) is dropped for the sake of brevity. The space is naturally discretized on an equidistant grid of pixel values, where \(\nabla_{x}(\bullet)\) and \(\nabla_{x}^{2}(\bullet)\) are approximated by a Sobel and Laplace filter, respectively. Based on this discretization, the relation between the current solution, its spatial derivatives and its temporal evolution, i.e. the PDE itself, is learned by the NCA. Given the inability of Markov-based approaches with small neighborhood sizes to accurately capture long-range correlations, it should be clear that the extremely limited local information is not sufficient to train a good NCA. For this reason, the augmented microstructure function \(\mathbf{m}^{\prime}(\mathbf{x},t)\) is introduced which maps a spatial position \(\mathbf{x}\) at time \(t\) to an \(n\)-dimensional vector. The first entry of the vector contains the normal microstructure function \(m(\mathbf{x},t)\) and is the only entry that affects the training. The idea behind the other entries is that the NCA can choose to allocate any quantity that is useful for passing information and increasing the image quality. As an example, for an equidistant grain microstructure, one channel might contain the distance to the next grain boundary. With this, the temporal evolution reads \[\frac{\partial\mathbf{m}^{\prime}(\mathbf{x},t)}{\partial t}=f_{\theta}\left(\mathbf{m}^{ \prime}(\mathbf{x},t),\nabla_{\mathbf{x}}\mathbf{m}^{\prime}(\mathbf{x},t),\nabla_{\mathbf{x}}^{2 }\mathbf{m}^{\prime}(\mathbf{x},t)\right) \tag{4}\] For reconstructing a microstructure from the trained NCA, \(\mathbf{m}^{\prime}(\mathbf{x},0)\) is initialized by zeros and the system evolves freely. ### Training the model from microstructure descriptors The function \(f_{\theta}\) is learned by a small neural network with two layers as shown in Figure 1: _First_, an initial solution \(\mathbf{m}^{\prime}(\mathbf{x},0)=\mathbf{0}\,\forall\,\mathbf{x}\) is chosen. _Secondly_, \(\mathbf{m}^{\prime}(\mathbf{x},t)\) develops according to Equation 4 for a randomly chosen number of time steps. As a regularization and as a measure to break symmetry, asynchronous updates are chosen, whereby in every time step, a given percentage of cells is chosen at random and only those develop. The bounds given in Equation 2 are enforced by clipping. _Thirdly_, a loss function \(\mathcal{L}\) is computed on the final result \(m^{\text{end}}=m(\mathbf{x},t^{\text{end}})\). The choice of \(\mathcal{L}\) is discussed later. Note that only \(m^{\text{end}}\), i.e., the first component of \(\mathbf{m}^{\text{end}}\), contributes to \(\mathcal{L}\). 
_Finally_, the gradient \(\partial\mathcal{L}/\partial\mathbf{\theta}\) of the loss function with respect to the NCA parameters is computed by conventional backpropagation and used to update \(\mathbf{\theta}\). Note that this limits the number of time steps during training for numerical reasons. The formulation of the loss function depends on the area of application of the NCA. After initially using a pixel-wise Euclidean norm error in the RGB space for general-purpose image generation [111], Mordvintsev et al. [112] found that a Gram matrix-based style loss [113] enables NCAs to be applied to texture synthesis [112]. The novelty in the present work lies in realizing that any of the known statistical descriptors can be used, as long as they can be differentiated with respect to the microstructure field. The loss is thus formulated as a mean squared error (MSE) in the descriptor space \[\mathcal{L}=\|\mathbf{D}(\mathbf{m}^{\text{end}})-\mathbf{D}^{\text{des}}\|_{\text{MSE}}\,, \tag{5}\] where \(\mathbf{D}\) denotes a statistical descriptor or a weighted concatenation of multiple descriptors that is computed on the reconstruction result, while \(\mathbf{D}^{\text{des}}\) denotes the desired value computed from the reference structure. Because \(m^{\text{end}}\) results from the temporal evolution of \(f_{\mathbf{\theta}}\), it depends on the parameter vector \(\mathbf{\theta}\) of the NCA. The central idea is to train the NCA by gradient-based optimization of \(\mathbf{\theta}\) to minimize Equation 5, whereby arbitrary descriptors can be incorporated. While the Gram matrices used in [112] can be interpreted as a statistical descriptor, the spatial two- and three-point correlations are more common in microstructure reconstruction. The idea of using high-dimensional, differentiable descriptors for direct microstructure reconstruction is given in [94], where an efficient and differentiable formulation of the three-point correlations is given. As another example, a differentiable approximation to the lineal path function is presented in [102] and a descriptor based on a hierarchical wavelet transform is given in [114]. All these descriptors are implemented in _MCRpy_ [102]. ### Implementation The implementation of a descriptor-based NCA for microstructure reconstruction is carried out based on the code for NCA texture synthesis [112] and the differentiable descriptors available in _MCRpy_ [102]. The former code is adapted to only a single non-hidden dimension \(m(\mathbf{x})\) as opposed to three RGB channels. Then, _MCRpy_ is used to define a loss, where different descriptors such as Gram matrices \(G\) [71], correlations \(S\) [74], variation \(\mathcal{V}\) [95] and volume fraction \(\varphi\) can be combined and weighted in a single loss function in a flexible manner. More information on these descriptors is given in [102]. _MCRpy_ makes use of the automatic differentiation in _TensorFlow_ to compute the gradient \(\partial\mathcal{L}/\partial m\). Then, \(m\) is backpropagated through time to compute \(\partial m/\partial\mathbf{\theta}\) and consequently \(\partial\mathcal{L}/\partial\mathbf{\theta}\). Finally, a hyperparameter study is carried out on a number of structures. A 12-dimensional microstructure representation (i.e. 11 hidden channels) is chosen. Hence, the NCA has 12 output and 48 input dimensions. With a single hidden layer of 120 neurons, the network amounts to a total of 7332 parameters. Further hyperparameters like the number of time steps are summarized in Table 1.
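To make the training procedure and Equations (3)-(5) concrete, a minimal TensorFlow sketch is given below. It follows the layer sizes of Table 1, but it is only an illustration, not the reference implementation: the checkpointing pool, gradient normalization and overflow loss are omitted, a simple FFT-based two-point correlation `s2` stands in for the weighted MCRpy descriptors \(S\), \(G\), \(\mathcal{V}\), and all function and variable names are assumptions made for this sketch.

```python
import tensorflow as tf

C, FIRE_RATE = 12, 0.5          # 1 visible + 11 hidden channels, fire rate (Table 1)

def perception(x):
    """Value, Sobel-x, Sobel-y and Laplacian of every channel (the inputs of Eq. 1),
    computed with a fixed depthwise convolution; x has shape (B, H, W, C)."""
    ident = tf.constant([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
    sobel = tf.constant([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
    lap   = tf.constant([[1., 2., 1.], [2., -12., 2.], [1., 2., 1.]]) / 16.0
    k = tf.stack([ident, sobel, tf.transpose(sobel), lap], axis=-1)     # (3, 3, 4)
    k = tf.repeat(k[:, :, None, :], C, axis=2)                          # (3, 3, C, 4)
    return tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], "SAME")           # (B, H, W, 4C)

nca = tf.keras.Sequential([                      # f_theta: 48 -> 120 -> 12 (7332 parameters)
    tf.keras.layers.Conv2D(120, 1, activation="relu"),
    tf.keras.layers.Conv2D(C, 1, kernel_initializer="zeros"),
])

def step(x):
    """One explicit Euler update (Eq. 3) with asynchronous (stochastic) firing."""
    dx = nca(perception(x))
    fire = tf.cast(tf.random.uniform(tf.shape(x)[:3])[..., None] < FIRE_RATE, tf.float32)
    return tf.clip_by_value(x + dx * fire, 0.0, 1.0)       # clipping enforces Eq. (2)

def s2(m):
    """Stand-in descriptor: periodic two-point correlation of the visible channel."""
    f = tf.signal.fft2d(tf.cast(m, tf.complex64))
    return tf.math.real(tf.signal.ifft2d(f * tf.math.conj(f))) / tf.cast(tf.size(m), tf.float32)

opt = tf.keras.optimizers.Adam(2e-3)

def train_step(target_desc, size=64):
    """One optimization step: roll the NCA out from a zero state for a random
    number of time steps and backpropagate the descriptor loss (Eq. 5)."""
    x = tf.zeros((1, size, size, C))                        # m'(x, 0) = 0
    with tf.GradientTape() as tape:
        for _ in range(int(tf.random.uniform([], 32, 64, dtype=tf.int32))):
            x = step(x)
        loss = tf.reduce_mean((s2(x[0, :, :, 0]) - target_desc) ** 2)
    grads = tape.gradient(loss, nca.trainable_variables)
    opt.apply_gradients(zip(grads, nca.trainable_variables))
    return loss

# target_desc = s2(reference_image)   # reference cropped or resampled to (size, size)
```

Because the update rule is purely local, the same trained `nca` can afterwards be rolled out on a zero-initialized grid of any size, which is the scalability property discussed in subsection 3.3.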
In order to visually compare the results of descriptor-based NCA with other methods from the literature, three open-source codes are selected from GitHub. To represent Markov-based methods, a patch-based texture synthesis4 algorithm based on [115, 116] and a pixel-based, multi-resolution texture synthesis5 algorithm based on [117, 118] are chosen. Furthermore, _MCRpy_[102] implements differentiable microstructure characterization and reconstruction (DMCR) [94, 95]. While _MCRpy_ is provided by previous works of the authors, the former two methods are coded and provided by Anastasia Opara. The authors greatly acknowledge this effort and appreciate the will to share software. Footnote 5: [https://github.com/anopara/multi-resolution-texture-synthesis](https://github.com/anopara/multi-resolution-texture-synthesis) ## 3 Numerical experiments The microstructure evolution and the range of applicability is investigated in subsection 3.1. These results are then compared to the literature in subsection 3.2. Finally, the scalability of descriptor-based NCA is demonstrated in subsection 3.3. All numerical experiments are carried out on a laptop with a \(12^{\text{th}}\) Gen Intel(R) Core(TM) i7-12800H CPU at 2.40 GHz and an Nvidia A2000 GPU with 4 GB VRAM. ### Microstructure evolution and diversity Figure 2 shows reconstructions from different real materials taken from [71]. It can be seen that descriptor-based NCA are applicable to a wide variety of fundamentally different structures, ranging from relatively noise-free examples like the grain boundary structure and the ceramics to the more noisy sandstone. Some limitations can also be seen. As a first limitation, although the grain boundary character in the alloy is captured relatively well, not all lines are connected as in the reference case. In order to use the results for downstream tasks like numerical simulations, a post-processing algorithm is first needed to close the gaps or eliminate unnecessary line segments. Alternatively, it might be worth investigating \begin{table} \begin{tabular}{l l} \hline \hline Parameter & Value \\ \hline Hidden layer size & 120 \\ Non-hidden channels & 1 \\ Hidden channels & 11 \\ Activation function & ReLU \\ Fire rate & 0.5 \\ Batch size & 4 \\ Checkpointing1 pool size & 1024 \\ Learning rate & \(2\cdot 10^{-3}\) \\ Rollout length probability & \(\mathcal{U}(32,64)\) \\ Gradient normalization & True \\ Overflow loss coeff & \(10^{4}\) \\ Descriptors & \(S,G,\mathcal{V}\) \\ Descriptor weights & \(1,1,100\) \\ \hline \hline \end{tabular} \end{table} Table 1: Hyperparameters chosen in the present work. Figure 1: Training procedure for a neural cellular automaton (NCA): In every iteration \(i\), random pixel locations are chosen where the gradient and curvature is computed numerically. Together with the pixel value, these quantities are given to the NCA to predict a pixel update. After some time increments, the result is compared to the reference to train the NCA. This comparison is only carried out in terms of statistical descriptors. if a different choice of descriptor can be used in order to better quantify the connectivity information. Although this approach is arguably more elegant, its difficulty lies in the requirement that the descriptor should be differentiable with respect to the microstructure. As a second limitation, it can be seen that the fingerprint-like structure of the copolymer is not adequately represented. 
Although the NCA successfully creates individual sub-regions with parallel lines, these regions are not sufficiently large and the lines do not exhibit smooth curves as in the reference. It is presently unclear to the authors how this issue can be addressed. As a third limitation, it is noted that the probability distribution of pixel values does not exactly match the original structures. Especially in the carbonate and PMMA, it can be seen that the white phase is reconstructed in bright grey color. Similar to the first limitation, the authors assume that a post-processing algorithm or a suitable descriptor should be sufficient to address this issue. To provide a better understanding of the generation method, the temporal evolution of the microstructure as well as the first four hidden dimensions is plotted in Figure 3. All fields are initialized by zero (black) and the structure slowly emerges. Different hidden channels take different roles in representing structural features. For example, the first hidden channel (second row) might be interpreted as a local vertical coordinate in each grain. In contrast, the fourth hidden channel (last row) contains a thickened version of the gain boundaries. Interestingly, the third hidden channel (second to last row) can be interpreted in different ways. It might be used as a type of residuum, since its norm decreases as the reconstruction converges. As an alternative, it might act as a marker for specific features like triple junctions. It can be concluded that different channels take different roles, although a direct interpretation is neither possible, nor necessary. It is demonstrated in the works of Mordvintsev et al. [111, 112] that the NCA-based generation process is often robust with respect to perturbations. To test whether this trend is transferred to descriptor-based NCA for microstructure reconstruction, two numerical experiments are carried out. After the generation process has converged to a good solution, the structure is perturbed by setting all values within a circular radius to 0.5. This is applied to all channels in Figure 4 and only to the non-hidden dimension in Figure 5. It can be seen that the structure only recovers in the latter case. Besides stressing the key role of the hidden channels, this indicates that the robustness of NCA is only partially observed in descriptor-based NCA for microstructure reconstruction. ### Comparison to literature In order to compare descriptor-based NCA reconstruction results to the literature, two Markov- and one descriptor-based approach is chosen6. Figure 6 shows the results, where only three material classes are selected for the sake of brevity. At a first glance, all methods produce high-quality results. Patch-based texture synthesis, however, does not produce new structural features, but copies pathes from the original structure to different locations in the target image. The patch boundaries can be distinguished upon closer inspection. As a pixel-based approach, multi-resolution texture synthesis does not suffer from this phenomenon. However, especially in the alloy, strange features like completely vertical grain boundaries can be observed and the structure coincidentally repeats itself in the top right corner. Furthermore, the highly complex fingerprint-like copolymer structure is not captured adequately. Finally, DMCR as a descriptor-based method7 produces good results for all considered materials. 
While the alloy and ceramics are similarly well reconstructed as with the descriptor-based NCA, DMCR produces visually superior results for the copolymer. It can be concluded that the result quality of NCA outperforms standard Markov-based techniques and almost reaches that of direct descriptor-based optimization. The advantage with respect to the latter lies in the performance and scalability, as discussed in the following. Footnote 7: The Yeong-Torquato algorithm can be expected to yield equally good or even better results, however, at a significantly higher computational cost. ### Performance and scalability An objective assessment of the reconstruction results in terms of microstructure descriptors is paramount to evaluating the accuracy of any reconstruction algorithm. In this context, it should be mentioned that the presented method is only partially descriptor-based since the descriptors are used during training, but not during sampling. For this reason, independent realizations of the material exhibit random deviations from the target descriptor. Naturally, these fluctuations are expected to decrease as the microstructure size increases. This is shown in Figure 7, where the error \[\mathcal{E}_{D}=\|\mathbf{D}(m^{\text{end}})-\mathbf{D}^{\text{des}}\|_{\text{MSE}} \tag{6}\] between a descriptor \(\mathbf{D}(m^{\text{end}})\) and its desired value \(\mathbf{D}^{\text{des}}\) from the reference structure is defined as a mean squared error (MSE). Note that the only difference between \(\mathcal{E}_{D}\) and the loss \(\mathcal{L}\) defined in Equation 5 is that the former measures individual descriptors, whereas the latter is based on a weighted concatenation of multiple descriptors. For all tested descriptors, the error converges to a value which is consistently lower for the proposed loss model than for the reference NCA-based texture synthesis method by Mordvintsev et al. [112]. It should be noted that the descriptor errors do not converge to zero as the resolution increases, but rather to a value that depends on the training quality. It is observed that longer training and training by larger samples reduces this value (not shown here). An interesting aspect of NCA is that the image sizes during training and sampling are independent. This is favorable because the sampling is relatively inexpensive and scales favorably with the image size compared to other methods. To demonstrate this, Figure 8 shows a reconstruction example where the resolution is chosen such that the sampling takes as long as the training8. Three different zoom levels of the same structure are shown for visualization purposes. Without any multigrid procedures, such large reconstructions are very challenging with classical descriptor-based methods. Footnote 8: Because the utilized VRAM is not sufficient for the large reconstruction, it is conducted on the CPU only, whereas the training occurs on the GPU. If a similar comparison was made on identical hardware, significantly larger structures could be reconstructed. Regardless, in the authors' opinion, the presented results demonstrate the scalability sufficiently well. Generally, the computational cost of sampling a microstructure scales linearly in the number of pixels, because pixel updates are computed independently by the NCA. A comparison with the 2D DMCR algorithm [94] in _MCRpy_ [102] is given in Figure 9. Both methods scale linearly. Sampling from a trained NCA is much faster than reconstructing by DMCR, and the computational cost grows more slowly. This is because the expensive evaluation of microstructure descriptors and iterative optimization are moved to the training stage. If the training is added to the computational cost of the NCA, the latter is slower for the considered microstructure sizes. As a conclusion, the expensive training phase of an NCA is compensated if large or many microstructures are reconstructed. Especially the latter might speed up a potential future extension for 2D-to-3D reconstruction. Furthermore, unlike with DMCR [94; 95], the sampling can be trivially parallelized because updates are based only on local information.

Figure 2: Reconstructions from various real materials. The original samples are given in the top left corner and are taken from [71], where they are released under the Creative Commons license [119].
Figure 3: The evolution of the alloy microstructure over time \(t\). The first channel is plotted in the top row and the first four hidden channels are given below. It can be seen that each hidden channel acts as a distinct feature map.
Figure 4: The role of the hidden channels is illustrated by perturbing the microstructure evolution at time \(t=t^{\prime}\). All pixel values within a given radius are set to 0.5. In the presented case, all channels are perturbed, whereas in Figure 5, the hidden channels remain intact. Only the microstructure (top) and the first hidden channel (bottom) are plotted for brevity. Unlike in Figure 5, the structure does not recover.
Figure 5: The role of the hidden channels is illustrated by perturbing the microstructure evolution at time \(t=t^{\prime}\). All pixel values within a given radius are set to 0.5. In the presented case, the hidden channels remain intact, whereas in Figure 4, all channels are perturbed. Only the microstructure (top) and the first hidden channel (bottom) are plotted for brevity. Unlike in Figure 4, the structure recovers, albeit to a different solution.
Figure 6: Comparison of three selected material reconstructions with methods from the literature. Patch-based texture synthesis (PBTS) and multi-resolution texture synthesis are Markov-based approaches, while differentiable microstructure characterization and reconstruction (DMCR) is descriptor-based. The reconstructed structures are two times larger than the reference in each direction. More information is given in subsection 2.3.
Figure 7: Influence of the loss function on the descriptor errors. The volume fractions \(\varphi\) (top), spatial correlations \(S\) (middle) and Gram matrices \(G\) (bottom) are compared for different materials (left to right) with 25 realizations per resolution. The resolutions are powers of two and an offset to the left (reference) and right (proposed) is applied only for visualization purposes. It can be seen that regardless of the model, material and descriptor, the variance of the descriptor error over different realizations decreases as the sample size increases. The proposed model consistently outperforms the reference [112]. For some structures like the alloy (a), the reference model fails to converge, leading to massive discrepancies, whereas for the ceramics (c) the differences are relatively small.

## 4 Conclusion A neural cellular automaton (NCA)-based algorithm for microstructure reconstruction is presented. The microstructure evolution is modeled as a partial differential equation which is learned by a small neural network, the NCA.
Despite the purely local information in the NCA, long-range correlations are incorporated by introducing hidden dimensions to the microstructure function which can be used to communicate information. Unlike with previous approaches, this network is not trained on image data but on statistical microstructure descriptors. Thus, the method incorporates ideas from four different families of microstructure generation approaches, namely simulation, Markov, deep learning and descriptor-based methods, which are all briefly reviewed. The method is formulated, implemented and validated by a number of 2D numerical experiments. Compared to other microstructure reconstruction approaches, descriptor-based NCAs have a unique set of advantages. The neural network in the NCA enables the evolution of highly complex morphologies in a PDE-like manner without knowledge of the governing physical equations and the material parameters. It can be controlled by statistical descriptors. However, the sampling of structures from a trained NCA is based only on local information. This self-assembling nature of the algorithm makes it an inherently distributed algorithm and therefore trivial to parallelize. The random selection of the pixels to be updated make the method robust with respect to random perturbations, as long as not all channels are affected. Finally, the method scales very favorably as arbitrarily resolved structures can be sampled. In future work, the main challenge lies in enabling 3D reconstruction based on 2D or 3D reference data. ## Acknowledgements The authors thank Anastasia Opara for providing good implementations of texture synthesis algorithms to the community. The groups of M. Kastner and D. Peterseim thank the German Research Foundation DFG which supported this work under Grant numbers KA 3309/18-1 and PE 2143/7-1, respectively. ## Code and data availability The code is made available upon reasonable request. The data is taken from the literature [71], where it is released under the Creative Commons license. ## Competing interests The authors declare no competing interests. ## Author contributions **P. Seibert**: Conceptualization, Data Curation, Formal Analysis, Investigation, Methodology, Software, Supervision, Validation, Visualization, Writing - Original Draft Preparation, Writing - Review & Editing. **A. Ralsfloff**: Conceptualization, Writing - Review & Editing. **Y. Zhang**: Software, Writing - Review & Editing. **K. Kalina**: Conceptualization, Formal Analysis, Writing - Review & Editing. **P. Reck**: Conceptualization, Writing - Review & Editing. **D. Peterseim**: Conceptualization, Funding Acquisition, Writing - Review & Editing. **M. Kastner**: Conceptualization, Funding Acquisition, Resources, Supervision, Writing - Review & Editing.
2305.00422
Drinfeld modules in SageMath
We present the first implementation of Drinfeld modules fully integrated in the SageMath ecosystem. First features will be released with SageMath 10.0.
David Ayotte, Xavier Caruso, Antoine Leudière, Joseph Musleh
2023-04-30T08:03:19Z
http://arxiv.org/abs/2305.00422v1
# Drinfeld modules in SageMath ###### Abstract. We present the first implementation of Drinfeld modules fully integrated in the SageMath ecosystem. First features will be released with SageMath 10.0. The pursuit of Class Field Theory has been a long-standing dream, once held by Kronecker himself. In 1854, he made a significant contribution to the field with the announcement of the Kronecker-Weber theorem, which states that every abelian number field can be generated by a cyclotomic extension of \(\mathbb{Q}\). Similarly, extensions of imaginary quadratic number fields can be described using a so-called Hilbert class field [10]. Many important results of the field were conjectured by Hilbert and Kronecker. Some of them were only proven in the twentieth century, by mathematicians like Takagi, Artin, and Chevalley [10]. And to this day, the general quest for describing extensions of a number field remains elusive. But what if the quest were easier for function fields? In 1974, Drinfeld introduced what are now known as Drinfeld modules [11], pursuing the ideas of Carlitz [12]. With Drinfeld modules, one can develop an explicit class field theory for function fields: every Drinfeld module can be assigned a rank; cyclotomic function fields are generated by torsion spaces of rank 1 Drinfeld modules, and \(j\)-invariants of rank 2 Drinfeld modules intervene in the construction of the function-field analogue of the Hilbert class field. Later developments saw Drinfeld modules being instrumental in Lafforgue's proof of some of the Langlands conjectures for function fields [13]. The analogous question for number fields is still out of reach. In recent years, purely algorithmic theses [14] and papers [15, 16, 17, 18, 19, 20, 21, 22] have been published, emphasizing efficiency. The present implementation grew out of the need for a unified and tangible manipulation tool, which we hope will be useful to a large community. We made notable efforts to accompany the code with exemplary documentation and to use pre-existing SageMath facilities wherever possible. Our three core principles were _reliability_, _user-interface elegance_, and _integration_. The original ticket (see Github pr #30026) was opened in April 2022 and merged in March 2023. Many _pull requests_ have since been proposed to enhance the capabilities of the original contribution and are under active development, fueling an ever-growing interest in Drinfeld modules. **Mathematical background.** Before entering into the core of this presentation, we need to recall basic definitions related to Drinfeld modules. Let \(\mathbb{F}_{q}\) be a finite field with \(q\) elements, let \(K\) be an extension of \(\mathbb{F}_{q}\) and let \(\overline{K}\) be an algebraic closure of \(K\). Additionally, we equip \(K\) with a structure of \(\mathbb{F}_{q}[T]\)_-field_, meaning we give ourselves a morphism of \(\mathbb{F}_{q}\)-algebras \(\gamma:\mathbb{F}_{q}[T]\to K\). We use the notation \(\tau\) to denote the \(\mathbb{F}_{q}\)-linear endomorphism of \(\overline{K}\) defined by \(x\mapsto x^{q}\). We define the _ring of Ore polynomials_ \(K\{\tau\}\) as the ring whose elements are sums of the form \(a_{0}+a_{1}\tau+\cdots+a_{n}\tau^{n}\) where \(n\in\mathbb{Z}_{\geqslant 0}\) and \(a_{i}\in K\) for all \(0\leqslant i\leqslant n\). In \(K\{\tau\}\), we have the identity \(\tau a=a^{q}\tau\) whenever \(a\in K\).
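As a concrete illustration of the objects just introduced, the short Sage session below builds a ring of Ore polynomials and checks the commutation rule \(\tau a=a^{q}\tau\); the field sizes are arbitrary choices made for this sketch, and the construction relies on SageMath's generic Ore (skew) polynomial rings rather than on the package presented in this note.

```
sage: Fq = GF(5)
sage: K.<z> = Fq.extension(3)                    # K = GF(5^3), an extension of Fq
sage: frob = K.frobenius_endomorphism()          # x |--> x^5, i.e. x |--> x^q
sage: S.<tau> = K['tau', frob]                   # Ore polynomials with tau*a = a^q*tau
sage: tau * z == z^5 * tau
True
```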
A _Drinfeld module_ over \(K\) is a morphism of \(\mathbb{F}_{q}\)-algebras \(\phi:\mathbb{F}_{q}[T]\to K\{\tau\}\) such that \(\phi(T)\) has constant coefficient \(\gamma(T)\) and nonzero degree in \(\tau\). We remark that \(\phi(T)\) entirely determines \(\phi\); we often denote it simply by \(\phi_{T}\). The name _module_ comes from the fact that \(\phi\) endows \(\overline{K}\) with an action of \(\mathbb{F}_{q}[T]\), defined by \(a\cdot x=\phi_{a}(x)\) for \(a\in\mathbb{F}_{q}[T]\) and \(x\in\overline{K}\).

Throughout the development of the project, we were constantly very careful about the simplicity of the interface, the clarity and the completeness of the documentation and the unit tests. Concretely, each class, method or function is augmented with a doctest that has a description, tests and examples. The entry point of the documentation is the docstring of DrinfeldModule, accessed in the SageMath console by running DrinfeldModule?. For specific methods, the ? keyword is also used, _e.g._ phi.rank?. The documentation also appears in the SageMath Reference Manual [Dev]. Our library is completely open and as such, we encourage all mathematicians and computer scientists to improve it with any contribution that may interest them. ### The base type of Drinfeld module objects A first difficulty we encountered at the very beginning of the project was that a Drinfeld module is _not_ an actual module in the classical sense. In particular, a Drinfeld module has no underlying set and a morphism between Drinfeld modules is not a set-theoretical map. However, in the SageMath idiom, most objects are either sets with additional structures -- a so-called Parent -- or elements in such sets -- an Element. This philosophy is referred to as the _parent/element framework_. It is often implicitly assumed in SageMath. For example, the default _Test Suite_ of a parent checks that its category is a subcategory of **Sets**, the constructor of Morphism objects assumes that the domain and codomain are both parents, _etc._ For Drinfeld modules, this raises many questions and we eventually had to make a difficult choice between the three following compromises: 1. Making Drinfeld modules elements (as they are _in fine_ morphisms) and their set a parent (the so-called "homsets" in SageMath); this option offers a standard parent/element framework. 2.
Implementing Drinfeld modules as parents without elements, actually following the implementation of elliptic curves1. This option makes the implementation of morphisms between Drinfeld modules (and, more generally, of the category of Drinfeld modules) easier. Besides, by casting Drinfeld modules in some sense as function-field analogues of elliptic curves, this option has a strong mathematical basis. Footnote 1: In SageMath, elliptic curves \(E\) are schemes, and E.an_element() returns an element whose parent is not \(E\), but the group \(G\) of points of \(E\). In that case, \(G\) and \(E\) are distinct objects. 3. Implementing Drinfeld modules as CategoryObject. This class does exist in SageMath and it is not expected to have elements. However, unfortunately, it is used only sporadically, it is currently incompatible with Morphism objects and it is no longer maintained (it is possibly intended to disappear eventually). All these options have their benefits and drawbacks. We discussed all of them with the SageMath core developers (see Github pr #37313 and Github pr #34534). At some point, the third option looked to us the most mathematically appealing; however, given that CategoryObjects are not fully supported, we decided to rule out this possibility. On the other hand, the first option seems more practical but we believed that it was too mathematically misleading; it would also require a workaround to make morphisms work. We then ultimately chose the second option. ## 2 Overview of our package Our package is publicly available on Github: [https://github.com/xcaruso/sage/tree/drinfeld-modules](https://github.com/xcaruso/sage/tree/drinfeld-modules). It is intended to be ultimately included in the standard distribution of SageMath. Actually, about half of the package will be released with SageMath 10.0, the other half is still under review; we hope that it will be approved soon by the SageMath community. Alternatively, we offer the possibility to try our package online on the platform plm-binder. For this, please go to the URL [https://caruso.perso.math.cnrs.fr/notebook/drinfeld-modules/](https://caruso.perso.math.cnrs.fr/notebook/drinfeld-modules/); after a few seconds, a Jupyter notebook will open with a full tutorial presenting the main functionalities of our package. Beyond reading the tutorial, plm-binder allows for editing the notebook, executing commands, creating new worksheets, _etc._ Be careful however that your modifications will not be stored after your session is closed; if you want to keep them, do not forget to download your notebooks! #### 2.1. Construction and basic properties A Drinfeld module is a rather sophisticated mathematical object, whose definition already involves several nontrivial ingredients: a morphism \(\gamma:\mathbb{F}_{q}[T]\to K\), the ring of Ore polynomials \(K\{\tau\}\). In our package, we have tried as much as possible to minimize the number of lines needed for creating a Drinfeld module. In particular, in most cases, it is not needed to define \(\gamma\) and \(K\{\tau\}\) explicitly.

```
sage: K.<w> = GF(4)
sage: phi = DrinfeldModule(GF(2)['T'], [w, 0, 0, 1])
sage: phi
Drinfeld module defined by T |--> t^3 + w
```

Once a Drinfeld module is instantiated, we have access to a panel of methods for accessing its most important invariants, _e.g._ phi.characteristic(), phi.rank(), phi.height(), _etc._ It is also possible to compute the value \(\phi(a)\) by simply using the syntax phi(a). #### 2.2. Morphisms and isogenies
Given that Drinfeld modules do not have elements, the morphisms between them are the main tools at our disposal for understanding their structure. Our package provides the method hom for easily constructing morphisms.

```
sage: t = phi.ore_variable()
sage: phi.hom(t + w)
Drinfeld Module morphism:
  From: Drinfeld module defined by T |--> t^3 + w
  To:   Drinfeld module defined by T |--> t^3 + t^2 + w*t + w
```

We observe that the software has automatically determined the codomain. Once we have constructed a morphism \(f\), many methods become available, _e.g._ f.codomain(), f.is_isomorphism(), _etc._ At the level of Drinfeld modules themselves, the method is_isomorphic allows for checking whether two Drinfeld modules are isomorphic. When \(K\) is finite, a very important morphism is the Frobenius endomorphism defined by the Ore polynomial \(\tau^{[K:\mathbb{F}_{q}]}\) (see also §2.4). Our package provides the method phi.frobenius_endomorphism() for rapidly instantiating it. Of course, addition and composition of morphisms are implemented, as well as inverses of isomorphisms. We observe in addition that any polynomial \(P\in\mathbb{F}_{q}[T]\) defines an endomorphism of \(\phi\) (corresponding to the Ore polynomial \(\phi_{P}\)). In particular, the Hom space \(\mathrm{Hom}(\phi,\psi)\) inherits a structure of left module over \(\mathbb{F}_{q}[T]\), which is accessible in our package _via_ the operator \(\ast\). This simple syntax allows for easily writing down complex formulas. Finally, in contrast to the case of elliptic curves, computing morphisms between Drinfeld modules defined over finite fields amounts to solving a linear system over \(\mathbb{F}_{q}\). This leads to an efficient algorithm for finding isogenies [20], which we implemented in our package.

```
sage: psi = DrinfeldModule(GF(2)['T'], [w, w+1, 1, 1])
sage: Hom(phi, psi).an_isogeny()
Drinfeld Module morphism:
  From: Drinfeld module defined by T |--> t^3 + w
  To:   Drinfeld module defined by T |--> t^3 + t^2 + (w + 1)*t + w
  Defn: t^2 + (w + 1)*t + 1
```

The command Hom(phi, psi).basis(degree=d) returns more generally an \(\mathbb{F}_{q}\)-basis of the vector space of morphisms between \(\phi\) and \(\psi\) defined by an Ore polynomial of degree at most \(d\). ### \(j\)-invariants In the classical theory, it is well known that elliptic curves over an algebraically closed field are classified, up to isomorphism, by their \(j\)-invariants [19, Proposition 1.4]. Moreover, when working over an imaginary quadratic field \(R\), the \(j\)-invariants of elliptic curves with complex multiplication by \(R\) provide an explicit description of abelian extensions of \(R\) [19, Chap. II]. Similar results hold for Drinfeld modules: one can attach to any Drinfeld module \(\phi\) of rank \(2\) a \(j\)-invariant which determines the isomorphism class of \(\phi\) over an algebraic closure; besides, certain \(j\)-invariants play a pivotal role in the study of certain algebraic extensions of \(\mathbb{F}_{q}(T)\) [18], [20, Theorem 6.9]. The \(j\)-invariant of a Drinfeld module of rank \(2\) is given by a simple closed formula: if \(\phi_{T}=\gamma(T)+g_{1}(\phi)\tau+g_{2}(\phi)\tau^{2}\), then \(j(\phi):=g_{1}(\phi)^{q+1}/g_{2}(\phi)\). This makes it easy to compute and our package provides a direct method for accessing it.
```
sage: phi = DrinfeldModule(GF(2)['T'], [w, w+1, w+2])
sage: phi.j_invariant()
w + 1
```

In the context of Drinfeld modules, it turns out that \(j\)-invariants are defined in any rank [21]. A Drinfeld module of rank \(r>2\) does not have a single \(j\)-invariant but a complete family of \(j\)-invariants indexed by the integral points of a convex subset of \(\mathbb{R}^{r}\). Fortunately, those \(j\)-invariants are still given by explicit closed formulas, making their computation possible. Our package provides methods (basic_j_invariant_parameters, basic_j_invariants, jk_invariants, _etc._) for computing and manipulating those \(j\)-invariants in any rank. We refer to our tutorial on plm-binder for more details. ### Norms and characteristic polynomials In the classical setting, morphisms (resp. endomorphisms) between elliptic curves have norms (resp. characteristic polynomials) which can be found by looking at the action on the Tate module [22]. Again, similar facts hold true in the Drinfeld setting [18, Lem. 3.10]: there is a well-defined notion of the Tate module of a Drinfeld module, and morphisms between Drinfeld modules do induce linear transformations of the associated Tate modules. From this construction, one can define the _norm_ of a general isogeny and the _characteristic polynomial_ of an endomorphism. Unfortunately, computing the Tate module in practice is a hard task in general, given that the latter usually lives in a quite large extension of \(K\). Norms and characteristic polynomials however have alternative interpretations, which makes tangible the perspective of computing them efficiently. Concretely, algorithms for this task based on the notion of Anderson motives [1] have been designed in [19]. We implemented them in our package; they are available through the methods norm and charpoly. When \(K\) is finite, a distinguished endomorphism of a Drinfeld module \(\phi\) is its Frobenius endomorphism. Its characteristic polynomial plays a prominent role in the theory; notably, it entirely determines the isogeny class of \(\phi\) [18, Th. 3.5]. In our package, we implemented three different algorithms for computing this invariant, namely:
* the motive algorithm, based on Anderson motives as already discussed above,
* the crystalline algorithm [20], based on the action of the Frobenius on the crystalline cohomology,
* the CSA algorithm [19], based on a reinterpretation of the characteristic polynomial of the Frobenius as a reduced norm in some central simple algebra.
Figure 1 (on page 1) compares the timings of our three algorithms3 depending on the rank of the Drinfeld module and the degree of the extension \(K/\mathbb{F}_{q}\) (with \(q=5\) in our example). We observe that the CSA algorithm performs better when the rank is large, whereas the crystalline algorithm is the best when \([K:\mathbb{F}_{q}]\) is large. The method frobenius_charpoly, which is the entry point for this task in our package, is tuned for choosing by itself the best available algorithm depending on the input; the user can nevertheless require the use of a specific algorithm by passing in the keyword algorithm. Footnote 3: There is still some room for optimization here. Indeed, the three algorithms eventually rely on the computation of the characteristic polynomial of an actual matrix with coefficients in \(K[T]\). For this task, we just call the charpoly function of SageMath which, unfortunately, implements a slow generic algorithm with quartic complexity.
Nevertheless, we believe that Figure 1 is meaningful in the sense that the comparisons between timings are relevant. As a byproduct of this computation, we implemented a method is_isogenous which checks whether two given Drinfeld modules are isogenous. ### Exponential and logarithm A quite important perspective on Drinfeld modules is the analytic point of view. To explain it, let us go back to the case of elliptic curves: we know that an elliptic curve \(E\) over \(\mathbb{C}\) is uniformized by a free \(\mathbb{Z}\)-submodule \(\Lambda\subset\mathbb{C}\) of rank \(2\), _i.e._ \(E(\mathbb{C})\cong\mathbb{C}/\Lambda\) as complex Lie groups [19, Ch. VI]. In the Drinfeld setting, a similar result holds after replacing the field \(\mathbb{C}\) by \(\mathbb{C}_{\infty}\), the completion for the valuation associated to \(\frac{1}{T}\) of an algebraic closure of \(\mathbb{F}_{q}((\frac{1}{T}))\) [14, Theorem 4.6.9]. In this situation, the uniformization is obtained _via_ an \(\mathbb{F}_{q}\)-linear, surjective and nonconstant function \(e_{\phi}:\mathbb{C}_{\infty}\to\mathbb{C}_{\infty}\) called the _exponential_ of the Drinfeld module \(\phi\). The exponential may be represented by a power series \[e_{\phi}(z)=z+\sum_{i\geqslant 1}\alpha_{i}z^{q^{i}}\] for \(\alpha_{i}\in\mathbb{C}_{\infty}\) and \(z\in\mathbb{C}_{\infty}\). The _logarithm_ of \(\phi\), denoted \(\log_{\phi}\), is the compositional inverse of the exponential. We refer the reader to chapter 4 of [14] for more details. In our implementation, any Drinfeld module possesses the methods exponential and logarithm which compute power series approximations of \(e_{\phi}\) and \(\log_{\phi}\), respectively. The code computes the power series lazily, meaning that any coefficient is computed on demand and the user does not need to input any precision parameter. ## Acknowledgements We thank Pierre-Jean Spaenlehauer and Emmanuel Thomé for their guidance. David Ayotte was supported by an FRQNT doctoral scholarship. This work also benefited from the financial support of the ANR projects CLap-CLap (ANR-18-CE40-0026-01) and PadLEfAn (ANR-22-CE40-0013).
2307.16439
Asymptotic behavior of the first Dirichlet eigenvalue of AHE manifolds
In this article, we investigate the rate at which the first Dirichlet eigenvalue of geodesic balls decreases as the radius approaches infinity. We prove that if the conformal infinity of an asymptotically hyperbolic Einstein manifold is of nonnegative Yamabe type, then the two-term asymptotic of the eigenvalues is the same as that in hyperbolic space.
Xiaoshang Jin
2023-07-31T06:55:05Z
http://arxiv.org/abs/2307.16439v1
# Asymptotic behavior of the first Dirichlet eigenvalue of AHE manifolds ###### Abstract In this article, we investigate the rate at which the first Dirichlet eigenvalue of geodesic balls decreases as the radius approaches infinity. We prove that if the conformal infinity of an asymptotically hyperbolic Einstein manifold is of nonnegative Yamabe type, then the two-term asymptotic of the eigenvalues is the same as that in hyperbolic space. ## 1 Introduction Suppose that \(\mathbb{H}^{n+1}\) is an \(n+1-\) dimensional hyperbolic space, then it is well-known that the spectrum of the Laplacian \(\sigma(-\Delta_{\mathbb{H}})=[\frac{n^{2}}{4},+\infty).\) Later the result was extended by R. Mazzeo in [9] and [10]. He showed that if \((X,g)\) is an \(n+1-\) dimensional asymptotically hyperbolic manifold, then the spectrum of the Laplacian \[\sigma(-\Delta)=\sigma_{p}(-\Delta)\cup[\frac{n^{2}}{4},+\infty).\] Here \(\Delta=\Delta_{g}=\frac{1}{\sqrt{G}}\frac{\partial}{\partial x^{i}}(g^{ij} \sqrt{G}\frac{\partial}{\partial x^{j}})\) stands for the Laplace-Beltrami operator and \(\sigma_{p}(-\Delta)\subseteq(0,\frac{n^{2}}{4})\) is the point spectrum, a finite set of \(L^{2}\) eigenvalues. Lee [6] discovered a connection between its spectrum and conformal infinity when \(g\) is also Einstein. In other words, \(\sigma_{p}(-\Delta)\) is empty when \((X,g)\) is an asymptotically hyperbolic Einstein (AHE) manifold with conformal infinity of nonnegative Yamabe type. One can also see [14] for another proof. We rewrite Lee's result: \(Y(\partial X,[\hat{g}])\geq 0\Rightarrow\lambda_{1}(X)=\frac{n^{2}}{4}.\) It is clear from the properties of the first Dirichlet eigenvalue on a noncompact manifold that \[\lim_{R\rightarrow+\infty}\lambda_{1}(B(p,R))=\frac{n^{2}}{4}\] holds for any \(p\in X.\) Here \(B(p,R)\) is the geodesic ball centered at \(p\) of radius \(R\). In this paper, we will present the rate at which \(\lambda_{1}(B(p,R))\) tends to \(\frac{n^{2}}{4}.\) Before stating our primary theorem, let's first present some basic concepts. Suppose that \(\overline{X}\) is a compact manifold with smooth boundary \(\partial X\) and \(g\) is a complete metric in its interior \(X.\) We say that \((X,g)\) is conformally compact if there exists a defining function \(\rho\) such that \(\bar{g}=\rho^{2}g\) extends continuously to \(\overline{X}.\) Here \[\rho>0\ in\ X,\ \ \ \rho=0\ on\ \partial X,\ \ \ d\rho\neq 0\ on\ \partial X.\] \((X,g)\) is called \(C^{m,\alpha}\) (smoothly) conformally compact if \(\bar{g}=\rho^{2}g\) is \(C^{m,\alpha}\) (smooth) on \(\overline{X}.\) For any defining function \(\rho,\) we call \(\hat{g}=\rho^{2}g|_{T\partial X}\) the boundary metric. Hence the conformal class \((\partial X,[\hat{g}])\) is uniquely determined by \(g\) and we call it the conformal infinity of \(g.\) Let \(\bar{g}=\rho^{2}g\) be a \(C^{2}\) conformal compactification of \((X,g),\) then a simple calculation, such as that in [9], indicates that the sectional curvature of \(g\) tends to \(-|d\rho|^{2}_{\bar{g}}|_{\partial X}\) as \(\rho\to 0.\) Therefore, no matter what the topology and geometry of \(g\) look like in \(X,\) the boundary behavior always resembles that of hyperbolic space. We call \((X,g)\) an asymptotically hyperbolic (AH for short) manifold if it is conformally compact and \(|d\rho|^{2}_{\bar{g}}=1\) on the boundary \(\partial X.\) Let \((X,g)\) be a \(C^{2}\) conformally compact manifold of dimension \(n+1,\) if \(g\) is also Einstein, i.e.
\(Ric[g]=-ng,\) then \(|d\rho|^{2}_{\rho^{2}g}=1\) on \(\partial X\) for any smooth defining function \(\rho.\) In this case, we say that \((X,g)\) is an asymptotically hyperbolic Einstein (AHE for short) manifold. Here is the main result of this paper: **Theorem 1.1**.: _Let \((X,g)\) be an \(n+1-\) dimensional AHE manifold with conformal infinity \((\partial X,[\hat{g}]).\) If the Yamabe constant \(Y(\partial X,[\hat{g}])\geq 0,\) then for any \(p\in X,\)_ \[\lambda_{1}(B(p,R))=\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}}+O(R^{-3}),\ \ R\to+\infty. \tag{1.1}\] _Here \(\lambda_{1}\) denotes the first Dirichlet eigenvalue._ Theorem 1.1 makes clear that the rate at which the first Dirichlet eigenvalue of geodesic balls tends to \(\frac{n^{2}}{4}\) is the same as that in hyperbolic space, at least in the second term of the expansion. On the other hand, we believe that the rate at which the first Dirichlet eigenvalue decreases is related to the geometric structure of the manifold; it is connected to the number of ends. Let's recall the work of Witten-Yau [15]. They showed that the boundary \(\partial X\) of an AHE manifold \((X,g)\) is connected if \(Y(\partial X,[\hat{g}])>0.\) Later the work was extended by Cai and Galloway in [2], where they relaxed the assumption to \(\partial X\) having a nonnegative Yamabe constant. In [13], Wang proved that if the first eigenvalue of an AHE manifold satisfies \(\lambda_{1}(X)\geq n-1,\) then it either has only one end or it must be a warped product \((\mathbb{R}\times N,dt^{2}+\cosh^{2}tg_{N}).\) Combined with Lee's work in [6], this provides a new proof of Cai-Galloway's result. Let's summarize their work: for an AHE manifold \((X,g),\) \[Y(\partial X,[\hat{g}])\geq 0\Longrightarrow\lambda_{1}(X)=\frac{n^{2}}{4} \Longrightarrow\partial X\ is\ connected\ (X\ has\ one\ end).\] Later, Li and Wang extended the results in [7] and [8], where they did not require \(X\) to be conformally compact. In this case, \(X\) either has one end or is a warped product. Now we can rule out the case of a warped product by a direct calculation. In fact, as an application of theorems 0.5 and 0.6 in [8], we obtain the following property: **Proposition 1.2**.: _Let \((X,g)\) be a complete \(n+1-\)dimensional manifold with \(n\geq 2\) and \(Ric[g]\geq-ng.\) If_ \[\lambda_{1}(B(p,R))=\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}}+O(R^{-3}),\ \ R\to+\infty \tag{1.2}\] _for some \(p\in X,\) then \(X\) has only one end with infinite volume._ This paper is organized as follows. In section 2, we first provide some background information on the Dirichlet eigenvalue. Then in sections 3 and 4, we prove theorem 1.1. In order to get the upper bound of the first Dirichlet eigenvalue of geodesic balls, we use the eigenvalue comparison theory and the eigenvalue formula in hyperbolic space. To estimate the lower bound, we somewhat enhance Lee's work. To be more precise, we create a new test function \(u^{-\frac{n}{2}}\cdot\sin(a\ln\varepsilon u)\) on a bounded domain. Here \(u\) is the eigenfunction solution to \(\Delta u=(n+1)u\) and was first used by Lee in [6]. In the end, we prove proposition 1.2 in section 5. ## 2 The first Dirichlet eigenvalue of manifolds Let's introduce some material about the Dirichlet eigenvalue in this section. Suppose that \((M,g)\) is a complete manifold and \(\Omega\subseteq M\) is a bounded domain of \(M\) with piecewise smooth boundary. The Dirichlet eigenfunctions are defined by solving the following problem for \(u\neq 0\) and eigenvalue \(\lambda\).
\[\left\{\begin{array}{ll}\Delta u=-\lambda u&in\ \Omega,\\ u=0&on\ \partial\Omega\end{array}\right. \tag{2.1}\] where \(\Delta=\frac{1}{\sqrt{G}}\frac{\partial}{\partial x^{i}}(g^{ij}\sqrt{G}\frac{ \partial}{\partial x^{j}}).\) The smallest eigenvalue is denoted by \(\lambda_{1}=\lambda_{1}(\Omega)>0.\) Recall the Sobolev space \(H^{1}(\Omega)=W^{1,2}(\Omega)\) and \(H^{1}_{0}(\Omega)\subseteq H^{1}(\Omega)\) is defined to be the closure of the infinitely differentiable functions compactly supported in \(\Omega.\) Then by the max-min principle, \[\lambda_{1}(\Omega)=\inf_{f\in H^{1}_{0}(\Omega)\setminus\{0\}}\frac{\int_{ \Omega}|\nabla f|^{2}dV_{g}}{\int_{\Omega}f^{2}dV_{g}} \tag{2.2}\] It's easy to see that the eigenvalue has domain monotonicity: if \(\Omega_{1}\subseteq\Omega_{2}\Subset M,\) then \(\lambda_{1}(\Omega_{1})\geq\lambda_{1}(\Omega_{2}).\) Now we suppose that \((M,g)\) is a noncompact manifold, and denote the greatest lower bound for the \(L^{2}\)-spectrum of the Laplacian by \[\lambda_{1}(M):=\inf spec(-\Delta)=\inf_{f\in H^{1}_{0}(M)\setminus\{0\}} \frac{\int_{M}|\nabla f|^{2}dV_{g}}{\int_{M}f^{2}dV_{g}}. \tag{2.3}\] Notice that \(\lambda_{1}(M)\) need not be an \(L^{2}\) eigenvalue of \(-\Delta,\) but it admits the characterization \[\lambda_{1}(M)=\lim_{k\to\infty}\lambda_{1}(\Omega_{k}) \tag{2.4}\] for any smoothly compact exhaustion \(\{\Omega_{k}\}\) of \(M.\) For example, for the hyperbolic space \(M=\mathbb{H}^{n+1},\) we know that \(spec(-\Delta)=[\frac{n^{2}}{4},+\infty),\) see [11]. Then for any \(p\in\mathbb{H}^{n+1},\) \[\lim_{R\to+\infty}\lambda_{1}(B(p,R))=\frac{n^{2}}{4}. \tag{2.5}\] It is an interesting problem what the formula of \(\lambda_{1}(B(p,R))\) looks like, or how \(\lambda_{1}(B(p,R))\) tends to \(\frac{n^{2}}{4}.\) It is shown in [12] and [1] that \[\lambda_{1}(B(p,R))=\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}}+O(R^{-3}),\ \ R\to+\infty. \tag{2.6}\] In this paper, we prove that (2.6) still holds for AHE manifolds with conformal infinity of nonnegative Yamabe type. ## 3 The upper bound of eigenvalues Let's recall the classic eigenvalue comparison theorem of Cheng [3]. If \((X,g)\) is an \(n+1\) dimensional complete manifold satisfying \(Ric[g]\geq-ng,\) then for any \(p\in X\) and \(R>0,\) \(\lambda_{1}(B(p,R))\leq\lambda_{1}(B^{\mathbb{H}}(R)).\) Here \(B^{\mathbb{H}}(R)\) is a geodesic ball of radius \(R\) in hyperbolic space. He also showed that \(\lambda_{1}(B^{\mathbb{H}}(R))\leq\frac{n^{2}}{4}+\frac{C}{R^{2}}\) for some positive constant \(C.\) Later the upper bound estimate was extended by Gage, see theorem 5.2 in [4]. In the following, we provide a weak version of the estimate for the upper bound with a simpler proof. **Theorem 3.1**.: _Let \(\mathbb{H}^{n+1}\) be the hyperbolic space of \(n+1\) dimension, then for any \(p\in\mathbb{H}^{n+1},\)_ \[\lambda_{1}(B(p,R))\leq\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}}+O(R^{-3})\ \ R\to+\infty. \tag{3.1}\] Proof.: Consider the rotationally symmetric model: \[(\mathbb{R}^{n+1},g_{\mathbb{H}}=dt^{2}+\sinh^{2}tg_{\mathbb{S}})\] and let \(p\) be the center point.
For any \(R>0,\) we define the function \[f=e^{-\frac{n}{2}t}\cdot\sin(\frac{\pi}{R}t)\in H^{1}_{0}((B(p,R)) \tag{3.2}\] Then \[\begin{split}\lambda_{1}(B(p,R))&\leq\frac{\int_{ B(p,R)}|\nabla f|^{2}dV[g^{\mathbb{H}}]}{\int_{B(p,R)}f^{2}dV[g^{\mathbb{H}}]}\\ &=\frac{\int_{0}^{R}e^{-nt}(-\frac{n}{2}\sin(\frac{\pi}{R}t)+ \frac{\pi}{R}\cos(\frac{\pi}{R}t))^{2}\cdot\omega_{n}\sinh^{n}tdt}{\int_{0}^{ R}e^{-nt}\sin^{2}(\frac{\pi}{R}t)\cdot\omega_{n}\sinh^{n}tdt}\\ &=\frac{\int_{0}^{R}(1-e^{-2t})^{n}\cdot(-\frac{n}{2}\sin(\frac{ \pi}{R}t)+\frac{\pi}{R}\cos(\frac{\pi}{R}t))^{2}dt}{\int_{0}^{R}(1-e^{-2t})^{n }\cdot\sin^{2}(\frac{\pi}{R}t)dt}\\ &=\frac{\int_{0}^{\pi}(1-e^{-\frac{2R\theta}{\pi}})^{n}\cdot(- \frac{n}{2}\sin\theta+\frac{\pi}{R}\cos\theta)^{2}d\theta}{\int_{0}^{\pi}(1-e^ {-\frac{2R\theta}{\pi}})^{n}\cdot\sin^{2}\theta d\theta}\\ &=\frac{F(R)}{G(R)}\end{split} \tag{3.3}\] where \[F(R)\leq\int_{0}^{\pi}(-\frac{n}{2}\sin\theta+\frac{\pi}{R}\cos\theta)^{2}d \theta=\frac{\pi}{2}\cdot(\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}})\] For the term \(G(R),\) a direct calculation indicates that \[\int_{0}^{\pi}e^{-r\theta}\cdot\sin^{2}\theta d\theta=\frac{2}{r(r^{2}+4)}(1- e^{-\pi r})=\frac{2}{r^{3}}+O(r^{-4}),\;r\to+\infty \tag{3.4}\] Hence we could get that \[\begin{split} G(R)&=\int_{0}^{\pi}[1+\sum_{k=1}^{n }C_{n}^{k}(-e^{-\frac{2R\theta}{\pi}})^{k}]\sin^{2}\theta d\theta\\ &=\frac{\pi}{2}-\frac{\pi^{3}}{4}[\sum_{k=1}^{n}C_{n}^{k}\frac{(- 1)^{k+1}}{k^{3}}]\frac{1}{R^{3}}+O(R^{-4})\end{split} \tag{3.5}\] In the end, we deduce that \[\lambda_{1}(B(p,R))\leq\frac{F(R)}{G(R)}\leq\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}} +\frac{n^{2}\pi^{2}}{8}[\sum_{k=1}^{n}C_{n}^{k}\frac{(-1)^{k+1}}{k^{3}}]\frac{1} {R^{3}}+O(R^{-4}). \tag{3.6}\] ## 4 The lower bound of eigenvalues Suppose that \((X,g)\) satisfies the conditions in theorem 1.1. Lee proved that \(\lambda_{1}(X)=\frac{n^{2}}{4}\) in [6]. The key step is to construct a proper test function \(\varphi=u^{-\frac{n}{2}}.\) Here \(u\) is an important positive eigenfunction with prescribed growth at infinity. In order to make our proof more clear, we would give a short quick introduction to Lee's proof. ### A quick review of Lee's work **Lemma 4.1**.: _[_6_]_ _Let \((X,g)\) be an \(n+1-\) dimensional AHE manifold with boundary metric \(\hat{g}\) and let \(x\) be the associated geodesic defining function. Then there is a unique positive eigenfunction \(u\) on \(X\) such that_ \[\Delta u=(n+1)u.\] _and \(u\) has the following form of expansion at the boundary_ \[u=\frac{1}{x}+\frac{\hat{R}}{4n(n-1)}x+O(x^{2}).\] Let \(\varphi=u^{-\frac{n}{2}},\) then \[-\frac{\Delta\varphi}{\varphi}=\frac{n^{2}}{4}+\frac{n(n+2)}{4}(1-\frac{|du|_ {g}^{2}}{u^{2}}).\] One can estimate near the boundary: \[u^{2}-|du|_{g}^{2}=\frac{\hat{R}}{n(n-1)}+o(1).\] On the other hand, the Bochner formula implies that \[-\Delta(u^{2}-|du|_{g}^{2})=2|\frac{\Delta u}{n+1}g+\nabla^{2}u|^{2}\geq 0.\] When \(Y(\partial X,[\hat{g}])\geq 0\) we can choose a representative \(\hat{g}\) whose scalar curvature \(\hat{R}\geq 0,\) then the maximum principle implies that \(u^{2}-|du|_{g}^{2}\geq 0\) in \(X.\) So \(-\frac{\Delta\varphi}{\varphi}\geq\frac{n^{2}}{4}\) in \(X.\) According to the eigenvalue comparison theorem of Cheng-Yau [3], \(\lambda_{1}(X)\geq\frac{n^{2}}{4}.\) Now we turn to research the first Dirichlet eigenvalue of geodesic balls. 
For sufficiently small \(\varepsilon>0,\) let \(X_{\varepsilon}=\partial X\times(0,\varepsilon).\) We study the first Dirichlet eigenvalue of \(X\setminus X_{\varepsilon}.\) If \(\hat{R}>0,\) we get that \[1-\frac{|du|_{g}^{2}}{u^{2}}\geq\frac{\hat{R}}{n(n-1)}\frac{1}{u^{2}}+o(\frac{ 1}{u^{2}})=\frac{\hat{R}}{n(n-1)}x^{2}+o(x^{2}).\] Then \[-\frac{\Delta\varphi}{\varphi}\geq\frac{n^{2}}{4}+c\varepsilon^{2},\ \ \ on\ X\setminus X_{\varepsilon}\] for some positive constant \(c\) and hence \(\lambda_{1}(X\setminus X_{\varepsilon})\geq\frac{n^{2}}{4}+c\varepsilon^{2}.\) As a consequence, \[\lambda_{1}(B(p,R))\geq\frac{n^{2}}{4}+\frac{C}{e^{2R}} \tag{4.1}\] for some \(C>0\) provided \(R\) is large enough. If \(\hat{R}=0,\) we know that \(1-\frac{|du|_{g}^{2}}{u^{2}}\) is still positive in \(X,\) see [5]. Then a similar estimate of (4.1) could be obtained. The lower bound \(\frac{C}{e^{2R}}\) is too "small" compared to \(\frac{\pi^{2}}{R^{2}}.\) We need to find a better test function to get a sharper lower bound of \(\lambda_{1}(B(p,R)).\) ### A new test function Let \(u\) be the eigenfunction that is defined in lemma 4.1 and \(\varphi=u^{-\frac{n}{2}}.\) In the following, for sufficiently small \(\varepsilon>0,\) we consider a new test function \[\psi=\varphi\cdot h=u^{-\frac{n}{2}}\cdot\sin(a\ln\varepsilon u) \tag{4.2}\] on the bounded domain \[F_{\varepsilon}=\{p\in X:u(p)<\frac{1}{\varepsilon}\} \tag{4.3}\] where \(a=a(n,\varepsilon)<0\) is a constant to be determined. A simple calculation indicates that \[h^{\prime}=\frac{a\cos(a\ln\varepsilon u)}{u},\ \ h^{\prime\prime}=\frac{-a^{2} \sin(a\ln\varepsilon u)-a\cos(a\ln\varepsilon u)}{u^{2}}=-a^{2}\frac{h}{u^{2 }}-\frac{h^{\prime}}{u}. \tag{4.4}\] Hence \[\Delta h=h^{\prime}\Delta u+h^{\prime\prime}|du|^{2}=(n+1)uh^{\prime}-(a^{2}h+uh^ {\prime})\frac{|du|^{2}}{u^{2}} \tag{4.5}\] and \[2g(d\ln\varphi,d\ln h)=2g(-\frac{n}{2}\frac{du}{u},\frac{h^{\prime}}{h}du)=-n \frac{uh^{\prime}}{h}\frac{|du|^{2}}{u^{2}}. \tag{4.6}\] As a consequence, \[-\frac{\Delta\psi}{\psi} =-\frac{\Delta\varphi}{\varphi}-(\frac{\Delta h}{h}+2g(d\ln\varphi,d\ln h) \tag{4.7}\] \[=\frac{n^{2}}{4}+\frac{n(n+2)}{4}(1-\frac{|du|^{2}}{u^{2}})-[(n+1 )\frac{uh^{\prime}}{h}-(a^{2}+\frac{uh^{\prime}}{h})\frac{|du|^{2}}{u^{2}}-n \frac{uh^{\prime}}{h}\frac{|du|^{2}}{u^{2}}]\] \[=\frac{n^{2}}{4}+a^{2}+(1-\frac{|du|^{2}}{u^{2}})[\frac{n(n+2)}{ 4}-(n+1)\frac{uh^{\prime}}{h}-a^{2}]\] \[=\frac{n^{2}}{4}+a^{2}+(1-\frac{|du|^{2}}{u^{2}})[\frac{n(n+2)}{ 4}-(n+1)a\cdot\cot(a\ln\varepsilon u)-a^{2}]\] We could assume that \(u\geq 1\) on \(X,\) or else we use \(ku\) instead where \(k\) is a constant large enough. Now set \[a=\frac{\pi}{\ln\varepsilon}+\frac{c_{n}}{\ln^{2}\varepsilon} \tag{4.8}\] for some constant \(c_{n}>0,\) then \(a<0.\) Hence on \(F_{\varepsilon},\) we have that \[a\ln\varepsilon u\in(0,\pi+\frac{c_{n}}{\ln\varepsilon}]\subseteq(0,\pi).\] As a result, \(h\) is smooth and positive on \(F_{\varepsilon}\) and so is \(\psi.\) Furthermore, \[\begin{split}-a\cdot\cot(a\ln\varepsilon u)&\geq-a \cdot\cot(\pi+\frac{c_{n}}{\ln\varepsilon})=-a\frac{\cos(\pi+\frac{c_{n}}{ \ln\varepsilon})}{\sin(\pi+\frac{c_{n}}{\ln\varepsilon})}\\ &>\frac{a}{\sin(\pi+\frac{c_{n}}{\ln\varepsilon})}=\frac{\frac{ \pi}{\ln\varepsilon}+\frac{c_{n}}{\ln^{2}\varepsilon}}{\sin(-\frac{c_{n}}{ \ln\varepsilon})}\rightarrow-\frac{\pi}{c_{n}}.\end{split} \tag{4.9}\] Therefore \[\liminf_{\varepsilon\to 0}[\frac{n(n+2)}{4}-(n+1)a\cdot\cot(a\ln \varepsilon u)-a^{2}]\geq\frac{n(n+2)}{4}-(n+1)\frac{\pi}{c_{n}}. 
\tag{4.10}\] If we choose \(c_{n}\geq\frac{4\pi(n+1)}{n(n+2)},\) then the formula (4.10) is nonnegative and finally we can get that \[-\frac{\Delta\psi}{\psi}\geq\frac{n^{2}}{4}+a^{2} \tag{4.11}\] on \(F_{\varepsilon}\) provided \(\varepsilon\) is sufficiently small. Then \[\lambda_{1}(F_{\varepsilon})\geq\frac{n^{2}}{4}+a^{2}=\frac{n^{2}}{4}+\frac{ \pi^{2}}{\ln^{2}\varepsilon}+O(\frac{1}{\ln^{3}\varepsilon}). \tag{4.12}\] For any \(p\in X\) and large \(R>0,\) let's consider the first Dirichlet eigenvalue of \(B(p,R).\) Since \[|u-\frac{1}{x}|\leq C_{1},\ \ \ |-\ln x(\cdot)-d_{g}(p,\cdot)|\leq C_{2} \tag{4.13}\] where \(C_{1}\) and \(C_{2}\) are positive constants, we have that \[e^{d_{g}(p,\cdot)}\geq\frac{e^{-C_{2}}}{x}\geq e^{-C_{2}}(u-C_{1}) \tag{4.14}\] Then \[B(p,R)\subseteq\{q\in X:u(q)<e^{R+C_{2}}+C_{1}\}\subseteq F_{e^{-R-C_{3}}}\] for some constant \(C_{3}>0\) when \(R\) is large enough. Hence \[\begin{split}\lambda_{1}(B(p,R))&\geq\lambda_{1}(F_ {e^{-R-C_{3}}})\\ &\geq\frac{n^{2}}{4}+\frac{\pi^{2}}{(R+C_{3})^{2}}+O(\frac{1}{(R+ C_{3})^{3}})\\ &=\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}}+O(\frac{1}{R^{3}}).\end{split} \tag{4.15}\] Proof of theorem 1.1: theorem 3.1 and the eigenvalue comparison theorem in [3] provide the upper bound of the first Dirichlet eigenvalue of balls, while (4.15) provides the lower bound. This finishes the proof of theorem 1.1. ## 5 The geometric property of the asymptotic behavior To prove proposition 1.2, we introduce an important result of Li and Wang: **Theorem 5.1**.: _[_8_]_ _Let \((M,g)\) be an \(n+1\) dimensional complete manifold with \(n\geq 2.\) Suppose that \(Ric[g]\geq-n\) and \(\lambda_{1}(M)=\frac{n^{2}}{4}.\) Then_ _(1) \(M\) has only one end with infinite volume; or_ _(2) \((M,g)=(\mathbb{R}\times N,dt^{2}+e^{2t}g_{N})\) where \((N,g_{N})\) is an \(n-\)dimensional compact manifold satisfying that \(Ric[g_{N}]\geq 0;\) or_ _(3) \(n=2\) and \((M,g)=(\mathbb{R}\times N,dt^{2}+\cosh^{2}tg_{N})\) where \(N\) is a compact surface satisfying that the sectional curvature \(K_{N}\geq-1.\)_ In the following, we will show that the rate of eigenvalues in cases (2) and (3) of theorem 5.1 does not match the formula (1.2). **Example 5.2**.: _Let \((M,g)=(\mathbb{R}\times N,dt^{2}+e^{2t}g_{N})\) be an \(n+1-\)dimensional manifold \((n\geq 2)\) where \((N,g_{N})\) is an \(n-\)dimensional compact manifold satisfying that \(Ric[g_{N}]\geq 0.\) Then \(Ric[g]\geq-n\) and \(\lambda_{1}(M)=\frac{n^{2}}{4}.\)_ Now we are going to study the formula of \(\lambda_{1}(B(p,R)).\) We assume that \(d_{N}=diam(N,g_{N})>0\) and for any \(p\in M\) and large \(R>0,\) let \(d_{p}=dist_{g}(p,N).\) Then \[B(p,R-d_{p})\subseteq B(N,R)\subseteq B(p,R+d_{N}+d_{p}).
\tag{5.1}\] Here \(B(N,R)=(-R,R)\times N.\) Let \(f=e^{-\frac{n}{2}t}\cos(\frac{\pi}{2R}t),\) then \[f>0\;\;in\;B(N,R),\quad f=0\;\;on\;\partial B(N,R).\] On the other hand, \[\begin{split}\Delta f&=f^{\prime}(t)\Delta t+f^{ \prime\prime}(t)|\nabla t|^{2}=nf^{\prime}(t)+f^{\prime\prime}(t)\\ &=ne^{-\frac{n}{2}t}[-\frac{n}{2}\cos(\frac{\pi}{2R}t)-\frac{\pi }{2R}\sin(\frac{\pi}{2R}t)]+e^{-\frac{n}{2}t}[\frac{n^{2}}{4}\cos(\frac{\pi}{2 R}t)\\ &\quad+\frac{n\pi}{4R}\sin(\frac{\pi}{2R}t)+\frac{n\pi}{4R}\sin( \frac{\pi}{2R}t)-\frac{\pi^{2}}{4R^{2}}\cos(\frac{\pi}{2R}t)]\\ &=e^{-\frac{n}{2}t}[-\frac{n^{2}}{4}\cos(\frac{\pi}{2R}t)-\frac{ \pi^{2}}{4R^{2}}\cos(\frac{\pi}{2R}t)]\\ &=-(\frac{n^{2}}{4}+\frac{\pi^{2}}{4R^{2}})f\end{split} \tag{5.2}\] which means that \(f\) is an eigenfunction and \(\lambda_{1}(B(N,R))=\frac{n^{2}}{4}+\frac{\pi^{2}}{4R^{2}}.\) Then the monotonicity of the first Dirichlet eigenvalue and (5.1) imply that \[\lambda_{1}(B(p,R))=\frac{n^{2}}{4}+\frac{\pi^{2}}{4R^{2}}+O(R^{-3}),\;\;R \rightarrow+\infty. \tag{5.3}\] **Example 5.3**.: _Let \((M,g)=(\mathbb{R}\times N,dt^{2}+\cosh^{2}(t)g_{N})\) be a \(3-\)dimensional manifold where \((N,g_{N})\) is a compact surface with Gaussian curvature bounded from below by \(-1.\) Then \(Ric[g]\geq-2\) and \(\lambda_{1}(M)=1.\)_ As discussed above, we only need to calculate the first Dirichlet eigenvalue of \(B(N,R)=(-R,R)\times N.\) Let \(f=\frac{\cos(\frac{\pi}{2R}t)}{\cosh(t)},\) then \[f>0\;\;in\;B(N,R),\quad f=0\;\;on\;\partial B(N,R).\] Furthermore, \[f^{\prime}(t)=-\frac{\pi}{2R}\frac{\sin(\frac{\pi}{2R}t)}{\cosh(t)}-\tanh(t)\cdot f \tag{5.4}\] and \[f^{\prime\prime}(t)=(-\frac{\pi^{2}}{4R^{2}}+\tanh^{2}(t)-\frac{1}{\cosh^{2}(t)} )f+\frac{\pi}{R}\frac{\sinh(t)}{\cosh^{2}(t)}\sin(\frac{\pi}{2R}t) \tag{5.5}\] and hence \[\begin{split}\Delta f&=f^{\prime}(t)\Delta t+f^{ \prime\prime}(t)|\nabla t|^{2}=f^{\prime}(t)\cdot 2\tanh(t)+f^{\prime\prime}(t)\\ &=-\frac{\pi}{R}\frac{\sinh(t)}{\cosh^{2}(t)}\sin(\frac{\pi}{2R}t )-2\tanh^{2}(t)\cdot f\\ &\quad+(-\frac{\pi^{2}}{4R^{2}}+\tanh^{2}(t)-\frac{1}{\cosh^{2}(t )})f+\frac{\pi}{R}\frac{\sinh(t)}{\cosh^{2}(t)}\sin(\frac{\pi}{2R}t)\\ &=(-\frac{\pi^{2}}{4R^{2}}-\tanh^{2}(t)-\frac{1}{\cosh^{2}(t)})f \\ &=-(1+\frac{\pi^{2}}{4R^{2}})f\end{split} \tag{5.6}\] We obtain that \(\lambda_{1}(B(N,R))=1+\frac{\pi^{2}}{4R^{2}}\) and hence for any \(p\in M,\) \[\lambda_{1}(B(p,R))=1+\frac{\pi^{2}}{4R^{2}}+O(R^{-3}),\ \ R\to+\infty. \tag{5.7}\] Theorem 5.1 together with (5.3) and (5.7) leads naturally to Proposition 1.2.
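As a quick sanity check of the computation in (5.2), the eigenfunction equation can be verified symbolically. The following sketch is not part of the original argument; it only confirms, using Python's sympy library, that the test function of Example 5.2 satisfies \(\Delta f=-(\frac{n^{2}}{4}+\frac{\pi^{2}}{4R^{2}})f\) on the warped product \((\mathbb{R}\times N,dt^{2}+e^{2t}g_{N})\), where \(\Delta f=f^{\prime\prime}+nf^{\prime}\) for functions of \(t\) alone.
```python
import sympy as sp

t, R, n = sp.symbols('t R n', positive=True)

# Test function from Example 5.2: f = e^{-nt/2} cos(pi t / (2R))
f = sp.exp(-n*t/2) * sp.cos(sp.pi*t/(2*R))

# Laplacian of a function of t alone on (R x N, dt^2 + e^{2t} g_N): f'' + n f'
laplacian_f = sp.diff(f, t, 2) + n*sp.diff(f, t)

# The ratio -laplacian_f / f should simplify to the constant n^2/4 + pi^2/(4R^2)
eigenvalue = sp.simplify(-laplacian_f / f)
print(eigenvalue)  # expected to equal n**2/4 + pi**2/(4*R**2)
```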
2309.06386
Lung Diseases Image Segmentation using Faster R-CNNs
Lung diseases are a leading cause of child mortality in the developing world, with India accounting for approximately half of global pneumonia deaths (370,000) in 2016. Timely diagnosis is crucial for reducing mortality rates. This paper introduces a low-density neural network structure to mitigate topological challenges in deep networks. The network incorporates parameters into a feature pyramid, enhancing data extraction and minimizing information loss. Soft Non-Maximal Suppression optimizes regional proposals generated by the Region Proposal Network. The study evaluates the model on chest X-ray images, computing a confusion matrix to determine accuracy, precision, sensitivity, and specificity. We analyze loss functions, highlighting their trends during training. The regional proposal loss and classification loss assess model performance during training and classification phases. This paper analyzes lung disease detection and neural network structures.
Mihir Jain
2023-09-10T16:37:03Z
http://arxiv.org/abs/2309.06386v1
# Lung Diseases Image Segmentation using Faster R-CNNs ###### Abstract Lung diseases are the number one cause of child mortality across the developing world; about half of global pneumonia deaths, roughly 370,000, occur in India according to the Indian Academy of Pediatrics and the National Health Profile 2016. Timely diagnosis could significantly reduce the mortality level. This paper uses a low-density neural network structure to avoid the topological lag which comes with network depth and breadth. The neural network integrates its parameters into a feature pyramid network, extracting data so as to avoid losses. Soft non-maximum suppression is applied to the region proposals created by the RPN. All functional information is filtered, and a model that achieves a high level of detection performance is obtained. For performance evaluation the trained model is validated with randomly selected chest X-ray images taken from the same dataset; we then compute the confusion matrix to obtain the accuracy, precision, sensitivity, and specificity values. All the loss functions involved in training and testing the model are summarized by the total loss, which decreases as training proceeds. The region proposal loss reflects the performance of the model during the training phase and determines the quality of the region proposals. The classification loss reflects the loss during the classification phase and determines the quality of the classification. ## 1 Introduction and Motivation According to the Indian Academy of Pediatrics and the National Health Profile 2016 [1], fifty percent of pneumonia deaths occur in India, which means approximately 370,000 children die of pneumonia annually in India. Moreover, pneumonia is the number one killer of children, causing 18% of all child mortality in the world. Even this harsh number is small compared to mortality due to other chest and lung diseases. Timely diagnosis and treatment could reduce the mortality level. Conventional reading of X-ray images is slow [2], making purely human evaluation inadequate. Computer-aided diagnosis can enhance productivity and lead to timely treatment, potentially saving millions of lives. The size, shape and position of pneumonia can vary greatly [3]. Its outline is very vague and blurry, which makes detection difficult, and enhancing the accuracy of detection is a major research problem. At present, detection algorithms include two-stage region detectors and object classifiers such as Faster R-CNN [4], and one-stage region detectors and object classifiers such as YOLO [5] and SSD [6]. In the latter, object classification and bounding box regression are done directly, without using pre-generated region proposals, while two-stage detectors first generate region proposals and then perform object classification for each region proposal. Though faster than two-stage detectors, one-stage detectors are less accurate. Treatment and diagnosis need high accuracy, and thus two-stage detectors and classifiers have the advantage. However, there are still problems with present image classification models such as Xception [7] and VGG [8]. They need a large network depth, leading to prolonged training time, and heavy downsampling, which leads to the target position and semantic information being lost. ### Project Statement Object detection is one of the most promising fields for expanding global healthcare services, especially in lower-income countries. Automated algorithms and tools have the potential to support workflow, improve efficiency, and reduce errors.
The aim of this paper is to address a common classification problem. The ideal approach would be to assess images using a deep feature map, since such a map has a large receptive field, which makes the region of the input that produces each feature anchor correspondingly large. However, deep maps reduce the object-edge resolution, which reduces the accuracy of the bounding-box regression. After continuous downsampling in such maps, the semantic features of small targets disappear in the deeper layers; large targets are also partially lost and their edges shift, which is not favorable for accurate detection of the target. Conventionally, the way to strengthen a network such as Xception is to extend its width or depth [9], but this method creates a huge number of parameters, which leads to high variance in the model and needs large amounts of data to train. ### Organisation of Report This paper uses a low-density neural network structure to avoid the topological lag which comes with network depth and breadth. The neural network integrates its parameters into a feature pyramid network (FPN) [10], extracting data so as to avoid losses. Soft non-maximum suppression (Soft-NMS) [11] is applied to the region proposals created by the RPN. All functional information is filtered and a model that achieves high-level detection is obtained. This paper uses Faster R-CNN with ResNet. ## 2 Background Material ### Conceptual Overview The reason why one cannot proceed with object detection by building a convolutional neural network followed by a single output layer is that the length of the output layer is not fixed, as the number of occurrences of the object of interest is not fixed. One might instead take different regions of interest and then test for the presence of objects within those regions. The problem with this approach is that objects of interest can have different aspect ratios and different locations within the image; hence a huge number of regions would be needed, and this could blow up computationally. Thus arises the need for algorithms like R-CNN, YOLO and SSD. Conventionally, object detection models used three steps. The first step involved generating a large number of region proposals. Region proposals are the parts of the image which might contain the objects to be detected. Algorithms such as selective search and EdgeBoxes generate region proposals. Then, from each of these region proposals, a feature vector is extracted using image descriptors such as the histogram of oriented gradients. This feature vector is critical for the model to work correctly; these vectors should describe an object even if it varies in scale or translation. The feature vector is then used to assign each region proposal to one of the object classes. But as the number of object classes increases, the complexity of such models grows greatly with the classes. After feature extraction, the method used for classifying the region proposals is typically a support vector machine. ### Yolo YOLO (You Only Look Once) was proposed by Redmon in 2016. As stated in the original proposal [5], "Compared to other region proposal classification networks (Fast R-CNN) which perform detection on various region proposals and thus end up performing prediction multiple times for various regions in an image, the YOLO architecture is more like a CNN (convolutional neural network) and passes the image (nxn) through once, and the output is an (mxm) prediction.
Thus the architecture splits the input image into an mxm grid and, for each grid cell, generates 2 bounding boxes and class probabilities for those bounding boxes." The biggest advantage of the model is speed (45 frames per second), and there is an even faster version at 155 fps which is less accurate due to its smaller architecture. But YOLO imposes strong constraints on bounding box predictions as it treats them as a regression problem. This limits the number of small objects that the model can predict. Thus the model struggles with small objects that appear in groups, and its accuracy on them is low. ### Ssd SSD (Single Shot Detector), proposed by Liu [6], detects objects independently on multiscale feature maps; it is significantly faster while remaining accurate in object detection. A comparison of the speed and accuracy of different object detection models on VOC2007 [12]: SSD300: 59 FPS with mAP 74.3%; SSD500: 22 FPS with mAP 76.9%; Faster R-CNN: 7 FPS with mAP 73.4%; YOLO: 45 FPS with mAP 63.4%. SSD has a base VGG-16 network followed by multibox layers. The high speed and accuracy come from eliminating the bounding-box proposal stage used in R-CNN and instead applying filters with different sizes and aspect ratios for object detection. However, the spatial resolution is reduced, which makes it hard to locate small targets and reduces accuracy. ### Region Proposals Region proposals, or regions of interest, are, for a given input image, all the possible places where an object could be located. The output is a list of bounding boxes of likely positions of the objects. ### RoI Pooling RoI pooling is a neural network layer used for object detection. It achieves a speedup for both training and inference while maintaining high accuracy. Consider performing RoI pooling on a feature map for one region of interest with an output of size 2x2. Say we have a region proposal given by (x, y, h, w) coordinates; it is divided into 2x2 sections. Note that the size of the RoI need not be perfectly divisible by the number of pooling sections. The maximum value in each section is the output of the RoI pooling layer. ### Rcnn R-CNN was proposed by Girshick in 2014 [13] to solve the problem of selecting a high number of regions. He proposed a selective search approach to extract only the informative regions from the image instead of going through all possible regions. This improved the speed of training and the accuracy; on the VOC2010 dataset the mAP rose from 35.1% to 53.7%. Compared to conventional models, the R-CNN model can detect 80 different types of objects in images, extracting features with a convolutional neural network; other than this, everything is the same as in the traditional model. R-CNN contains three steps. The first module generates about 2,000 region proposals using the selective search algorithm. Then, after augmentation, the second module extracts a feature vector of length 4,096 from each region proposal. The third module uses a pre-trained SVM to classify each region proposal as either one of the object classes or the background. The issue with R-CNN is that it still takes a huge amount of time to train the network, as one has to classify the region proposals of each image, and it cannot be run in real time. The selective search approach is a fixed algorithm, and therefore no learning happens at that stage, which can lead to bad candidate region proposals. ### Fast RCNN This is an object detector which was also developed by Girshick; it overcomes some of the issues of R-CNN.
- He proposed a new layer called RoI pooling that extracts equal-length feature vectors from all regions of interest (i.e. proposals) in the same image. - Compared to R-CNN, which has multiple stages, Fast R-CNN builds the network in a single stage. - Fast R-CNN shares convolutional layer calculations across all regions of interest rather than doing the calculations for each proposal separately; using the RoI pooling layer makes Fast R-CNN faster than R-CNN. The feature map from the last convolutional layer is fed to an RoI pooling layer in order to extract a fixed-length feature from each RoI. RoI pooling works by splitting each region proposal into a grid of cells, and the max pooling operation is applied to each grid cell to return a single value. The extracted feature vector is then passed to fully connected layers, whose output is split into two branches: a softmax layer to predict the class scores and an FC layer to predict the bounding boxes of the detected objects. This decreases time consumption, as Fast R-CNN shares the computation across proposals while R-CNN computes features for each proposal separately. R-CNN also takes a single RoI from each input image, so if the model needs 256 proposals it would take them from 256 images, while Fast R-CNN can potentially take 256 RoIs from 4 images, about 64 per image, leading to a reduced training time. However, taking multiple RoIs from the same image can reduce the accuracy of the model, as the regions become correlated. Even though Fast R-CNN is better than R-CNN in time, it has its problems, as it depends on selective search to generate region proposals, which cannot be customized for a specific object detection dataset. ### Faster RCNN Selective search [14] is a slow and time-consuming process; to overcome it, in 2015 Ren [4] came up with the Faster R-CNN model, an object detection algorithm that does not need selective search and lets the network learn the region proposals. It is an extension of Fast R-CNN based on a region proposal network (RPN). The region proposal network is a convolutional neural network that generates proposals at various scales and aspect ratios; it tells the network where to look. Instead of using multiple images at different shapes or sizes, the paper introduced anchor boxes. An anchor box is a reference box of a specific scale and aspect ratio. Multiple reference boxes lead to multiple shapes and sizes for the candidate regions, and each region is then mapped to a reference anchor box for detection at different scales. The computations are shared across the RPN and the Fast R-CNN described above to reduce the computational time. The architecture of Faster R-CNN consists of two parts: the RPN, for generating region proposals, and Fast R-CNN, for object detection in the proposed regions. It works as follows: the RPN generates region proposals; then, from all region proposals, a fixed-length feature vector is extracted using the RoI pooling layer; the extracted features are classified using Fast R-CNN; and the class scores of the detected objects, together with their boxes, are returned. Both R-CNN models before Faster R-CNN depend on selective search algorithms to generate region proposals, and each proposal is fed to a pre-trained CNN, while Faster R-CNN uses a region proposal network to produce region proposals. Thus region proposals are produced using a network, which means they can be trained for specific tasks in object detection.
Also, because they are trained, the model can be used on a customised dataset, which produces better proposals than selective search or EdgeBoxes. Figure 1: Fast RCNN Architecture. By sharing convolutional layers, the RPN is merged with Fast R-CNN into a single unified network so that training is done only once. The RPN works on the output feature map of the convolutional layers shared with Fast R-CNN: a sliding window passes over the feature map, which leads to the generation of region proposals. Each proposal is characterized by a score given relative to a reference box called the anchor box. The anchor box has two parameters, scale and aspect ratio, and k regions are produced at each sliding-window position, where the k regions vary in scale and size. Because anchor boxes vary in size and scale, a scale-invariant object detector is obtained even though a single image at a single scale is used; multi-scale anchor boxes are thus to the benefit of the model. From each region proposal a feature vector is extracted and fed to two layers. The first is a binary classification layer that generates an objectness score for each proposal, and the second returns the bounding box of the region. The first layer has two outputs indicating whether the region is an object that has to be detected or background: if the first output is 1 and the second is 0, the region is classified as background, whereas if the first is 0 and the second is 1, it is an object. For RPN training, each anchor is also given a score based on the IoU (discussed later), which is the ratio of the intersection area between the anchor box and the ground truth to the union of the same boxes; the IoU increases as the boxes come closer to each other. The image is used as an input to a network which outputs a convolutional feature map, and instead of relying on a selective search algorithm on the feature map to identify the region proposals, a separate network is used to predict them. Faster R-CNN is faster and can even be used for real-time object detection. ### Pneumonia Detection Works Many researchers have sought to detect pneumonia in the recent past. Abiyev and Ma'aitah [15] applied a CNN to chest X-rays; in comparison to an RNN, the convolutional neural network achieves higher accuracy but has a longer training time. Guendel [16] proposed to use DenseNet for detection on a chest X-ray dataset. Abiyev and Ma'aitah [15] also extracted features from the layers of a CNN and explored descriptors such as GIST on more than 600 radiographs. As the COVID pandemic engulfed the world in 2020, Wang and Wong [17] proposed COVID-Net, a deep CNN specialized for the detection of COVID-19 cases from chest X-ray (CXR) images. Ozturk [18] proposed automatic detection using raw chest X-ray images; this model provides diagnostics for binary classification (COVID vs. no findings) and multiclass classification (COVID vs. no findings vs. pneumonia). ## 3 Methodology In this section, the proposed model is introduced in detail, including the data processing, the architecture, and the enhancement obtained from soft non-maximum suppression. ### Dataset The dataset [19] of chest X-ray images and metadata is provided by the National Institute of Health and Clinical Research through Kaggle. This dataset contains images from 27,864 unique patients. Each image is labelled with one of three classes. Lung complications arise when parts of the lungs are filled with something other than air, such as bacteria, fluids, or cells.
This changes how the lungs attenuate X-ray beams, which is why X-rays are used: in such cases the lung opacity is greater than normal because the lung tissue is not healthy. The normal class contains images of perfectly healthy lungs without any pathological findings on the CXR. The third class in the dataset, lung opacity, contains images of lungs with cloud-like regions associated with diseases such as pneumonia. These regions of lung opacity are labelled with bounding boxes, and an image can have multiple such boxes if more than one area of increased opacity is detected by the object detection model. The middle class consists of patients with more opaque lungs but no opacities attributable to pneumonia. ### Evaluation Metrics The model is evaluated on the mean average precision at different intersection-over-union (IoU) thresholds. The IoU of a predicted bounding box and a ground truth box is given by IoU(A,B) = \(\frac{A\cap B}{A\cup B}\) The metric sweeps over a range of IoU thresholds, calculating an average precision value at each point. At a threshold of 0.5, a detected object is considered a hit if its IoU with a ground truth object is greater than 0.5. At each threshold value t, a precision value is calculated based on the numbers of true positives (TP), false negatives (FN) and false positives (FP) arising from comparing the model predictions with the ground truth objects. A true positive is counted when a single predicted object matches a ground truth object with an IoU above the threshold. A false positive is a predicted object with no associated ground truth object. A false negative is a ground truth object with no associated predicted object. When an image contains no ground truth objects at all, any number of false positives will result in the image receiving a score of zero, and this is included in the mean average precision. The average precision of a single image is calculated as the mean of the precision values at each IoU threshold: mAP(p,t) = \(\frac{1}{|\text{thresholds}|}\sum_{t}\frac{TP(t)}{TP(t)+FP(t)+FN(t)}\) In this model we use a confidence level for each of the bounding boxes. Bounding boxes are evaluated in order of their confidence levels; that is, the bounding boxes with higher confidence are checked first against the ground truth, which determines which boxes are considered true and false positives. None of the edge cases are known to exist in the dataset. Lastly, the score returned by the competition metric is the mean taken over the individual average precision of each image in the test dataset. ### Model To avoid test-time augmentation, which would require heavy GPU usage, and pseudo-labelling, which is not feasible in practice, and to reduce the memory footprint and time of inference, I propose a Faster R-CNN model utilizing a ResNet encoder pretrained on ImageNet. \begin{table} \begin{tabular}{|c|c|c|} \hline Class & Target (Pathogens = 1, None = 0) & Count \\ \hline Lung Opacity & 1 & 9555 \\ \hline Not Normal / No Opacity & 0 & 11821 \\ \hline Normal & 0 & 8851 \\ \hline \end{tabular} \end{table} Table 1: Distribution Of Classes in the dataset Figure 2: Model Also, because Soft-NMS [23] improves the performance of detection models, I have implemented the Soft-NMS algorithm as well. Faster R-CNN is used for object detection while ResNet is the architecture which performs feature extraction for the model. Faster R-CNN defines the label for each input, defines how the features [24] and labels are used to perform supervised learning, defines the loss function and optimiser, and defines the training and testing pipeline.
ResNet, in turn, defines how the features are extracted [25]. After having saved the checkpoints for the trained model, we call the model with the argument generatePredictions and the path to the model checkpoint, and generate predictions for the test images. ## 4 Implementation ### Model Building The Faster R-CNN model is implemented in PyTorch on a cloud instance with a ResNet architecture. The input to the model is a list of tensors, one per image, with dimensions given by colour channels, height and width, and values in the range 0-1. Different images may have different sizes. The model's behaviour changes depending on whether it is in training or evaluation mode. During training the model expects both the input tensors and targets containing the boxes, which are the ground-truth boxes, and labels for each of these ground-truth boxes. During training the model returns a Dict[Tensor] containing the classification and regression losses for the region proposal network and the R-CNN head. During inference the model requires only the input tensors and returns the processed predicted results as a List[Dict[Tensor]], one for each input image. FasterRCNN needs the following arguments as inputs: - a backbone network which is used to compute the features for the model; the backbone should at least have an out_channels attribute which gives the number of output channels that each feature map has, and the backbone returns a single Tensor; - the number of output classes for the model; if a box predictor is specified, the number of classes should be None; - the min and max size to which the image is rescaled before feeding it to the backbone; - image_mean, the values used for input normalisation, which should be the mean values of the dataset on which the backbone has been trained. Mean average precision at different intersection-over-union (IoU) thresholds is calculated. Images with no ground truth bounding boxes are not included in the mAP score unless there is a false positive detection; if both predictions and ground truth are empty, None is returned and the image is not counted in the final evaluation. A true positive (tp = tp + 1) is counted when a ground-truth box in boxes_true is matched; when a truth box has no match, it is counted as a false negative (fn = fn + 1); the false positives are the predicted boxes not matched by any truth box (fp = len(boxes_pred) - len(matched_bt)). The per-threshold precision is m = tp / (tp + fn + fp), accumulated as map_total += m, and the score for the mAP is given by map_total / len(thresholds). - Images with empty boxes are fed to the model so that they contribute to optimisation and loss calculation. - The original RetinaNet implementation in PyTorch [20] ignored images with no boxes, while this model adds them to get a better loss calculation and optimisation. - A small-anchors output is added to handle smaller boxes. - To reduce overfitting, dropout was added to the classification head to achieve regularisation and an optimal classification result at the same epoch. The base model preparation is comparatively quick, as PyTorch already provides a Faster R-CNN ResNet50 FPN model with pretrained weights; so, to save time and computing power, we need to: load that model and its comparison models for the results; adapt the model to our input and output needs; and modify the base model heads with the number of classes according to our dataset (a minimal sketch of this setup is given below). ### Model Training The training dataset included data for 27,864 participants, and data from 1,000 participants was used as the testing set. A PyTorch base model pretrained on ImageNet was used to initialize our model; without it, the model worked on regression but failed on classification.
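The following is a minimal sketch of the base-model preparation described above, using torchvision's detection API. The number of classes (background plus lung opacity) and the base learning rate are illustrative assumptions, not values taken from the paper, while the ReduceLROnPlateau settings mirror the training configuration given next.
```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load the Faster R-CNN ResNet50-FPN model shipped with torchvision,
# with default pretrained weights (the backbone itself is ImageNet-pretrained).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box predictor head to match our dataset:
# 2 classes = background + lung opacity (an assumption based on Table 1).
num_classes = 2
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Optimiser plus a ReduceLROnPlateau schedule (patience 4, factor 0.2);
# the base learning rate below is a placeholder, not stated in the paper.
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.2, patience=4)
```
In a training loop, scheduler.step would be called with the validation loss at the end of each epoch so that the learning rate is reduced when the loss plateaus.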
As the training dataset was reasonably balanced, there was no need for extra balancing. For the learning rate schedule we used ReduceLROnPlateau with a patience of 4 and a decrease factor of 0.2. The whole-image classification loss and the box classification loss were combined to obtain the total loss. Figure 8 (model comparison) shows the validation losses for a range of backbones. The SE-type nets demonstrated optimal performance, with ResNeXt101 showing the best results and ResNet50 being slightly worse. Different architectures were compared with our architecture: PNASNet, NASNet, Inception, SE-ResNet, Xception and ResNet. There were significant trade-offs between speed, complexity, accuracy and the number of parameters. ResNet's balance between accuracy and complexity was optimal. VGG nets did not provide good accuracy on the dataset, while SENet and NASNet had the longest training times. The training configuration file contains: - the number of epochs to train for; - the computation device used for training: as the CPU is slow for Faster R-CNN we need a GPU; alternatively, Google Colaboratory can be used for object detection in general; - the batch size used for training; - the dimensions to which we need to resize the images for augmentation. ### Model Learning To remove parameters, we trained the model with fixed augmentations and without the classification outputs. Results improved when the model was made to predict other related outputs rather than only the output of the regression-based model. Without pretraining, the model took much longer for the validation loss to converge; pretraining on ImageNet in the cloud and dropout rates of 0.5 and 0.75 showed the best results on the dataset. ### Dataset Image Preprocessing The images were scaled to 512x512 pixels; 256x256 gave unsatisfactory results. The original images were 2000x2000 pixels, which was not practical as training on them is much heavier. We therefore deployed the following augmentations: rotations, shifts, and horizontal flips. Without enough augmentation the model overfitted, as the validation score stopped improving when training longer. Heavy augmentation led the model to show better validation loss and mAP results. The grayscale distribution differs across images, and the low brightness contrast of CXR images leads to higher validation losses, as it makes it harder to locate the lesion [21]; hence the model proposed in this paper uses the CLAHE algorithm [22] to equalize the gray scale of our dataset. Figure 3: Model Training ### Result And Analysis The deep learning model is used to detect lung opacities in the chest X-ray image dataset. Faster R-CNN + ResNet was trained for 15 epochs. The ResNet model was compared with Xception, PNASNet and Inception and showed considerably greater accuracy and specificity on the same dataset used for comparison. Accuracy, specificity, precision and recall were compared for comparable architectures as well as for the method used in the paper. The Faster R-CNN model was trained to classify the CXR dataset. To evaluate the validity of the model, 5-fold cross-validation was performed, with the first fold used to test and the remaining folds used to train the model; performance was then measured as a binary classification problem. The performance change of ResNet on each cross-validation fold is shown in Table 3. The researched Faster R-CNN + ResNet model showcased 95.28% accuracy and 96.36% specificity. ## 5 Conclusion and Future Scope ### Conclusions In this paper, a Faster R-CNN algorithm based on the ResNet architecture was used. A number of changes were implemented to improve the accuracy of the model.
The architecture was also compared with similar models to verify the reported validation loss results. Heavy augmentation in particular was applied to the dataset. Several checkpoints were utilised in the model to create a generalisable model. Improvements were made using the said approaches, as shown in the validation loss results. The model does not involve end-user preprocessing and augmentation, and gives a good balance between accuracy and speed considering the resources of the dataset. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Method & Accuracy & Specificity & Precision & Recall & F1 \\ \hline Xception & 93.16\% & 89.03\% & 94.63\% & 87.26\% & 94.5\% \\ \hline Resnet & 95.28\% & 96.36\% & 96.6\% & 97.32\% & 96.93\% \\ \hline PNASNet & 94.35\% & 91.79\% & 96.48\% & 89.35\% & 95.74\% \\ \hline Inception & 95.23\% & 92.64\% & 94.35\% & 87.62\% & 97.63\% \\ \hline \end{tabular} \end{table} Table 2: Performance Metrics by Method Figure 4: Model Learning ### Future Scope The approach can be compared with other deep learning technologies such as YOLO and SSD, and extended with different types of R-CNN such as R-CNN, Mask R-CNN and Fast R-CNN. - Different architectures can be used to broaden the comparison and verify the paper's results. A combination of architectures could also be used to create a new architecture, which would change the trade-off identified in the paper between accuracy and dataset limitations. - The model needs to be trained at a larger scale on a local GPU with a bigger dataset to verify that the projected results hold true in further experiments. - A further aspect to address in future studies is increasing the training dataset, which would require labelling done manually by a medical professional and verified by another medical professional to mitigate any human errors. ## 6 Backmatter Acknowledgments. I would like to express my special thanks of gratitude to my project advisor, Dr. Narendra Singh Yadav, for their able guidance and support. Their help, suggestions, and encouragement greatly contributed to the completion of my report. I would also like to thank Manipal University and the Department of Information Technology for providing me with all the facilities required for this project. Data Availability Statement. The dataset [19] containing chest X-ray images and metadata is provided by the National Institute of Health and Clinical Research through Kaggle. Data Availability. Data underlying the results presented in this paper are available in Dataset 1.
2309.07098
Mitigating Hallucinations and Off-target Machine Translation with Source-Contrastive and Language-Contrastive Decoding
Hallucinations and off-target translation remain unsolved problems in MT, especially for low-resource languages and massively multilingual models. In this paper, we introduce two related methods to mitigate these failure cases with a modified decoding objective, without either requiring retraining or external models. In source-contrastive decoding, we search for a translation that is probable given the correct input, but improbable given a random input segment. In language-contrastive decoding, we search for a translation that is probable, but improbable given the wrong language indicator token. Experiments on the massively multilingual models M2M-100 (418M) and SMaLL-100 show that these methods suppress hallucinations and off-target translations, reducing the number of translations with segment-level chrF2 below 10 by 67-83% on average, and the number of translations with oscillatory hallucinations by 75-92% on average, across 57 tested translation directions. In a proof of concept on out-of-English translation, we also show that we can suppress off-target translations with large language models. We release our source code at https://github.com/ZurichNLP/ContraDecode.
Rico Sennrich, Jannis Vamvas, Alireza Mohammadshahi
2023-09-13T17:15:27Z
http://arxiv.org/abs/2309.07098v2
Mitigating Hallucinations and Off-target Machine Translation with Source-Contrastive and Language-Contrastive Decoding ###### Abstract Hallucinations and off-target translation remain unsolved problems in machine translation, especially for low-resource languages and massively multilingual models. In this paper, we introduce methods to mitigate both failure cases with a modified decoding objective, without either requiring retraining or external models. In source-contrastive decoding, we search for a translation that is probable given the correct input, but improbable given a random input segment, hypothesising that hallucinations will be similarly probable given either. In language-contrastive decoding, we search for a translation that is probable, but improbable given the wrong language indicator token. In experiments on M2M-100 (418M) and SMaLL-100, we find that these methods effectively suppress hallucinations and off-target translations, improving chrF2 by 1.7 and 1.3 points on average across 57 tested translation directions. In a proof of concept on English-German, we also show that we can suppress off-target translations with the Llama 2 chat models, demonstrating the applicability of the method to machine translation with LLMs. We release our source code.1 Footnote 1: [https://github.com/ZurichNLP/ContraDecode](https://github.com/ZurichNLP/ContraDecode) ## 1 Introduction Hallucinations are a long-standing well-known problem in machine translation (MT) (Koehn and Knowles, 2017) and natural language generation (Ji et al., 2023). While there has been extensive research on their identification and mitigation (Lee et al., 2019; Raunak et al., 2021; Mohammadshahi et al., 2022; Guerreiro et al., 2023; Dale et al., 2023, among others), they still persist as an issue, especially in low-resource settings. Contrastive conditioning (Vamvas and Sennrich, 2021) has previously been used for the analysis of specific translation errors such as disambiguation errors and undertranslation (Vamvas and Sennrich, 2022). The main idea is that translations that are equally or more probable given some corrupted source than the true source are likely to be erroneous in respect to the part of the source that was corrupted. We can apply the same intuition to hallucinations and translations that are in the wrong language, so called off-target translations: if hallucinations are detached from the source, they should have a similar probability given the true source and given a random other source. If a translation is in the wrong language, it should have a similar or higher probability if that language is marked as the desired output language. Inspired by this, we design decoding objectives that do not simply search for the most probable translation under our model, but search for a translation that maximizes the probability given the true input, while at the same time minimizing the probability given one or several contrastive inputs. To sum up, this paper makes the following contributions: * We propose decoding objectives to address two problems often observed in MT: we mitigate hallucinations with source-contrastive decoding and suppress off-target translations with language-contrastive decoding. Figure 1: Our decoding objective yields a translation that is probable given the actual input, but improbable given a source-contrastive or language-contrastive input. 
* By evaluating two massively multilingual MT models, M2M-100 (418M) and SMaLL-100, across 57 translation directions, we demonstrate the effectiveness of both decoding objectives, improving translation quality for low-resource translation directions, improving chrF2 by 1.7 and 1.3 points for M2M-100 and SMaLL-100, respectively. * Finally, we provide a proof of concept for applying our approach to LLM-based translation, where off-target issues are common. ## 2 Method Different from previous work on contrastive conditioning that focused on analyzing translation errors, we modify the decoding objective to improve translations. To suppress hallucinations, we pair each input \(X\) with a randomly selected input segment \(X^{\prime}\).2 Rather than finding a translation that maximizes \(p(Y|X)\), we search for one that both maximizes \(p(Y|X)\) and minimizes \(p(Y|X^{\prime})\). We add a hyperparameter \(\lambda\) to control the strength of this contrastive penalty, yielding equation 1. Footnote 2: In practice, by shuffling the segments of the input document. \[score(Y,X)=\sum_{i=1}^{n}-\log\biggl{(}p(y_{i}|y_{<i},X)\\ -\lambda p(y_{i}|y_{<i},X^{\prime})\biggr{)} \tag{1}\] We denote this decoding objective **source-contrastive decoding**. Off-target translations are a common failure mode in multilingual MT systems Arivazhagan et al. (2019). They have been linked to the predominance of English in the training of multilingual systems Rios et al. (2020). Production of text in the source language, often a copy of the input, is connected to the occurrence of copying in the training data (from innocuous copying of names to segment-level copies due to noisy data extraction), and the high probability of continuing to copy once a copy has been started Ott et al. (2018). The majority of multilingual machine translation systems use special tokens to indicate the desired target language, following Johnson et al. (2017)3. To penalize output in the wrong language, we can add contrastive inputs that keep the original source segment, but vary in the choice of language indicator token. Footnote 3: The target language indicator token is in the source segment for SMaLL-100, and the beginning of the output segment in M2M-100, using forced decoding. Let \(l_{y}\) be the language indicator token, and \(l_{\hat{y}}\) the desired target language. We simply add contrastive variants \(l_{y^{\prime}}\) for output languages we wish to suppress. Based on the predominant off-target languages in multilingual MT Arivazhagan et al. (2019), we include English4 and the respective source language5 in the set of contrastive languages. This results in equation 2. Footnote 4: Unless the desired target language is English. Footnote 5: If English is the source language, we deduplicate. \[score(Y,X)=\sum_{i=1}^{n}-\log\biggl{(}p(y_{i}|y_{<i},X,l_{y}=l _{\hat{y}})\\ -\sum_{l_{y^{\prime}}}\lambda p(y_{i}|y_{<i},X,l_{y}=l_{y^{\prime }})\biggr{)} \tag{2}\] We refer to decoding with contrastive translation directions as **language-contrastive decoding**. We can combine source-contrastive and language-contrastive decoding by summing all contrastive variants, and will then refer to the individual weights as \(\lambda_{\text{src}}\) and \(\lambda_{\text{lang}}\). ## 3 Evaluation ### Data and Models We perform our experiments with two massively multilingual machine translation models: M2M-100 (418M) Fan et al. (2020), and SMaLL-100 Mohammadshahi et al. (2022), a distilled version of M2M-100 (12B). We use beam size 5 across experiments. 
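To make the scoring in Eqs. (1) and (2) concrete, below is a minimal sketch of a single greedy step under the combined objective. It operates on plain probability arrays, so the wrapper, the NumPy implementation, and all names (`contrastive_step`, `p_true`, etc.) are our own illustration rather than the released ContraDecode code, which additionally uses beam search with beam size 5.

```python
import numpy as np

def contrastive_step(p_true, p_contrastive, lambdas):
    """One greedy decoding step under the contrastive objective of Eqs. (1)/(2).

    p_true:        next-token probabilities given the correct input and language
                   tag, i.e. p(y_i | y_<i, X, l_y = l_yhat), shape (vocab,).
    p_contrastive: list of arrays of the same shape, one per contrastive input
                   (a random source segment X' and/or wrong language tags l_y').
    lambdas:       list of penalty weights, one per contrastive input
                   (e.g. lambda_src = 0.7, lambda_lang = 0.1).
    """
    combined = np.asarray(p_true, dtype=float).copy()
    for lam, p_c in zip(lambdas, p_contrastive):
        combined -= lam * np.asarray(p_c, dtype=float)
    token = int(np.argmax(combined))
    # Per-token contribution to the sequence score; the difference can be
    # negative, so we clip before taking the log purely for bookkeeping.
    cost = -np.log(max(combined[token], 1e-12))
    return token, cost

# Toy usage with random distributions standing in for model outputs.
rng = np.random.default_rng(0)
p_x, p_x_rand, p_wrong_lang = (rng.dirichlet(np.ones(8)) for _ in range(3))
print(contrastive_step(p_x, [p_x_rand, p_wrong_lang], lambdas=[0.7, 0.1]))
```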
We employ minimal hyper-parameter tuning on the ps-ast translation direction with M2M-100 and set \(\lambda_{\text{src}}\) to \(0.7\). We exclude ps-ast from the average results reported. Since off-target translation only affects a small number of translation directions, we report results without any hyperparameter tuning, simply setting \(\lambda_{\text{lang}}=0.1\). We evaluate our method on three sets of translation directions: * the 25 non-English-centric directions used by Guerreiro et al. (2023) (**HLMT**). These are af-zu, ar-fr, be-ru, cs-sk, de-hr, de-hu, el-tr, fr-sw, hi-bn, hi-mr, hr-cs, hr-hu, hr-sk, hr-sr, it-de, it-fr, nl-de, nl-fr, ro-de, ro-hu, ro-hy, ro-ru, ro-tr, ro-uk, uk-ru. Footnote 6: See Appendix B for full language names. * 29 translation directions (all but ps-ast) between 5 low-resource languages from different branches of Indo-European, plus Zulu from the Atlantic-Congo family (**X-branch**): af, ast, hr, ps, ur, zu. * 4 high-resource translation directions: en-de, de-en, en-fr, fr-en (**high-res**). We additionally report results for the union of these sets (**all**). We evaluate the methods with spBLEU (Goyal et al., 2022) and chrF2 (Popović, 2015) using sacreBLEU (Post, 2018) on the Flores-101 devtest set (Goyal et al., 2022). We use OpenLID (Burchell et al., 2023) for language identification to measure off-target translation rates. To quantify the number of hallucinations, we employ a rough approximation following Lee et al. (2019); Müller and Sennrich (2021), counting the proportion of segments with chrF2 \(<10\). Footnote 7: sacreBLEU signatures for spBLEU and chrF2, version 2.3.1. Footnote 8: Müller and Sennrich (2021) report a threshold of 1, but we confirmed that this is a typo (personal communication with Mathias Müller). Note that this method does not distinguish between hallucinations and off-target translations. ### Results We report results using source-contrastive decoding (\(C_{src}\)), and combining source-contrastive and language-contrastive decoding (\(C_{src+lang}\)), in Tables 1 and 2. Footnote 9: See Appendix A for full results. Across the 57 translation directions tested, chrF2 improves by 1.3 (M2M-100) and 1.1 (SMaLL-100) points with source-contrastive decoding. When adding language-contrastive decoding, we see additional gains in chrF2 of 0.4 (M2M-100) and 0.2 (SMaLL-100). Improvements are more modest when measured with spBLEU (0.2 on M2M-100; 0.3 on SMaLL-100). We notice that hallucinations tend to be over-long and can perversely improve BLEU by reducing the brevity penalty. We thus consider chrF2, which pairs n-gram precision with n-gram recall instead of a simplistic brevity penalty, to be our primary metric. Off-target translations are relatively rare for the translation directions tested, especially for SMaLL-100 (see Table 3). With M2M-100, the highest proportion of English outputs in the baseline was detected for af-zu (9.1%), and the highest percentage of outputs in the source language for hr-sr (4.2%). These are also among the translation directions that benefit the most from language-contrastive decoding: chrF2 increases by 2.3 for hr-sr, and by 2 for af-zu.
However, we observe the largest increase in chrF2 (2.6) for ast-zu, a translation direction that sees an increase of off-target translations with source-contrastive decoding alone, and where the English output rate goes from 5.5% (baseline) to 9.9% (\(C_{src}\)) to 2.7% (\(C_{src+lang}\)). \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{chrF2} & \multicolumn{4}{c}{spBLEU} \\ & HLMT & X-branch & high-res & all & HLMT & X-branch & high-res & all \\ \cline{2-10} baseline & 46.4 & 28.8 & 61.3 & 39.0 & 22.0 & 8.3 & 37.2 & 16.4 \\ \(C_{src}\) & 46.7 & 31.4 & 60.8 & 40.3 & 21.6 & 9.1 & 36.4 & 16.6 \\ \(C_{src+lang}\) & 46.8 & 32.1 & 60.7 & 40.7 & 21.5 & 9.3 & 36.1 & 16.6 \\ \hline \hline \end{tabular} \end{table} Table 1: Results for M2M-100. Averages over different sets of translation directions. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{chrF2} & \multicolumn{4}{c}{spBLEU} \\ & HLMT & X-branch & high-res & all & HLMT & X-branch & high-res & all \\ \cline{2-10} baseline & 48.3 & 32.0 & 62.5 & 41.4 & 23.5 & 10.2 & 38.7 & 18.1 \\ \(C_{src}\) & 48.5 & 34.2 & 62.1 & 42.5 & 23.2 & 11.1 & 37.9 & 18.4 \\ \(C_{src+lang}\) & 48.7 & 34.6 & 62.0 & 42.7 & 23.3 & 11.2 & 37.6 & 18.4 \\ \hline \hline \end{tabular} \end{table} Table 2: Results for SMaLL-100. Averages over different sets of translation directions. The proportion of translations with chrF2 below 10 is shown in Table 4. We observe large reductions in the number of defective translations, from 7.3% to 1.2% for M2M-100, and from 5.6% to 1.8% for SMaLL-100. ### Ablation Studies The fact that we pick contrastive inputs from the test sets at random raises a few questions about this approximation. We repeated the translation with M2M-100 across all 57 translation directions 3 times and find that the standard deviation is minimal (0.0107 for chrF2). Using a single random input as a contrastive variant is a heavy approximation, but our ablation study in Table 5 shows that this yields the majority of the performance gains, and using up to 3 inputs as contrastive examples only yields an additional 0.1 point improvement in chrF2. Footnote 12: We divide \(\lambda_{\text{src}}\) by the number of contrastive inputs. ## 4 Application to Large Language Models In this section, we demonstrate that our method can be applied to the prompting of large language models (LLMs). Previous work has achieved competitive translation quality for some language pairs by prompting models such as PaLM (Vilar et al., 2023; Garcia et al., 2023), GPT (Hendy et al., 2023) or BLOOM (Bawden and Yvon, 2023). However, LLM-based translation is still prone to hallucination and off-target translation (Zhang et al., 2023; Guerreiro et al., 2023). Our demonstration is based on the Llama 2 model family (Touvron et al., 2023) and specifically the instruction-tuned version (_Llama Chat_), exploiting the fact that MT examples were among the data used for instruction tuning (Wei et al., 2022; Chung et al., 2022). We generate translations by instructing the model to translate a segment into a given language, force-decoding the line _"Sure, here's the translation:"_, and then decoding until the next line break. The template is provided in Appendix C. When using this simple prompting approach in the en-de direction, we find that off-target output in English is very common. Moreover, providing a 1-shot example in the prompt, while improving translation quality, does not prevent the off-target issue.
We thus apply language-contrastive decoding and add a contrastive prompt that instructs the model to translate into English instead of German. The decoding objective is analogous to Eq. 2. Figure 2 shows the percentage of off-target output for different values of \(\lambda_{\text{lang}}\). Generally, we observe that the percentage falls with an increasing \(\lambda_{\text{lang}}\), demonstrating that our method can be effectively applied to LLM prompting. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{M2M-100} & \multicolumn{2}{c}{SMaLL-100} \\ & EN & SRC & EN & SRC \\ \cline{2-5} baseline & 260 & 55 & 54 & 63 \\ \(C_{src}\) & 375 & 47 & 78 & 70 \\ \(C_{src+lang}\) & 88 & 28 & 16 & 21 \\ \hline \hline \end{tabular} \end{table} Table 3: Total number of translations (out of 57684 output sentences) that are off-target, producing English (EN) or the source language (SRC) according to OpenLID. Figure 2: Off-target translation rate for Llama 2 Chat models when translating the English Flores-101 devtest set into German. Language-contrastive decoding tends to reduce off-target translation as \(\lambda_{\text{lang}}\) is increased. ## 5 Related Work #### 5.0.1 Hallucination Detection and Reduction Various methods have been proposed to detect hallucinations, including identifying typical patterns in the output (Raunak et al., 2021), using internal information like attention patterns (Lee et al., 2019) or the contribution of the source to the prediction (Dale et al., 2023), or measures of decoder confidence, including the probability of the output (Guerreiro et al., 2023) or the stability of samples under perturbation (Lee et al., 2019; Guerreiro et al., 2023). Hallucination mitigation is more difficult, especially if we assume that models are already trained using best practices, and we focus on training-free methods. Several studies use external models for mitigation, e.g. using other translation models as a fall-back (Guerreiro et al., 2023), or doing sample reranking based on quality estimation models (Guerreiro et al., 2023). Our method has the advantage that it does not require external models, and we note that modern quality estimation metrics are themselves prone to score certain hallucinations highly (Freitag et al., 2022). Mitigation methods that do not rely on external models are typically sampling-based. Guerreiro et al. (2023) report that even the translation model's own sequence probability can be used for sample reranking. A consensus translation can be identified via sampling-based Minimum Bayes Risk (MBR) decoding (Eikema and Aziz, 2020), which benefits from the fact that hallucinations are dissimilar from each other (Müller and Sennrich, 2021). #### 5.0.2 Contrastive Decoding Contrastive decoding bears similarity to contrastive learning (Hadsell et al., 2006; Socher et al., 2014; Gao et al., 2021, among others) in that positive and negative examples are contrasted, but it involves no training. Li et al. (2023) introduce a form of contrastive decoding that contrasts the probabilities between different models, whereas our methods work with a single model, contrasting probabilities given different inputs. Source-contrastive decoding can also be seen as a variant of implicit language model (ILM) compensation, mirroring recent work by Herold et al. (2023). Our work differs in motivation in that ILM compensation is typically used to allow the inclusion of an external LM, whereas we show the effectiveness of simply suppressing the ILM.
Also, we show the effectiveness of a different, simple approximation, using a single contrastive source segment. Finally, language-contrastive decoding bears some resemblance to negative prompting, a technique used to suppress undesired concepts in guided image generation. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{M2M-100} & \multicolumn{4}{c}{SMaLL-100} \\ & HLMT & X-branch & high-res & all & HLMT & X-branch & high-res & all \\ \cline{2-9} baseline & 2.1 & 13.0 & 0.0 & 7.3 & 1.3 & 10.6 & 0.0 & 5.6 \\ \(C_{src}\) & 1.0 & 4.1 & 0.0 & 2.4 & 0.8 & 4.3 & 0.0 & 2.5 \\ \(C_{src+lang}\) & 0.5 & 2.0 & 0.0 & 1.2 & 0.4 & 3.4 & 0.0 & 1.8 \\ \hline \hline \end{tabular} \end{table} Table 4: Proportion of translations (in %) with segment-level chrF2 \(<10\). \begin{table} \begin{tabular}{l c c} \hline \hline & chrF2 & spBLEU \\ \cline{2-3} baseline & 38.97 & 16.40 \\ \(C_{src}\) (1) & 40.31 & 16.60 \\ \(C_{src}\) (2) & 40.39 & 16.68 \\ \(C_{src}\) (3) & 40.41 & 16.67 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation results for M2M-100 with different numbers of source-contrastive inputs. Averages over all languages reported. ## 6 Conclusion This paper shows that certain failure modes of MT can be addressed by contrastive decoding objectives that use pairs or sets of inputs for the prediction. Specific contrastive inputs address specific errors, and we introduce strategies to mitigate hallucinations and off-target translation. Future work could expand on our work by exploring whether other failure modes of machine translation can be mitigated with appropriate contrastive inputs, or whether other forms of control can be improved. For example, for models that use domain indicator tokens (Kobus et al., 2017), we could perform domain-contrastive decoding and potentially achieve stronger domain control. Beyond MT, we expect that source-contrastive decoding can also be useful for other tasks, e.g. to penalize over-generic responses in dialogue systems. ## 7 Limitations We only tested language-contrastive decoding in multilingual models that control the target language via language indicator tokens. It is possible to apply the same strategy to modular architectures that use language-specific components (Firat et al., 2016; Vazquez et al., 2019; Bapna and Firat, 2019), but its effectiveness remains to be tested. For bilingual translation models that suffer from off-target translations, e.g. because of noisy training data (Khayrallah and Koehn, 2018), we would need bilingual models for other translation directions to implement language-contrastive decoding, but this sacrifices the main strength of our approach: not relying on external models. We employ minimal hyperparameter tuning for \(\lambda_{\text{src}}\), and did not tune \(\lambda_{\text{lang}}\). Using the same hyperparameters across translation directions and translation models results in performance degradations in some cases, most noticeably for high-resource translation directions. We consider it a positive result that we obtain improvements on average with minimal hyperparameter tuning, but future work may wish to use more complex strategies to weight (or disable) contrastive variants across translation directions. ## Acknowledgements This work was funded by the Swiss National Science Foundation (project MUTAMUR; no. 213976).
2309.08383
Dynamical Analysis of an Allelopathic Phytoplankton Model with Fear Effect
This paper is the first to propose an allelopathic phytoplankton competition ODE model influenced by a fear effect based on natural biological phenomena. It is shown that the interplay of this fear effect and the allelopathic term cause rich dynamics in the proposed competition model, such as global stability, transcritical bifurcation, pitchfork bifurcation, and saddle-node bifurcation. We also consider the spatially explicit version of the model and prove analogous results. Numerical simulations verify the feasibility of the theoretical analysis. The results demonstrate that the primary cause of the extinction of non-toxic species is the fear of toxic species compared to toxins. Allelopathy only affects the density of non-toxic species. The discussion provides guidance for the conservation of species and the maintenance of biodiversity.
Shangming Chen, Fengde Chen, Vaibhava Srivastava, Rana D. Parshad
2023-09-15T13:16:36Z
http://arxiv.org/abs/2309.08383v1
# Dynamical Analysis of an Allelopathic Phytoplankton Model with Fear Effect ###### Abstract This paper is the first to propose an allelopathic phytoplankton competition ODE model influenced by a fear effect based on natural biological phenomena. It is shown that the interplay of this fear effect and the allelopathic term cause rich dynamics in the proposed competition model, such as global stability, transcritical bifurcation, pitchfork bifurcation, and saddle-node bifurcation. We also consider the spatially explicit version of the model, and prove analogous results. Numerical simulations verify the feasibility of the theoretical analysis. The results demonstrate that the primary cause of the extinction of non-toxic species is the fear of toxic species compared to toxins. Allelopathy only affects the density of non-toxic species. The discussion provides guidance for the conservation of species and the maintenance of biodiversity. Keywords: Allelopathy; Competition; Global Stability; Transcritical Bifurcation; Pitchfork Bifurcation; Saddle-node Bifurcation; Reaction-diffusion system. + Footnote †: Author for correspondence ## 1 Introduction Phytoplankton are at the base of aquatic food webs and of global importance for ecosystem functioning and services (Winder & Sommer, 2012). Moreover, phytoplankton also contribute significantly to economic growth, which is advantageous for the biotechnology, pharmaceutical, and nutraceutical sectors [Pradhan & Ki, 2022]. Hence, the investigation of phytoplankton species density is of considerable scientific interest. A distinctive phenomenon among phytoplankton species is that secondary metabolites generated by one phytoplankton can inhibit the growth or physiological functioning of another phytoplankton [Legrand _et al._, 2003]. This behavior, in which phytoplankton engage in competitive interactions with their peers by releasing toxic compounds, is often called allelopathy. Numerous studies have shown that allelopathy plays a crucial role in the competitive dynamics of phytoplankton. For example, Mulderij _et al._ [Mulderij _et al._, 2006] investigated the allelopathic potential of exudates from the aquatic macrophyte _Stratiotes aloides_ on the growth of phytoplankton. The study results show that _Stratiotes aloides_ exerts an allelopathic effect on phytoplankton, inhibiting the growth of other algae by releasing toxins. Maynard-Smith [Maynard-Smith, 1974] added an allelopathic term to the classical two-species Lotka-Volterra competition model in order to account for the harmful impacts exerted by one species on the other: \[\left\{\begin{aligned} &\frac{\mathrm{d}N_{1}(t)}{\mathrm{d}t}=N_{1}(t) \left[\alpha_{1}-\beta_{1}N_{1}(t)-v_{1}N_{2}(t)-\gamma_{1}N_{1}(t)N_{2}(t) \right],\\ &\frac{\mathrm{d}N_{2}(t)}{\mathrm{d}t}=N_{2}(t)\left[\alpha_{2} -\beta_{2}N_{2}(t)-v_{2}N_{1}(t)-\gamma_{2}N_{1}(t)N_{2}(t)\right],\end{aligned}\right. \tag{1}\] where \(N_{i}(t)\) (\(i=1,2\), the same below) is the density of the two competing phytoplankton species, \(\alpha_{i}\) represents the rate of daily cell proliferation, \(\beta_{i}\) denotes the intraspecific competition rate of the \(i\)-th species, \(v_{i}\) stands for the rate of interspecific competition, and \(\gamma_{i}\) represents the rate at which the other species releases toxins affecting the \(i\)-th species. The initial conditions satisfy \(N_{i}(0)>0\). Based on the work of Maynard-Smith, many scholars have considered the situation where only one species releases toxins.
Chen _et al._ [Chen _et al._, 2013] proposed a discrete system for toxin release from single species: \[\left\{\begin{aligned} x_{1}(n+1)&=x_{1}(n) \mathrm{exp}\left[r_{1}(n)-a_{11}(n)x_{1}(n)-a_{12}(n)x_{2}(n)-b_{1}(n)x_{1}(n )x_{2}(n)\right],\\ x_{2}(n+1)&=x_{2}(n)\mathrm{exp}\left[r_{2}(n)-a_{2 1}(n)x_{1}(n)-a_{22}(n)x_{2}(n)\right].\end{aligned}\right. \tag{2}\] The authors proved the extinction and global stability conditions for system (2). It was found that the extinction of system (2) is not affected at low rates of toxin release, meaning that the toxic species cannot extinguish non-toxic species. Further studies on single toxic species were conducted in [Chen _et al._, 2016, 2023]. However, in reality, a non-toxic species can go extinct even if it is only affected by lower concentrations of toxins. In other words, what factors other than degradation by actual toxins might affect the density of competing phytoplankton species, without additional external factors interfering? Since the effect of allelopathy is based on the classical Lotka-Volterra competition model, we will consider competitive fear. In 2016, Wang _et al._ [Wang _et al._, 2016] considered the fear effect for the first time based on the classical two-species Lotka-Volterra predator-prey model: \[\left\{\begin{aligned} &\frac{dx}{dt}=rxf(k,y)-dx-ax^{2}-g(x)y,\\ &\frac{dy}{dt}=-my+cg(x)y,\end{aligned}\right. \tag{3}\] where \(a\) represents the mortality rate due to intraspecific competition of the prey, \(g(x)\) is the functional predation rate of the predator, and \(f(k,y)=\dfrac{1}{1+ky}\) represents the anti-predation response of the prey due to the fear of the predator, i.e., the fear effect function. The researchers found that under conditions of Hopf bifurcation, an increase in fear level may shift the direction of Hopf bifurcation from supercritical to subcritical when the birth rate of prey increases accordingly. Numerical simulations also suggest that animals' anti-predator defenses increase as the predator attack rate increases. Further research on the fear effect of the predator-prey model can be seen in [Lai _et al._, 2020; Liu _et al._, 2022]. By studying the fear effect of the predator-prey model, scholars generally agree that the non-consumptive effect of fear on the density of bait species is more significant than depredation on them. In connection with natural biological phenomena, prey perceives the risk of predation and respond with a range of anti-predatory responses, such as changes in habitat selection and foraging behavior [Polis _et al._, 1989; Peckarsky _et al._, 2008]. These changes in various forms, may ultimately affect the overall reproductive rate of the prey population. The effect of fear on predator-prey systems has been extensively studied, but fear has been considered far less in competition systems. However, there is strong evidence that fear exists in purely competitive systems without predation effects or where predation effects are negligible [Chesson & Kuang, 2008; Wiens_et al._, 2014]. The Barred Owl (_Strix varia_) is a species of Owl native to eastern North America. During the last century, they have expanded their range westward and have been recognized as an invasion of the western North American ecosystem--their range overlaps with the Spotted Owl (_Strix occidentalis_). The Spotted Owl is native to northwestern and western North America, which has led to intense competition between two species [Long & Wolfe, 2019]. 
The Barred Owl has a strong negative impact on the Spotted Owl, and field observations have reported that barred owls frequently attack spotted owls [Van Lanen _et al._, 2011]. Evidence also shows that barred owls actively and unilaterally drive spotted owls out of shared habitat [Wiens _et al._, 2014]. Such evidence motivates us to consider the fear effect in a purely competitive two-species model, in which one competitor causes fear to the other. Thus, Srivastava _et al._ [Srivastava _et al._, 2023] considered the classical two-species Lotka-Volterra competition model with only one competitor causing fear to the other competitor: \[\left\{\begin{aligned} \frac{du}{dt}&=a_{1}u-b_{1}u^{2}-c_{1}uv,\\ \frac{dv}{dt}&=\frac{a_{2}v}{1+ku}-b_{2}v^{2}-c_{2} uv.\end{aligned}\right. \tag{4}\] Their study found that the fear effect leads to interesting dynamics such as saddle-node bifurcation and transcritical bifurcation in system (4), which are not found in the classical Lotka-Volterra competition model. Extending this work, Chen _et al._ [Chen _et al._, 2023] also proved several interesting dynamical results for two-species competitive ODE and PDE systems in which both an Allee effect and a fear effect are present. Inspired by the above works, we aim to investigate how the fear parameter affects competitive allelopathic phytoplankton systems by introducing a fear effect term, where the non-toxic species is "fearful" of the toxic population. Thus we propose the following model: \[\left\{\begin{aligned} \frac{\mathrm{d}x_{1}}{\mathrm{d}\tau}& =x_{1}\left(r_{1}-\alpha_{1}x_{1}-\beta_{1}x_{2}\right),\\ \frac{\mathrm{d}x_{2}}{\mathrm{d}\tau}&=x_{2}\left( \frac{r_{2}}{1+\eta x_{1}}-\alpha_{2}x_{2}-\beta_{2}x_{1}-\xi x_{1}x_{2} \right),\end{aligned}\right. \tag{5}\] where \(\eta\) is the fear effect parameter and \(\xi\) represents the toxic release rate. In the current manuscript we perform a complete dynamical analysis of system (5) with the following innovations: * System (5) has at most two positive equilibria, and the global stability of the positive equilibria is influenced by the fear effect parameter \(\eta\) and the interspecific competition rate \(\beta_{1}\). * Changing the values of the fear effect \(\eta\) and the interspecific competition rate \(\beta_{1}\) will cause system (5) to experience a transcritical bifurcation at the boundary. At the same time, the toxic release rate \(\xi\) can transform the transcritical bifurcation into a pitchfork bifurcation. * The toxic release rate \(\xi\) causes system (5) to undergo a saddle-node bifurcation in the first quadrant. * The toxic release rate \(\xi\) only affects the non-toxic species density, while the fear effect \(\eta\) can lead to the extinction of the non-toxic species. * In the spatially explicit system, or PDE case, we analogously see that attraction to a boundary equilibrium or to an interior equilibrium is possible, depending on parametric restrictions and initial conditions; see Theorems 14 & 15. Furthermore, strong-competition-type dynamics are also possible, again depending on parametric restrictions and initial conditions; see Theorem 16. The rest of this paper is organized as follows: The conditions for the system's permanence are laid forth in Section 2, which also demonstrates the positivity and boundedness of solutions. We examine the existence and types of all equilibria in Sections 3 and 4. Also, the global stability of positive equilibria is studied in Section 5. In Section 6, we analyze the bifurcation of the system around the equilibria.
Numerical simulations are performed in Section 7 to verify the theoretical analysis's feasibility, showing how fear effect and toxin release rate can affect species density. We end this paper with a brief conclusion. ## 2 Preliminaries In order to reduce the parameters of system (5), the following dimensionless quantities are applied to the non-dimensionalize model system (5) \[t=r_{2}\tau,\quad\frac{x_{1}}{k_{1}}=x,\quad\frac{x_{2}}{k_{2}}=y,\quad\eta k_{ 1}=k,\quad\frac{\xi k_{1}k_{2}}{r_{2}}=m,\quad\frac{\beta_{2}k_{1}}{r_{2}}=a, \quad\frac{r_{1}}{r_{2}}=b,\quad\frac{\beta_{1}k_{2}}{r_{1}}=c,\] then system (5) becomes the following system: \[\left\{\begin{aligned} &\frac{\mathrm{d}x}{\mathrm{d}t}=bx\left(1-x-cy \right)=xf(x,y)\equiv F(x,y),\\ &\frac{\mathrm{d}y}{\mathrm{d}t}=y\left(\frac{1}{1+kx}-y-ax-mxy \right)=yg(x,y)\equiv G(x,y),\end{aligned}\right. \tag{6}\] all parameters in system (6) are positive. Based on biological considerations, the initial condition of system (6) satisfies \[x(0)>0,y(0)>0. \tag{7}\] ### Positivity and boundedness of the solutions **Theorem 1**: _All solutions of system (6) are positive._ Since \[x(t)=x(0)\mathrm{exp}\left[\int_{0}^{t}f(x(s),y(s))\mathrm{d}s\ \right]>0,\] and \[y(t)=y(0)\mathrm{exp}\left[\int_{0}^{t}g(x(s),y(s))\mathrm{d}s\ \right]>0.\] So all solutions of system (6) with initial condition (7) are positive. This completes the proof. **Lemma 1**: _[_Chen_, 2005_]_ _If \(a,b>0\) and \(x(0)>0\),_ * \(\limsup_{t\to+\infty}x(t)\leq\frac{a}{b}\) _when_ \(x^{{}^{\prime}}(t)\leq x(t)(a-bx(t))\)_,_ * \(\liminf_{t\to+\infty}x(t)\geq\frac{a}{b}\) _when_ \(x^{{}^{\prime}}(t)\geq x(t)(a-bx(t))\)_._ **Theorem 2**: _The solutions of system (6) are bounded._ _Proof._ According to the first equation of system (6), \[\frac{\mathrm{d}x}{\mathrm{d}t}=bx\left(1-x-cy\right)\leq x(b-bx),\] by applying Lemma 1 to the above inequality, we have \[\limsup_{t\rightarrow+\infty}x(t)\leq\frac{b}{b}=1. \tag{8}\] Similarly, according to the second equation of system (6), we have \[\frac{\mathrm{d}y}{\mathrm{d}t}=y\left(\frac{1}{1+kx}-y-ax-mxy\right)\leq y(1- y),\] so \[\limsup_{t\rightarrow+\infty}y(t)\leq 1. \tag{9}\] This completes the proof. ### Permanence of the system **Definition 2.1**.: System (6) is considered to be permanent if there are two positive constants, denoted as \(m\) and \(M\), which are not dependent on the solutions of system (6), such that each positive solution \((x(t,x_{0},y_{0}),y(t,x_{0},y_{0}))\) of system (6) with the initial condition \((x_{0},y_{0})\in Int(R_{+}^{2})\) satisfies \[m\leq\liminf_{t\rightarrow+\infty}x(t,x_{0},y_{0})\leq\limsup_{t\rightarrow+ \infty}x(t,x_{0},y_{0})\leq M,\] \[m\leq\liminf_{t\rightarrow+\infty}y(t,x_{0},y_{0})\leq\limsup_{t\rightarrow+ \infty}y(t,x_{0},y_{0})\leq M.\] **Theorem 3**.: _System (6) is permanent if \(0<k<k^{*}\) and \(0<c<1\)._ _Proof._ From (8) and (9), for \(\varepsilon>0\) small enough without loss of generality, there is \(T>0\) such that, for \(t>T\), we have \[x(t)\leq 1+\varepsilon,\quad y(t)\leq 1+\varepsilon.\] According to the first equation of system (6), \[\frac{\mathrm{d}x}{\mathrm{d}t}=x\left[(b-bcy)-bx\right]\geq x\left[(b-bc(1+ \varepsilon))-bx\right],\] by applying Lemma 1 to above differential inequality, we have \[\liminf_{t\rightarrow+\infty}x(t)\geq 1-c(1+\varepsilon).\] Setting \(\varepsilon\to 0\) in above inequality leads to \[\liminf_{t\rightarrow+\infty}x(t)\geq 1-c. 
\tag{10}\] Similarly, according to the second equation of system (6), \[\frac{\mathrm{d}y}{\mathrm{d}t}=y\left(\frac{1}{1+kx}-y-ax-mxy\right)\geq y \left[(\frac{1}{1+k(1+\varepsilon)}-a(1+\varepsilon))-(1+m(1+\varepsilon)) y\right],\] by applying Lemma 1 to above differential inequality, we have \[\liminf_{t\rightarrow+\infty}y(t)\geq\frac{\frac{1}{1+k(1+\varepsilon)}-a(1+ \varepsilon)}{1+m(1+\varepsilon)}.\] Setting \(\varepsilon\to 0\) in above inequality leads to \[\liminf_{t\rightarrow+\infty}y(t)\geq\frac{\frac{1}{1+k}-a}{1+m}. \tag{11}\] In summary, we select \(M=1\), \(m=\min\left\{1-c,\frac{\frac{1}{1+k}-a}{1+m}\right\}\), which obviously independent of the solution of system (6). Let \(\frac{1}{a}-1\triangleq k^{*}\). Then, (8), (9), (10) and (11) show that system (6) is permanent under the assumption of Theorem 3. This completes the proof. ## 3 Boundary Equilibria and Their Types It is obvious that system(6) includes two boundary equilibria \(E_{1}(1,0)\), \(E_{2}(0,1)\), as well as a constant equilibrium point \(E_{0}(0,0)\). In the following, we will examine the types of them. The Jacobian matrix of system (6) is given by \[J(E)=\begin{bmatrix}-b(2x+cy-1)&-bcx\\ -y\left[\frac{k}{(1+kx)^{2}}+a+my\right]\frac{1}{1+kx}-(2my+a)x-2y\end{bmatrix} \triangleq\begin{bmatrix}B_{1}\ B_{2}\\ B_{3}\ B_{4}\end{bmatrix}. \tag{12}\] From this, we can obtain \[J(E_{0})=\begin{bmatrix}b\ 0\\ 0\ 1\end{bmatrix}, \tag{13}\] \[J(E_{1})=\begin{bmatrix}-b&-bc\\ 0&\frac{1}{1+k}-a\end{bmatrix}, \tag{14}\] \[J(E_{2})=\begin{bmatrix}b(-c+1)&0\\ -a-k-m\ -1\end{bmatrix}. \tag{15}\] Then we get the following theorem. **Theorem 4**: _The types of boundary equilibria are illustrated in the following:_ 1. \(E_{0}\) _is always a source._ 2. \(E_{1}\) _is a hyperbolic stable node when_ \(k>k^{*}\)_._ 3. _When_ \(k=k^{*}\)_,_ 1. \(E_{1}\) _is an attracting saddle-node, and the parabolic sector is on the upper half-plane if_ \(m>m^{*}\) _(Fig._ 1_(a))._ 2. \(E_{1}\) _is an attracting saddle-node, and the parabolic sector is on the lower half-plane if_ \(0<m<m^{*}\) _(Fig._ 1_(b))._ 3. \(E_{1}\) _is a nonhyperbolic saddle if_ \(m=m^{*}\) _(Fig._ 1_(c))._ 4. \(E_{1}\) _is a hyperbolic saddle when_ \(0<k<k^{*}\)_._ 5. \(E_{2}\) _is a hyperbolic stable node when_ \(c>1\)_._ 6. _When_ \(c=1\)_,_ 1. \(E_{2}\) _is an attracting saddle-node, and the parabolic sector is on the right half-plane if_ \(0<m<m^{**}\) _(Fig._ 2_(a))._ 2. \(E_{2}\) _is an attracting saddle-node, and the parabolic sector is on the left half-plane if_ \(m>m^{**}\) _(Fig._ 2_(b))._ 3. \(E_{2}\) _is a degenerate stable node if_ \(m=m^{**}\) _(Fig._ 2_(c))._ _._ 3. \(E_{2}\) _is a hyperbolic saddle if_ \(0<c<1\)_._ Due to \(\lambda_{1}^{E_{0}}=b>0\), \(\lambda_{2}^{E_{0}}=1>0\), so \(E_{0}\) is always a source. For \(E_{1}\), \(\lambda_{1}^{E_{1}}=-b<0\). When \(\lambda_{2}^{E_{1}}<0\), i.e., \(k>k^{*}\), \(E_{1}\) is a hyperbolic stable node. When \(\lambda_{2}^{E_{1}}>0\), i.e., \(0<k<k^{*}\), \(E_{1}\) is a hyperbolic saddle. When \(\lambda_{2}^{E_{1}}=0\), i.e., \(k=k^{*}\), \(E_{1}\) is a degenerate equilibrium point. We then have the following debate. The equilibrium point \(E_{1}\) is translated to the origin by applying the transformation \((X,Y)=(x-1,y)\). 
We perform a Taylor expansion around the origin, then system (6) becomes \[\left\{\begin{aligned} \frac{\mathrm{d}X}{\mathrm{d}t}& =-bX-bcY-bcXY-bX^{2},\\ \frac{\mathrm{d}Y}{\mathrm{d}t}&=-\left(1+m\right)Y ^{2}+a\left(-2+a\right)XY+a\left(-1+a\right)^{2}X^{2}Y-mXY^{2}+P_{1}(X,Y),\end{aligned}\right.\] where \(P_{i}(X,Y)\) are power series in \((X,Y)\) with terms \(X^{I}Y^{J}\) satisfying \(I+J\geq 4\) (the same below). Figure 1: Red, green, pink, and orange points indicate stable node, saddle, saddle-node, and unstable node (source), respectively. The value of the toxin release rate \(m\) affects the solution orbit near the boundary equilibrium point \(E_{1}\). In the next step, we make the following transformations to the above system \[\begin{bmatrix}X\\ Y\end{bmatrix}=\begin{bmatrix}-bc&-b\\ b&0\end{bmatrix}\begin{bmatrix}X_{1}\\ Y_{1}\end{bmatrix},\] and letting \(\tau=-bt\), for which we will retain \(t\) to denote \(\tau\) for notational simplicity, we get \[\begin{cases}\dfrac{\mathrm{d}X_{1}}{\mathrm{d}t}=a_{20}X_{1}^{2}+a_{11}X_{1}Y _{1}+a_{30}X_{1}^{3}+a_{21}X_{1}^{2}Y_{1}+a_{12}X_{1}Y_{1}^{2},\\ \dfrac{\mathrm{d}Y_{1}}{\mathrm{d}t}=Y_{1}+b_{20}X_{1}^{2}+b_{11}X_{1}Y_{1}+b_ {02}Y_{1}^{2}+b_{30}X_{1}^{3}+b_{21}X_{1}^{2}Y_{1}+b_{12}X_{1}Y_{1}^{2}+P_{2}(X _{1},Y_{1}),\end{cases} \tag{16}\] where \[\begin{split} a_{20}&=\left(a^{2}c-2ac+m+1\right),\quad a_{11 }=a(-2+a),\quad a_{30}=-bc\left(a^{3}c-2a^{2}c+ac+m\right),\\ a_{21}&=-b\left(2a^{3}c-4a^{2}c+2ac+m\right),\quad a_{12}=-a\left(-1+a \right)^{2}b,\quad b_{20}=-\left(a^{2}c-2ac+m+1\right)c,\\ b_{11}&=c\left(a^{2}-2a+b\right),\quad b_{02}=-b,\quad b_{30}=bc^{2}\left(a ^{3}c-2a^{2}c+ac+m\right),\quad b_{12}=a\left(-1+a\right)^{2}bc,\\ b_{21}&=b\left(2a^{3}c-4a^{2}c+2ac+m\right)c.\end{split}\] Therefore, according to Theorem 7.1 in Chapter 2 of [22], if \(a_{02}>0\), i.e., \(m>-1+\left(-a^{2}+2a\right)c\triangleq m^{*}\), \(E_{1}\) is an attracting saddle-node, and the parabolic sector is on the upper half-plane (Fig. 1(a)). If \(a_{02}<0\), i.e., \(0<m<m^{*}\), \(E_{1}\) is an attracting saddle-node, and the parabolic sector is on the lower half-plane (Fig. 1(b)). If \(a_{02}=0\), i.e., \(m=m^{*}\), system (16) becomes \[\begin{cases}\dfrac{\mathrm{d}X_{1}}{\mathrm{d}t}=a_{11}X_{1}Y_{1}+a_{30}X_{1 }^{3}+a_{21}X_{1}^{2}Y_{1}+a_{12}X_{1}Y_{1}^{2},\\ \dfrac{\mathrm{d}Y_{1}}{\mathrm{d}t}=Y_{1}+b_{11}X_{1}Y_{1}+b_{02}Y_{1}^{2}+b_ {30}X_{1}^{3}+b_{21}X_{1}^{2}Y_{1}+b_{12}X_{1}Y_{1}^{2}+P_{2}(X_{1},Y_{1}). \end{cases} \tag{17}\] By the existence theorem of the implicit function, it follows that \(Y_{1}=\phi(X_{1})\) can be solved from the second equation of system (17) in a sufficiently small domain at the origin \((0,0)\) and satisfies \(\phi(0)=\phi^{{}^{\prime}}(0)=0\). Substituting \[Y_{1}=\phi(X_{1})=-b_{30}X_{1}^{3}+\cdots\cdots.\] into the first equation of system (17), we get \[\dfrac{\mathrm{d}X_{1}}{\mathrm{d}t}=a_{30}X_{1}^{3}+\cdots\cdots.\] where \[a_{30}=-bc\left(a^{3}c-3a^{2}c+3ac-1\right).\] From \(m^{*}=-1+\left(-a^{2}+2a\right)c>0\), we get \[-bc\left(a^{3}c-3a^{2}c+3ac-1\right)<-abc^{2}(a-1)^{2}<0.\] According to Theorem 7.1 again, \(E_{1}\) is a nonhyperbolic saddle since \(a_{03}<0\) (Fig. 1(c)). For \(E_{2}\), \(\lambda_{2}^{E_{2}}=-1<0\). When \(\lambda_{1}^{E_{2}}<0\), i.e., \(c>1\), \(E_{2}\) is a hyperbolic stable node. When \(\lambda_{1}^{E_{2}}>0\), i.e., \(0<c<1\), \(E_{2}\) is a hyperbolic saddle. When \(\lambda_{1}^{E_{2}}=0\), i.e., \(c=1\), \(E_{2}\) is a degenerate equilibrium point. 
Then we conduct the following discussion. We move equilibrium \(E_{2}\) to the origin by transforming \((X_{2},Y_{2})=(x,y-1)\) and make Taylor's expansion around the origin, then system (6) becomes \[\begin{cases}\dfrac{\mathrm{d}X_{2}}{\mathrm{d}t}=-bX_{2}^{2}-bX_{2}Y_{2},\\ \dfrac{\mathrm{d}Y_{2}}{\mathrm{d}t}=-\left(a+k+m\right)X_{2}-Y_{2}+k^{2}X_{2 }^{2}-\left(2m+a+k\right)X_{2}Y_{2}-Y_{2}^{2}-k^{3}X_{2}^{3}+k^{2}X_{2}^{2}Y_{ 2}-mX_{2}Y_{2}^{2}+P_{3}(X_{2},Y_{2}).\end{cases}\] In the next step, we make the following transformations to the above system \[\begin{bmatrix}X_{2}\\ Y_{2}\end{bmatrix}=\begin{bmatrix}-\dfrac{1}{a+k+m}\;0\\ 1\end{bmatrix}\begin{bmatrix}X_{3}\\ Y_{3}\end{bmatrix},\] and letting \(\tau=-t\), for which we will retain \(t\) to denote \(\tau\) for notational simplicity, we get \[\left\{\begin{aligned} &\frac{\mathrm{d}X_{3}}{\mathrm{d}t}=-\dfrac{b \left(-1+a+k+m\right)}{\left(a+k+m\right)^{2}}{X_{3}}^{2}-\dfrac{b}{a+k+m}X_{3} Y_{3},\\ &\frac{\mathrm{d}Y_{3}}{\mathrm{d}t}=Y_{3}+c_{20}X_{3}^{2}+c_{11}X _{3}Y_{3}+Y_{3}^{2}+c_{30}X_{3}^{3}+c_{21}X_{3}^{2}Y_{3}+c_{12}X_{3}Y_{3}^{2}+ P_{4}(X_{3},Y_{3}),\end{aligned}\right. \tag{18}\] where \[c_{20} =\frac{ma+k^{2}+mk+m^{2}}{\left(a+k+m\right)^{2}},\quad c_{11}= \frac{a+k}{a+k+m},\quad c_{21}=-\frac{2ma+k^{2}+2mk+2m^{2}}{\left(a+k+m\right)^ {2}},\quad c_{12}=-\frac{m}{a+k+m},\] \[c_{30} =-\frac{a^{2}m+k^{2}a+2akm+2a\,m^{2}+2k^{3}+2k^{2}m+2k\,m^{2}+m^{ 3}}{\left(a+k+m\right)^{3}}.\] Figure 2: Red, green, pink, and orange points indicate stable node, saddle, saddle-node, and unstable node (source), respectively. The value of the toxin release rate \(m\) affects the solution orbit near the boundary equilibrium point \(E_{2}\). We define \(m^{**}=1-a-k\). Hence by Theorem 7.1, if \(0<m<m^{**}\), \(E_{2}\) is an attracting saddle-node, and the parabolic sector is on the right half-plane (Fig. 2(a)). If \(m>m^{**}\), \(E_{2}\) is an attracting saddle-node, and the parabolic sector is on the left half-plane (Fig. 2(b)). If \(m=m^{**}\), system (18) becomes \[\left\{\begin{aligned} &\frac{\mathrm{d}X_{3}}{\mathrm{d}t}=-bX_{3}Y_{3},\\ &\frac{\mathrm{d}Y_{3}}{\mathrm{d}t}=Y_{3}+c_{20}X_{3}^{2}+c_{11 }X_{3}Y_{3}+Y_{3}^{2}+c_{30}X_{3}^{3}+c_{21}X_{3}^{2}Y_{3}+c_{12}X_{3}Y_{3}^{2} +P_{4}(X_{3},Y_{3}).\end{aligned}\right. \tag{19}\] By using the second equation of system (19), we obtain the implicit function \[Y_{3}=-c_{20}X_{3}^{2}+(c_{11}c_{20}-c_{30})X_{3}^{3}+\cdots\cdots\] and \[\frac{\mathrm{d}X_{3}}{\mathrm{d}t}=bc_{20}X_{3}^{3}+\cdots\cdots\,,\] where \[bc_{20}=\frac{b(ma+k^{2}+mk+m^{2})}{\left(a+k+m\right)^{2}}>0.\] According to Theorem 7.1 again, \(E_{2}\) is a degenerate stable node due to the negative time transformations (Fig. 2(c)). Remark 1: The biological significance of the parameters \(k\) and \(c\) are the fear effect of non-toxic (\(y\)) species and the interspecific competition rate of toxic (\(x\)) species, respectively. By analyzing the type of boundary equilibria, non-toxic and toxic species will become extinct when \(k>k^{*}\) and \(c>1\), respectively. Figure 3: Schematic representation of the biological significance of parameters \(k\) and \(c\). Range of parameters: \(a\in(0,1)\), \(k\in(1,2)\), \(c\in(0,2)\). ## 4 Positive Equilibria and Their Types The intersections of two isoclines \(f(x,y)=0\), \(g(x,y)=0\) in the first quadrant is the point of positive equilibria. 
Denote the positive equilibria of system (6) as \(E_{i*}(x_{i},y_{i})\) (i=1, 2, 3), from \(f(x,y)=g(x,y)\), we obtain \[u(x)=A_{1}x^{3}+A_{2}x^{2}+A_{3}x+A_{4}, \tag{20}\] \[v(x)=u^{\prime}(x)=3A_{1}x^{2}+2A_{2}x+A_{3}, \tag{21}\] where \[A_{1}=km>0,\] \[A_{2}=(-ac-m+1)k+m=(A_{3}+k)k+m,\] \[A_{3}=-ac-k-m+1,\] \[A_{4}=c-1.\] Denote the discriminant of (21) as \(\Delta=4A_{2}^{2}-12A_{1}A_{3}\). When \(\Delta>0\), (21) has two real roots, which can be expressed as follows: \[x_{v1}=\frac{(ac+m-1)k-m-\sqrt{\Delta}}{3km},\quad x_{v2}=\frac{(ac+m-1)k-m+ \sqrt{\Delta}}{3km}.\] Let \(u(x)=0\), we have \[m=\frac{ackx^{2}+acx-kx^{2}+kx-c-x+1}{(-1+x)\left(kx+1\right)x}. \tag{22}\] Substituting (22) into \(\det(J(E))\) and \(v(x)\), we get \[\det(J(E))=-\frac{x\left(-1+x\right)b}{(kx+1)}v(x). \tag{23}\] The positive of system (6) is \((x_{i},y_{i})\) where \( y_{i}=\frac{1-x_{i}}{c}\). Let \( m_{1}\triangleq 1-ac-k\) and \( m_{2}\triangleq\frac{2ack+ac-k-1}{1+k}\). From Theorem 1 and 2, we know that \( 0<x(t)<1\) and \( 0<y(t)<1\). By a simple analysis, we can obtain the following theorem. **Theorem 5**: _The existence of positive equilibria for system (6) is shown below:_ 1. \(m=m_{1}\) _(Fig._ 4_(a))_ 1. _For_ \(0<c<1\)_,_ 1. _System (_6_) has a unique positive equilibrium_ \(E_{2*}\) _when_ \(0<k<k^{*}\)_._ 2. \(m>m_{1}\)__ 1. _For_ \(c>1\) _(Fig._ 4_(b)),_ 1. _If_ \(u(x_{v2})=0\)_,_ 1. _System (_6_) has a unique positive equilibrium_ \(E_{3*}\) _when_ \(m>m_{2}\)_._ 2. _If_ \(u(x_{v2})<0\)_,_ 1. _System (_6_) has a unique positive equilibrium_ \(E_{1*}\) _when_ \(m>m_{2}\) _and_ \(k\geq k^{*}\)_._ 2. _System (_6_) has two positive equilibria_ \(E_{1*}\) _and_ \(E_{2*}\) _when_ \(m>m_{2}\) _and_ \(0<k<k^{*}\)_._ 3. _System (_6_) has a unique positive equilibrium_ \(E_{1*}\) _when_ \(m=m_{2}\)_._ 3. _System (_6_) has a unique positive equilibrium_ \(E_{1*}\) _when_ \(0<m<m_{2}\) _and_ \(k>k^{*}\)_._ 2. _For_ \(0<c\leq 1\) _(Fig._ 4_(c)),_ 1. _System (_6_) has a unique positive equilibrium_ \(E_{2*}\) _when_ \(0<k<k^{*}\) 3. \(0<m<m_{1}\) _(Fig._ 4_(d))_ 1. _For_ \(0<c<1\)_,_ 1. _System (_6_) has a unique positive equilibrium_ \(E_{2*}\) _when and_ \(0<k<k^{*}\)_._ Next, we analyze the types of positive equilibria. Since \(-\dfrac{x\left(-1+x\right)b}{\left(kx+1\right)c}>0\), we can easily determine the sign of \(\det(J(E_{*}))\) by (23). We conclude that \(\det(J(E_{1*}))<0\), \(\det(J(E_{2*}))>0\), \(\det(J(E_{3*}))=0\). Therefore \(E_{1*}\) is a saddle point. For \(E_{2*}\), we have \[\det(J(E_{2*}))=B_{1}B_{4}-B_{2}B_{3}>0.\] The signs of \(B_{2}\), \(B_{3}\), and \(B_{4}\) have been determined, and we can thus know that \(B_{1}<0\). Finally, we can determine that \(\operatorname{tr}(J(E_{2*}))=B_{1}+B_{4}<0\) by the above analysis. From \(\det(J(E_{2*}))>0\), \(\operatorname{tr}(J(E_{2*}))<0\), we know that \(E_{2*}\) is a stable node. Since \(\det(J(E_{3*}))=0\), the positive equilibrium point \(E_{3*}\) is clearly a degenerate equilibrium point. Next, we analyze the specific type of degenerate equilibrium point \(E_{3*}\). First, it is clear from Theorem 4.1 and Fig. 4(b) that if \(E_{3*}\) exists then the parametric condition needs to satisfy \(u(E)=v(E)=0\), where \(E=x_{v2}\). From this, we get \[a=\dfrac{3kmx^{2}-2kmx+2kx+2mx-k-m+1}{k^{2}mx^{4}+2kmx^{3}+k^{2}x^{2}+mx^{2}+2 kx+1}\triangleq a^{*},\] \[c=\dfrac{k^{2}mx^{4}+2kmx^{3}+k^{2}x^{2}+mx^{2}+2kx+1}{2kx+1}\triangleq c^{*}.\] Figure 4: The number of positive real roots of \(u(x)\). 
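Since the case analysis in Theorem 5 reduces to counting roots of the cubic \(u(x)\) in \((0,1)\), it can be checked numerically. The short sketch below is our own illustration (the helper name and the parameter values are not taken from the paper): it evaluates the coefficients \(A_{1},\dots,A_{4}\) from (20) and returns the admissible interior equilibria \((x_{i},y_{i})\) with \(y_{i}=(1-x_{i})/c\).

```python
import numpy as np

def positive_equilibria(a, c, k, m, tol=1e-9):
    """Interior equilibria of system (6): roots of u(x) in (0,1), paired with y = (1-x)/c."""
    A3 = -a*c - k - m + 1
    A1 = k*m
    A2 = (A3 + k)*k + m        # equals (-a*c - m + 1)*k + m
    A4 = c - 1
    roots = np.roots([A1, A2, A3, A4])   # coefficients from highest degree down
    out = []
    for r in roots:
        if abs(r.imag) < tol and tol < r.real < 1 - tol:
            x = float(r.real)
            out.append((x, (1 - x) / c))
    return out

# Regime 0 < c < 1, 0 < k < k* = 1/a - 1, m > m1: Table 1 predicts a unique E_2*.
print(positive_equilibria(a=0.8, c=0.5, k=0.2, m=0.5))
```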
We move equilibrium \(E_{3*}\) to the origin by transforming \((X,Y)=(x-E,y-\dfrac{1-E}{c})\), make Taylor's expansion around the origin, and substitute \(a=a^{*}\), \(c=c^{*}\), then system (6) becomes \[\left\{\begin{aligned} \dfrac{\mathrm{d}X}{\mathrm{d}t}& =e_{10}X+e_{01}Y+e_{20}X^{2}+e_{11}XY,\\ \dfrac{\mathrm{d}Y}{\mathrm{d}t}&=d_{10}X+d_{01}Y+d_ {20}X^{2}+d_{02}Y^{2}+d_{11}XY+P_{5}(X,Y),\end{aligned}\right. \tag{24}\] where \[e_{10} =-bE,\quad e_{01}=-\dfrac{bE\left(Ek+1\right)^{2}\left(E^{2}m+1 \right)}{2Ek+1},\quad e_{20}=-b,\quad e_{11}=-\dfrac{b\left(Ek+1\right)^{2} \left(E^{2}m+1\right)}{2Ek+1},\] \[d_{10} =\dfrac{\left(Em+1\right)\left(2Ek+1\right)^{2}\left(-1+E\right) }{\left(Ek+1\right)^{4}\left(E^{2}m+1\right)^{2}},\quad d_{01}=\dfrac{\left(-1 +E\right)\left(Em+1\right)\left(2Ek+1\right)}{\left(Ek+1\right)^{2}\left(E^{2} m+1\right)},\] \[d_{20} =-\dfrac{\left(-1+E\right)\left(2Ek+1\right)k^{2}}{\left(Ek+1 \right)^{5}\left(E^{2}m+1\right)},\quad d_{02}=-Em-1,\quad d_{11}=-\dfrac{ \left(m+1\right)\left(2Ek+1\right)}{\left(Ek+1\right)^{2}\left(E^{2}m+1\right)}.\] We make the following transformations to system (24) \[\begin{bmatrix}X\\ Y\end{bmatrix}=\begin{bmatrix}e_{01}&e_{10}\\ -e_{10}&d_{10}\end{bmatrix}\begin{bmatrix}X_{4}\\ Y_{4}\end{bmatrix},\] and letting \(\tau=Lt\), where \[L= -\dfrac{E^{5}b\,k^{2}m+2E^{4}bkm+E^{3}b\,k^{2}+E^{3}bm-2E^{3}km+2E ^{2}bk+2E^{2}km}{\left(Ek+1\right)^{2}\left(E^{2}m+1\right)}\] \[+\dfrac{+2E^{2}k+E^{2}m-bE-2Ek-Em+E-1}{\left(Ek+1\right)^{2} \left(E^{2}m+1\right)},\] \begin{table} \begin{tabular}{c c c c c c} \hline \(m\sim m_{1}\) & \(c\) & \(u(E)\) & \(m\sim m_{2}\) & \(k\) & Positive Equilibria \\ \hline \(m=m_{1}\) & \(0<c<1\) & / & / & \(0<k<k^{*}\) & \(E_{2*}\) \\ \hline \multirow{4}{*}{\(m>m_{1}\)} & \multirow{4}{*}{\(c>1\)} & \multirow{4}{*}{\(u(E)<0\)} & \multirow{4}{*}{\(m>m_{2}\)} & \multirow{4}{*}{\(k\geq k^{*}\)} & \multirow{4}{*}{\(E_{1*}\)} \\ \cline{3-3} & & & & \(0<k<k^{*}\) & & \(E_{1*}\) \\ \cline{3-3} & & & & \(m=m_{2}\) & / & \(E_{1*}\) \\ \cline{3-3} & & & \(0<m<m_{2}\) & \(k>k^{*}\) & \(E_{1*}\) \\ \cline{3-3} & & & & \(0<c\leq 1\) & / & \(0<k<k^{*}\) & \(E_{2*}\) \\ \hline \multirow{4}{*}{\(0<m<m_{1}\)} & \multirow{4}{*}{\(0<c<1\)} & \multirow{4}{*}{\(/\)} & \multirow{4}{*}{\(/\)} & \multirow{4}{*}{\(0<k<k^{*}\)} & \multirow{4}{*}{\(E_{2*}\)} \\ \cline{3-3} & & & & & \(0<c<1\) & / & \(0<k<k^{*}\) \\ \cline{1-1} \cline{3-3} & & & & & \(0<c<1\) & / & \(0<k<k^{*}\) \\ \hline \end{tabular} _Note:_\(E_{1*}\) is a saddle, \(E_{2*}\) is a stable node, and \(E_{3*}\) is a saddle-node, where \(m_{1}=1-ac-k\), \(m_{2}=\frac{2ac+ac-k-1}{1+k}\), \(k^{*}=\frac{1}{a}-1\). \end{table} Table 1: Positive Equilibria of System (6). for which we will retain \(t\) to denote \(\tau\) for notational simplicity. We get \[\left\{\begin{aligned} \frac{\mathrm{d}X}{\mathrm{d}t}& =g_{20}X^{2}+g_{02}Y^{2}+g_{11}XY,\\ \frac{\mathrm{d}Y}{\mathrm{d}t}&=Y+f_{20}X^{2}+f_{0 2}Y^{2}+f_{11}XY+P_{6}(X,Y),\end{aligned}\right. \tag{25}\] where \[g_{20}=\frac{\left(E^{2}m+1\right)^{2}\left(Ek+1\right)^{3}\left(-1+E\right) \left(3E^{2}k^{2}m+3Ekm+k^{2}+m\right)E^{2}b^{2}}{H^{2}\left(2Ek+1\right)},\] and \[H= E^{5}bk^{2}m+2E^{4}bkm+E^{3}bk^{2}+E^{3}bm-2E^{3}km+2E^{2}bk\] \[+2E^{2}km-2E^{2}k-E^{2}m+bE+2Ek+Em-E+1,\] please see Appendix A for the rest of the parameters. We note that \(g_{20}<0\). Hence by Theorem 7.1 in Chapter 2 in [22], \(E_{3*}\) is a saddle-node. In summary, together with Theorem 5, we obtain Table 1. 
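Before turning to global stability, the classification in Table 1 can be sanity-checked by integrating system (6) directly. The following sketch is our own (parameter values, function names, and the use of SciPy are illustrative assumptions); it is set in the regime \(0<c<1\), \(0<k<k^{*}\), where the trajectory is expected to settle at the interior equilibrium \(E_{2*}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, a, b, c, k, m):
    """Right-hand side of system (6); x is the toxic species, y the non-toxic one."""
    x, y = z
    dx = b * x * (1 - x - c * y)
    dy = y * (1.0 / (1.0 + k * x) - y - a * x - m * x * y)
    return [dx, dy]

a, b, c, k, m = 0.8, 0.5, 0.5, 0.2, 0.5   # 0 < c < 1 and 0 < k < k* = 1/a - 1 = 0.25
sol = solve_ivp(rhs, (0.0, 200.0), [0.6, 0.4], args=(a, b, c, k, m),
                rtol=1e-8, atol=1e-10)
print("state at t = 200:", sol.y[:, -1])   # should approach the stable node E_2*
```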
## 5 Global Stability of Positive Equilibria **Lemma 2**: _Bendixson-Dulac Criteria [14]: If in a single connected domain \(O\), there exists a function \(B(x,y)\in C^{1}(O)\), such that_ \[\frac{\partial(BF)}{\partial x}+\frac{\partial(BG)}{\partial y}\geq 0(\leq 0), \quad\forall(x,y)\in O,\] _and is not constant to zero in any subregion of O. Then system (8) does not have closed trajectories that all lie within O and singular closed trajectories with finitely many singular points. The function \(B(x,y)\) is often called the Dulac function._ **Theorem 6**: _System (6) cannot have any limit cycle in the interior of the positive quadrant \(R_{+}^{2}\)._ We use the _Bendixson-Dulac_ criteria [14] to prove Theorem 6. Construct a Dulac function \(B(x,y)=\dfrac{1}{xy}\). Then it is clear that \(B(x,y)\) is positive and so is smooth in a connected domain: \[\mathrm{Int}(R_{+}^{2})=\left\{(x,y)\in R^{2}\mid x>0,y>0\right\}.\] Let \[\Delta(x,y)=\frac{\partial(BF)}{\partial x}+\frac{\partial(BG)}{\partial y}=- \frac{b}{y}+\frac{-mx-1}{x}<0.\] Thus, \(\Delta(x,y)\) is neither identically zero nor changing sign in the interior of the positive quadrant of the \(xy\)-plane. Using the _Bendixson-Dulac_ criteria [14], system (6) has no closed trajectory, so there is no periodic solution in the first quadrant. The proof of Theorem 6 is finished. From Theorem 5 and Table 1, when system (6) satisfies \(0<c<1\), \(0<k<k^{*}\), the boundary equilibria are all unstable, and there is a unique stable positive equilibrium \(E_{2*}\) in system (6). Since Theorem 6 has proved that system (6) cannot have any limit cycle in the interior of the positive quadrant, we can obtain the following theorem. **Theorem 7**: _The locally stable positive equilibria \(E_{2*}\) is globally stable when \(0<c<1\), \(0<k<k^{*}\)._ ## 6 Bifurcation Analysis ### Transcritical bifurcation In proving Theorem 5, we found an interesting phenomenon: when \(u(1)=0\), i.e., \(k=k^{*}\), the positive equilibrium point \(E_{2*}\) will merge with the boundary equilibrium point \(E_{1}\). Also, the stability of the boundary equilibrium point \(E_{1}\) will change when the parameter \(k\) is in different intervals \((0,\frac{1}{\alpha}-1)\) and \((\frac{1}{\alpha}-1,+\infty)\), respectively. Moreover, we find a similar phenomenon for the boundary equilibrium point \(E_{2}\). From this, we conjecture that system (7) experiences transcritical bifurcations around \(E_{1}\) and \(E_{2}\). We proceed to a rigorous proof below. **Theorem 8**: _System (6) undergoes a transcritical bifurcation around \(E_{1}\) at the bifurcation parameter threshold \(k_{TR}=k^{*}\) when \(u(E)<0\) and \(m\neq-a^{2}c+2ac-1\) (Fig. 5)._ From Theorem 4, we know that the eigenvalues of \(J(E_{1})\) are \(\lambda_{1}^{E_{1}}=-b\), \(\lambda_{2}^{E_{1}}=0\) if \(k=k_{TR}=k^{*}\). Now, let \(\mathbf{V_{1}}=(v_{1},v_{2})^{T}\) and \(\mathbf{W_{1}}=(w_{1},w_{2})^{T}\) be the eigenvectors of \(J(E_{1})\) and \(J^{T}(E_{1})\) corresponding to Figure 5: Red, green, pink, and orange points indicate stable node, saddle, saddle-node, and unstable node (source), respectively. System (6) undergoes a transcritical bifurcation around \(E_{1}\). \(\lambda_{1}^{E_{1}}=0\), respectively. By calculating, we obtain \[\mathbf{V_{1}}=\begin{bmatrix}v_{1}\\ v_{2}\end{bmatrix}=\begin{bmatrix}-c\\ 1\end{bmatrix},\mathbf{W_{1}}=\begin{bmatrix}w_{1}\\ w_{2}\end{bmatrix}=\begin{bmatrix}0\\ 1\end{bmatrix}. 
\tag{26}\] We assume that \[Q(x,y)=\begin{bmatrix}F(x,y)\\ G(x,y)\end{bmatrix}=\begin{bmatrix}bx\left(-cy-x+1\right)\\ y\left(\dfrac{1}{kx+1}-y-ax-mxy\right)\end{bmatrix}.\] Furthermore, \[Q_{k}(E_{1};k_{TR})=\begin{bmatrix}\dfrac{\partial F}{\partial k}\\ \dfrac{\partial G}{\partial k}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix},\] \[DQ_{k}(E_{1};k_{TR})\mathbf{V_{1}}=\left[\begin{bmatrix}0\\ \dfrac{2yak}{\left(kx+1\right)^{3}}-\dfrac{y}{\left(kx+1\right)^{2}}-\dfrac{x }{\left(kx+1\right)^{2}}\end{bmatrix}\right]\Bigg{|}_{(E_{1};k_{TR})}\begin{bmatrix} -c\\ 1\end{bmatrix}=\begin{bmatrix}0\\ -a^{2}\end{bmatrix},\] \[D^{2}Q(E_{1};k_{TR})(\mathbf{V_{1}},\mathbf{V_{1}})=\begin{bmatrix}\dfrac{ \partial^{2}F}{\partial x^{2}}v_{1}^{2}+2\dfrac{\partial^{2}F}{\partial x \partial y}v_{1}v_{2}+\dfrac{\partial^{2}F}{\partial y^{2}}v_{2}^{2}\\ \dfrac{\partial^{2}G}{\partial x^{2}}v_{1}^{2}+2\dfrac{\partial^{2}G}{\partial x \partial y}v_{1}v_{2}+\dfrac{\partial^{2}G}{\partial y^{2}}v_{2}^{2}\end{bmatrix} \Bigg{|}_{(E_{1};k_{TR})}=\begin{bmatrix}0\\ \left(-2a^{2}+4a\right)c-2m-2\end{bmatrix}.\] Thus, we have \[\mathbf{W_{1}}^{T}Q_{k}(E_{1};k_{TR})=0,\] \[\mathbf{W_{1}}^{T}\left[DQ_{k}(E_{1};k_{TR})\mathbf{V_{1}}\right]=-a^{2}\neq 0,\] \[\mathbf{W_{1}}^{T}\left[D^{2}Q(E_{1};c_{TR})(\mathbf{V_{1}},\mathbf{V_{1}}) \right]=\left(-2a^{2}+4a\right)c-2m-2\neq 0.\] Based on _Sotomayor's Theorem_[20], all the transversality conditions for system (6) to experience a transcritical bifurcation are satisfied. Consequently, system (6) undergoes a transcritical bifurcation around \(E_{1}\) at the bifurcation parameter threshold \(k_{TR}=k^{*}\). **Theorem 9**.: _System (6) undergoes a transcritical bifurcation around \(E_{2}\) at the bifurcation parameter threshold \(c_{TR}=1\) when \(u(E)<0\) and \(m\neq 1-a-k\) (Fig. 6)._ _Proof._ From Theorem 4, we know that the eigenvalues of \(J(E_{1})\) are \(\lambda_{1}^{E_{2}}=-1\), \(\lambda_{2}^{E_{2}}=0\) if \(c=c_{TR}=1\). Now, let \(\mathbf{V_{2}}=(v_{3},v_{4})^{T}\) and \(\mathbf{W_{2}}=(w_{3},w_{4})^{T}\) be the eigenvectors of \(J(E_{2})\) and \(J^{T}(E_{2})\) corresponding to \(\lambda_{2}^{E_{2}}=0\), respectively. By calculating, we obtain \[\mathbf{V_{1}}=\begin{bmatrix}v_{3}\\ v_{4}\end{bmatrix}=\begin{bmatrix}-\dfrac{1}{a+k+m}\\ 1\end{bmatrix},\mathbf{W_{1}}=\begin{bmatrix}w_{3}\\ w_{4}\end{bmatrix}=\begin{bmatrix}1\\ 0\end{bmatrix}. \tag{27}\] Furthermore, \[Q_{c}(E_{2};c_{TR})=\begin{bmatrix}\dfrac{\partial F}{\partial c}\\ \dfrac{\partial G}{\partial c}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix},\] \[DQ_{c}(E_{2};c_{TR})\mathbf{V_{2}}=\left[\begin{array}{c}-by\ -bx\\ 0\ Based on _Sotomayor's Theorem_[Perko, 2013], all the transversality conditions for system (6) to experience a transcritical bifurcation are satisfied, so system (6) undergoes a transcritical bifurcation around \(E_{2}\) at the bifurcation parameter threshold \(c_{TR}=1\). ### Pitchfork bifurcation According to Theorem 9, the third transversality condition about transcritical bifurcation on \(E_{2}\) will equal \(0\) when \(m=1-a-k\), i.e., \(m=m^{**}\). Also by Theorem 4, \(E_{2}\) is a degenerate stable node when \(m=m^{**}\). We select \(a=0.2\), \(b=0.2\), \(k=0.2\), \(m=0.6\), \(c=1\pm 0.1\). By numerical simulation, we find that the number of equilibria near \(E_{2}\) undergoes a \(1-1-3\) transformation. From this we conclude that system (6) will experience a pitchfork bifurcation around \(E_{2}\) when \(c=c_{TR}\) and \(m=m^{**}\) (Fig. 7). 
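The transversality quantities used in the proof of Theorem 8 can also be verified symbolically. The following SymPy sketch is our own check (not part of the paper): it rebuilds \(Q=(F,G)^{T}\), evaluates \(\mathbf{W_{1}}^{T}Q_{k}\), \(\mathbf{W_{1}}^{T}\left[DQ_{k}\mathbf{V_{1}}\right]\), and \(\mathbf{W_{1}}^{T}\left[D^{2}Q(\mathbf{V_{1}},\mathbf{V_{1}})\right]\) at \(E_{1}=(1,0)\) with \(k=k^{*}=1/a-1\), and recovers the values \(0\), \(-a^{2}\), and \((-2a^{2}+4a)c-2m-2\).

```python
import sympy as sp

a, b, c, k, m, x, y = sp.symbols('a b c k m x y', positive=True)

F = b*x*(1 - x - c*y)
G = y*(1/(1 + k*x) - y - a*x - m*x*y)
Q = sp.Matrix([F, G])

E1 = {x: 1, y: 0}
kTR = {k: 1/a - 1}
V1 = sp.Matrix([-c, 1])   # eigenvector of J(E1) for the zero eigenvalue
W1 = sp.Matrix([0, 1])    # eigenvector of J(E1)^T for the zero eigenvalue

Qk = Q.diff(k)                                  # Q_k
DQk = Qk.jacobian(sp.Matrix([x, y]))            # D Q_k
# D^2 Q (V1, V1), built component-wise from the Hessians of F and G
d2Q = sp.Matrix([(V1.T * sp.hessian(comp, [x, y]) * V1)[0] for comp in (F, G)])

print(sp.simplify((W1.T * Qk).subs(E1).subs(kTR)))        # expect 0
print(sp.simplify((W1.T * DQk * V1).subs(E1).subs(kTR)))  # expect -a**2
print(sp.expand((W1.T * d2Q).subs(E1).subs(kTR)))         # expect (-2a^2+4a)c - 2m - 2
```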
### Saddle-node bifurcation Under the conditions \(m>m_{1}\), \(c>1\), and \(0<k<k^{*}\), we note that when \(u(E)>0\), \(u(E)=0\), and \(u(E)<0\), system (6) has \(0\), \(1\), and \(2\) positive equilibria, respectively. Therefore we consider system (6) undergoing a saddle-node bifurcation around the positive equilibrium point \(E_{3*}\). We select the toxic release rate \(m\) as the bifurcation parameter. Figure 7: System (6) undergoes a pitchfork bifurcation around \(E_{2}\) when \(c=c_{TR}\) and \(m=1-a-k\). By calculating \(u(E)=v(E)=0\), we obtain the bifurcation parameter threshold \(m=\frac{-E^{2}k^{2}+2Eck-2Ek+c-1}{E^{2}(E^{2}k^{2}+2Ek+1)}\triangleq m_{SN}\), and \(a=\frac{E^{4}k^{2}-2E^{3}k^{2}+2E^{3}k+3E^{2}ck+E^{2}k^{2}-4E^{2}k-2Eck+E^{2}+2Ec+2Ek-2E-c+1}{c\,E^{2}(E^{2}k^{2}+2Ek+1)}\triangleq a_{1}\). Next, we use _Sotomayor's Theorem_ [Perko, 2013] to verify that the transversality conditions for a saddle-node bifurcation are satisfied. **Theorem 10**.: _System (6) undergoes a saddle-node bifurcation around \(E_{3*}\) at the bifurcation parameter threshold \(m=m_{SN}\) when \(m>m_{1}\), \(0<k<k^{*}\), \(c>1\), and \(c\neq\frac{E^{3}k^{3}+3E^{2}k^{2}+3Ek+1}{3E^{2}k^{2}+3Ek+1}\) (Fig. 8, Fig. 9)._ Figure 8: Red, green, pink, and orange points indicate stable node, saddle, saddle-node, and unstable node (source), respectively. System (6) undergoes a saddle-node bifurcation around \(E_{3*}\). _Proof_. According to (12), the Jacobian matrix at the positive equilibrium point \(E_{3*}\) can be expressed in the following form by substituting \(m=m_{SN}\) and \(a=a_{1}\); one of its eigenvalues is \(\lambda=0\): \[J(E_{3*})=\begin{bmatrix}-bE&-bcE\\ \frac{\left(E^{3}k^{2}+\left(-k^{2}+2k\right)E^{2}+\left(1+\left(2c-2\right)k\right)E+c-1\right)\left(-1+E\right)}{E\left(Ek+1\right)^{2}c^{2}}&\frac{\left(E^{3}k^{2}+\left(-k^{2}+2k\right)E^{2}+\left(1+\left(2c-2\right)k\right)E+c-1\right)\left(-1+E\right)}{cE\left(Ek+1\right)^{2}}\end{bmatrix}.\] Now, let \(\mathbf{V_{3}}=(v_{5},v_{6})^{T}\) and \(\mathbf{W_{3}}=(w_{5},w_{6})^{T}\) be the eigenvectors of \(J(E_{3*})\) and \(J^{T}(E_{3*})\) corresponding to \(\lambda=0\), respectively. By calculating, we obtain \[\mathbf{V_{3}}=\begin{bmatrix}v_{5}\\ v_{6}\end{bmatrix}=\begin{bmatrix}-c\\ 1\end{bmatrix},\quad\mathbf{W_{3}}=\begin{bmatrix}w_{5}\\ w_{6}\end{bmatrix}=\begin{bmatrix}\frac{\left(E^{3}k^{2}+\left(-k^{2}+2k\right)E^{2}+\left(1+\left(2c-2\right)k\right)E+c-1\right)\left(-1+E\right)}{E^{2}b\,c^{2}\left(Ek+1\right)^{2}}\\ 1\end{bmatrix}. \tag{28}\]
Furthermore, \[Q_{m}(E_{3*};m_{SN})=\begin{bmatrix}\frac{\partial F}{\partial m}\\ \frac{\partial G}{\partial m}\end{bmatrix}=\begin{bmatrix}0\\ -\frac{\left(1-E\right)^{2}E}{c^{2}}\end{bmatrix},\] \[D^{2}Q(E_{3*};m_{SN})(\mathbf{V_{3}},\mathbf{V_{3}})=\begin{bmatrix}\frac{\partial^{2}F}{\partial x^{2}}v_{5}^{2}+2\frac{\partial^{2}F}{\partial x\partial y}v_{5}v_{6}+\frac{\partial^{2}F}{\partial y^{2}}v_{6}^{2}\\ \frac{\partial^{2}G}{\partial x^{2}}v_{5}^{2}+2\frac{\partial^{2}G}{\partial x\partial y}v_{5}v_{6}+\frac{\partial^{2}G}{\partial y^{2}}v_{6}^{2}\end{bmatrix}=\begin{bmatrix}0\\ \frac{2\left(-1+E\right)\left(E^{3}k^{3}-3k^{2}\left(c-1\right)E^{2}-3k\left(c-1\right)E-c+1\right)}{\left(Ek+1\right)^{3}E^{2}}\end{bmatrix}.\] Thus, we have \[\mathbf{W_{3}}^{T}Q_{m}(E_{3*};m_{SN})=-\frac{\left(1-E\right)^{2}E}{c^{2}}\neq 0,\] \[\mathbf{W_{3}}^{T}\left[D^{2}Q(E_{3*};m_{SN})(\mathbf{V_{3}},\mathbf{V_{3}})\right]=\frac{2\left(-1+E\right)\left(E^{3}k^{3}-3k^{2}\left(c-1\right)E^{2}-3k\left(c-1\right)E-c+1\right)}{\left(Ek+1\right)^{3}E^{2}}\neq 0.\] According to _Sotomayor's Theorem_ [Perko, 2013], all the transversality conditions for system (6) to experience a saddle-node bifurcation are satisfied, so system (6) undergoes a saddle-node bifurcation around \(E_{3*}\) at the bifurcation parameter threshold \(m=m_{SN}\). \(\blacksquare\)

_Remark 2_.: This section discusses all possible bifurcations of system (6). Through the above analysis, we find that varying either the fear effect \(k\) on the non-toxic species or the interspecific competition rate \(c\) acting on the toxic species causes system (6) to undergo a transcritical bifurcation on the boundary. When the toxin release rate \(m\) takes a particular value, this may also cause system (6) to experience a pitchfork bifurcation on the boundary. In addition, the parameter \(m\) will also lead system (6) to undergo a saddle-node bifurcation in the first quadrant. Thus we conclude that the fear effect and the toxic release rate can cause complex dynamics in the classical Lotka-Volterra competition model.

## 7 Effect of Toxic Release Rate and Fear

Through the studies in Section 6, we learned that the toxic release rate \(m\) and the fear effect \(k\) produce rich bifurcations in system (6). Returning to the biological significance, how exactly do \(m\) and \(k\) affect the species? Observing Table 1, we note that whenever \(k\) falls in the interval \((0,k^{*})\), there must be a stable positive equilibrium point \(E_{2*}\) in system (6). Therefore we conclude that, regardless of the value of the toxic release rate, the only factor that can affect the survival of the non-toxic species is the competition fear. Next, we use numerical simulation to verify this through time-course plots of solutions.

**Example 7.1**.: For \(m>m_{1}\) and \(0<c<1\). We select \(a=0.8\), \(b=0.5\), \(c=0.5\), \(k_{1}=0.2\), \(k_{2}=0.4\), \(m=0.5\). Through numerical simulations, we obtain time-course plots of solutions (Fig. 10). When \(m>m_{1}\), the fear effect \(k\) leads to the extinction of the non-toxic species (\(x\)) if it satisfies \(k>k^{*}\), and not otherwise.

**Example 7.2**.: For \(m>m_{1}\) and \(c>1\). We select \(a=0.3\), \(b=0.5\), \(c=1.1\), \(k_{1}=1.1\), \(k_{2}=4\), \(m=0.15\). Through numerical simulations, we obtain time-course plots of solutions (Fig. 11).
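The time-course computations behind Figs. 10-11 can be reproduced with any standard ODE integrator. The following is a minimal sketch, assuming Python with SciPy; the initial state \((0.5,0.5)\), the time horizon, and the tolerances are illustrative choices. It integrates the kinetic system (6) for the two fear levels of Example 7.1 and prints the late-time densities.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, a, b, c, k, m):
    # Kinetic system (6): x is the first species, y the second.
    x, y = z
    dx = b * x * (1 - x - c * y)
    dy = y * (1.0 / (1 + k * x) - y - a * x - m * x * y)
    return [dx, dy]

a, b, c, m = 0.8, 0.5, 0.5, 0.5            # parameters of Example 7.1
for k in (0.2, 0.4):                       # k1 < k* and k2 > k* in the text
    sol = solve_ivp(rhs, (0.0, 400.0), [0.5, 0.5], args=(a, b, c, k, m),
                    rtol=1e-8, atol=1e-10)
    x_end, y_end = sol.y[0, -1], sol.y[1, -1]
    print(f"k = {k}: (x, y) at t = 400 is ({x_end:.4f}, {y_end:.4f})")
```

Changing the parameters to those of Examples 7.2 and 7.3 reproduces the remaining time-course experiments in the same way.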
We note that although the toxic species is subject to an interspecific competition rate \(c>1\), it still survives by releasing toxins and by causing fear in its competitor.

**Example 7.3**.: For \(0<m<m_{1}\). We select \(a=0.8\), \(b=0.5\), \(c=0.5\), \(k_{1}=0.2\), \(k_{2}=0.3\), \(m=0.1\). Through numerical simulations, we obtain time-course plots of solutions (Fig. 12).

_Remark 3_.: Comparing Examples 7.1-7.3, we find that the non-toxic species can survive regardless of the level of the toxic release rate. When the fear effect on the non-toxic species is too large, it leads to extinction. Numerical simulation effectively verifies the correctness of our above analysis.

## 8 The PDE Case

We will now cover several preliminary concepts that will pave the way for proving the global existence of solutions to (32). To achieve this objective, it is sufficient to establish a uniform estimate on the \(\mathbb{L}^{p}\) norms of the right-hand side of (32), where \(p\) exceeds \(\frac{n}{2}\). By doing so, we can then apply classical theory, as outlined in [1], to guarantee global existence. In this context, the standard norms in the spaces \(\mathbb{L}^{p}(\Omega)\), \(\mathbb{L}^{\infty}(\Omega)\), and \(\mathbb{C}(\overline{\Omega})\) are denoted as follows:

Figure 10: \(a=0.8\), \(b=0.5\), \(c=0.5\), \(k_{1}=0.2\), \(k_{2}=0.4\), \(m=0.5\). (a) Non-toxic species survives when \(0<k_{1}<k^{*}\). (b) Non-toxic species becomes extinct when \(k_{2}>k^{*}\).

\[\|u\|_{p}^{p}=\int_{\Omega}|u(x)|^{p}\,dx,\qquad\|u\|_{\infty}=\operatorname*{ess\,sup}_{x\in\Omega}|u(x)|,\qquad\|u\|_{\mathbb{C}(\overline{\Omega})}=\max_{x\in\overline{\Omega}}|u(x)|.\]

**Lemma 3**.: _Suppose that the nonlinearity \(f=(f_{1},\dots,f_{m})\) of system (30) is quasi-positive, that is, \(f_{i}(r)\geq 0\) for every \(r\in[0,+\infty)^{m}\) with \(r_{i}=0\). Then_ \[[\forall i=1,...,m,u_{i0}\geq 0]\implies[\forall i=1,...,m,\ \forall t\in[0,T^{*}),u_{i}(t)\geq 0].\]

**Lemma 4**.: _Using the same notations and hypotheses as in Lemma 3, suppose moreover that \(f\) has at most polynomial growth and that there exist \(\mathbf{b}\in\mathbb{R}^{m}\) and a lower triangular invertible matrix \(P\) with nonnegative entries such that_ \[\forall r\in[0,+\infty)^{m},\quad Pf(r)\leq\Bigg{[}1+\sum_{i=1}^{m}r_{i}\Bigg{]}\mathbf{b}.\] _Then, for \(u_{0}\in L^{\infty}(\Omega,\mathbb{R}^{m}_{+}),\) the system (30) has a strong global solution._

Under these assumptions, the following local existence result is well known, see [Henry, 2006].
**Theorem 11**.: _The system (30) admits a unique, classical solution \((u,v)\) on \([0,T_{\max}]\times\Omega.\) If \(T_{\max}<\infty\) then_ \[\lim_{t\nearrow T_{\max}}\Big{\{}\left\|u(t,.)\right\|_{\infty}+\left\|v(t,.) \right\|_{\infty}\Big{\}}=\infty, \tag{31}\] _where \(T_{\max}\) denotes the eventual blow-up time in \(\mathbb{L}^{\infty}(\Omega).\)_ The next result follows from the application of standard theory [Kishimoto & Weinberger, 1985]. **Theorem 12**.: _Consider the reaction-diffusion system (30). For spatially homogenous initial data \(u_{0}\equiv c,v_{0}\equiv d\), with \(c,d>0\), then the dynamics of (30) and its resulting kinetic (ODE) system, when \(d_{1}=d_{2}=0\) in (30), are equivalent._ Our current aim is to explore the scenario where the fear function exhibits spatial heterogeneity. This perspective finds motivation in various ecological and sociological contexts. For instance, it is quite common for prey to exhibit higher fear levels in proximity to a predator's lair but lower fear levels in regions of refuge, as mentioned in [Zhang _et al._, 2019]. Additionally, areas with high population density may lead to reduced fear due to group defense mechanisms, as discussed in [Sasmal & Takeuchi, 2020]. Given these considerations, it is plausible to assume that the fear coefficient \(k\) is not a constant but varies across the spatial domain \(\Omega\), i.e., \(k=k(x).\) The specific form of \(k(x)\) may differ depending on the particular application, aligning with the concept of the Landscape of Fear (LOF) [Brown _et al._, 1999]. Consequently, we now consider the following spatially explicit version of (6), featuring a heterogeneous fear function \(k(x)\), which results in the following reaction-diffusion system: \[\left\{\begin{aligned} & u_{t}=d_{1}\Delta u+bu\left(1-u-cv\right), \quad x\in\Omega,\\ & v_{t}=d_{2}\Delta v+v\left(\frac{1}{1+k(x)u}-v-au-muv\right), \quad x\in\Omega,\\ &\frac{\partial u}{\partial\nu}=\frac{\partial v}{\partial\nu}= 0,\quad\text{on}\quad\partial\Omega.\\ & u(x,0)=u_{0}(x)\equiv c>0,\quad v(x,0)=v_{0}(x)\equiv d>0,\\ \end{aligned}\right. \tag{32}\] Furthermore, we impose the following restrictions on the fear function \(k(x)\), \[\begin{aligned} &(i)\quad k(x)\in C^{1}(\Omega),\\ &(ii)\quad k(x)\geq 0,\\ &(iii)\quad\text{If }k(x)\equiv 0\text{ on }\Omega_{1}\subset\Omega,\text{ then }|\Omega_{1}|=0.\\ &(iv)\quad\text{If }k(x)\equiv 0\text{ on }\cup_{i=1}^{n}\Omega_{i}\subset\Omega,\text{ then }\Sigma_{i=1}^{n}|\Omega_{i}|=0.\end{aligned} \tag{33}\] _Remark 4_.: If \(k(x)\equiv 0\) on \(\Omega_{1}\subset\Omega\), with \(|\Omega_{1}|>\delta>0\), or \(q(x)\equiv 0\) on \(\cup_{i=1}^{n}\Omega_{i}\subset\Omega\), with \(\Sigma_{i=1}^{n}|\Omega_{i}|>\delta>0\), that is, on non-trivial parts of the domain, the analysis is notoriously difficult, as one now is dealing with a _degenerate_ problem. See [Du, 2002A,B] for results on this problem. This case is not in the scope of the current manuscript. Since the nonlinear right hand side of (32) is continuously differentiable on \(\mathbb{R}^{+}\times\)\(\mathbb{R}^{+}\), then for any initial data in \(\mathbb{C}\left(\overline{\Omega}\right)\) or \(\mathbb{L}^{p}(\Omega),\ p\in(1,+\infty)\), it is standard to estimate the \(\mathbb{L}^{p}-\)norms of the solutions and thus deduce global existence. The standard theory will apply even in the case of a bonafide fear function \(k(x)\) because due to our assumptions on the form of \(k\), standard comparison arguments will apply [Gilbarg & Trudinger, 1977]. 
Thus, applying the classical methods above, via Theorem 11 and Lemmas 3-4, we can state the following lemmas:

**Lemma 5**.: _Consider the reaction-diffusion system (32), for \(k(x)\) such that the assumptions via (33) hold. Then, the solutions to (32) are non-negative as long as they initiate from positive initial conditions._

**Lemma 6**.: _Consider the reaction-diffusion system (32), for \(k(x)\) such that the assumptions via (33) hold. The solutions to (32) are classical. That is, for \((u_{0},v_{0})\in\mathbb{L}^{\infty}(\Omega)\), we have \((u,v)\in C^{1}(0,T;C^{2}(\Omega))\) for every \(T\)._

Our goal in this section is to investigate the dynamics of (32). Herein, we will use the comparison technique and compare (32) to the ODE cases of classical competition or to the constant fear function case, where the dynamics are well known.

_Remark 5_.: This section's analysis primarily focuses on the choice of spatially homogeneous (flat) initial data.

Let us define the following auxiliary PDE systems: \[\begin{split}&\overline{u}_{t}=d_{1}\overline{u}_{xx}+b\overline{u}\left(1-\overline{u}-c\overline{v}\right),\\ &\overline{v}_{t}=d_{2}\overline{v}_{xx}+\overline{v}\left(1-\overline{v}-a\overline{u}-m\overline{u}\,\overline{v}\right),\end{split} \tag{34}\] \[\begin{split}&\widehat{u}_{t}=d_{1}\widehat{u}_{xx}+b\widehat{u}\left(1-\widehat{u}-c\widehat{v}\right),\\ &\widehat{v}_{t}=d_{2}\widehat{v}_{xx}+\widehat{v}\left(\frac{1}{1+\widehat{\mathbf{k}}\widehat{u}}-\widehat{v}-a\widehat{u}-m\widehat{u}\widehat{v}\right),\end{split} \tag{35}\] \[\begin{split}&\widetilde{u}_{t}=d_{1}\widetilde{u}_{xx}+b\widetilde{u}\left(1-\widetilde{u}-c\widetilde{v}\right),\\ &\widetilde{v}_{t}=d_{2}\widetilde{v}_{xx}+\widetilde{v}\left(\frac{1}{1+\widetilde{\mathbf{k}}\widetilde{u}}-\widetilde{v}-a\widetilde{u}-m\widetilde{u}\widetilde{v}\right),\end{split} \tag{36}\] \[\begin{split}&\tilde{u}_{t}=d_{1}\tilde{u}_{xx}+b\tilde{u}\left(1-\tilde{u}-c\tilde{v}\right),\\ &\tilde{v}_{t}=d_{2}\tilde{v}_{xx}+\tilde{v}\left(\frac{1}{1+\widetilde{\mathbf{k}}}-\tilde{v}-a\tilde{u}-m\tilde{u}\tilde{v}\right),\end{split} \tag{37}\] where \[\widehat{\mathbf{k}}=\min_{x\in\Omega}k(x),\qquad\widetilde{\mathbf{k}}=\max_{x\in\Omega}k(x). \tag{38}\] We assume Neumann boundary conditions for all of the reaction-diffusion systems (34)-(37). Also, we prescribe spatially homogeneous (flat) initial conditions in each system: \(u(x,0)=u_{0}(x)\equiv c>0,\quad v(x,0)=v_{0}(x)\equiv d>0\).

**Theorem 13**.: _Consider the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\), together with the reaction-diffusion systems (35)-(36). Then the following point-wise comparison holds,_ \[\widetilde{v}\leq v\leq\widehat{v}.\]

_Proof._ By the positivity of the solutions to the reaction-diffusion systems (35)-(37), and via comparison of (32) with the logistic equation, we obtain the upper bounds \(u\leq 1\) and \(v\leq 1\). Hence, we have \[\frac{1}{1+\widetilde{\mathbf{k}}}\leq\frac{1}{1+\widetilde{\mathbf{k}}u}\leq\frac{1}{1+k(x)u}\leq\frac{1}{1+\widehat{\mathbf{k}}u}\leq 1,\quad x\in\Omega.\] Hence, the result follows from the standard comparison theory [Gilbarg & Trudinger, 1977].
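Both the heterogeneous system (32) and the comparison systems (34)-(37) are one-dimensional reaction-diffusion problems with Neumann boundary conditions, so the point-wise comparison above can also be inspected in simulation. The following is a minimal method-of-lines sketch, assuming Python with SciPy (the paper's own computations use MATLAB's pdepe); the fear profile \(k(x)\), the grid size, the flat initial data, and the final time are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines discretisation of (32) on Omega = [0, pi] with Neumann ends.
N = 200
xs = np.linspace(0.0, np.pi, N)
h = xs[1] - xs[0]

d1, d2, a, b, c, m = 1.0, 1.0, 0.3, 0.2, 1.1, 0.15   # illustrative parameters (cf. Fig. 13)
k = 2.0 + 1.5 * np.cos(xs)                           # hypothetical heterogeneous fear profile, k(x) > 0

def laplacian(w):
    lap = np.empty_like(w)
    lap[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / h**2
    lap[0] = 2.0 * (w[1] - w[0]) / h**2              # reflecting (zero-flux) boundary
    lap[-1] = 2.0 * (w[-2] - w[-1]) / h**2
    return lap

def rhs(t, z):
    u, v = z[:N], z[N:]
    du = d1 * laplacian(u) + b * u * (1.0 - u - c * v)
    dv = d2 * laplacian(v) + v * (1.0 / (1.0 + k * u) - v - a * u - m * u * v)
    return np.concatenate([du, dv])

z0 = np.concatenate([np.full(N, 1.2), np.full(N, 0.4)])   # flat initial data (illustrative)
sol = solve_ivp(rhs, (0.0, 200.0), z0, method="BDF", rtol=1e-6, atol=1e-8)
u_T, v_T = sol.y[:N, -1], sol.y[N:, -1]
print("u(x, 200) in [%.4f, %.4f]" % (u_T.min(), u_T.max()))
print("v(x, 200) in [%.4f, %.4f]" % (v_T.min(), v_T.max()))
```

Replacing \(k(x)\) by its constant bounds \(\widehat{\mathbf{k}}\) and \(\widetilde{\mathbf{k}}\) in the same script produces approximations of \(\widehat{v}\) and \(\widetilde{v}\) from (35)-(36), so the ordering asserted in Theorem 13 can be checked pointwise on the grid.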
### Attraction to boundary or interior equilibrium **Theorem 14**: _For the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\) that satisfies the parametric restriction_ \[\widehat{\mathbf{k}}>\frac{1}{a}-1, \tag{39}\] _then there exits some flat initial data such that solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((1,0)\) as \(t\to\infty\)._ _Proof._ Consider the reaction-diffusion system given by equation (35). Since the parameter \(\widehat{\mathbf{k}}\) satisfies the specified condition, we can apply Theorem 4. This allows us to select initial values \([u_{0},v_{0}]\) where \(v_{0}\) is significantly smaller than \(u_{0}\) point wise, resulting in the convergence \((\widehat{u},\widehat{v})\) towards \((1,0)\). Furthermore, for the reaction-diffusion system given by Equation (36), due to the inequality \(\widetilde{\mathbf{k}}>\widehat{\mathbf{k}}\), the parameter \(\widetilde{\mathbf{k}}\) also adheres to the imposed conditions. Consequently, Theorem 4 is applicable again, leading to the conclusion that for the same initial values \([u_{0},v_{0}]\) with \(v_{0}\) much smaller than \(u_{0}\) point wise, the system \((\widetilde{u},\widetilde{v})\) converges to \((1,0)\). Moreover, employing Lemma 13, we establish the relation \(\widetilde{v}\leq v\leq\widehat{v}\). This implies: \[\lim_{t\to\infty}(\widetilde{u},\widetilde{v})\leq\lim_{t\to\infty}(u,v)\leq \lim_{t\to\infty}(\widehat{u},\widetilde{v}),\] and consequently: \[(1,0)\leq\lim_{t\to\infty}(u,v)\leq(1,0).\] By employing a squeezing argument, as \(t\) tends towards infinity, for initial data \([u_{0},v_{0}]\), we can deduce the uniform convergence of solutions for the Equation (32). This leads to the assertion that: \[(u,v)\to(1,0)\] as \(t\) approaches infinity. **Theorem 15**: _For the reaction diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\), and \(c>1\), then there exits some flat initial data such that solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\)._ _Proof._ Consider the reaction-diffusion system (35). Since \(c>1\) satisfies the parametric restriction, from Theorem 4, we can pick some initial data \([u_{1},v_{1}](u_{1}\ll v_{1}\) pointwise) such that \[(\widehat{u},\widetilde{v})\to(0,1).\] Similarly consider the reaction-diffusion system (36), from Theorem 4, for same set of initial data \([u_{1},v_{1}](u_{1}\ll v_{1}pointwise)\), we have \[(\widetilde{u},\widetilde{v})\to(0,1).\] Moreover, on using Lemma 13 we have, \[\widetilde{v}\leq v\leq\widetilde{v},\] which entails, \[\lim_{t\to\infty}(\widetilde{u},\widetilde{v})\leq\lim_{t\to\infty}(u,v)\leq\lim _{t\to\infty}(\widehat{u},\widetilde{v}),\] subsequently, \[(0,1)\leq\lim_{t\to\infty}(u,v)\leq(0,1).\] Now using a squeezing argument, in the limit that \(t\to\infty\), for initial data \([u_{1},v_{1}](u_{1}\ll v_{1}pointwise)\), we have uniform convergence of solutions of (32), i.e., \[(u,v)\to(0,1)\] as \(t\to\infty\). Fig. 14: Numerical simulation of (32) for the case of competition exclusion in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.4,b=0.2,c=2.1\) and \(m=0.4\). The initial data are chosen (a) \([u_{0},v_{0}]=[0.4,2]\) (b) \([u_{0},v_{0}]=[0.4,1.2]\). Fig. 13: Numerical simulation of (32) for the case of competition exclusion in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.3,b=0.2,c=1.1\) and \(m=0.15\). 
The initial data are chosen (a) \([u_{0},v_{0}]=[2,0.4]\) (b) \([u_{0},v_{0}]=[1.2,0.4]\). _Remark 6._ We see that via theorems 14 & 15, that attraction to boundary equilibrium is possible for certain initial data. For other (positive) initial data, depending on parametric restrictions, one could have attraction to an interior state as well. ### A case of strong competition **Theorem 16.** _For the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\) that satisfies the parametric restriction_ \[m>1-ac-{\bf k},\quad c>1,\quad u(E)<0,\quad m>\frac{2ac{\bf k}+ac-{\bf k}+1}{1+{ \bf k}},\quad{\bf k}\geq\frac{1}{a}-1,\] _for \({\bf k}=\widehat{{\bf k}},\widetilde{{\bf k}}\), and \(u\) is a cubic polynomial given by (20). Then there exists sufficiently small initial data \([u_{0}(x),v_{0}(x)]\)\((v_{0}(x)<<u_{0}(x)\) pointwise), such that the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((1,0)\) as \(t\to\infty\), while there exits also sufficiently large initial data \([u_{1}(x),v_{1}(x)]\)\((u_{1}(x)<<v_{1}(x)\) pointwise) for which the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\)._ _Proof._ Consider the reaction-diffusion system (35). Since the \(\widehat{{\bf k}}\) satisfies the parametric restriction, from Theorem 5, there exists a interior saddle equilibrium \(E_{1*}\) to the kinetic (ODE) system (35). On making use of the stable manifold theorem [20], i.e., \(\exists\;\;W^{1}_{s}(E_{1*})\in{\cal C}^{1}\) separatrix, such that for initial data \((\widetilde{u}_{0},\widetilde{v}_{0})\) chosen right to \(W^{1}_{s}(E_{1*})\) the solution \((\widetilde{u},\widetilde{v})\to(1,0)\) and for initial data chosen left to \(W^{1}_{s}(E_{1*})\), \((\widetilde{u},\widetilde{v})\to(0,1)\). Moreover, notice that \(\frac{1}{1+\widetilde{{\bf k}}u}\leq\frac{1}{1+\widetilde{{\bf k}}u}\), we have that for the kinetic (ODE) system (36), we still remain in the strong competition case, and via standard theory again, \(\exists\;\;W_{s}(E_{1**})\in{\cal C}^{1}\) separatrix, such that for initial data \((\widetilde{u}_{0},\widetilde{v}_{0})\) chosen left to \(W_{s}(E_{1**})\) the solution \((\widetilde{u},\widetilde{v})\to(0,1)\) and for initial data chosen right to \(W_{s}(E_{1**})\), \((\widetilde{u},\widetilde{v})\to(1,0)\). Here \(E_{1**}\) is the interior saddle equilibrium to the kinetic (ODE) system for (36). Now since \(\frac{1}{1+\widetilde{{\bf k}}u}\leq\frac{1}{1+\widetilde{{\bf k}}u}\), the \(v\) component of \(E_{1**}\) is more than the \(v\) component of \(E_{1*}\). Now using the \({\cal C}^{1}\) property of the separatricies \(W^{1}_{s}(E_{1*}),W_{s}(E_{1**})\), we have the existence of a wedge \(\mathbb{V}\) emanating from \(E_{1*}\), s.t within \(\mathbb{V}\) we have \(W^{1}_{s}(E_{1*})\leq W_{s}(E_{1**})\). Note via Lemma 13 we have \(\widetilde{v}\leq v\leq\widehat{v}\). Let us consider positive initial data \((u_{0},v_{0})\) chosen small enough, within \(\mathbb{V}\) s.t. \((u_{0},v_{0})<W^{1}_{s}(E_{1*})\leq W_{s}(E_{1**})\), we will have \[\Big{\{}(1,0)\Big{\}}\leq\Big{\{}(u,v)\Big{\}}\leq\Big{\{}(1,0)\Big{\}}.\] On the other hand, for sufficiently large initial data \((u_{1},v_{1})\) via an analogous construction we will have \[\Big{\{}(0,1)\Big{\}}\leq\Big{\{}(u,v)\Big{\}}\leq\Big{\{}(0,1)\Big{\}}.\] This proves the theorem. 
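The initial-data dependence asserted in Theorem 16 is easy to probe numerically for the kinetic (ODE) counterpart of the strong-competition regime. The following is a minimal sketch, assuming Python with SciPy; it uses a constant fear level \(k=4\) with the parameters quoted for Fig. 17 and the two flat initial states reported for Fig. 15, while the time horizon and tolerances are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, m, k = 0.2, 0.2, 1.1, 0.15, 4.0   # strong-competition parameters (cf. Fig. 17, k = 4)

def rhs(t, z):
    u, v = z
    du = b * u * (1.0 - u - c * v)
    dv = v * (1.0 / (1.0 + k * u) - v - a * u - m * u * v)
    return [du, dv]

for u0, v0 in [(0.01, 1.5), (1.5, 0.5)]:   # the two flat initial states used for Fig. 15
    sol = solve_ivp(rhs, (0.0, 800.0), [u0, v0], rtol=1e-9, atol=1e-11)
    print(f"(u0, v0) = ({u0}, {v0})  ->  (u, v) at t = 800: "
          f"({sol.y[0, -1]:.3f}, {sol.y[1, -1]:.3f})")
```

The two runs start on opposite sides of the separatrix through the interior saddle, so they are expected, per the discussion above, to approach different boundary states; the script simply prints the late-time values so this can be checked directly.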
### The weak competition case The Theorems 5 and 7, along with numerical simulations (Fig 18) motivate the following conjecture: Conjecture 1. For the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\) that satisfies the parametric restriction \[m>1-ac-{\bf k},\quad 0<c<1,\quad 0<{\bf k}<\frac{1}{a}-1,\] for \({\bf k}=\widehat{{\bf k}},\widetilde{{\bf k}}\). Then for any positive set of initial data \([u_{0}(x),v_{0}(x)]\), the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((u^{*},v^{*})\) as \(t\to\infty\). ### The case of multiple interiors The numerical simulations Fig 19 motivate the following conjecture: Conjecture 2. For the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\) that satisfies the parametric restriction \[m>1-ac-\mathbf{k},\quad m>\frac{2ac\mathbf{k}+ac-\mathbf{k}+1}{1+\mathbf{k}}, \quad u(E)<0,\quad c>1,\quad 0<\mathbf{k}<\frac{1}{a}-1,\] for \(\mathbf{k}=\widehat{\mathbf{k}},\widetilde{\mathbf{k}}\), and \(u\) is a cubic polynomial given by (20). Then there exists sufficiently small initial data \([u_{0}(x),v_{0}(x)]\) (\(v_{0}(x)<<u_{0}(x)\) pointwise), such that the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((u^{*},v^{*})\) as \(t\to\infty\), while there exits also sufficiently large initial data \([u_{1}(x),v_{1}(x)]\) (\(u_{1}(x)<<v_{1}(x)\) pointwise) for which the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\). Figure 16: Phase plots showing various dynamics under strong competition parametric restrictions. The parameters are chosen as \(a=0.2,b=0.2,c=1.1\) and \(m=0.15\). Figure 15: Numerical simulation of (32) for the case of strong comp in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.2,b=0.2,c=1.1\) and \(m=0.15\). The initial data are chosen (a) \([u_{0},v_{0}]=[0.01,1.5]\) (b) \([u_{0},v_{0}]=[1.5,0.5]\). ## 9 Numerical Simulations The MATLAB R2021b software was employed to conduct a PDE simulation for a reaction-diffusion system (32) modeling Allelopathic Phytoplankton. This simulation considered spatially heterogeneous fear functions, denoted as \(k(x)\). The solution was obtained using the pdepe function to solve 1-D initial boundary value problems in a single spatial dimension. The computational task was performed on an 8-core CPU within an Apple M1 Pro-based workstation, taking approximately \(5-7\) seconds to complete when applied to the spatial domain interval \([0,\pi]\), which was divided into 1000 sub-intervals. Our theoretical findings and conjectures, specific to the spatially explicit context, were substantiated through a time series analysis conducted over an extended duration. Simulations were executed with parameters conforming to the constraints established by the theorems. In the spatially explicit setting, we used the standard comparison theory to derive point-wise constraints on the fear function \(k(x)\). This Figure 17: Numerical simulation for the reaction diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\) in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.2,b=0.2,c=1.1\) and \(m=0.15\). Equilibria: \(E_{1*}=(0.029,0.882),E_{1**}=(0.022,0.888),E_{2}=(0,1),E_{1}=(1,0)\) and \(E_{0}=(0,0)\). \(W_{s}^{2}(E_{1*})\) (\(k=4\)) and \(W_{s}(E_{1**})\) (\(k=5\)) are two separtrices passing through \(E_{1*}\) and \(E_{1**}\), respectively. 
The \(C^{1}\) property of the separatrices, \(W_{s}^{1}(E_{1*})\) and \(W_{s}(E_{1**})\), shows a wedge \(\mathbb{V}\) emanating from \(E_{1*}\), such that within \(\mathbb{V}\) we have \(W_{s}^{1}(E_{1*})\leq W_{s}(E_{1**})\). The \(u\)-nullcline is in red for \(k=4\) and \(k=5\). For \(k=4\), the \(v\)-nullcline is in orange. For \(k=5\), the \(v\)-nullcline is in magenta.

Figure 18: Numerical simulation of (32) for the case of weak competition in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.2,b=0.2,c=0.9\) and \(m=1.6\). The initial data are chosen (a) \([u_{0},v_{0}]=[4,4]\) (b) \([u_{0},v_{0}]=[0.1,0.1]\).

analysis observed competitive exclusion, strong competition, and multiple equilibria-type dynamics within the reaction-diffusion system featuring a spatially heterogeneous fear function. The outcomes of Theorems 14, 15, and 16 and Conjectures 1 and 2 provided clear evidence of these phenomena. To further validate our numerical results, we utilized Figures 13, 14, 15, 18, and 19. Theoretical results were rigorously validated through numerical experiments employing various heterogeneous fear functions. Each figure caption includes details regarding the parameters used for these simulations and their relevance to specific theorems. It is important to note that all parameter choices remained within the range \([0,5]\), consistent with the model and its comparison to the logistic equation, indicating that any species' population cannot exceed unity.

## 10 Summary and Conclusion

In this paper, we are the first to propose an allelopathic phytoplankton competition model influenced by the fear effect, where the parameters \(k\) and \(m\) denote the fear effect and the toxic release rate, respectively. Our study shows that \(k\) and \(m\) perturb the classical Lotka-Volterra competition model and cause rich dynamics. Meanwhile, \(k\) and \(m\) can significantly impact species density biologically. First, we give the conditions for persistence for system (6). When the persistence condition is satisfied, the two species will coexist. System (6) has three boundary equilibria. To study the positive equilibria, we construct a cubic function (20). By analyzing the graph of this function as well as the graph of its derivative, we find that there are at most two positive equilibria of system (6) and give the existence conditions for the corresponding cases. The next step is to analyze the stability of the equilibria. We investigate the Jacobian matrix corresponding to the boundary equilibria \(E_{0}\), \(E_{1}\), and \(E_{2}\), respectively. By analyzing the traces and the determinants of these matrices, it is found that \(E_{0}\) is always a source. The fear effect \(k\) and the interspecific competition rate \(c\) to which the toxic species is subjected affect \(E_{1}\) and \(E_{2}\), respectively. Furthermore, when the toxin release rate \(m\) reaches a certain threshold, either \(E_{1}\) or \(E_{2}\) will turn into a degenerate equilibrium point. For the positive equilibria, we used (23) to study the relationship between the determinant of the Jacobian matrix and the slope of the tangent line, and we further obtain that \(E_{1\star}\) is a saddle point and \(E_{2\star}\) is a stable node. At the point \(E_{3\star}\), we obtained that the determinant equals 0, so we translated this point to the origin and performed a Taylor expansion. Finally, we used Theorem 3.2 in Chapter 2 to prove that \(E_{3\star}\) is a saddle-node. In particular, we prove that system (6) has no closed orbit.
Combined with the persistence condition, the locally stable positive equilibria \(E_{2\star}\) is also globally stable in system (6). In addition, by varying the fear effect \(k\) or the interspecific competition rate \(c\) to which toxic species is subjected, system (6) will experience transcritical bifurcation around \(E_{1}\) or \(E_{2}\). If the toxic release rate Figure 19: Numerical simulation of (32) for the case of bi-stability in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.3,b=0.2,c=1.1\) and \(m=0.5\). The initial data are chosen (a) \([u_{0},v_{0}]=[0.05,2]\) (b) \([u_{0},v_{0}]=[2,2]\). \(m=1-a-k\), the transcritical bifurcation experienced around \(E_{2}\) will turn into a pitchfork bifurcation. When the toxic release rate \(m\) is used directly as a bifurcation parameter, it results in a saddle-node bifurcation of system (6) in the first quadrant. In essence, these results are seen in the spatially explicit case as well. For large fear coefficient extinction (for certain initial data) is seen for the non-toxic fearful species, see Theorem 14. Depending on the interplay of other parameters, one sees a strong competition type setting, see Theorem 16. Future work will explore a spatially heterogeneous toxic release rate \(m\) (perhaps even one that causes degeneracy), as well as different forms of this rate, including the non-smooth case (Parshad, 2021; Antwi-Fordjour, 2020). We will also explore global stability of the interior equilibrium in the PDE case, as well as the existence of non-constant steady states. To summarize all of the above analysis, the two species can coexist only if the fear effect \(k\) is within the interval \((0,k^{*})\). As for the toxic release rate \(m\), it does not directly change the survival of the non-toxic species but only affects the species' density. We can conclude that in the allelopathic phytoplankton competition model, the real cause of the extinction of non-toxic species is the fear of toxic species compared to toxins. This article has some guidance for the conservation of species diversity. 
## Appendix A \[g_{11}= \frac{\left(E^{2}m+1\right)Q_{1}b}{Q_{2}^{2}},\quad g_{02}=\frac{ \left(2Ek+1\right)\left(-1+E\right)Q_{3}}{Q_{2}^{2}\left(Ek+1\right)^{4} \left(E^{2}m+1\right)^{2}},\quad f_{20}=-\frac{Q_{4}}{Q_{2}^{2}\left(2Ek+1 \right)^{2}},\] \[f_{11}= -\frac{bE\left(Ek+1\right)^{2}\left(E^{2}m+1\right)Q_{5}}{Q_{2}^ {2}\left(2Ek+1\right)},\quad f_{02}=-\frac{Q_{6}}{Q_{2}^{2}\left(Ek+1\right)^{ 2}\left(E^{2}m+1\right)},\] \[Q_{1}= 4E^{6}bk^{3}m-6E^{5}bk^{3}m+7E^{5}b^{2}m-12E^{4}bk^{2}m+4E^{4} bkm+4E^{4}k^{2}m-2E^{3}bk^{3}-3E^{3}bk^{2}-8E^{3}bkm\] \[-4E^{3}k^{2}m+E^{3}bm+4E^{3}k^{2}+4E^{3}km-2E^{2}bk^{2}-4E^{2}bk-2 E^{2}bm-4E^{2}k^{2}-4E^{2}km+4E^{2}k\] \[+E^{2}m-bE-4Ek-Em+E-1,\] \[Q_{2}= E^{5}bk^{2}m+2E^{4}bkm+E^{3}bk^{2}+E^{3}bm-2E^{3}km+2E^{2}bk+2E^{2 }km-2E^{2}k-E^{2}m+bE\] \[+2Ek+Em-E+1,\] \[Q_{3}= 3E^{11}b^{2}k^{5}m^{3}+2E^{10}b^{2}k^{5}m^{2}+12E^{10}b^{2}k^{4} m^{3}+7E^{9}b^{2}k^{5}m^{2}+9E^{9}b^{2}k^{4}m^{2}+19E^{9}b^{2}k^{3}m^{3}-4E^{9} bk^{4}m^{3}\] \[+4E^{8}b^{2}k^{5}m+27E^{8}b^{2}k^{4}m^{2}+16E^{8}b^{2}k^{3}m^{2}+ 15E^{8}b^{2}k^{2}m^{3}-12E^{8}bk^{4}m^{2}-12E^{8}bk^{3}m^{3}+5E^{7}b^{2}k^{5}m\] \[+18E^{7}b^{2}k^{4}m+41E^{7}b^{2}k^{3}m^{2}+14E^{7}b^{2}k^{2}m^{2} +6E^{7}b^{2}km^{3}-8E^{7}bk^{4}m-36E^{7}bk^{3}m^{2}-13E^{7}bk^{2}m^{3}\] \[+8E^{7}k^{3}m^{3}+2E^{6}b^{2}k^{5}+18E^{6}b^{2}k^{4}m+32E^{6}b^{2 }k^{3}m+31E^{6}b^{2}k^{2}m^{2}-8E^{6}b\,k^{4}m-8E^{6}k^{3}m^{3}+E^{5}b^{2}k^{5}\] \[+6E^{6}b^{2}k^{2}m^{2}+E^{6}b^{2}m^{3}-24E^{6}b\,k^{3}m-39E^{6}b \,k^{2}m^{2}-6E^{6}b\,k^{3}m^{3}+24E^{6}k^{3}m^{2}+12E^{6}k^{2}m^{3}+9E^{5}b^{2 }k^{4}\] \[+25E^{5}b^{2}k^{3}m+4E^{5}b^{4}m+28E^{5}b^{2}k^{2}m+12E^{5}b^{2}k \,m^{2}-8E^{5}b\,k^{4}-24E^{5}b\,k^{3}m-24E^{5}k^{3}m^{2}\] \[-12E^{5}k^{2}m^{3}+3E^{4}b^{2}k^{4}+E^{5}b^{2}m^{2}-26E^{5}b\,k^{ 2}m-18E^{5}bk\,m^{2}-E^{5}b\,m^{3}+24E^{5}k^{3}m+36E^{5}k^{2}m^{2}\] \[+6E^{5}k\,m^{3}+16E^{4}b^{2}k^{3}+17E^{4}b^{2}k^{2}m+4E^{4}b\,k^{4 }+12E^{4}b\,k^{3}m+12E^{4}b^{2}km+2E^{4}b^{2}m^{2}-24E^{4}b\,k^{3}\] \[-26E^{4}b\,k^{2}m-24E^{4}k^{3}m-36E^{4}k^{2}m^{2}-6E^{4}b\,m^{3}+3 E^{3}b^{2}k^{3}-12E^{4}bkm-3E^{4}b\,m^{2}+8E^{4}k^{3}\] \[+36E^{4}k^{2}m+18E^{4}k\,m^{2}+E^{4}m^{3}+14E^{3}b^{2}k^{2}+6E^{3} b^{2}km+12E^{3}b^{3}+13E^{3}b^{2}m+2E^{3}b^{2}m\] \[-26E^{3}b\,k^{2}-12E^{3}bkm-8E^{3}k^{3}-36E^{3}k^{2}m-18E^{3}k\,m^{ 2}-E^{3}m^{3}+k^{2}b^{2}E^{2}-2E^{3}bm+12E^{3}k^{2}\] \[+18E^{3}km+3E^{3}m^{2}+6E^{2}b^{2}k+E^{2}b^{2}m+13E^{2}b\,k^{2}+6E^ {2}bkm-12E^{2}bk-2E^{2}bm-12E^{2}k^{2}\] \[-18E^{2}km-3E^{2}m^{2}+6E^{2}k+3E^{2}m+E\,b^{2}+6Ebk+Ebm-2bE-6Ek-3 Em+E+b-1,\] \[Q_{4}= \left(E^{2}m+1\right)^{3}\left(Ek+1\right)^{5}\left(-1+E\right) \left(3E^{2}k^{2}m+3Ekm+k^{2}+m\right)E^{2}b^{2},\] \[Q_{5}= E^{6}b^{2}k^{4}m^{2}+4E^{8}b^{2}k^{3}m^{2}+2E^{7}b^{2}k^{4}m+6E^{7}b^ {2}k^{2}m^{2}+8E^{6}b^{2}k^{3}m-2E^{6}b\,k^{3}m^{2}+4E^{6}b^{2}km^{2}\] \[-4E^{6}b\,k^{3}m-3E^{6}b\,k^{2}m^{2}+E^{5}b^{2}k^{4}+12E^{5}b^{2}k^ {2}m+4E^{5}b\,k^{3}m-2E^{5}b\,k^{2}m^{2}+E^{5}b^{2}m^{2}-10E^{5}b\,k^{2}m\] \[-4E^{5}bk\,m^{2}+8E^{5}k^{2}m^{2}+4E^{4}b^{2}k^{3}-4E^{4}b\,k^{3}m+8E^{ 4}b^{2}km-4E^{4}b\,k^{3}+4E^{4}b\,k^{2}m-12E^{4}k^{2}m^{2}\] \[-8E^{4}bkm-E^{4}b\,m^{2}+12E^{4}k^{2}m+8E^{4}k\,m^{2}+6E^{3}b^{2}k^{ 2}+4E^{3}b\,k^{3}-4E^{3}b\,k^{2}m+4E^{3}k^{2}m^{2}+2E^{3}b^{2}m\] \[-10E^{3}b\,k^{2}-16E^{3}k^{2}m-12E^{3}k\,m^{2}-2E^{2}b\,k^{3}-2E^{ 3}bm+4E^{3}k^{2}+12E^{3}km+2E^{3}m^{2}+4E^{2}b^{2}k\] \[+7E^{2}b\,k^{2}+4E^{2}k^{2}m+4E^{2}k\,m^{2}-8E^{2}bk-4E^{2}k^{2}-16E ^{2}km-3E^{2}m^{2}-2Eb\,k^{2}+4E^{2}k+3E^{2}m\] \[+b^{2}E+4Ebk+4Ekm+E\,m^{2}-2Eb-4Ek-4Em+E+b+m-1,\] \[Q6= 
E^{14}b^{3}k^{m}3+6E^{13}b^{3}k^{5}m^{3}+3E^{12}b^{3}k^{6}m^{2}+15 E^{12}b^{3}k^{4}m^{3}-E^{12}b^{2}k^{5}m^{3}+18E^{11}b^{3}k^{5}m^{2}+E^{11}b^{2}k ^{5}m^{3}\] \[+20E^{11}b^{3}k^{3}m^{3}-2E^{11}b^{2}k^{5}m^{2}-6E^{11}b^{2}k^{4}m^ {3}+3E^{10}b^{3}k^{6}m+45E^{10}b^{3}k^{4}m^{2}+E^{10}b^{2}k^{5}m^{2}\] \[+6E^{10}b^{2}k^{4}m^{3}+15E^{10}b^{3}k^{2}m^{3}-9E^{10}b^{2}k^{4}m^ {2}-13E^{10}b^{2}k^{3}m^{3}+18E^{9}b^{3}k^{5}m+E^{9}b^{2}k^{5}m^{2}+60E^{9}b^{3} k^{3}m^{2}\] \[-4E^{9}b^{2}k^{5}m+13E^{9}b^{2}k^{3}m^{3}-4E^{9}b\,k^{4}m^{3}+E^{8 }b^{3}k^{6}+6E^{9}b^{3}k^{3}m^{3}-16E^{9}b^{2}k^{3}m^{2}-13E^{9}b^{2}k^{2}m^{3}\] \[-4E^{9}b\,k^{4}m^{2}+45E^{8}b^{3}k^{4}m+5E^{8}b^{2}k^{5}m+9E^{8}b^ {2}k^{4}m^{2}+4E^{8}b\,k^{4}m^{3}+45E^{8}b^{3}k^{2}m^{2}-18E^{8}b^{2}k^{4}m\] \[-7E^{8}b^{2}k^{3}m^{2}+13E^{8}b^{2}k^{2}m^{3}-12E^{8}b\,k^{3}m^{3} +6E^{7}b^{3}k^{5}-E^{7}b^{2}k^{5}m+E^{8}b^{3}m^{3}-14E^{8}b^{2}k^{2}m^{2}\] \[-6E^{8}b^{2}k^{3}m^{4}-4E^{8}b\,k^{4}m-12E^{8}b\,k^{3}m^{2}+8E^{8}b ^{3}m^{3}+60E^{7}b^{3}k^{3}m-22E^{7}b^{2}k^{5}+18E^{7}b^{2}k^{4}m\] \[+23E^{7}b^{2}k^{3}m^{2}+12E^{7}b\,k^{3}m^{3}+18E^{7}b^{3}k^{m}-23E ^{7}b^{2}k^{3}m-11E^{7}b^{2}k^{2}m^{2}+6E^{7}b^{2}k\,m^{3}-13E^{7}b\,k^{2}m^{3}\] \[-16E^{7}k^{3}m^{3}+15E^{6}b^{3}k^{4}+3E^{6}b^{2}k^{5}+4E^{6}b^{4}m ^{2}-6E^{7}b^{2}k^{2}m^{2}-E^{7}b^{2}m^{3}-12E^{7}b\,k^{3}m-13E^{7}b\,k^{2}m^{2}\] \[+24E^{7}k^{3}m^{2}+12E^{7}k^{2}m^{3}+45E^{6}b^{3}k^{2}m-9E^{6}b^{2 }k^{4}+25E^{6}b^{2}k^{3}m+25E^{6}b^{2}k^{2}m^{2}+13E^{6}b\,k^{2}m^{3}\] \[+8E^{6}k^{3}m^{3}-E^{5}b^{2}k^{5}+3E^{6}b^{3}m^{2}-28E^{6}b^{2}k^{ 2}m-6E^{6}b^{2}k^{2}m^{2}+E^{6}b^{2}m^{3}-4E^{6}b\,k^{4}-6E^{6}b\,k^{3}m^{3}\] \[-48E^{6}k^{3}m^{2}-24E^{6}k^{2}m^{3}+20E^{5}b^{3}k^{3}+12E^{5}b^{2 }k^{4}+7E^{5}b^{2}k^{3}m+4E^{5}b\,k^{4}m+12E^{5}b\,k^{3}m^{2}-E^{6}b^{2}m^{2}\] \[-13E^{6}b\,k^{2}m-6E^{6}b\,m^{2}+24E^{6}k^{3}m+36E^{6}k^{2}m^{2}+6 E^{6}k\,m^{3}+18E^{5}b^{3}km-16E^{5}b^{2}k^{3}+17E^{5}b^{2}k^{2}m\] \[+12E^{5}b^{2}k^{2}m^{2}+4E^{5}b\,k^{4}+6E^{5}b\,km^{3}+24E^{5}k^{3} m^{2}+12E^{5}k^{2}m^{3}-3E^{4}b^{2}k^{4}-12E^{5}b^{2}km-E^{5}b^{2}m^{2}\] \[-12E^{5}b\,k^{3}-E^{5}b\,m^{3}-48E^{5}k^{3}m-72E^{5}k^{2}m^{2}-12E ^{5}k\,m^{3}+15E^{4}b^{3}k^{2}+19E^{4}b^{2}k^{3}+11E^{4}b^{2}k^{2}m\] \[+12E^{4}b\,k^{3}m+13E^{4}b\,k^{2}m^{2}-6E^{5}bkm-E^{5}b\,m^{2}+8E^{ 5}k^{3}+36E^{5}k^{2}m+18E^{5}k\,m^{2}+E^{5}m^{3}+3E^{4}b^{3}m\] \[-14E^{4}b^{2}k^{2}+6E^{4}b^{2}km+2E^{4}b^{2}m^{2}+12E^{4}b\,k^{3}+E ^{4}b\,m^{3}+24E^{4}k^{3}m+36E^{4}k^{2}m^{2}+6E^{4}k\,m^{3}\] \[-3E^{3}b^{2}k^{3}-2E^{4}b^{2}m-13E^{4}b\,k^{2}-16E^{4}k^{3}-72E^{ 4}k^{2}m-36E^{4}k\,m^{2}-2E^{4}m^{3}+6E^{3}b^{3}k+15E^{3}b^{2}k^{2}\] \[+6E^{3}b^{2}km+13E^{3}b\,k^{2}m+6E^{3}bk\,m^{2}-E^{4}bm+12E^{4}k^{ 2}+18E^{4}km+3E^{4}m^{2}-6E^{3}b^{2}k+E^{3}b^{2}m\] \[+13E^{3}b\,k^{2}+8E^{3}k^{3}+36E^{3}k^{2}m+18E^{3}k\,m^{2}+E^{3}m^{ 3}-E^{2}b^{2}k^{2}-6E^{3}bk-24E^{3}k^{2}-36E^{3}km-6E^{3}m^{2}\] \[+E^{2}b^{3}+6E^{2}b^{2}k+E^{2}b^{2}m+6E^{2}bkm+E^{2}b\,m^{2}+6E^{3}k+3E ^{3}m-E^{2}b^{2}+6E^{2}bk+12E^{2}k^{2}+18E^{2}km\]
2309.16803
Local boundedness of minimizers under unbalanced Orlicz growth conditions
Local minimizers of integral functionals of the calculus of variations are analyzed under growth conditions dictated by different lower and upper bounds for the integrand. Growths of non-necessarily power type are allowed. The local boundedness of the relevant minimizers is established under a suitable balance between the lower and the upper bounds. Classical minimizers, as well as quasi-minimizers are included in our discussion. Functionals subject to so-called $p,q$-growth conditions are embraced as special cases and the corresponding sharp results available in the literature are recovered.
Andrea Cianchi, Mathias Schäffner
2023-09-28T19:12:20Z
http://arxiv.org/abs/2309.16803v2
# Local boundedness of minimizers under unbalanced Orlicz growth conditions ###### Abstract. Local minimizers of integral functionals of the calculus of variations are analyzed under growth conditions dictated by different lower and upper bounds for the integrand. Growths of non-necessarily power type are allowed. The local boundedness of the relevant minimizers is established under a suitable balance between the lower and the upper bounds. Classical minimizers, as well as quasi-minimizers are included in our discussion. Functionals subject to so-called \(p,q\)-growth conditions are embraced as special cases and the corresponding sharp results available in the literature are recovered. Key words and phrases:Local minimizers, local boundedness, unbalanced Orlicz growth, Orlicz-Sobolev inequalities 2000 Mathematics Subject Classification: 49N60 ## 1. Introduction We are concerned with the local boundedness of local minimizers, or quasi-minimizers, of integral functionals of the form \[\mathcal{F}(u,\Omega)=\int_{\Omega}f(x,u,\nabla u)\,dx, \tag{1.1}\] where \(\Omega\) is an open set in \(\mathbb{R}^{n}\), with \(n\geq 2\), and \(f:\Omega\times\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}\) is a Caratheodory function subject to proper structure and growth conditions. Besides its own interest, local boundedness is needed to ensure certain higher regularity properties of minimizers. Interestingly, some regularity results for minimizers admit variants that require weaker hypotheses under the a priori assumption of their local boundedness. Local boundedness of local minimizers of the functional \(\mathcal{F}\) is classically guaranteed if \(f(x,t,\xi)\) is subject to lower and upper bounds in terms of positive multiples of \(|\xi|^{p}\), for some \(p\geq 1\). This result can be traced back to the work of Ladyzhenskaya and Ural'ceva [LaUr], which, in turn, hinges upon methods introduced by De Giorgi in his regularity theory for linear elliptic equations with merely measurable coefficients. The study of functionals built on integrands \(f(x,t,\xi)\) bounded from below and above by different powers \(|\xi|^{p}\) and \(|\xi|^{q}\), called with \(p,q\)-growth in the literature, was initiated some fifty years ago. A regularity theory for minimizers under assumptions of this kind calls for additional structure conditions on \(f\), including convexity in the gradient variable. As shown in various papers starting from the nineties of the last century, local minimizers of functional with \(p,q\)-growth are locally bounded under diverse structure conditions, provided that the difference between \(q\) and \(p\) is not too large, depending on the dimension \(n\). This issue was addressed in [MaPa, MoNa] and, more recently, in [CMM1, CMM2]. Related questions are considered in [BMS, FuSb, Str] in connection with anisotropic growth conditions. By contrast, counterexamples show that unbounded minimizers may exist if the exponents \(p\) and \(q\) are too far apart [Gia, Ho, Ma1, Ma2]. The gap between the assumptions on \(p\) and \(q\) in these examples and in the regularity result has recently been filled in the paper [HiSch], where the local boundedness of minimizers is established for the full range of exponents \(p\) and \(q\) excluded from the relevant counterexamples. An extension of the techniques from [HiSch] has recently been applied in [DeGr] to extend the boundedness result to obstacle problems. 
In the present paper, the conventional realm of polynomial growths is abandoned and the question of local boundedness of local minimizers, and quasi-minimizers, is addressed under bounds on \(f\) of Orlicz type. More specifically, the growth of \(f\) is assumed to be governed by Young functions, namely nonnegative convex functions vanishing at \(0\). The local boundedness of minimizers in the case when lower and upper bounds on \(f\) are imposed in terms of the same Young function follows via a result from [Ci3], which also deals with anisotropic Orlicz growths. The same problem for solutions to elliptic equations is treated in [Ko]. Our focus here is instead on the situation when different Young functions \(A(|\xi|)\) and \(B(|\xi|)\) bound \(f(x,t,\xi)\) from below and above. Functionals with \(p,q\)-growth are included as a special instance. A sharp balance condition between the Young functions \(A\) and \(B\) is exhibited for any local minimizer of the functional \(\mathcal{F}\) to be locally bounded. Bounds on \(f(x,t,\xi)\) depending on a function \(E(|t|)\) are also included in our discussion. Let us mention that results in the same spirit can be found in the paper [DMP], where, however, more restrictive non-sharp assumptions are imposed. The global boundedness of global minimizers of functionals and of solutions to boundary value problems for elliptic equations subject to Orlicz growth conditions has also been examined in the literature and is the subject e.g. of [Al, BCM, Ci2, Ta1, Ta2]. Note that, unlike those concerning the local boundedness of local minimizers and local solutions to elliptic equations, global boundedness results in the presence of prescribed boundary conditions just require lower bounds in the gradient variable for integrands of functionals or equation coefficients. Therefore, the question of imposing different lower and upper bounds does not arise with this regard. Beyond boundedness, several further aspects of the regularity theory of solutions to variational problems and associated Euler equations, under unbalanced lower and upper bounds, have been investigated. The early papers [Ma2, Ma3] have been followed by various contributions on this topic, a very partial list of which includes [BCM, BeMi, BeSch2, BeSch3, BoBr, BCSV, BGS, ByOh, CKP1, CKP2, CoMi, DeMi1, DeMi3, ELP, ELM, HaOk1, Ma4]. A survey of investigations around this area can be found in [MiRa]. In particular, results from [BCM, BoBr, CKP1, DeMi2, HaOk2] demonstrate the critical role of local boundedness for higher regularity of local minimizers, which we alluded to above. ## 2. Main result We begin by enucleating a basic case of our result for integrands in (1.1) which do not depend on \(u\). Namely, we consider functionals of the form \[\mathcal{F}(u,\Omega)=\int_{\Omega}f(x,\nabla u)\,dx, \tag{2.1}\] where \[f:\Omega\times\mathbb{R}^{n}\to\mathbb{R}.\] A standard structure assumption to be fulfilled by \(f\) is that \[\text{the function}\quad\mathbb{R}^{n}\ni\xi\mapsto f(x,\xi)\quad\text{ is convex for a.e. }x\in\Omega. \tag{2.2}\] Next, an \(A,B\)-growth condition on \(f\) is imposed, in the sense that \[A(|\xi|)-L\leq f(x,\xi)\leq B(|\xi|)+L\quad\text{for a.e. }x\in\Omega\text{ and every }\xi\in\mathbb{R}^{n}, \tag{2.3}\] where \(A\) is a Young function and \(B\) is a Young function satisfying the \(\Delta_{2}\)-condition near infinity. By contrast, the latter condition is not required on the lower bound \(A\). 
The function \(A\) dictates the natural functional framework for the trial functions \(u\) in the minimization problem for \(\mathcal{F}\). It is provided by the Orlicz-Sobolev class \(V^{1}_{\text{loc}}K^{A}(\Omega)\) of those weakly differentiable functions on \(\Omega\) such that \[\int_{\Omega^{\prime}}A(|\nabla u|)\,dx<\infty\] for every open set \(\Omega^{\prime}\Subset\Omega\). Besides standard local minimizers, we can as well deal with so-called quasi-minimizers, via the very same approach. A function \(u\in V^{1}_{\text{loc}}K^{A}(\Omega)\) is said to be a local quasi-minimizer of \(\mathcal{F}\) if \[\mathcal{F}(u,\Omega^{\prime})<\infty\] for every open set \(\Omega^{\prime}\Subset\Omega\), and there exists a constant \(Q\geq 1\) such that \[\mathcal{F}(u,\operatorname{supp}\varphi)\leq Q\mathcal{F}(u+\varphi, \operatorname{supp}\varphi) \tag{2.4}\] for every \(\varphi\in V^{1}_{\text{loc}}K^{A}(\Omega)\) such that \(\operatorname{supp}\varphi\Subset\Omega\). Plainly, \(u\) is a standard local minimizer of \(\mathcal{F}\) provided that inequality (2.4) holds with \(Q=1\). Throughout the paper, we shall assume that \[\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n-1}}dt=\infty. \tag{2.5}\] Indeed, if \(A\) grows so fast near infinity that \[\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n-1}}dt<\infty, \tag{2.6}\] then every function \(u\in V^{1}_{\rm loc}K^{A}(\Omega)\) is automatically bounded, irrespective of whether it minimizes \(\mathcal{F}\) or not. This is due to the inclusion \[V^{1}_{\rm loc}K^{A}(\Omega)\subset L^{\infty}_{\rm loc}(\Omega), \tag{2.7}\] which holds as a consequence of a Sobolev-Poincare inequality in Orlicz spaces. Heuristically speaking, our result ensures that any local quasi-minimizer of \(\mathcal{F}\) as in (2.1) is locally bounded, provided that the function \(B\) does not grow too quickly near infinity compared to \(A\). The maximal admissible growth of \(B\) is described through the sharp Sobolev conjugate \(A_{n-1}\) of \(A\) in dimension \(n-1\), whose definition is recalled in the next section. More precisely, if \[n\geq 3\quad\text{and}\quad\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n- 2}}dt=\infty, \tag{2.8}\] then \(B\) has to be dominated by \(A_{n-1}\) near infinity, in the sense that \[B(t)\leq A_{n-1}(Lt)\qquad\text{for }t\geq t_{0}, \tag{2.9}\] for some positive constants \(L\) and \(t_{0}\). On the other hand, in the regime complementary to (2.8), namely in either of the following cases \[\begin{cases}n=2\\ n\geq 3\quad\text{and}\quad\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n- 2}}dt<\infty,\end{cases} \tag{2.10}\] no additional hypothesis besides the \(\Delta_{2}\)-condition near infinity is needed on \(B\). Notice that, by an Orlicz-Poincare-Sobolev inequality on \(\mathbb{S}^{n-1}\), both options in (2.10) entail that \(V^{1}_{\rm loc}K^{A}(\mathbb{S}^{n-1})\subset L^{\infty}_{\rm loc}(\mathbb{S}^ {n-1})\). Altogether, our boundedness result for functionals of the form (2.1) reads as follows. **Theorem 2.1**.: _Let \(f:\Omega\times\mathbb{R}^{n}\to\mathbb{R}\) be a Caratheodory function satisfying the structure assumption (2.2). Suppose that the growth condition (2.3) holds for some Young functions \(A\) and \(B\), such that \(B\in\Delta_{2}\) near infinity. Assume that either condition (2.10) is in force, or condition (2.8) is in force and \(B\) fulfills estimate (2.9). 
Then any local quasi-minimizer of the functional \(\mathcal{F}\) in (2.1) is locally bounded in \(\Omega\)._ Assume now that \(\mathcal{F}\) has the general form (1.1), and hence \[f:\Omega\times\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}.\] Plain convexity in the gradient variable is no longer sufficient, as a structure assumption, for a local boundedness result to hold. One admissible strengthening consists of coupling it with a kind of almost monotonicity condition in the \(u\) variable. Precisely, one can suppose that \[\begin{cases}\text{the function}\quad\mathbb{R}^{n}\ni\xi\mapsto f(x,t,\xi)& \text{is convex for a.e. }x\in\Omega\text{ and every }t\in\mathbb{R},\\ f(x,t,\xi)\leq Lf(x,s,\xi)+E(|s|)+L&\text{if }|t|\leq|s|,\text{ for a.e. }x\in\Omega\text{ and every }\xi\in\mathbb{R}^{n},\end{cases} \tag{2.11}\] where \(L\) is a positive constant and \(E:[0,\infty)\to[0,\infty)\) is a non-decreasing function fulfilling the \(\Delta_{2}\)-condition near infinity. An alternate condition which still works is the joint convexity of \(f\) in the couple \((t,\xi)\), in the sense that \[\text{the function}\quad\mathbb{R}\times\mathbb{R}^{n}\ni(t,\xi)\mapsto f(x,t, \xi)\quad\text{ is convex for a.e. }x\in\Omega. \tag{2.12}\] The growth of \(f\) is governed by the following bounds: \[A(|\xi|)-E(|t|)-L\leq f(x,t,\xi)\leq B(|\xi|)+E(|t|)+L\quad\text{for a.e. }x\in\Omega\text{ and every }t\in\mathbb{R}\text{ and }\xi\in\mathbb{R}^{n}, \tag{2.13}\] where \(A\) is a Young function, \(B\) is a Young function satisfying the \(\Delta_{2}\)-condition near infinity, and \(E\) is the same function as in (2.11), if this assumption is in force. The appropriate function space for trial functions in the definition of quasi-minimizer of the functional \(\mathcal{F}\) is still \(V^{1}_{\rm loc}K^{A}(\Omega)\), and the definition given in the special case (2.1) carries over to the present general framework. The bound to be imposed on the function \(B\) is the same as in the \(u\)-free case described above. On the other hand, the admissible growth of the function \(E\) is dictated by the Sobolev conjugate \(A_{n}\) of \(A\) in dimension \(n\). Specifically, we require that \[E(t)\leq A_{n}(Lt)\qquad\text{for }t\geq t_{0}, \tag{2.14}\] for some positive constants \(L\) and \(t_{0}\). Our comprehensive result then takes the following form. **Theorem 2.2**.: _Let \(f:\Omega\times\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}\) be a Caratheodory function satisfying either the structure assumption (2.11) or (2.12). Suppose that the growth condition (2.13) holds for some Young functions \(A\) and \(B\) and a non-decreasing function \(E\), such that \(B,E\in\Delta_{2}\) near infinity. Assume that either condition (2.10) is in force, or condition (2.8) is in force and \(B\) fulfills estimate (2.9). Moreover, assume that \(E\) fulfills estimate (2.14). Then any local quasi-minimizer of the functional \(\mathcal{F}\) in (1.1) is locally bounded in \(\Omega\)._ Our approach to Theorems 2.1 and 2.2 follows along the lines of De Giorgi's regularity result for linear equations with merely measurable coefficients, on which, together with Moser's iteration technique, all available proofs of the local boundedness of local solutions to variational problems or elliptic equations are virtually patterned. The main novelties in the present framework amount to the use of sharp Poincare and Sobolev inequalities in Orlicz spaces and to an optimized form of the Caccioppoli-type inequality. 
The lack of homogeneity of non-power type Young functions results in Orlicz-Sobolev inequalities whose integral form necessarily involves a gradient term on both sides. This creates new difficulties, that also appear, again because of the non-homogeneity of Young functions, in deriving the optimized Caccioppoli inequality. The latter requires an ad hoc process in the choice of trial functions in the definition of quasi-minimizers. The advantage of the use of the relevant Caccioppoli inequality is that its proof only calls into play Sobolev-type inequalities on \((n-1)\)-dimensional spheres, instead of \(n\)-dimensional balls. This allows for growths of the function \(B\) dictated by the \((n-1)\)-dimensional Sobolev conjugate of \(A\). By contrast, a more standard choice of trial functions would only permit slower growths of \(B\), not exceeding the \(n\)-dimensional Sobolev conjugate of \(A\). Orlicz-Sobolev and Poincare inequalities in dimension \(n\) just come into play in the proof of Theorem 2.2, when estimating terms depending on the variable \(u\). The trial function optimization strategy is reminiscent of that used in diverse settings in recent years. The version exploited in [11] - a variant of [10] - to deal with functionals subject to \(p,q\)-growth conditions is sensitive to the particular growth of the integrand. The conditions imposed in the situation under consideration here are so general to force us to resort to a more robust optimization argument, implemented in Lemma 5.1, Section 5. The latter is inspired to constructions employed in [12] in the context of div-curl lemmas, and in [13] in the proof of absence of Lavrientiev-phenomena in vector-valued convex minimization problems. We conclude this section by illustrating Theorems 2.1 and 2.2 with applications to a couple of special instances. The former corresponds to functionals with \(p,q\)-growth. It not only recovers the available results but also augments and extends them in some respects. The latter concerns functionals with "power-times-logarithmic" growths, and provides us with an example associated with genuinely non-homogenous Young functions. **Example 2.1**.: In the standard case when \[A(t)=t^{p},\] with \(1\leq p\leq n\), Theorem 2.1 recovers a result of [11]. Indeed, if \(n\geq 3\) and \(1\leq p<n-1\), we have that \(A_{n-1}(t)\approx t^{\frac{(n-1)p}{(n-1)-p}}\), and assumption (2.9) is equivalent to \[B(t)\lesssim t^{\frac{(n-1)p}{(n-1)-p}}\quad\text{near infinity}. \tag{2.15}\] Here, the relations \(\lesssim\) and \(\approx\) mean domination and equivalence, respectively, in the sense of Young functions. If \(p=n-1\), then \(A_{n-1}(t)\approx e^{t^{\frac{n-1}{n-2}}}\) near infinity, whereas if \(p>n-1\), then the second alternative condition (2.10) is satisfied. Hence, if either \(n=2\) or \(n\geq 3\) and \(p\geq n-1\), then any Young function \(B\in\Delta_{2}\) near infinity is admissible. Condition (2.15) is sharp, since the functionals with \(p,q\)-growth exhibited in [14, 15, 16] admit unbounded local minimizers if assumption (2.15) is dropped. Let us point out that the result deduced from Theorem 2.1 also enhances that of [11], where the function \(\xi\mapsto f(x,\xi)\) is assumed to fulfil a variant of the \(\Delta_{2}\)-condition, which is not imposed here. On the other hand, Theorem 2.2 extends the result of [11], where integrands only depending on \(x\) and \(\nabla u\) are considered. The conclusion of Theorem 2.2 hold under the same bound (2.15) on the function \(B\). 
Moreover, \(A_{n}(t)\approx t^{\frac{np}{n-p}}\) if \(1\leq p<n\) and \(A_{n}(t)\approx e^{t^{\frac{n}{n-1}}}\) near infinity if \(p=n\). Hence, if \(1\leq p<n\), then assumption (2.14) reads: \[E(t)\lesssim t^{\frac{np}{n-p}}\quad\text{near infinity}.\] If \(p=n\), then any non-decreasing function \(E\) satisfying the \(\Delta_{2}\)-condition near infinity satisfies assumption (2.14), and it is therefore admissible. **Example 2.2**.: Assume that \[A(t)\approx t^{p}(\log t)^{\alpha}\quad\text{near infinity},\] where \(1<p<n\) and \(\alpha\in\mathbb{R}\), or \(p=1\) and \(\alpha\geq 0\), or \(p=n\) and \(\alpha\leq n-1\). Observe that these restrictions on the exponents \(p\) and \(\alpha\) are required for \(A\) to be a Young function fulfilling condition (2.5). From an application of Theorem 2.2 one can deduce that any local minimizer of \(\mathcal{F}\) is locally bounded under the following assumptions., If \(n\geq 3\) and \(p<n-1\), then we have to require that \[B(t)\lesssim t^{\frac{(n-1)p}{(n-1)-p}}(\log t)^{\frac{(n-1)\alpha}{(n-1)-p}} \quad\text{near infinity}.\] If either \(n=2\), or \(n\geq 3\) and \(n-1\leq p<n\), then any Young function \(B\in\Delta_{2}\) near infinity is admissible. Moreover, if \(p<n\), then our assumption on \(E\) takes the form: \[E(t)\lesssim t^{\frac{np}{n-p}}(\log t)^{\frac{n\alpha}{n-p}}\quad\text{near infinity}.\] If \(p=n\), then any non-decreasing function \(E\in\Delta_{2}\) near infinity is admissible. ## 3. Orlicz-Sobolev spaces This section is devoted to some basic definitions and properties from the theory of Young functions and Orlicz spaces. We refer the reader to the monograph [RaRe] for a comprehensive presentation of this theory. The Sobolev and Poincare inequalities in Orlicz-Sobolev spaces that play a role in our proofs are also recalled. Orlicz spaces are defined in terms of Young functions. A function \(A:[0,\infty)\to[0,\infty]\) is called a Young function if it is convex (non trivial), left-continuous and \(A(0)=0\). The convexity of \(A\) and its vanishing at \(0\) imply that \[\lambda A(t)\leq A(\lambda t)\quad\text{for $\lambda\geq 1$ and $t\geq 0$}, \tag{3.1}\] and that the function \[\frac{A(t)}{t}\quad\text{is non-decreasing in $(0,\infty)$}. \tag{3.2}\] The Young conjugate \(\widetilde{A}\) of \(A\) is defined by \[\widetilde{A}(t)=\sup\{\tau t-A(\tau):\,\tau\geq 0\}\qquad\text{for}\qquad t \geq 0\,.\] The following inequalities hold: \[s\leq A^{-1}(s)\widetilde{A}^{-1}(s)\leq 2s\qquad\text{for $s\geq 0$}, \tag{3.3}\] where \(A^{-1}\) and \(\widetilde{A}^{-1}\) denote the generalized right-continuous inverses of \(A\) and \(\widetilde{A}\), respectively. A Young function \(A\) is said to satisfy the \(\Delta_{2}\)-condition globally - briefly \(A\in\Delta_{2}\) globally - if there exists a constant \(c\) such that \[A(2t)\leq cA(t)\quad\text{for $t\geq 0$}. \tag{3.4}\] If inequality (3.4) just holds for \(t\geq t_{0}\) for some \(t_{0}>0\), then we say that \(A\) satisfies the \(\Delta_{2}\)-condition near infinity, and write \(A\in\Delta_{2}\) near infinity. One has that \[A\in\Delta_{2}\text{ globally [near infinity] if and only if there exists $q\geq 1$ such that $\frac{tA^{\prime}(t)}{A(t)}\leq q$ for a.e. $t>0$ [$t\geq t_{0}$]}. \tag{3.5}\] A Young function \(A\) is said to dominate another Young function \(B\) globally if there exists a positive constant \(c\) such that \[B(t)\leq A(ct) \tag{3.6}\] for \(t\geq 0\). 
The function \(A\) is said to dominate \(B\) near infinity if there exists \(t_{0}\geq 0\) such that (3.6) holds for \(t\geq t_{0}\). If \(A\) and \(B\) dominate each other globally [near infinity], then they are called equivalent globally [near infinity]. We use the notation \(B\lesssim A\) to denote that \(A\) dominates \(B\), and \(B\approx A\) to denote that \(A\) and \(B\) are equivalent. This terminology and notation will also be adopted for merely nonnegative functions, which are not necessarily Young functions. Let \(\Omega\) be a measurable set in \(\mathbb{R}^{n}\). The Orlicz class \(K^{A}(\Omega)\) built upon a Young function \(A\) is defined as \[K^{A}(\Omega)=\bigg{\{}u:\text{$u$ is measurable in $\Omega$ and $\int_{\Omega}A(|u|)\,dx<\infty$}\bigg{\}}. \tag{3.7}\] The set \(K^{A}(\Omega)\) is convex for every Young function \(A\). The Orlicz space \(L^{A}(\Omega)\) is the linear hull of \(K^{A}(\Omega)\). It is a Banach function space, equipped with the Luxemburg norm defined as \[\|u\|_{L^{A}(\Omega)}=\inf\left\{\lambda>0:\int_{\Omega}A\left(\frac{|u|}{ \lambda}\right)dx\leq 1\right\} \tag{3.8}\] for a measurable function \(u\). These notions are modified as usual to define the local Orlicz class \(K^{A}_{\rm loc}(\Omega)\) and the local Orlicz space \(L^{A}_{\rm loc}(\Omega)\). If either \(A\in\Delta_{2}\) globally, or \(|\Omega|<\infty\) and \(A\in\Delta_{2}\) near infinity, then \(K^{A}(\Omega)\) is, in fact, a linear space, and \(K^{A}(\Omega)=L^{A}(\Omega)\). Here, \(|\Omega|\) denotes the Lebesgue measure of \(\Omega\). Notice that, in particular, \(L^{A}(\Omega)=L^{p}(\Omega)\) if \(A(t)=t^{p}\) for some \(p\in[1,\infty)\), and \(L^{A}(\Omega)=L^{\infty}(\Omega)\) if \(A(t)=0\) for \(t\in[0,1]\) and \(A(t)=\infty\) for \(t\in(1,\infty)\). The identity \[\|\chi_{E}\|_{L^{A}(\Omega)}=\frac{1}{A^{-1}(1/|E|)} \tag{3.9}\] holds for every Young function \(A\) and any measurable set \(E\subset\Omega\). Here, \(\chi_{E}\) stands for the characteristic function of \(E\). The Holder inequality in Orlicz spaces tells us that \[\int_{\Omega}|uv|\,dx\leq 2\|u\|_{L^{A}(\Omega)}\|v\|_{L^{\widetilde{A}}(\Omega)} \tag{3.10}\] for \(u\in L^{A}(\Omega)\) and \(v\in L^{\widetilde{A}}(\Omega)\). Assume now that \(\Omega\) is an open set. The homogeneous Orlicz-Sobolev class \(V^{1}K^{A}(\Omega)\) is defined as the convex set \[V^{1}K^{A}(\Omega)=\left\{u\in W^{1,1}_{\rm loc}(\Omega):\,|\nabla u|\in K^{A} (\Omega)\right\} \tag{3.11}\] and the inhomogeneous Orlicz-Sobolev class \(W^{1}K^{A}(\Omega)\) is the convex set \[W^{1}K^{A}(\Omega)=K^{A}(\Omega)\cap V^{1}K^{A}(\Omega). \tag{3.12}\] The homogenous Orlicz-Space \(V^{1}L^{A}(\Omega)\) and its inhomogenous counterpart \(W^{1}L^{A}(\Omega)\) are accordingly given by \[V^{1}L^{A}(\Omega)=\left\{u\in W^{1,1}_{\rm loc}(\Omega):\,|\nabla u|\in L^{A} (\Omega)\right\} \tag{3.13}\] and \[W^{1}L^{A}(\Omega)=L^{A}(\Omega)\cap V^{1}L^{A}(\Omega). \tag{3.14}\] The latter is a Banach space endowed with the norm \[\|u\|_{W^{1,A}(\Omega)}=\|u\|_{L^{A}(\Omega)}+\|\nabla u\|_{L^{A}(\Omega)}. \tag{3.15}\] Here, and in what follows, we use the notation \(\|\nabla u\|_{L^{A}(\Omega)}\) as a shorthand for \(\|\,|\nabla u|\,\|_{L^{A}(\Omega)}\). The local versions \(V^{1}_{\rm loc}K^{A}(\Omega)\), \(W^{1}_{\rm loc}K^{A}(\Omega)\), \(V^{1}_{\rm loc}L^{A}(\Omega)\), and \(W^{1}_{\rm loc}L^{A}(\Omega)\) of these sets/spaces is obtained by modifying the above definitions as usual. 
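As a simple illustration of identity (3.9), offered here only as a consistency check, take \(A(t)=t^{p}\) for some \(p\in[1,\infty)\). Then \(A^{-1}(s)=s^{1/p}\), and hence \[\|\chi_{E}\|_{L^{A}(\Omega)}=\frac{1}{A^{-1}(1/|E|)}=|E|^{\frac{1}{p}},\] in accordance with the usual expression for the \(L^{p}\)-norm of a characteristic function. 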
In the case when \(L^{A}(\Omega)=L^{p}(\Omega)\) for some \(p\in[1,\infty]\), the standard Sobolev space \(W^{1,p}(\Omega)\) and its homogeneous version \(V^{1,p}(\Omega)\) are recovered. Orlicz and Orlicz-Sobolev classes of weakly differentiable functions \(u\) defined on the \((n-1)\)-dimensional unit sphere \(\mathbb{S}^{n-1}\) in \(\mathbb{R}^{n}\) also enter our approach. These spaces are defined as in (3.7), (3.8), (3.11), (3.13), and (3.14), with the Lebesgue measure replaced with the \((n-1)\)-dimensional Hausdorff measure \(\mathcal{H}^{n-1}\), and \(\nabla u\) replaced with \(\nabla_{\mathbb{S}}u\), the vector field on \(\mathbb{S}^{n-1}\) whose components are the covariant derivatives of \(u\) As highlighted in the previous section, sharp embedding theorems and corresponding inequalities in Orlicz-Sobolev spaces play a critical role in the formulation of our result and in its proof. As shown in [10] (see also [10] for an equivalent version), the optimal \(n\)-dimensional Sobolev conjugate of a Young function \(A\) fulfilling \[\int_{0}\!\!\left(\frac{t}{A(t)}\right)^{\frac{1}{n-1}}dt<\infty \tag{3.16}\] is the Young function \(A_{n}\) defined as \[A_{n}(t)=A(H_{n}^{-1}(t))\qquad\text{for }t\geq 0, \tag{3.17}\] where the function \(H_{n}:[0,\infty)\to[0,\infty)\) is given by \[H_{n}(s)=\left(\int_{0}^{s}\!\left(\frac{t}{A(t)}\right)^{\frac{1}{n-1}}dt\right) ^{\frac{n-1}{n}}\qquad\text{for $s\geq 0$}. \tag{3.18}\] The function \(A_{n-1}\) is defined analogously, by replacing \(n\) with \(n-1\) in equations (3.17) and (3.18). In the statements of Theorems 2.1 and 2.2, the functions \(A_{n}\),and \(A_{n-1}\) are defined after modifying \(A\) near \(0\), if necessary, in such a way that condition (3.16) be satisfied. Assumptions (2.3) and (2.13) are not affected by the choice of the modified function \(A\), thanks to the presence of the additive constant \(L\). Membership of a function in an Orlicz-Sobolev local class or space associated with \(A\) is also not influenced by this choice, inasmuch as the behavior of \(A\) near \(0\) is irrelevant (up to additive and/or multiplicative constants) whenever integrals or norms over sets with finite measure are concerned. An optimal Sobolev-Poincare inequality on balls \(\mathbb{B}_{r}\subset\mathbb{R}^{n}\), centered at \(0\) and with radius \(r\) reads as follows. In its statement, we adopt the notation \[u_{\mathbb{B}_{r}}=\fint_{\mathbb{B}_{r}}u(x)\,dx,\] where \(\fint\) stands for integral average. **Theorem A**.: _Let \(n\geq 2\), let \(r>0\), and let \(A\) be a Young function fulfilling condition (3.16). Then, there exists a constant \(\kappa=\kappa(n)\) such that_ \[\int_{\mathbb{B}_{r}}A_{n}\!\left(\frac{|u-u_{\mathbb{B}_{r}}|}{\kappa\big{(} \int_{\mathbb{B}_{r}}A(|\nabla u|)dy\big{)}^{\frac{1}{n}}}\right)dx\leq\int_{ \mathbb{B}_{r}}A(|\nabla u|)\,dx \tag{3.19}\] _for every \(u\in V^{1}K^{A}(\mathbb{B}_{r})\)._ As a consequence of inequality (3.19) and of Lemma 4.1, Section 4, the following inclusion holds: \[V^{1}_{\mathrm{loc}}K^{A}(\Omega)\subset K^{A}_{\mathrm{loc}}(\Omega) \tag{3.20}\] for any open set \(\Omega\subset\mathbb{R}^{n}\) and any Young function \(A\). Thereby, \[V^{1}_{\mathrm{loc}}K^{A}(\Omega)=W^{1}_{\mathrm{loc}}K^{A}(\Omega).\] Hence, in what follows, the spaces \(V^{1}_{\mathrm{loc}}K^{A}(\Omega)\) and \(W^{1}_{\mathrm{loc}}K^{A}(\Omega)\) will be equally used. 
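For the reader's convenience, we sketch the computation of \(A_{n}\) in the model case \(A(t)=t^{p}\) with \(1\leq p<n\), which underlies the exponents appearing in Examples 2.1 and 2.2. One has \[H_{n}(s)=\bigg(\int_{0}^{s}t^{\frac{1-p}{n-1}}\,dt\bigg)^{\frac{n-1}{n}}=\Big(\tfrac{n-1}{n-p}\Big)^{\frac{n-1}{n}}s^{\frac{n-p}{n}}\qquad\text{for }s\geq 0,\] whence \(H_{n}^{-1}(t)\approx t^{\frac{n}{n-p}}\) and \[A_{n}(t)=A(H_{n}^{-1}(t))\approx t^{\frac{np}{n-p}},\] up to multiplicative constants depending on \(n\) and \(p\) only. 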
Besides the Sobolev-Poincare inequality of Theorem A, a Sobolev type inequality is of use in our applications and is the subject of the following theorem. Only Part (i) of the statement will be needed. Part (ii) substantiates inclusion (2.7). **Theorem B**.: _Let \(n\geq 2\), let \(r>0\), and let \(A\) be a Young function fulfilling condition (3.16)._ * _Assume that condition (_2.5_) holds. Then, there exists a constant_ \(\kappa=\kappa(n,r)\) _such that_ (3.21) \[\int_{\mathbb{B}_{r}}A_{n}\!\left(\frac{|u|}{\kappa\big{(}\int_{\mathbb{B}_{r} }A(|u|)+A(|\nabla u|)dy\big{)}^{\frac{1}{n}}}\right)dx\leq\int_{\mathbb{B}_{r }}A(|u|)+A(|\nabla u|)\,dx\] _for every_ \(u\in W^{1}K^{A}(\mathbb{B}_{r})\)_._ * _Assume that condition (_2.6_) holds. Then, there exists a constant_ \(\kappa=\kappa(n,r,A)\) _such that_ (3.22) \[\|u\|_{L^{\infty}(\mathbb{B}_{r})}\leq\kappa\bigg{(}\int_{\mathbb{B}_{r}}A(|u |)+A(|\nabla u|)\,dx\bigg{)}^{\frac{1}{n}}\] _for every_ \(u\in W^{1}K^{A}(\mathbb{B}_{r})\)_._ _In particular, if \(r\in[r_{1},r_{2}]\) for some \(r_{2}>r_{1}>0\), then the constant \(\kappa\) in inequalities (3.21) and (3.22) depends on \(r\) only via \(r_{1}\) and \(r_{2}\)._ A counterpart of Theorem B for Orlicz-Sobolev functions on the sphere \(\mathbb{S}^{n-1}\) takes the following form. **Theorem C**.: _Let \(n\geq 2\) and let \(A\) be a Young function such that_ \[\int_{0}\left(\frac{t}{A(t)}\right)^{\frac{1}{n-2}}dt<\infty \tag{3.23}\] _if \(n\geq 3\)._ 1. _Assume that_ \(n\geq 3\) _and_ (3.24) \[\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n-2}}dt=\infty.\] _Then, there exists a constant_ \(\kappa=\kappa(n)\) _such that_ (3.25) \[\int_{\mathbb{S}^{n-1}}A_{n-1}\bigg{(}\frac{|u|}{\kappa\big{(}\int_{\mathbb{S} ^{n-1}}A(|u|)+A(|\nabla_{\mathbb{S}}u|)d\mathcal{H}^{n-1}(y)\big{)}^{\frac{1}{n -1}}}\bigg{)}\,d\mathcal{H}^{n-1}(x)\leq\int_{\mathbb{S}^{n-1}}A(|u|)+A(| \nabla_{\mathbb{S}}u|)\,d\mathcal{H}^{n-1}(x)\] _for_ \(u\in W^{1}K^{A}(\mathbb{S}^{n-1})\)_._ 2. _Assume that one of the following situations occurs:_ (3.26) \[\begin{cases}n=2&\text{and}\quad\lim_{t\to 0^{+}}\frac{A(t)}{t}>0\\ \\ n\geq 3&\text{and}\quad\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n-2 }}dt<\infty.\end{cases}\] _Then, there exists a constant_ \(\kappa=\kappa(n,A)\) _such that_ (3.27) \[\|u\|_{L^{\infty}(\mathbb{S}^{n-1})}\leq\kappa\bigg{(}\int_{\mathbb{S}^{n-1}} A(|u|)+A(|\nabla_{\mathbb{S}}u|)\,d\mathcal{H}^{n-1}(x)\bigg{)}^{\frac{1}{n-1}}\] _for_ \(u\in W^{1}K^{A}(\mathbb{S}^{n-1})\)_._ Theorems A and B are special cases of [15, Theorems 4.4 and 3.1], respectively, which hold in any Lipschitz domain in \(\mathbb{R}^{n}\) (and for Orlicz-Sobolev spaces of arbitrary order). The assertions about the dependence of the constants can be verified via a standard scaling argument. Theorem C can be derived via arguments analogous to those in the proof of [15, Theorem 3.1]. For completeness, we offer the main steps of the proof. Proof of Theorem C.: _Part (i)._ Let us set \[u_{\mathbb{S}^{n-1}}=\fint_{\mathbb{S}^{n-1}}u(x)\,d\mathcal{H}^{n-1}(x).\] A key step is a Sobolev-Poincare type inequality, a norm version of (3.19) on \(\mathbb{S}^{n-1}\), which tells us that \[\|u-u_{\mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}\leq c\|\nabla_{ \mathbb{S}}u\|_{L^{A}(\mathbb{S}^{n-1})} \tag{3.28}\] for some constant \(c=c(n)\) and for \(u\in V^{1}L^{A}(\mathbb{S}^{n-1})\). A proof of inequality (3.28) rests upon the following symmetrization argument combined with a one-dimensional Hardy-type inequality in Orlicz spaces. 
Set \[c_{n}=\mathcal{H}^{n-1}(\mathbb{S}^{n-1}) \tag{3.29}\] and denote by \(u^{\circ}:[0,c_{n}]\to[-\infty,\infty]\) the signed decreasing rearrangement of \(u\), defined by \[u^{\circ}(s)=\inf\{t\in\mathbb{R}:\mathcal{H}^{n-1}(\{u>t\})\leq s\}\quad \text{for $s\in[0,c_{n}]$}.\] Moreover, define the signed symmetral \(u^{\sharp}:\mathbb{S}^{n-1}\to[-\infty,\infty]\) of \(u\) as \[u^{\sharp}(x)=u^{\circ}(V(x))\quad\text{for $x\in\mathbb{S}^{n-1}$},\] where \(V(x)\) denotes the \(\mathcal{H}^{n-1}\)-measure of the spherical cap on \(\mathbb{S}^{n-1}\), centered at the north pole on \(\mathbb{S}^{n-1}\), whose boundary contains \(x\). Thus, \(u^{\sharp}\) is a function, which is equimeasurable with \(u\), and whose level sets are spherical caps centered at the north pole. The equimeasurability of the functions \(u\), \(u^{\circ}\) and \(u^{\sharp}\) ensures that \[\|u-u_{\mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}=\|u^{\sharp}-u_{ \mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}=\|u^{\circ}-u_{\mathbb{S} ^{n-1}}\|_{L^{A_{n-1}}(0,c_{n})}. \tag{3.30}\] Moreover, since \(u^{\circ}(c_{n}/2)\) is a median of \(u^{\circ}\) on \((0,c_{n})\) and \(u_{\mathbb{S}^{n-1}}\) agrees with the mean value of \(u^{\circ}\) over \((0,c_{n})\), one has that \[\|u^{\circ}-u^{\circ}(c_{n}/2)\|_{L^{A_{n-1}}(0,c_{n})}\geq\tfrac{1}{2}\|u^{ \circ}-u_{\mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(0,c_{n})}=\tfrac{1}{2}\|u-u_{ \mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}, \tag{3.31}\] see e.g. [CMP, Lemma 2.2]. On the other hand, a version of the Polya-Szego principle on \(\mathbb{S}^{n-1}\) tells us that \(u^{\circ}\) is locally absolutely continuous, \(u^{\sharp}\in V^{1}L^{A}(\mathbb{S}^{n-1})\), and \[\left\|I_{\mathbb{S}^{n-1}}(s)\,\Big{(}-\frac{du^{\circ}}{ds}\Big{)}\right\|_{L ^{A}(0,c_{n})}=\|\nabla_{\mathbb{S}}u^{\sharp}\,\|_{L^{A}(\mathbb{S}^{n-1})} \leq\|\nabla_{\mathbb{S}}u\|_{L^{A}(\mathbb{S}^{n-1})}, \tag{3.32}\] where \(I_{\mathbb{S}^{n-1}}:[0,c_{n}]\to[0,\infty)\) denotes the isoperimetric function of \(\mathbb{S}^{n-1}\) (see [BrZi]). It is well-known that there exists a positive constant \(c=c(n)\) such that \[I_{\mathbb{S}^{n-1}}(s)\geq c\min\{s,c_{n}-s\}^{\frac{n-2}{n-1}}\quad\text{ for }s\in(0,c_{n}). \tag{3.33}\] Hence, \[c\bigg{\|}\min\{s,c_{n}-s\}^{\frac{n-2}{n-1}}\Big{(}-\frac{du^{\circ}}{ds} \Big{)}\bigg{\|}_{L^{A}(0,c_{n})}\leq\|\nabla_{\mathbb{S}}u\|_{L^{A}(\mathbb{ S}^{n-1})}, \tag{3.34}\] The absolute continuity of \(u^{\circ}\) ensures that \[u^{\circ}(s)-u^{\circ}(c_{n})=\int_{s}^{c_{n}/2}\bigg{(}-\frac{du^{\circ}}{dr} \bigg{)}\,dr\qquad\text{for }s\in(0,c_{n}). \tag{3.35}\] Thanks to equations (3.30), (3.31), (3.34), (3.35), and to the symmetry of the function \(\min\{s,c_{n}-s\}^{\frac{n-2}{n-1}}\) about \(c_{n}/2\), inequality (3.28) is reduced to the inequality \[\bigg{\|}\int_{s}^{c_{n}/2}r^{-\frac{n-2}{n-1}}\phi(r)\,dr\bigg{\|}_{L^{A_{n-1 }}(0,c_{n}/2)}\leq c\|\phi\|_{L^{A}(0,c_{n}/2)} \tag{3.36}\] for a suitable constant \(c=c(n)\) and for every \(\phi\in L^{A}(0,c_{n}/2)\). Inequality (3.36) is in turn a consequence of [Ci2, inequality (2.7)]. 
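We just mention, for illustration, that in the model case \(A(t)=t^{p}\) with \(1\leq p<n-1\), inequality (3.36) reduces to a classical one-dimensional inequality of Hardy type, with kernel \(r^{-\frac{n-2}{n-1}}\), bounding the \(L^{\frac{(n-1)p}{(n-1)-p}}(0,c_{n}/2)\) norm of the left-hand side by the \(L^{p}(0,c_{n}/2)\) norm of \(\phi\); the Orlicz version quoted above extends this special case. 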
Next, by Lemma 4.2, Section 4, applied with \(n\) replaced with \(n-1\), \[\frac{1}{\widetilde{A}^{-1}(t)}\,\frac{1}{A_{n-1}^{-1}(t)}\leq\frac{1}{t^{\frac{n-2}{n-1}}}\quad\text{for }t>0.\] Hence, by inequality (3.10), with \(\Omega\) replaced with \(\mathbb{S}^{n-1}\), one has that \[\|u_{\mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}=|u_{\mathbb{S}^{n-1}}|\,\|1\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}\leq\frac{2}{c_{n}}\|u\|_{L^{A}(\mathbb{S}^{n-1})}\|1\|_{L^{\widetilde{A}}(\mathbb{S}^{n-1})}\|1\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}\] \[=\frac{2}{c_{n}}\,\frac{1}{\widetilde{A}^{-1}(1/c_{n})}\,\frac{1}{A_{n-1}^{-1}(1/c_{n})}\,\|u\|_{L^{A}(\mathbb{S}^{n-1})}\leq\frac{2}{c_{n}^{\frac{1}{n-1}}}\|u\|_{L^{A}(\mathbb{S}^{n-1})}. \tag{3.37}\] Coupling inequality (3.28) with (3.37) and making use of the triangle inequality entail that \[\|u\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}\leq c\big(\|\nabla_{\mathbb{S}}u\|_{L^{A}(\mathbb{S}^{n-1})}+\|u\|_{L^{A}(\mathbb{S}^{n-1})}\big) \tag{3.38}\] for some constant \(c=c(n)\) and for \(u\in W^{1}L^{A}(\mathbb{S}^{n-1})\). Now set \[M=\int_{\mathbb{S}^{n-1}}A(|\nabla_{\mathbb{S}}u|)+A(|u|)\,d\mathcal{H}^{n-1}(x),\] and apply inequality (3.38) with the function \(A\) replaced with the Young function \(A_{M}\) given by \[A_{M}(t)=\frac{A(t)}{M}\qquad\text{for }t\geq 0.\] Hence, \[\|u\|_{L^{(A_{M})_{n-1}}(\mathbb{S}^{n-1})}\leq c\big(\|\nabla_{\mathbb{S}}u\|_{L^{A_{M}}(\mathbb{S}^{n-1})}+\|u\|_{L^{A_{M}}(\mathbb{S}^{n-1})}\big), \tag{3.39}\] where \((A_{M})_{n-1}\) denotes the function obtained on replacing \(A\) with \(A_{M}\) in the definition of \(A_{n-1}\). The fact that the constant \(c\) in (3.38) is independent of \(A\) is of course crucial in deriving inequality (3.39). Observe that \[(A_{M})_{n-1}(t)=\frac{1}{M}A_{n-1}\Big(\frac{t}{M^{\frac{1}{n-1}}}\Big)\qquad\text{for }t\geq 0. \tag{3.40}\] On the other hand, by the definition of Luxemburg norm and the choice of \(M\), \[\|u\|_{L^{A_{M}}(\mathbb{S}^{n-1})}\leq 1\quad\text{and}\quad\|\nabla_{\mathbb{S}}u\|_{L^{A_{M}}(\mathbb{S}^{n-1})}\leq 1. \tag{3.41}\] Therefore, by the definition of Luxemburg norm again, inequality (3.39) tells us that \[\frac{1}{M}\int_{\mathbb{S}^{n-1}}A_{n-1}\bigg(\frac{|u(x)|}{2cM^{\frac{1}{n-1}}}\bigg)\,d\mathcal{H}^{n-1}(x)\leq 1.\] Hence, inequality (3.25) follows. _Part (ii)._ First, assume that \(n\geq 3\) and the integral condition in (3.26) holds. Let \(\overline{A}\) be the Young function defined as \[\overline{A}(t)=\bigg(t^{\frac{n-1}{n-2}}\,\int_{t}^{\infty}\frac{\widetilde{A}(r)}{r^{1+\frac{n-1}{n-2}}}\,dr\bigg)^{\sim}\quad\text{ for }t\geq 0, \tag{3.42}\] where \((\cdot)^{\sim}\) stands for the Young conjugate of the function in parentheses. Notice that the convergence of the integral on the right-hand side of equation (3.42) is equivalent to the convergence of the integral in (3.26), see [14, Lemma 2.3]. Since we are assuming that \(A\) fulfills condition (3.23), the same lemma also ensures that \[\int_{0}\frac{\widetilde{A}(r)}{r^{1+\frac{n-1}{n-2}}}\,dr<\infty. \tag{3.43}\] From [14, Theorem 4.1] one has that \[\overline{A}\big(c\|u-u_{\mathbb{S}^{n-1}}\|_{L^{\infty}(\mathbb{S}^{n-1})}\big)\leq\fint_{\mathbb{S}^{n-1}}A(|\nabla_{\mathbb{S}}u|)\,d\mathcal{H}^{n-1} \tag{3.44}\] for some positive constant \(c=c(n)\) and for \(u\in V^{1}K^{A}(\mathbb{S}^{n-1})\). 
Furthermore, by Jensen's inequality, \[A\big(\|u_{\mathbb{S}^{n-1}}\|_{L^{\infty}(\mathbb{S}^{n-1})}\big)\leq A\bigg(\fint_{\mathbb{S}^{n-1}}|u|\,d\mathcal{H}^{n-1}\bigg)\leq\fint_{\mathbb{S}^{n-1}}A(|u|)\,d\mathcal{H}^{n-1}. \tag{3.45}\] Thanks to [14, Inequality (4.6)], \[\overline{A}(t)\leq A(t)\qquad\text{for }t\geq 0. \tag{3.46}\] Moreover, inequality (3.43) ensures that \[t^{\frac{n-1}{n-2}}\,\int_{t}^{\infty}\frac{\widetilde{A}(r)}{r^{1+\frac{n-1}{n-2}}}\,dr\leq c\,t^{\frac{n-1}{n-2}}\qquad\text{for }t\geq 0, \tag{3.47}\] where we have set \[c=\int_{0}^{\infty}\frac{\widetilde{A}(r)}{r^{1+\frac{n-1}{n-2}}}\,dr.\] Taking the Young conjugates of both sides of inequality (3.47) results in \[\overline{A}(t)\geq ct^{n-1}\qquad\text{for }t\geq 0, \tag{3.48}\] for some constant \(c=c(n,A)\). Inequality (3.27) follows, via the triangle inequality, from inequalities (3.44), (3.45), (3.46) and (3.48). Assume next that \(n=2\) and the limit condition in (3.26) holds. If we denote by \(a\) this limit, then \[A(t)\geq at\qquad\text{for }t\geq 0. \tag{3.49}\] A simple one-dimensional argument, coupled with Jensen's inequality and the increasing monotonicity of the function \(tA^{-1}(1/t)\), shows that \[A\big(\tfrac{1}{2\pi}\|u-u_{\mathbb{S}^{1}}\|_{L^{\infty}(\mathbb{S}^{1})}\big)\leq\fint_{\mathbb{S}^{1}}A(|\nabla_{\mathbb{S}}u|)\,d\mathcal{H}^{1} \tag{3.50}\] for \(u\in V^{1}K^{A}(\mathbb{S}^{1})\) (see [14, Inequality (4.8) and below]). Inequality (3.27) now follows from (3.45) (which holds also when \(n=2\)), (3.49) and (3.50). ## 4. Analytic lemmas Here, we collect a few technical lemmas about one-variable functions. We begin with two inequalities involving a Young function and its Sobolev conjugate. **Lemma 4.1**.: _Let \(n\geq 2\) and let \(A\) be a Young function fulfilling condition (3.16). Then, for every \(k>0\) there exists a positive constant \(c=c(k,A,n)\) such that_ \[A(t)\leq A_{n}(kt)+c\qquad\text{for }t\geq 0. \tag{4.1}\] Proof.: Fix \(k>0\). Since \(A_{n}(t)=A(H_{n}^{-1}(t))\) and \(\lim_{t\to\infty}\frac{H_{n}^{-1}(t)}{t}=\infty\), there exists \(t_{0}>0\) such that \(A(t)\leq A_{n}(kt)\) for \(t\geq t_{0}\). Inequality (4.1) hence follows, with \(c=A(t_{0})\). **Lemma 4.2**.: _Let \(n\geq 2\) and let \(A\) be a Young function fulfilling condition (3.16). Then,_ \[\frac{1}{\widetilde{A}^{-1}(t)}\frac{1}{A_{n}^{-1}(t)}\leq\frac{1}{t^{\frac{1}{n'}}}\qquad\text{for }t>0. \tag{4.2}\] Proof.: Holder's inequality and property (3.2) imply that \[t=\int_{0}^{t}\left(\frac{A(r)}{r}\right)^{\frac{1}{n}}\!\left(\frac{r}{A(r)}\right)^{\frac{1}{n}}dr\leq\left(\int_{0}^{t}\frac{A(r)}{r}\,dr\right)^{\frac{1}{n}}\!\left(\int_{0}^{t}\left(\frac{r}{A(r)}\right)^{\frac{1}{n-1}}dr\right)^{\frac{1}{n'}}\leq\left(\frac{A(t)}{t}\right)^{\frac{1}{n}}\!t^{\frac{1}{n}}H_{n}(t)=A(t)^{\frac{1}{n}}H_{n}(t)\quad\text{for }t>0. \tag{4.3}\] Hence, \[A^{-1}(t)\leq t^{\frac{1}{n}}H_{n}(A^{-1}(t))\quad\text{for }t\geq 0. \tag{4.4}\] The first inequality in (3.3) and inequality (4.4) imply that \[t\leq A^{-1}(t)\widetilde{A}^{-1}(t)\leq t^{\frac{1}{n}}H_{n}(A^{-1}(t))\widetilde{A}^{-1}(t)\quad\text{for }t\geq 0. \tag{4.5}\] Hence, since \(A_{n}^{-1}=H_{n}\circ A^{-1}\), inequality (4.2) follows. 
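As a quick consistency check of inequality (4.2), consider the model case \(A(t)=t^{p}\) with \(1<p<n\). Then \(\widetilde{A}(t)\approx t^{p'}\) and, as recalled in Section 2, \(A_{n}(t)\approx t^{\frac{np}{n-p}}\); consequently, \[\widetilde{A}^{-1}(t)\,A_{n}^{-1}(t)\approx t^{\frac{1}{p'}+\frac{n-p}{np}}=t^{1-\frac{1}{n}}=t^{\frac{1}{n'}},\] so that the two sides of (4.2) exhibit the same power of \(t\) in this case, up to multiplicative constants. 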
The next result ensures that the functions \(A\), \(B\) and \(E\) appearing in assumption (2.13) can be modified near \(0\) in such a way that such an assumption is still fulfilled, possibly with a different constant \(L\), and the conditions imposed on \(A\), \(B\) and \(E\) in Theorem 2.2 are satisfied globally, instead of just near infinity. Of course, the same applies to the simpler conditions of Theorem 2.1, where the function \(E\) is missing. **Lemma 4.3**.: _Assume that the functions \(f\), \(A\), \(B\) and \(E\) are as in Theorem 2.2. Then, there exist two Young functions \(\widehat{A},\widehat{B}:[0,\infty)\to[0,\infty)\), an increasing function \(\widehat{E}:[0,\infty)\to[0,\infty)\), and constants \(\widehat{L}\geq 1\) and \(q>n\) such that:_ \[\widehat{A}(|\xi|)-\widehat{E}(|t|)-\widehat{L}\leq f(x,t,\xi)\leq\widehat{ B}(|\xi|)+\widehat{E}(|t|)+\widehat{L}\qquad\text{for a.e.}\ x\in\Omega\text{, for every }t\in\mathbb{R}\text{, and every }\xi\in\mathbb{R}^{n}\text{,} \tag{4.7}\] \[t^{\frac{n}{n-1}}\leq\widehat{L}\,\widehat{A}_{n}(t)\quad\text{ for }t\geq 0\text{,}\] (4.8) \[\lim_{t\to 0^{+}}\frac{\widehat{A}(t)}{t}>0,\] (4.9) \[\widehat{E}(2t)\leq\widehat{L}\widehat{E}(t)\quad\text{for }t\geq 0\text{,}\] (4.10) \[\widehat{E}(t)\leq\widehat{A}_{n}(\widehat{L}t)\quad\text{for }t\geq 0\text{,}\] (4.11) \[\widehat{B}(\lambda t)\leq\lambda^{q}\widehat{B}(t)\quad\text{for }t \geq 0\text{ and }\lambda\geq 1\text{.} \tag{4.6}\] _Moreover, if assumption (2.8) is in force, then the function \(B\) satisfies assumption (2.9) and_ \[\widehat{B}(t)\leq\widehat{A}_{n-1}(\widehat{L}t)\qquad\text{for }t\geq 0\text{;} \tag{4.12}\] _if assumption (2.10) is in force, then_ \[\widehat{B}(t)\leq\widehat{L}t^{q}\quad\text{for }t\geq 0. \tag{4.13}\] _Here, \(\widehat{A}_{n-1}\) and \(\widehat{A}_{n}\) denote the functions defined as \(A_{n-1}\) and \(A_{n}\), with \(A\) replaced with \(\widehat{A}\)._ Proof.: _Step 1._ _Construction of \(\widehat{A}\). Denote by \(t_{1}\) the maximum among \(1\), the constant \(t_{0}\) appearing in inequalities (2.14) and (2.9), and the lower bound for \(t\) in the definition of the \(\Delta_{2}\)-condtion near infinity for the functions \(B\) and \(E\). Let us set \(a=\frac{A(t_{1})}{t_{1}}\), and define the Young function \(\widehat{A}\) as \[\widehat{A}(t)=\begin{cases}at&\text{if }0\leq t<t_{1}\\ A(t)&\text{if }t\geq t_{1}.\end{cases} \tag{4.14}\] Clearly, \(\widehat{A}\) satisfies property (4.8) and \[A(t)\leq\widehat{A}(t)\qquad\text{for }t\geq 0. \tag{4.15}\] Also, the convexity of \(A\) ensures that \[\widehat{A}(t)\geq at\quad\text{for }t\geq 0. \tag{4.16}\] Since \[\widehat{H}_{n}(s)=\left(\int_{0}^{s}\!\left(\frac{t}{\widehat{A}(t)}\right) ^{\frac{1}{n-1}}dt\right)^{\frac{n-1}{n}}\qquad\text{for }s\geq 0,\] we deduce that \[\widehat{H}_{n}(s)\leq a^{-\frac{1}{n}}s^{\frac{n-1}{n}}\quad\text{for }s\geq 0,\] whence \[a^{\frac{1}{n-1}}t^{\frac{n}{n-1}}\leq\widehat{H}_{n}^{-1}(t)\quad\text{for }t\geq 0.\] Inasmuch as \(\widehat{A}_{n}=\widehat{A}\circ\widehat{H}_{n}^{-1}\), the latter inequality and inequality (4.16) yield: \[\widehat{A}_{n}(t)\geq\widehat{A}((a^{\frac{1}{n-1}}t^{\frac{n}{n-1}})\geq(at )^{\frac{n}{n-1}}\quad\text{for }t\geq 0.\] This shows that inequality (4.7) holds for sufficiently large \(\widehat{L}\). For later reference, also note that \[\widehat{A}_{n}(t)=(at)^{\frac{n}{n-1}}\quad\text{for }t\in[0,t_{1}]. \tag{4.17}\] Next, we have that \[A_{n}(t)\leq\widehat{A}_{n}(t)\qquad\text{for }t\geq 0. 
\tag{4.18}\] Indeed, inequality (4.15) implies that \[\widehat{H}_{n}(s)\leq H_{n}(s)\qquad\text{for }s\geq 0.\] Thus, \(H_{n}^{-1}(t)\leq\widehat{H}_{n}^{-1}(t)\) for \(t\geq 0\), whence inequality (4.18) follows, on making use of (4.15) again. Moreover, there exists \(t_{2}\geq t_{1}\), depending on \(n\) and \(A\), such that \[\widehat{A}_{n}(t)\leq A_{n}(2t)\quad\text{for }t\geq t_{2}. \tag{4.19}\] Actually, if \(s\geq t_{1}\) and is sufficiently large, then \[\widehat{H}_{n}(s)=\left(\int_{0}^{s}\!\left(\frac{t}{\widehat{A}(t)}\right)^{\frac{1}{n-1}}dt\right)^{\frac{n-1}{n}}\geq\left(\int_{t_{1}}^{s}\!\left(\frac{t}{\widehat{A}(t)}\right)^{\frac{1}{n-1}}dt\right)^{\frac{n-1}{n}}=\left(\int_{t_{1}}^{s}\!\left(\frac{t}{A(t)}\right)^{\frac{1}{n-1}}dt\right)^{\frac{n-1}{n}}\geq\frac{1}{2}H_{n}(s).\] Observe that the last inequality holds, for large \(s\), thanks to assumption (2.5). Hence, \(\widehat{H}_{n}^{-1}(t)\leq H_{n}^{-1}(2t)\) for sufficiently large \(t\) and thereby \[\widehat{A}_{n}(t)=\widehat{A}(\widehat{H}_{n}^{-1}(t))=A(\widehat{H}_{n}^{-1}(t))\leq A(H_{n}^{-1}(2t))=A_{n}(2t)\quad\text{for }t\geq t_{2},\] provided that \(t_{2}\) is sufficiently large. Inequality (4.19) is thus established. _Step 2._ _Construction of \(\widehat{B}\)._ First, consider the case when (2.8) and (2.9) hold. Since \(B\) is a Young function, there exists \(t_{3}\geq t_{2}\), where \(t_{2}\) is the number from Step 1, such that \(B(t_{3})>A_{n-1}(t_{1})\). Define the Young function \(\widehat{B}\) as \[\widehat{B}(t)=\begin{cases}\widehat{A}_{n-1}(t)&\text{if }0\leq t<t_{2}\\ \frac{t_{3}-t}{t_{3}-t_{2}}\widehat{A}_{n-1}(t_{2})+\frac{t-t_{2}}{t_{3}-t_{2}}B(t_{3})&\text{if }t_{2}\leq t<t_{3}\\ B(t)&\text{if }t\geq t_{3}.\end{cases}\] We claim that inequality (4.12) holds with this choice of \(\widehat{B}\), provided that \(\widehat{L}\) is large enough. If \(t\in[0,t_{2})\), the inequality in question is trivially satisfied with \(\widehat{L}=1\). If \(t\in[t_{2},t_{3})\), then \[\widehat{B}(t)\leq\widehat{B}(t_{3})=B(t_{3})\leq A_{n-1}(Lt_{3})\leq\widehat{A}_{n-1}(Lt_{3})\leq\widehat{A}_{n-1}((Lt_{3}/t_{2})t),\] where the third inequality holds thanks to (4.18). Finally, if \(t\geq t_{3}\), then \[\widehat{B}(t)=B(t)\leq A_{n-1}(Lt)\leq\widehat{A}_{n-1}(Lt).\] Altogether, inequality (4.12) is fulfilled with \(\widehat{L}=\max\left\{1,\frac{Lt_{3}}{t_{2}}\right\}\). In order to establish inequality (4.11), it suffices to show that \(\widehat{B}\) satisfies the \(\Delta_{2}\)-condition globally. Since \(\widehat{B}\) is a Young function, this condition is in turn equivalent to the fact that there exists a constant \(c\) such that \[\frac{t\widehat{B}^{\prime}(t)}{\widehat{B}(t)}\leq c\quad\text{for a.e. }t>0. \tag{4.20}\] Since \(B\) is a Young function satisfying the \(\Delta_{2}\)-condition near infinity, and \(\widehat{B}(t)=B(t)\) for large \(t\), condition (4.20) certainly holds for large \(t\). On the other hand, since \[\lim_{t\to 0+}\frac{t\widehat{B}^{\prime}(t)}{\widehat{B}(t)}=\lim_{t\to 0+}\frac{t\widehat{A}^{\prime}_{n-1}(t)}{\widehat{A}_{n-1}(t)}=\frac{n-1}{n-2},\] condition (4.20) also holds for \(t\) close to \(0\). Hence, it holds for every \(t>0\). Next, consider the case when (2.10) holds. The \(\Delta_{2}\)-condition near infinity for \(B\) implies that there exist constants \(q>1\), \(t_{4}>1\) and \(c>0\) such that \(B(t)\leq ct^{q}\) for \(t\geq t_{4}\). Since \(t_{4}>1\), we may suppose, without loss of generality, that \(q>n\). 
Since \(B(t)\leq\widehat{L}(t^{q}+1)\) for \(t\geq 0\), provided that \(\widehat{L}\) is sufficiently large, the choice \(\widehat{B}(t)=\widehat{L}t^{q}\) makes inequalities (4.11) and (4.13) true. _Step 3._ Construction of \(\widehat{E}\). We define \(\widehat{E}\) analogously to \(\widehat{B}\), by replacing \(B\) with \(E\) and \(\widehat{A}_{n-1}\) with \(\widehat{A}_{n}\). The same argument as in Step 2 tells us that inequalities (4.9) and (4.10) hold for a suitable choice of the constant \(\widehat{L}\). _Step 4._ Conclusion. Since \[f(x,t,\xi)\leq B(|\xi|)+E(|t|)+L\leq\widehat{B}(|\xi|)+\widehat{E}(|t|)+B(t_{3})+E(t_{3})+L\] and \[f(x,t,\xi)\geq A(|\xi|)-E(|t|)-L\geq\widehat{A}(|\xi|)-\widehat{E}(|t|)-A(t_{1})-E(t_{3})-L,\] for a.e. \(x\in\Omega\), and for every \(t\in\mathbb{R}\) and \(\xi\in\mathbb{R}^{n}\), equation (4.6) follows, provided that \(\widehat{L}\) is chosen sufficiently large. We conclude this section by recalling the following classical lemma - see e.g. [Giu, Lemma 6.1]. **Lemma 4.4**.: _Let \(Z:[\rho,\sigma]\to[0,\infty)\) be a bounded function. Assume that there exist constants \(a,b\geq 0\), \(\alpha>0\) and \(\theta\in[0,1)\) such that_ \[Z(r)\leq\theta Z(s)+(s-r)^{-\alpha}a+b\quad\text{if }\rho\leq r<s\leq\sigma.\] _Then,_ \[Z(r)\leq c\big((s-r)^{-\alpha}a+b\big)\quad\text{if }\rho\leq r<s\leq\sigma,\] _for some constant \(c=c(\alpha,\theta)>1\)._ ## 5. Proof of Theorem 2.2 We shall limit ourselves to proving Theorem 2.2, since the content of Theorem 2.1 is just a special case of the former. A key ingredient is provided by Lemma 5.1 below. In the statement, \(\Phi_{q}:[0,\infty)\to[0,\infty)\) denotes the function defined for \(q\geq 1\) as \[\Phi_{q}(t)=\begin{cases}t&\text{if }0\leq t<1\\ t^{q}&\text{if }t\geq 1.\end{cases} \tag{5.1}\] One can verify that \[\Phi_{q}(\lambda t)\leq\lambda^{q}\Phi_{q}(t)\qquad\text{for }\lambda\geq 1\text{ and }t\geq 0. \tag{5.2}\] Moreover, given a function \(u\in W^{1}K^{A}(\mathbb{B}_{1})\), we set \[F(u,\rho,\sigma)=\int_{\mathbb{B}_{\sigma}\setminus\mathbb{B}_{\rho}}A(|u|)+A(|\nabla u|)\,dx \tag{5.3}\] for \(0<\rho<\sigma<1\). **Lemma 5.1**.: _Let \(A\) and \(B\) be Young functions and \(0<\rho<\sigma<1\)._ 1. _Suppose that condition (2.8) is in force. Assume that there exist constants_ \(L\geq 1\) _and_ \(q>1\) _such that_ (5.4) \[B(t)\leq A_{n-1}(Lt)\quad\text{and}\quad B(\lambda t)\leq\lambda^{q}B(t)\quad\text{for $t\geq 0$ and $\lambda\geq 1$.}\] _Then, for every_ \(u\in W^{1}K^{A}(\mathbb{B}_{1})\) _there exists a function_ \(\eta\in W^{1,\infty}_{0}(\mathbb{B}_{1})\) _satisfying_ (5.5) \[0\leq\eta\leq 1,\qquad\eta=1\ \text{in }\mathbb{B}_{\rho},\qquad\eta=0\ \text{in }\mathbb{B}_{1}\setminus\mathbb{B}_{\sigma},\] _and such that_ (5.6) \[\int_{\mathbb{B}_{1}}B(|u\nabla\eta(x)|)\,dx\leq c\,\Phi_{q}\bigg(\frac{\kappa F(u,\rho,\sigma)^{\frac{1}{n-1}}}{(\sigma-\rho)^{\frac{1}{n-1}}\rho}\bigg)F(u,\rho,\sigma)\] _for some constant_ \(c=c(n,q,L)\geq 1\)_. Here,_ \(\kappa\) _denotes the constant appearing in inequality (3.25)._ 2. _Suppose that condition (3.26) is in force. Assume that there exist constants_ \(L\geq 1\) _and_ \(q>n\) _such that_ (5.7) \[B(t)\leq Lt^{q}\qquad\text{ for all $t\geq 0$.}\] _Then, for every_ \(u\in W^{1}K^{A}(\mathbb{B}_{1})\) _there exists a function_ \(\eta\in W^{1,\infty}_{0}(\mathbb{B}_{1})\) _satisfying conditions (5.5), such that_ (5.8) \[\int_{\mathbb{B}_{1}}B(|u\nabla\eta(x)|)\,dx\leq\frac{c\kappa^{q}F(u,\rho,\sigma)^{\frac{q}{n-1}}}{(\sigma-\rho)^{q-1+\frac{q}{n-1}}\rho^{q-(n-1)}}\] _for some constant_ \(c=c(n,q,L)\geq 1\)_. 
Here,_ \(\kappa\) _denotes the constant appearing in inequality (_3.27_)._ Proof.: Let \(u\in W^{1}K^{A}(\mathbb{B}_{1})\). Define, for \(r\in[0,1]\), the function \(u_{r}:\mathbb{S}^{n-1}\to\mathbb{R}\) as \(u_{r}(z)=u(rz)\) for \(z\in\mathbb{S}^{n-1}\). By classical properties of restrictions of Sobolev functions to \((n-1)\)-dimensional concentric spheres, one has that \(u_{r}\) is a weakly differentiable function for a.e. \(r\in[0,1]\). Hence, by Fubini's theorem, there exists a set \(N\subset[0,1]\) such that \(|N|=0\), and \(u_{r}\in W^{1}K^{A}(\mathbb{S}^{n-1})\) for every \(r\in[0,1]\setminus N\). Set \[U_{1}=\bigg{\{}r\in[\rho,\sigma]\setminus N\,:\,\int_{\mathbb{S}^{n-1}}A(| \nabla_{\mathbb{S}}u_{r}(z)|)\,d\mathcal{H}^{n-1}(z)\leq\frac{4}{(\sigma-\rho )r^{n-1}}\int_{\mathbb{B}_{\sigma}\setminus\mathbb{B}_{\rho}}A(|\nabla u|)\, \mathrm{d}x\bigg{\}}. \tag{5.9}\] From Fubini's Theorem, the inequality \(|\nabla_{\mathbb{S}}u_{r}(z)|\leq|\nabla u(rz)|\) for \(\mathcal{H}^{n-1}\)-a.e. \(z\in\mathbb{S}^{n-1}\), and the very definition of the set \(U_{1}\) we infer that \[\int_{\mathbb{B}_{\sigma}\setminus\mathbb{B}_{\rho}}A(|\nabla u| )\,dx= \int_{\rho}^{\sigma}r^{n-1}\int_{\mathbb{S}^{n-1}}A(|\nabla u(rz)| )\,d\mathcal{H}^{n-1}(z)\,dr\] \[\geq \int_{(\rho,\sigma)\setminus U_{1}}r^{n-1}\int_{\mathbb{S}^{n-1} }A(|\nabla_{\mathbb{S}}u_{r}(z)|)\,d\mathcal{H}^{n-1}(z)\,dr\] \[> \frac{4((\sigma-\rho)-|U_{1}|)}{(\sigma-\rho)}\int_{\mathbb{B}_{ \sigma}\setminus\mathbb{B}_{\rho}}A(|\nabla u|)\,dx.\] Hence, \(|U_{1}|\geq\frac{3}{4}(\sigma-\rho)\). An analogous computation ensures that the set \[U_{2}=\bigg{\{}r\in[\rho,\sigma]\setminus N\,:\,\int_{\mathbb{S}^{n-1}}A(|u_{ r}(z)|)\,d\mathcal{H}^{n-1}(z)\leq\frac{4}{(\sigma-\rho)r^{n-1}}\int_{\mathbb{B}_{ \sigma}\setminus\mathbb{B}_{\rho}}A(|u|)\,dx\bigg{\}} \tag{5.10}\] has the property that \(|U_{2}|\geq\frac{3}{4}(\sigma-\rho)\). Thereby, if we define the set \[U=U_{1}\cap U_{2},\] then \[|U|\geq|(\rho,\sigma)|-|(\rho,\sigma)\setminus U_{1}|-|(\rho,\sigma)\setminus U _{2}|\geq\frac{1}{2}(\sigma-\rho). \tag{5.11}\] Next, define the function \(\eta:\mathbb{B}_{1}\to[0,1]\) as \[\eta(x)=\begin{cases}1&\text{if }0\leq|x|<\rho\\ \frac{1}{|U|}\int_{|x|}^{\sigma}\chi_{U}(s)\,ds&\text{if }\rho\leq|x|\leq \sigma\\ 0&\text{if }\sigma<|x|\leq 1.\end{cases}\] One has that \(0\leq\eta\leq 1\), \(\eta=1\) in \(\mathbb{B}_{\rho}\), \(\eta=0\) in \(\mathbb{B}_{1}\setminus\mathbb{B}_{\sigma}\), \(\eta\in W_{0}^{1,\infty}(\mathbb{B}_{1})\) and \[|\nabla\eta(rz)|=\begin{cases}0&\text{for a.e. }r\notin U\\ \frac{1}{|U|}&\text{for a.e. }r\in U,\end{cases} \tag{5.12}\] and for \(z\in\mathbb{S}^{n-1}\). Hence, the function \(\eta\) satisfies the properties claimed in (5.5). Next, set, for \(r\in[0,1]\setminus N\), \[F_{r}(u)=\int_{\mathbb{S}^{n-1}}A(|u_{r}(z)|)+A(|\nabla_{\mathbb{S}}u_{r}(z) |)\,d\mathcal{H}^{n-1}(z). \tag{5.13}\] By the definition of the set \(U\), \[F_{r}(u)\leq\frac{4}{(\sigma-\rho)r^{n-1}}F(u,\rho,\sigma)\quad\text{for a.e. }r\in U. \tag{5.14}\] We have now to make use of different inequalities, depending on whether we deal we case (i) or (ii). Case (i). Owing to inequality (3.1) and to the second inequality in (5.4), \[B(\lambda t)\leq\Phi_{q}(\lambda)B(t)\qquad\quad\text{for }\lambda\geq 0\text{ and }t\geq 0. 
\tag{5.15}\] The following chain holds: \[\int_{\mathbb{B}_{1}}B(|u\nabla\eta(x)|)\,dx\leq \int_{U}r^{n-1}\int_{\mathbb{S}^{n-1}}B\bigg{(}\bigg{|}\frac{2}{ (\sigma-\rho)}u_{r}(z)\bigg{|}\bigg{)}\,d\mathcal{H}^{n-1}(z)\,dr\] \[= \int_{U}r^{n-1}\int_{\mathbb{S}^{n-1}}B\bigg{(}\bigg{|}\frac{2 \kappa u_{r}(z)F_{r}(u)^{\frac{1}{n-1}}}{\kappa(\sigma-\rho)F_{r}(u)^{\frac{1 }{n-1}}}\bigg{|}\bigg{)}\,d\mathcal{H}^{n-1}(z)\,dr\] \[\leq \int_{U}r^{n-1}\Phi_{q}\bigg{(}\bigg{|}\frac{2L\kappa F_{r}(u)^{ \frac{1}{n-1}}}{(\sigma-\rho)r}\bigg{|}\bigg{)}F_{r}(u)\,dr\] \[\leq \Phi_{q}\bigg{(}\bigg{|}\frac{2L\kappa\epsilon^{\frac{1}{n-1}}F (u,\rho,\sigma)^{\frac{1}{n-1}}}{(\sigma-\rho)^{1+\frac{1}{n-1}}\rho}\bigg{|} \bigg{)}4F(u,\rho,\sigma),\] where the second inequality holds by inequality (5.15) and the first inequality in (5.4), the third inequality follows from the Sobolev inequality (3.25), and the last inequality relies upon inequality (5.14) and the fact that \(|U|\leq(\sigma-\rho)\). Clearly, inequality (5.6) follows from (5.16). Case (ii). The following chain holds: \[\int_{\mathbb{B}_{1}}B(|u\nabla\eta(x)|)\,dx\leq L\int_{U}r^{n-1}\int_{\mathbb{S}^{n-1}}\bigg{|}\frac{2}{(\sigma-\rho)}u_{r} (z)\bigg{|}^{q}\,d\mathcal{H}^{n-1}(z)\,dr\] \[\leq \frac{L2^{q}\alpha_{n}\kappa^{q}}{(\sigma-\rho)^{q}}\int_{U}r^{n- 1}F_{r}(u)^{\frac{q}{n-1}}\,dr\] \[\leq \frac{L2^{q}\alpha_{n}\kappa^{q}}{(\sigma-\rho)^{q}}\int_{U}r^{n- 1}\bigg{(}\frac{4F(u,\rho,\sigma)}{(\sigma-\rho)r^{n-1}}\bigg{)}^{\frac{q}{n-1 }}\,dr\] \[\leq \frac{L2^{q}4\frac{q}{n-1}c_{n}\kappa^{q}}{(\sigma-\rho)^{q-1+ \frac{q}{n-1}}\rho\mu^{q-(n-1)}}F(u,\rho,\sigma)^{\frac{q}{n-1}},\] where \(c_{n}\) is given by (3.29), the first inequality holds by inequality (5.7), the second one by inequality (3.27), the third one by inequality (5.14), and the last one since \(|U|\leq(\sigma-\rho)\). Inequality (5.8) follows via (5.17). We are now in a position to accomplish the proof of our main result. Proof of Theorem 2.2.: Owing to Lemma 4.3, without loss of generality we can assume that the functions \(A\), \(B\) and \(E\) also satisfy the properties stated for the functions \(\widehat{A}\), \(\widehat{B}\) and \(\widehat{E}\) in the lemma. When we refer to properties in the statement of this lemma, we shall mean that they are applied directly to \(A\), \(B\) and \(E\). In particular, \(q\) denotes the exponent appearing in the statement of the lemma. Moreover, \(Q\) is the constant from the definition of quasi-minimizer. We also assume that \(\mathbb{B}_{1}\Subset\Omega\) and prove that \(u\) is bounded in \(\mathbb{B}_{\frac{1}{2}}\). The general case follows via a standard scaling and translation argument. For ease of presentation, we split the proof in steps. _Step 1. Basic energy estimate._ Set, for \(r>0\) and \(l>0\), \[\mathcal{A}_{l,r}=\mathbb{B}_{r}\cap\{x\in\Omega\,:\,u(x)>l\} \tag{5.18}\] and \[J(l,r)=\int_{\mathbb{B}_{r}}A((u-l)_{+})+A(|\nabla(u-l)_{+}|)\,dx. \tag{5.19}\] Here, the subscript "\(+\)" stands for the positive part. 
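Let us also record an elementary monotonicity property of the quantity \(J\), which will be of use below: if \(0\leq h\leq l\), then \((u-l)_{+}\leq(u-h)_{+}\) and \(|\nabla(u-l)_{+}|=|\nabla u|\chi_{\{u>l\}}\leq|\nabla(u-h)_{+}|\) a.e. in \(\mathbb{B}_{1}\); since \(A\) is non-decreasing, this yields \[J(l,r)\leq J(h,r)\quad\text{if }l\geq h,\qquad J(l,\rho)\leq J(l,\sigma)\quad\text{if }\rho\leq\sigma.\] In particular, \(F((u-k)_{+},\rho,\sigma)\leq J(k,\sigma)\) for every \(k\geq 0\). 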
If assumption (2.8) holds, then we claim that there exists a constant \(c=c(n,q,L,Q)\geq 1\) such that \[\int_{\mathbb{B}_{\rho}}A(|\nabla(u-k)_{+}|)\,dx\leq c\bigg{(}\frac{\Phi_{q}(\kappa J(k,\sigma)^{\frac{1}{n-1}})}{( \sigma-\rho)^{\frac{q}{n-1}}}J(k,\sigma)+\int_{\mathcal{A}_{k,\sigma}}(E(|u|) +1)\,dx\bigg{)} \tag{5.20}\] for \(k\geq 0\) and \(\frac{1}{2}\leq\rho<\sigma<1\), where \(\kappa\) denotes the constant from inequality (3.25) If assumption (2.10) holds, then we claim that there exists a constant \(c=c(n,q,L,Q)\geq 1\) such that \[\int_{\mathbb{B}_{\rho}}A(|\nabla(u-k)_{+}|)\,dx\leq c\bigg{(}\frac{\kappa^{q}}{(\sigma-\rho)^{\frac{q}{n-1}}}J(k,\sigma)^{ \frac{q}{n-1}}+\int_{\mathcal{A}_{k,\sigma}}(E(|u|)+1)\,dx\bigg{)} \tag{5.21}\] for \(k\geq 0\) and \(\frac{1}{2}\leq\rho<\sigma<1\), where \(\kappa\) denotes the constant from inequality (3.27). We shall first establish inequalities (5.20) and (5.21) under assumption (2.11). Given \(k\geq 0\) and \(\frac{1}{2}\leq\rho<\sigma\leq 1\), let \(\eta\in W^{1,\infty}_{0}(\mathbb{B}_{1})\) be as in the statement of Lemma 5.1, applied with \(u\) replaced with \((u-k)_{+}\). Choose the function \(\varphi=-\eta^{q}(u-k)_{+}\) in the definition of quasi-minimizer for \(u\). From this definition and the first property in (2.11) one infers that \[\int_{\mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx \leq Q\int_{\mathcal{A}_{k,\sigma}}f(x,u+\varphi,\nabla(u+\varphi ))\,dx\] \[=Q\int_{\mathcal{A}_{k,\sigma}}f(x,u+\varphi,(1-\eta^{q})\nabla u -q\eta^{q-1}\nabla\eta(u-k))\,dx\] \[\leq Q\int_{\mathcal{A}_{k,\sigma}}(1-\eta^{q})f(x,u+\varphi, \nabla u)+\eta^{q}f\bigg{(}x,u+\varphi,-\frac{q\nabla\eta}{\eta}(u-k)\bigg{)} \,dx\,.\] Hence, since \(0\leq u+\varphi\leq u\) on \(\mathcal{A}_{k,\sigma}\), the second property in (2.11), the upper bound in (2.13), and the monotonicity of the function \(E\) ensure that \[\int_{\mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx\leq Q\int_{\mathcal{A}_{k, \sigma}}(1-\eta^{q})\big{(}Lf(x,u,\nabla u)+E(u)+L\big{)}+\eta^{q}\bigg{(}B \bigg{(}\frac{q|\nabla\eta|}{\eta}(u-k)\bigg{)}+E(u)+L\bigg{)}\,dx. \tag{5.22}\] Inasmuch as \(0\leq\eta\leq 1\) and \(\eta=1\) in \(\mathbb{B}_{\rho}\), the use of inequality (4.11) on the right-hand side of (5.22) yields: \[\int_{\mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx\leq QL\int_{\mathcal{A}_{k, \sigma}\setminus\mathbb{B}_{\rho}}f(x,u,\nabla u)\,dx+Q\int_{\mathcal{A}_{k, \sigma}}q^{q}B\big{(}|\nabla\eta|(u-k)\big{)}+E(|u|)+L\,dx. \tag{5.23}\] Now, suppose that assumption (2.8) holds. Combining inequality (5.23) with estimate (5.6) (applied to \((u-k)_{+}\)) tells us that \[\int_{\mathcal{A}_{k,\rho}}f(x,u,\nabla u)\,dx\leq QL\int_{\mathcal{A}_{k,\sigma}\setminus\mathbb{B}_{\rho}}f(x,u, \nabla u)\,dx+cQ\Phi_{q}\bigg{(}\frac{2\kappa J(k,\sigma)^{\frac{1}{n-1}}}{( \sigma-\rho)^{\frac{n}{n-1}}}\bigg{)}J(k,\sigma)+Q\int_{\mathcal{A}_{k,\sigma} }(E(u)+L)\,dx \tag{5.24}\] for some constant \(c=c(n,q,L)\geq 1\). Observe that in deriving inequality (5.24), we have exploited the inequalities \(\frac{1}{2}\leq\rho\) and \(F((u-k)_{+},\rho,\sigma)\leq J(k,\sigma)\). 
Adding the expression \(QL\int_{\mathcal{A}_{k,\rho}}f(x,u,\nabla u)\,dx\) to both sides of inequality (5.24) and using inequality (5.2) enable one to deduce that \[\int_{\mathcal{A}_{k,\rho}}f(x,u,\nabla u)\,dx\leq\frac{QL}{QL+1}\int_{ \mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx+c\bigg{(}\frac{\Phi_{q}\big{(} \kappa J(k,\sigma)^{\frac{1}{n-1}}\big{)}}{(\sigma-\rho)^{\frac{qn}{n-1}}}J(k,\sigma)+\int_{\mathcal{A}_{k,\sigma}}(E(u)+1)\,dx\bigg{)},\] for some constant \(c=c(n,q,L,Q)\geq 1\). Estimate (5.20) follows via Lemma 4.4 and the lower bound in (2.13). Assume next that assumtption (2.10) holds. Hence, the full assumption (3.26) holds, thanks to equation (4.8). One can start again from (5.23), make use of inequality (5.8), and argue as above to obtain inequality (5.21). The fact that \[\frac{1}{(\sigma-\rho)^{q-1+\frac{q}{n-1}}}\leq\frac{1}{(\sigma-\rho)^{\frac{q n}{n-1}}},\] since \(\sigma-\rho\leq 1\), is relevant in this argument. It remains to prove inequalities (5.20) and (5.21) under the alternative structure condition (2.12). Let \(\varphi\) be as above, and observe that \(u+\varphi=\eta^{q}k+(1-\eta^{q})u\) on \(\mathcal{A}_{k,\sigma}\). Hence, by property (2.12), \[\int_{\mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx\leq Q\int_{\mathcal{A}_{k,\sigma}}f(x,u+\varphi,\nabla(u+\varphi))\,dx\] \[= Q\int_{\mathcal{A}_{k,\sigma}}f\big{(}x,(1-\eta^{q})u+\eta^{q}k,(1-\eta^{q})\nabla u-q\eta^{q-1}\nabla\eta(u-k)\big{)}\,dx\] \[\leq Q\int_{\mathcal{A}_{k,\sigma}}(1-\eta^{q})f(x,u,\nabla u)+\eta^ {q}f\bigg{(}x,k,-\frac{q\nabla\eta}{\eta}(u-k)\bigg{)}\,dx.\] Thanks to assumption (2.13) and the monotonicity of \(E\), which guarantees that \(E(k)\leq E(u)\) in \(\mathcal{A}_{k,\sigma}\), we obtain that \[\int_{\mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx\leq Q\int_{\mathcal{A}_{k,\sigma}}(1-\eta^{q})f(x,u,\nabla u)+\eta^{q}(L+E(u) +B\bigg{(}\frac{q|\nabla\eta|}{\eta}(u-k)\bigg{)}\,dx. \tag{5.25}\] A replacement of inequality (5.22) with (5.25) and an analogous argument as above yields the same conclusions. _Step 2. One-step improvement._ Let us set \[c_{B}=\max\{\kappa,1\},\] where \(\kappa\) denotes a constant, depending only on \(n\), such that inequality (3.21) holds for every \(r\in[\frac{1}{2},1]\). We claim that, if \(h>0\) is such that \[c_{B}LJ(h,\sigma)^{\frac{1}{n}}\leq 1, \tag{5.26}\] then \[J(k,\rho)\leq c\bigg{(}\frac{1}{(\sigma-\rho)^{\frac{qn}{n-1}}}+\frac{1}{(k-h)^{ \frac{n}{n-1}}}+L^{\log_{2}(\frac{k}{k-h})}\bigg{)}J(h,\sigma)^{1+\frac{1}{n} }\qquad\text{if }k>h, \tag{5.27}\] for a suitable constant \(c=c(n,q,L,Q,A)\geq 1\). To this purpose, fix \(h>0\) such that inequality (5.26) holds. We begin by showing that there exists a constant \(c=c(n,L)\) such that \[|\mathcal{A}_{k,\sigma}|\leq c\frac{J(h,\sigma)^{\frac{n+1}{n}}}{(k-h)^{\frac{n}{n-1}}} \qquad\text{if }k>h. \tag{5.28}\] Inequality (5.28) is a consequence of the following chain: \[|\mathcal{A}_{k,\sigma}|A_{n}(k-h)= \int_{\mathcal{A}_{k,\sigma}}A_{n}(k-h)\,dx\leq\int_{\mathcal{A}_ {k,\sigma}}A_{n}(u-h)\,dx\] \[\leq \int_{\mathcal{A}_{k,\sigma}}A_{n}\bigg{(}\frac{c_{B}(u-h)J(h, \sigma)^{\frac{1}{n}}}{c_{B}J(h,\sigma)^{\frac{1}{n}}}\bigg{)}\,dx\leq c_{B}J(h,\sigma)^{\frac{1}{n}}\int_{\mathcal{A}_{k,\sigma}}A_{n}\bigg{(}\frac{u-h}{c_{B }J(h,\sigma)^{\frac{1}{n}}}\bigg{)}\,dx. \tag{5.29}\] Notice that the last inequality holds thanks to inequality (3.1), applied with \(A\) replaced with \(A_{n}\), and to assumption (5.26). 
Coupling inequality (5.29) with inequality (3.21) enables us to deduce that \[|\mathcal{A}_{k,\sigma}|\leq \frac{c_{B}J(h,\sigma)^{\frac{n+1}{n}}}{A_{n}(k-h)}.\] Hence inequality (5.28) follows, via (4.7). Next, by the monotonicity of \(E\) and assumption (4.9), \[\int_{\mathcal{A}_{k,\sigma}}E(u)\,dx= \int_{\mathcal{A}_{k,\sigma}}E((u-k)+k)\,dx\leq\int_{\mathcal{A}_{ k,\sigma}}E(2(u-k))+E(2k)\,dx\] \[\leq L\int_{\mathcal{A}_{k,\sigma}}E(u-k)+E(k)\,dx\quad\text{for }k>0. \tag{5.30}\] From inequality (3.1) applied to \(A_{n}\) and assumption (5.26) one infers that \[\int_{\mathcal{A}_{k,\sigma}}E(u-k)\,dx\leq \int_{\mathcal{A}_{k,\sigma}}E(u-h)\,dx\leq\int_{\mathcal{A}_{k, \sigma}}A_{n}(L(u-h))\,dx\] \[\leq c_{B}LJ(h,\sigma)^{\frac{1}{n}}\int_{\mathcal{A}_{h,\sigma}}A_{n} \bigg{(}\frac{u-h}{c_{B}J(h,\sigma)^{\frac{1}{n}}}\bigg{)}\,dx\leq c_{B}LJ(h, \sigma)^{1+\frac{1}{n}}\quad\text{if }k>h. \tag{5.31}\] Owing to assumption (4.9) and chain (5.31), \[\int_{\mathcal{A}_{k,\sigma}}E(k)= E\bigg{(}\frac{k}{k-h}(k-h)\bigg{)}|\mathcal{A}_{k,\sigma}|\leq E\bigg{(}2^{\lfloor\log_{2}\frac{k}{k-h} \rfloor+1}(k-h)\bigg{)}|\mathcal{A}_{k,\sigma}|\] \[\leq L^{\log_{2}(\frac{k}{k-h})+1}E(k-h)|\mathcal{A}_{k,\sigma}|\leq L ^{\log_{2}(\frac{k}{k-h})+1}\int_{\mathcal{A}_{h,\sigma}}E(u-h)\,dx\] \[\leq L^{\log_{2}(\frac{k}{k-h})+1}c_{B}LJ(h,\sigma)^{1+\frac{1}{n}} \quad\text{if }k>h, \tag{5.32}\] where \(\lfloor\,\cdot\,\rfloor\) stands for integer part. Combining inequalities (5.30)-(5.32) yields: \[\int_{\mathcal{A}_{k,\sigma}}E(u)\,dx\leq cL^{\log_{2}(\frac{k}{k-h})}J(h, \sigma)^{\frac{n+1}{n}}\quad\text{if }k>h, \tag{5.33}\] for some constant \(c=c(n,L)\). From this point, the argument slightly differs depending on whether condition (2.8) or (3.26) holds. Assume first that (2.8) is in force. Assumption (5.26) implies that there exists a constant \(c=c(n,q,L)\) such that \[\Phi_{q}(\kappa J(k,\sigma)^{\frac{1}{n-1}})\leq cJ(k,\sigma)^{\frac{1}{n-1}} \quad\text{if }k>h, \tag{5.34}\] where \(\kappa\) is the constant from inequality (3.25). Making use of inequalities (5.28), (5.33) and (5.34) to estimate the right-hand side of (5.20) results in the following bound for its left-hand side: \[\int_{\mathbb{B}_{\rho}}A(|\nabla(u-k)_{+}|)\,dx \leq c\Bigg{(}\frac{J(h,\sigma)^{\frac{n}{n-1}}}{(\sigma-\rho)^{ \frac{n}{n-1}}}+\frac{J(h,\sigma)^{\frac{n+1}{n-1}}}{(k-h)^{\frac{n}{n-1}}}+L^ {\log_{2}(\frac{k}{k-h})}J(h,\sigma)^{\frac{n+1}{n}}\Bigg{)}\] \[\leq c^{\prime}\Bigg{(}\frac{1}{(\sigma-\rho)^{\frac{n}{n-1}}}+ \frac{1}{(k-h)^{\frac{n}{n-1}}}+L^{\log_{2}(\frac{k}{k-h})}\Bigg{)}J(h,\sigma) ^{\frac{n+1}{n}}\quad\text{if }k>h, \tag{5.35}\] for suitable constants \(c=c(n,q,L,Q)\geq 1\) and \(c^{\prime}=c^{\prime}(n,q,L,Q)\geq 1\). From inequality (4.1) we infer that \[\int_{\mathbb{B}_{\rho}}A((u-k)_{+})\,dx\leq\int_{\mathbb{B}_{\rho}}A_{n}((u-k) _{+})\,dx+c|\mathcal{A}_{k,\rho}|\leq\int_{\mathbb{B}_{\sigma}}A_{n}((u-h)_{+} )\,dx+c|\mathcal{A}_{k,\sigma}|\quad\text{if }k>h, \tag{5.36}\] for some constant \(c=c(n,A)\). A combination of the latter inequality with (5.28) and (5.29) tells us that \[\int_{\mathbb{B}_{\rho}}A((u-k)_{+})\,dx\leq cJ(h,\sigma)^{1+\frac{1}{n}}+c \frac{J(h,\sigma)^{1+\frac{1}{n}}}{(k-h)^{\frac{n}{n-1}}}\quad\text{if }k>h, \tag{5.37}\] for some constant \(c=c(n,L,A)\). Coupling inequaliy (5.35) with (5.37) yields (5.27). Assume now that condition (3.26) holds. 
Assumption (5.26) and the inequality \(q>n\) guarantee that there exists a constant \(c=c(n,q,L)\) such that \[J(k,\sigma)^{\frac{q}{n-1}}\leq cJ(k,\sigma)^{\frac{n+1}{n}}\quad\text{if }k>h. \tag{5.38}\] From inequalities (5.28), (5.33) and (5.38) one obtains (5.35) also in this case. Inequality (5.27) again follows via (5.35) and (5.37). _Step 3. Iteration._ Given \(K\geq 1\) and \(\ell\in\mathbb{N}\cup\{0\}\), set \[k_{\ell}=K(1-2^{-(\ell+1)}),\quad\sigma_{\ell}=\frac{1}{2}+\frac{1}{2^{\ell+2}},\quad\text{and}\quad J_{\ell}=J(k_{\ell},\sigma_{\ell}). \tag{5.39}\] Thanks to inequality (5.27), if \(\ell\in\mathbb{N}\) is such that \[c_{B}LJ_{\ell}^{\frac{1}{n}}\leq 1, \tag{5.40}\] then \[J_{\ell+1}\leq c\bigg{(}2^{\ell\frac{q}{n-1}}+K^{-\frac{n}{n-1}}2^{\ell\frac{n}{n -1}}+L^{\ell}\bigg{)}J_{\ell}^{1+\frac{1}{n}} \tag{5.41}\] for a suitable constant \(c=c(n,q,L,Q,A)\geq 1\). Clearly, inequality (5.41) implies that \[J_{\ell+1}\leq c_{2}2^{\gamma\ell}J_{\ell}^{1+\frac{1}{n}} \tag{5.42}\] where \(\gamma=\max\{q\frac{n}{n-1},\log_{2}L\}\) and \(c_{2}=c_{2}(n,q,L,Q,A)\geq 1\) is a suitable constant. Let \(\tau=\tau(n,q,L,Q,A)\in(0,1)\) be such that \[c_{2}2^{\gamma}\tau^{\frac{1}{n}}=1. \tag{5.43}\] Set \[\varepsilon_{0}=\min\{(c_{B}L)^{-n},\tau^{n}\}.\] We claim that, if \[J_{0}\leq\varepsilon_{0}, \tag{5.44}\] then \[J_{\ell}\leq\tau^{\ell}J_{0}\qquad\text{for every $\ell\in\mathbb{N}\cup\{0\}$}. \tag{5.45}\] We prove this claim by induction. The case \(\ell=0\) is trivial. Suppose that inequality (5.45) holds for some \(\ell\in\mathbb{N}\). Assumption (5.44) entails that \[c_{B}LJ_{\ell}^{\frac{1}{n}}\leq c_{B}L(\tau^{\ell}J_{0})^{\frac{1}{n}}\leq c _{B}L\varepsilon_{0}^{\frac{1}{n}}\leq 1.\] Therefore, thanks to equations (5.42), (5.45), and (5.43), \[J_{\ell+1}\leq c_{2}2^{\gamma\ell}J_{\ell}^{1+\frac{1}{n}}\leq c_{2}(2^{\gamma} \tau^{\frac{1}{n}})^{\ell}J_{0}^{\frac{1}{n}}(\tau^{\ell}J_{0})\leq c_{2}^{1- \ell}\varepsilon_{0}^{\frac{1}{n}}\tau^{\ell}J_{0}\leq\tau^{\ell+1}J_{0}. \tag{5.46}\] Notice that the last inequality holds thanks to the inequalities \(c_{2}\geq 1\), \(\ell\geq 1\), and \(\varepsilon_{0}\leq\tau^{n}\). Inequality (5.45), with \(\ell\) replaced with \(\ell+1\), follows from (5.46). _Step 4. Assumption (5.44) holds for large \(K\)._ Since \[J_{0}=J(K/2,\mathbb{B}_{\frac{3}{4}}),\] inequality (5.44) will follow, for sufficiently large \(K\), if we show that \[\lim_{k\to\infty}J(k,\mathbb{B}_{\frac{3}{4}})=0. \tag{5.47}\] Inasmuch as \(u\in V^{1}_{\mathrm{loc}}K^{A}(\Omega)\), from inclusion (3.20) we infer that \(\lim_{k\to\infty}|\mathcal{A}_{k,\frac{3}{4}}|=0\). Hence, the dominated convergence theorem guarantees that \[\lim_{k\to\infty}\int_{\mathbb{B}_{\frac{3}{4}}}A(|\nabla(u-k)_{+}|)\,dx=\lim _{k\to\infty}\int_{\mathcal{A}_{k,\frac{3}{4}}}A(|\nabla(u-k)_{+}|)\,dx=0. \tag{5.48}\] It thus suffices to show that \[\lim_{k\to\infty}\int_{\mathbb{B}_{\frac{3}{4}}}A(|(u-k)_{+}|)\,dx=0. \tag{5.49}\] To this purpose, note that, by inequality (4.1) and the monotonicity of \(A_{n}\), \[\int_{\mathbb{B}_{\frac{3}{4}}}A(|(u-k)_{+}|)\,dx\leq c|\mathcal{A}_{k,\frac{3 }{4}}|+\int_{\mathbb{B}_{\frac{3}{4}}}A_{n}(|(u-k)_{+}|)\,dx \tag{5.50}\] \[\leq c|\mathcal{A}_{k,\frac{3}{4}}|+\int_{\mathbb{B}_{\frac{3}{4}}}A_{n}\! \left(2\bigg{|}(u-k)_{+}-\fint_{\mathbb{B}_{\frac{3}{4}}}(u-k)_{+}dy\bigg{|} \right)dx+\int_{\mathbb{B}_{\frac{3}{2}}}A_{n}\!\left(2\bigg{|}\fint_{\mathbb{B }_{\frac{3}{4}}}(u-k)_{+}dy\bigg{|}\right)dx\] for some constant \(c=c(n,A)\). 
Moreover, \[\lim_{k\to\infty}|\mathcal{A}_{k,\frac{3}{4}}|=0,\] and \[\lim_{k\to\infty}\int_{\mathbb{B}_{\frac{3}{4}}}A_{n}\!\left(2\bigg|\fint_{\mathbb{B}_{\frac{3}{4}}}(u-k)_{+}\,dy\bigg|\right)dx\leq\lim_{k\to\infty}|\mathbb{B}_{\frac{3}{4}}|A_{n}\!\left(\frac{2\|(u-k)_{+}\|_{L^{1}(\mathbb{B}_{\frac{3}{4}})}}{|\mathbb{B}_{\frac{3}{4}}|}\right)=0.\] It remains to prove that the second addend on the rightmost side of chain (5.50) vanishes when \(k\to\infty\). Thanks to the limit in (5.48), for every \(\delta>0\) there exists \(k_{\delta}\in\mathbb{N}\) such that \[\int_{\mathbb{B}_{\frac{3}{4}}}A(|\nabla(u-k)_{+}|)\,dx\leq\delta\qquad\text{if }k\geq k_{\delta}. \tag{5.51}\] Choose \(\delta\) in (5.51) such that \(2c_{B}\delta^{\frac{1}{n}}\leq 1\). Property (3.1) applied to \(A_{n}\), and the Sobolev-Poincare inequality in Orlicz spaces (3.19) applied to the function \((u-k)_{+}\) ensure that, if \(k>k_{\delta}\), then \[\int_{\mathbb{B}_{\frac{3}{4}}}A_{n}\!\left(2\bigg|(u-k)_{+}-\fint_{\mathbb{B}_{\frac{3}{4}}}(u-k)_{+}\bigg|\right)dx\leq 2c_{B}\delta^{\frac{1}{n}}\int_{\mathbb{B}_{\frac{3}{4}}}A_{n}\!\left(\frac{|(u-k)_{+}-\fint_{\mathbb{B}_{\frac{3}{4}}}(u-k)_{+}dy|}{c_{B}\big(\int_{\mathbb{B}_{\frac{3}{4}}}A(|\nabla(u-k)_{+}|)dy\big)^{\frac{1}{n}}}\right)dx\leq 2c_{B}\delta^{\frac{1}{n}}\int_{\mathbb{B}_{\frac{3}{4}}}A(|\nabla(u-k)_{+}|)\,dx.\] Since the last integral tends to \(0\) as \(k\to\infty\), equation (5.49) is established. _Step 5. Conclusion._ Inequality (5.45) tells us that \(\inf_{\ell\in\mathbb{N}}J_{\ell}=0\). Hence, from the definitions of \(J_{\ell}\) and \(J(h,\sigma)\) we deduce that \[\int_{\mathbb{B}_{\frac{1}{2}}}A((u-K)_{+})\,dx\leq J(K,\mathbb{B}_{\frac{1}{2}})\leq\inf_{\ell\in\mathbb{N}}J_{\ell}=0.\] Therefore, \(u\leq K\) a.e. in \(\mathbb{B}_{\frac{1}{2}}\). In order to prove a parallel lower bound for \(u\), observe that the function \(-u\) is a quasi-minimizer of the functional defined as in (1.1), with the integrand \(f\) replaced with the integrand \(\widetilde{f}\) given by \[\widetilde{f}(x,t,\xi)=f(x,-t,-\xi)\quad\text{ for }(x,t,\xi)\in\Omega\times\mathbb{R}\times\mathbb{R}^{n}.\] The structure conditions (2.11) and (2.12) and the growth condition (2.13) on the function \(f\) are inherited by the function \(\widetilde{f}\). An application of the above argument to the function \(-u\) then tells us that there exists a constant \(K^{\prime}>0\) such that \(-u\leq K^{\prime}\) a.e. in \(\mathbb{B}_{\frac{1}{2}}\). The proof is complete. ## Compliance with Ethical Standards **Funding**. This research was partly funded by: (i) GNAMPA of the Italian INdAM - National Institute of High Mathematics (grant number not available) (A. Cianchi); (ii) Research Project of the Italian Ministry of Education, University and Research (MIUR) Prin 2017 "Direct and inverse problems for partial differential equations: theoretical aspects and applications", grant number 201758MTR2 (A. Cianchi). **Conflict of Interest**. The authors declare that they have no conflict of interest.
2304.00045
PyQBench: a Python library for benchmarking gate-based quantum computers
We introduce PyQBench, an innovative open-source framework for benchmarking gate-based quantum computers. PyQBench can benchmark NISQ devices by verifying their capability of discriminating between two von Neumann measurements. PyQBench offers a simplified, ready-to-use, command line interface (CLI) for running benchmarks using a predefined parametrized Fourier family of measurements. For more advanced scenarios, PyQBench offers a way of employing user-defined measurements instead of predefined ones.
Konrad Jałowiecki, Paulina Lewandowska, Łukasz Pawela
2023-03-31T18:02:43Z
http://arxiv.org/abs/2304.00045v1
# PyQBench: a Python library for benchmarking gate-based quantum computers ###### Abstract We introduce PyQBench, an innovative open-source framework for benchmarking gate-based quantum computers. PyQBench can benchmark NISQ devices by verifying their capability of discriminating between two von Neumann measurements. PyQBench offers a simplified, ready-to-use, command line interface (CLI) for running benchmarks using a predefined parametrized Fourier family of measurements. For more advanced scenarios, PyQBench offers a way of employing user-defined measurements instead of predefined ones. keywords: Quantum computing, Benchmarking quantum computers, Discrimination of quantum measurements, Discrimination of von Neumann measurements, Open-source, Python programming Pacs: 03.67.-a, 03.67.Lx Msc: 81P68 Footnote †: journal: SoftwareX ## Current code version \begin{table} \begin{tabular}{|c|l|l|} \hline C1 & Current code version & 0.1.1 \\ \hline C2 & Permanent link to code/repository used for this code version & [https://github.com/iitis/PyQBench](https://github.com/iitis/PyQBench) \\ \hline C3 & Code Ocean compute capsule & [https://codeocean.com/capsule/89088992-9a27-4712-8525-d92a9b23060f/tree](https://codeocean.com/capsule/89088992-9a27-4712-8525-d92a9b23060f/tree) \\ \hline C4 & Legal Code License & Apache License 2.0 \\ \hline C5 & Code versioning system used & git \\ \hline C6 & Software code languages, tools, and services used & Python, Qiskit, AWS Braket \\ \hline C7 & Compilation requirements, operating environments \& dependencies & Python >= 3.8 \\ & & numpy ~= 1.22.0 \\ & & scipy ~= 1.7.0 \\ & & pandas ~= 1.5.0 \\ & & amazon-braket ~= 1.11.1 \\ & & pydantic ~= 1.9.1 \\ & & qiskit ~= 0.37.2 \\ & & mthree ~= 1.1.0 \\ & & tqdm ~= 4.64.1 \\ & & pyyaml ~= 6.0 \\ & & qiskit-braket-provider ~= 0.0.3 \\ \hline C8 & If available, link to developer documentation/manual & [https://pyqbench.readthedocs.io/en/latest/](https://pyqbench.readthedocs.io/en/latest/) \\ \hline C9 & Support email for questions & [email protected] \\ \hline \end{tabular} \end{table} Table 1: Code metadata ## 1 Motivation and significance Noisy Intermediate-Scale Quantum (NISQ) [1] devices are storming the market, with a wide selection of devices based on different architectures and accompanying software solutions. Among hardware providers offering public access to their gate-based devices, one could mention Rigetti [2], IBM [3], Oxford Quantum Group [4], IonQ [5] or Xanadu [6]. Other vendors offer devices operating in different paradigms. Notably, one could mention D-Wave [7] and their quantum annealers, or QuEra devices [8] based on neutral atoms. Most vendors provide their own software stack and application programming interface for accessing their devices. To name a few, Rigetti's computers are available through their Forest SDK [9] and PyQuil library [10], and IBM Q [3] 
To answer this question, one has to devise a methodology for benchmarking them. For gate-based computers, on which this paper focuses, there already exist several approaches. One could mention randomized benchmarking [16; 17; 18; 19; 20] or benchmarks based on the quantum volume [21; 22; 23]. In this paper, we introduce a different approach to benchmarking gate-based devices with a simple operational interpretation. In our method, we test how good the given device is at guessing which of two known von Neumann measurements was performed during the experiment. We implemented our approach in an open-source Python library called PyQBench. The library supports any device available through the Qiskit library, and thus can be used with providers such as IBM Q or Amazon Braket. Along with the library, the PyQBench package contains a command line tool for running the most common benchmarking scenarios.

## 2 Existing benchmarking methodologies and software

Unsurprisingly, PyQBench is not the only software package for benchmarking gate-based devices. While we believe that our approach has significant benefits over other benchmarking techniques, for completeness, in this section we discuss some of the currently available similar software.

Probably the simplest benchmarking method one could devise is simply running known algorithms and comparing outputs with the expected ones. Analyzing the frequency of correct outputs, or the deviation between the actual and expected output distributions, then provides a metric of the performance of a given device. Libraries such as Munich Quantum Toolkit (MQT) [24; 25] or SupermarQ [26; 27] contain benchmarks leveraging multiple algorithms, such as Shor's algorithm or Grover's algorithm. Despite being intuitive and easily interpretable, such benchmarks may have some problems. Most importantly, they assess the usefulness of a quantum device only for a very particular algorithm, and it might be hard to extrapolate their results to other algorithms and applications. For instance, the inability of a device to consistently find factorizations using Shor's algorithm does not tell anything about its usefulness in Variational Quantum Algorithms.

Another possible approach to benchmarking quantum computers is randomized benchmarking. In this approach, one samples circuits to be run from some predefined set of gates (e.g. from the Clifford group) and tests how much the output distribution obtained from the device running these circuits differs from the ideal one. It is also common to concatenate randomly chosen circuits with their inverses (which should yield the identity circuit) and run those concatenated circuits on the device. Libraries implementing this approach include Qiskit [28] or PyQuil [29].

Another quantity used for benchmarking NISQ devices is quantum volume. The quantum volume characterizes the capacity of a device for solving computational problems. It takes into account multiple factors like the number of qubits, connectivity and measurement errors. The Qiskit library allows one to measure the quantum volume of a device by using its qiskit.ignis.verification.quantum_volume module. Other implementations of Quantum Volume can be found as well, see e.g. [30].

## 3 Preliminaries and discrimination scheme approach

In this section we describe how the benchmarking process in PyQBench works. To do so, we first discuss the necessary mathematical preliminaries.
Then, we present the general form of the discrimination scheme used in PyQBench and practical considerations on how to implement it, taking into account the limitations of current NISQ devices.

### Mathematical preliminaries

Let us first recall the definition of a von Neumann measurement, which is the only type of measurement used in PyQBench. A von Neumann measurement \(\mathcal{P}\) is a collection of rank-one projectors \(\{|u_{0}\rangle\langle u_{0}|,\ldots,|u_{d-1}\rangle\langle u_{d-1}|\}\), called effects, that sum up to identity, i.e. \(\sum_{i=0}^{d-1}|u_{i}\rangle\langle u_{i}|=\mbox{1l}\). If \(U\) is a unitary matrix of size \(d\), one can construct a von Neumann measurement \(\mathcal{P}_{U}\) by taking projectors onto its columns. In this case we say that \(\mathcal{P}_{U}\) is described by the matrix \(U\). Typically, NISQ devices can only perform measurements in the computational \(Z\)-basis, i.e. \(U=\mbox{1l}\). To implement an arbitrary von Neumann measurement \(\mathcal{P}_{U}\), one has to first apply \(U^{\dagger}\) to the measured system and then follow with a \(Z\)-basis measurement. This process, depicted in Fig. 1, can be viewed as performing a change of the basis in which the measurement is carried out, prior to measuring in the computational basis.

### Discrimination scheme

Benchmarks in PyQBench work by experimentally determining the probability of correct discrimination between two von Neumann measurements by the device under test and comparing the result with the ideal, theoretical predictions. Without loss of generality1, we consider the discrimination task between a single-qubit measurement \(\mathcal{P}_{\mathbf{1}}\), performed in the computational Z-basis, and an alternative measurement \(\mathcal{P}_{U}\) performed in the basis \(U\). Note, however, that the discrimination scheme described below works regardless of the dimensionality of the system, see [31] for details.

Footnote 1: Explaining why we can consider only the discrimination scheme between \(\mathcal{P}_{\mathbf{1}}\) and \(\mathcal{P}_{U}\) is beyond the scope of this paper. See [31] for an in-depth explanation.

In general, the discrimination scheme presented in Fig. 2 requires an auxiliary qubit. First, the joint system is prepared in some state \(|\psi_{0}\rangle\). Then, one of the measurements, either \(\mathcal{P}_{U}\) or \(\mathcal{P}_{\mathbf{1}}\), is performed on the first part of the system. Based on its outcome \(i\), we choose another POVM \(\mathcal{P}_{V_{i}}\) and perform it on the second qubit, obtaining outcome \(j\). Finally, if \(j=0\), we say that the performed measurement was \(\mathcal{P}_{U}\), otherwise we say that it was \(\mathcal{P}_{\mathbf{1}}\). Naturally, we need to repeat the same procedure multiple times for both measurements to obtain a reliable estimate of the underlying probability distribution. In PyQBench, we assume that the experiment is repeated the same number of times for both \(\mathcal{P}_{U}\) and \(\mathcal{P}_{\mathbf{1}}\).

Unsurprisingly, both the \(|\psi_{0}\rangle\) and the final measurements \(\mathcal{P}_{V_{i}}\) have to be chosen specifically for a given \(U\) to maximize the probability of a correct guess. A detailed description of how these choices are made can be found in [32]; for now we will focus only on how this scheme can be implemented on actual devices, assuming that all the components are known.
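To make these objects concrete, the short NumPy sketch below (our own illustration, not part of PyQBench's API) builds the effects of \(\mathcal{P}_{U}\) from the columns of a unitary and checks both the completeness relation and the change-of-basis implementation of Fig. 1.

```
import numpy as np

def effects(U):
    # rank-one effects |u_i><u_i| of P_U: projectors onto the columns of U
    return [np.outer(U[:, i], U[:, i].conj()) for i in range(U.shape[1])]

def outcome_probabilities(state, U):
    # Born-rule probabilities of the outcomes of P_U on a pure state
    return np.array([np.real(state.conj() @ E @ state) for E in effects(U)])

rng = np.random.default_rng(1234)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)                      # a random single-qubit unitary
state = rng.normal(size=2) + 1j * rng.normal(size=2)
state /= np.linalg.norm(state)

# the effects sum up to identity ...
assert np.allclose(sum(effects(U)), np.eye(2))
# ... and measuring P_U equals applying U^dagger followed by a Z-basis measurement
assert np.allclose(outcome_probabilities(state, U), np.abs(U.conj().T @ state) ** 2)
```

The second assertion is exactly the change-of-basis trick of Fig. 1 that all circuits discussed in the following sections rely on.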
Figure 1: Implementation of a von Neumann measurement using a measurement in the computational basis. The upper circuit shows a symbolic representation of a von Neumann measurement \(\mathcal{P}_{U}\). The bottom, equivalent circuit depicts its decomposition into a change of basis followed by a measurement in the \(Z\) basis.

Figure 2: Theoretical scheme of discrimination between von Neumann measurements \(\mathcal{P}_{U}\) and \(\mathcal{P}_{\mathbf{1}}\).

#### 3.2.1 Implementation of the discrimination scheme on actual NISQ devices

Current NISQ devices are unable to perform conditional measurements, which is the biggest obstacle to implementing our scheme on real hardware. However, we circumvent this problem by slightly adjusting our scheme so that it only uses components available on current devices. For this purpose, we use two possible options: a postselection or a direct sum \(V_{0}^{\dagger}\oplus V_{1}^{\dagger}\).

**Scheme 1** (Postselection). The first idea uses a postselection scheme. In the original scheme, we measure the first qubit and only then determine which measurement should be performed on the second one. Instead of making this choice, we can run two circuits, one with \(\mathcal{P}_{V_{0}}\) and one with \(\mathcal{P}_{V_{1}}\), and measure both qubits. We then discard the results of the circuit for which the label \(i\) does not match the measurement label \(k\). Hence, the circuit for postselection looks as depicted in Fig. 3.

Figure 3: A schematic representation of the setup for distinguishing measurements \(\mathcal{P}_{U}\) and \(\mathcal{P}_{\mathbf{1}}\) using the postselection approach. In the postselection scheme, one runs such circuits for both \(k=0,1\) and discards the results for cases when there is a mismatch between \(k\) and \(i\).

To perform the benchmark, one needs to run multiple copies of the postselection circuit, with both \(\mathcal{P}_{U}\) and \(\mathcal{P}_{\mathbf{1}}\). Each circuit has to be run in both variants, one with the final measurement \(\mathcal{P}_{V_{0}}\) and the second with the final measurement \(\mathcal{P}_{V_{1}}\). The experiments can thus be grouped into classes identified by tuples of the form \((\mathcal{Q},k,i,j)\), where \(\mathcal{Q}\in\{\mathcal{P}_{U},\mathcal{P}_{\mathbf{1}}\}\) denotes the chosen measurement, \(k\in\{0,1\}\) designates the final measurement used, and \(i\in\{0,1\}\) and \(j\in\{0,1\}\) are the labels of the outcomes as presented in Fig. 3. We then discard all the experiments for which \(i\neq k\). The total number of valid experiments is thus:

\[N_{\text{total}}=\#\{(\mathcal{Q},k,i,j):k=i\}. \tag{1}\]

Finally, we count the valid experiments resulting in successful discrimination. If we have chosen \(\mathcal{P}_{U}\), then we guess correctly iff \(j=0\). Similarly, for \(\mathcal{P}_{\mathbf{1}}\), we guess correctly iff \(j=1\). If we define

\[N_{\mathcal{P}_{U}}=\#\{(\mathcal{Q},k,i,j):\mathcal{Q}=\mathcal{P}_{U},k=i,j=0\}, \tag{2}\]
\[N_{\mathcal{P}_{\mathbf{1}}}=\#\{(\mathcal{Q},k,i,j):\mathcal{Q}=\mathcal{P}_{\mathbf{1}},k=i,j=1\}, \tag{3}\]

then the empirical success probability can be computed as

\[p_{\text{succ}}(\mathcal{P}_{U},\mathcal{P}_{\mathbf{1}})=\frac{N_{\mathcal{P}_{U}}+N_{\mathcal{P}_{\mathbf{1}}}}{N_{\text{total}}}. \tag{4}\]

The \(p_{\text{succ}}\) is the quantity reported to the user as the result of the benchmark.

**Scheme 2** (Direct sum). The second idea uses the direct sum \(V_{0}^{\dagger}\oplus V_{1}^{\dagger}\) implementation. Here, instead of performing a conditional measurement \(\mathcal{P}_{V_{k}}\), where \(k\in\{0,1\}\), we run the circuits presented in Fig. 4.
One can see why such a circuit is equivalent to the original discrimination scheme. If we rewrite the block-diagonal matrix \(V_{0}^{\dagger}\oplus V_{1}^{\dagger}\) as follows: \[V_{0}^{\dagger}\oplus V_{1}^{\dagger}=|0\rangle\langle 0|\otimes V_{0}^{ \dagger}+|1\rangle\langle 1|\otimes V_{1}^{\dagger}, \tag{5}\] we can see that the direct sum in Eq. (5) commutes with the measurement on the first qubit. Thanks to this, we can switch the order of operations to obtain the circuit from Fig. 5. Now, depending on the outcome \(i\), one of the summands in Eq. (5) vanishes, and we end up performing exactly the same operations as in the original scheme. In this scheme, the experiment can be characterized by a pair \((\mathcal{Q},i,j)\), where \(\mathcal{Q}=\{\mathcal{P}_{U},\mathcal{P_{\mathbf{1}}}\}\) and \(i,j\in\{0,1\}\) are the output labels. The number of successful trials for \(U\) and \(1\!\!1\), respectively, can be written as \[N_{\mathcal{P}_{U}} =\#\{(\mathcal{Q},i,j):\mathcal{Q}=\mathcal{P}_{U},j=0\}, \tag{6}\] \[N_{\mathcal{P_{\mathbf{1}}}} =\#\{(\mathcal{Q},i,j):\mathcal{Q}=\mathcal{P_{\mathbf{1}}},j=1\}. \tag{7}\] Then, the probability of correct discrimination between \(\mathcal{P}_{U}\) and \(\mathcal{P_{\mathbf{1}}}\) is given by \[p_{\mathrm{succ}}=\frac{N_{\mathcal{P}_{U}}+N_{\mathcal{P_{\mathbf{1}}}}}{N_{ \mathrm{total}}}, \tag{8}\] where \(N_{\mathrm{total}}\) is the number of trials. #### 3.2.2 Importance of choosing the optimal discrimination scheme In principle, the schemes described in the previous section could be used with any choice of \(|\psi_{0}\rangle\) and final measurements \(\mathcal{P}_{V_{i}}\). However, we argue that it is best to choose those components in such a way that they maximize the probability of correct discrimination. To see that, suppose that some choice of \(|\psi_{0}\rangle,\mathcal{P}_{V_{0}},\mathcal{P}_{V_{1}}\) yields the theoretical upper bound of discriminating between two measurements of one, i.e. on a perfect quantum computer you will always make a correct guess. Then, on real hardware, we might obtain any empirical value in range \(\left[\frac{1}{2},1\right]\). On the other hand, if we choose the components of our scheme such that the successful discrimination probability is only \(\frac{3}{5}\), the possible range of empirically obtainable probabilities is only \(\left[\frac{1}{2},\frac{3}{5}\right]\). Hence, in the second case, the discrepancy between theoretical and empirical results will be less pronounced. #### 3.2.3 Constructing optimal discrimination scheme To construct the optimal discrimination scheme, one starts by calculating the probability of correct discrimination. Using the celebrated result by Helstrom [33], one finds that the optimal probability of correct discrimination between two quantum measurements, \(\mathcal{P}\) and \(\mathcal{Q}\), is \[p_{\mathrm{succ}}(\mathcal{P},\mathcal{Q})=\frac{1}{2}+\frac{1}{4}\|\mathcal{P} -\mathcal{Q}\|_{\diamond}, \tag{9}\] where \[\|\mathcal{P}-\mathcal{Q}\|_{\diamond}=\max_{\|\psi\rangle\|_{1}=1}\|\left(( \mathcal{P}-\mathcal{Q})\otimes\openone\right)(|\psi\rangle\langle\psi|)\|_{ 1}. \tag{10}\] The quantum state \(|\psi_{0}\rangle\) maximizing the diamond norm above is called the _discriminator_, and can be computed e.g. using semidefinite programming (SDP) [32; 34]. Furthermore, using the proof of the Holevo-Helstrom theorem, it is possible to construct corresponding unitaries \(V_{0}\), \(V_{1}\) to create the optimal discrimination strategy. 
For brevity, we do not describe this procedure here. Instead, we refer the interested reader to [32]. ## 4 Discrimination scheme for parameterized Fourier family and implementation So far, we only discussed how the discrimination is performed assuming that all needed components \(|\psi_{0}\rangle\), \(V_{0}\), and \(V_{1}\) are known. In this section, we provide a concrete example using parametrized Fourier family of measurements. The parametrized Fourier family of measurements is defined as a set of the measurements \(\{\mathcal{P}_{U_{\phi}}\colon\phi\in[0,2\pi]\}\), where \[U_{\phi}=H\left(\begin{array}{cc}1&0\\ 0&e^{i\phi}\end{array}\right)H^{\dagger}, \tag{11}\] and \(H\) is the Hadamard matrix of dimension two. For each element of this set, the discriminator is a Bell state: \[|\psi_{0}\rangle=\frac{1}{\sqrt{2}}\left(|00\rangle+|11\rangle\right). \tag{12}\] Observe that \(|\psi_{0}\rangle\) does not depend on the angle \(\phi\). However, the unitaries \(V_{0}\), \(V_{1}\) depend on \(\phi\) and take the following form: \[V_{0}=\left(\begin{array}{cc}i\sin\left(\frac{\pi-\phi}{4}\right)&-i\cos \left(\frac{\pi-\phi}{4}\right)\\ \cos\left(\frac{\pi-\phi}{4}\right)&\sin\left(\frac{\pi-\phi}{4}\right)\end{array} \right), \tag{13}\] \[V_{1}=\left(\begin{array}{cc}-i\cos\left(\frac{\pi-\phi}{4}\right)&i\sin \left(\frac{\pi-\phi}{4}\right)\\ \sin\left(\frac{\pi-\phi}{4}\right)&\cos\left(\frac{\pi-\phi}{4}\right)\end{array} \right). \tag{14}\] Finally, the theoretical probability of correct discrimination between von Neumann measurements \(\mathcal{P}_{U_{\phi}}\) and \(\mathcal{P_{\mathbf{1}}}\) is given by \[p_{\text{succ}}(\mathcal{P}_{U_{\phi}},\mathcal{P_{\mathbf{1}}})=\frac{1}{2}+ \frac{|1-e^{i\phi}|}{4}. \tag{15}\] We explore the construction of \(|\psi_{0}\rangle\), \(V_{0}\) and \(V_{1}\) for parametrized Fourier family of measurements in C. ## 5 Software description This section is divided into two parts. In Section 5.1 we describe functionalities of PyQBench package. Next, in Section 5.2, we give a general overview of the software architecture. ### Software Functionalities The PyQBench can be used in two modes: as a Python library and as a CLI script. When used as a library, PyQBench allows the customization of discrimination scheme. The user provides a unitary matrix \(U\) defining the measurement to be discriminated, the discriminator \(|\psi_{0}\rangle\), and unitaries \(V_{0}\) and \(V_{1}\) describing the final measurement. The PyQBench library provides then the following functionalities. 1. Assembling circuits for both postselection and direct sum-based discrimination schemes. 2. Executing the whole benchmarking scenario on specified backend (either real hardware or software simulator). 3. Interpreting the obtained outputs in terms of discrimination probabilities. Note that the execution of circuits by PyQBench is optional. Instead, the user might want to opt in for fine-grained control over the execution of the circuits. For instance, suppose the user wants to simulate the discrimination experiment on a noisy simulator. In such a case, they can define the necessary components and assemble the circuits using PyQBench. The circuits can then be altered, e.g. to add noise to particular gates, and then run using any Qiskit backend by the user. Finally, PyQBench can be used to interpret the measurements to obtain discrimination probability. 
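As a sanity check of the quantities just described, the following self-contained NumPy sketch (our own illustration, independent of PyQBench's code) simulates the ideal discrimination protocol of Section 3 for the Fourier family of Section 4, using the Bell-state discriminator of Eq. (12) and the final measurements of Eqs. (13)-(14), and confirms the theoretical probability of Eq. (15).

```
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi0 = np.array([1, 0, 0, 1]) / np.sqrt(2)        # Bell-state discriminator, Eq. (12)

def u_phi(phi):                                    # Eq. (11)
    return H @ np.diag([1, np.exp(1j * phi)]) @ H.conj().T

def v0_v1(phi):                                    # Eqs. (13)-(14)
    s, c = np.sin((np.pi - phi) / 4), np.cos((np.pi - phi) / 4)
    v0 = np.array([[1j * s, -1j * c], [c, s]])
    v1 = np.array([[-1j * c, 1j * s], [s, c]])
    return v0, v1

def ideal_success_probability(U, V0, V1, psi0):
    # Ideal p_succ of the conditional scheme of Fig. 2, with P_U and P_1
    # each chosen with probability 1/2; we guess "U" iff the second outcome j == 0.
    prob = 0.0
    for Q, correct_j in ((U, 0), (np.eye(2), 1)):
        for i in range(2):
            # unnormalized ancilla state after obtaining outcome i on the first qubit
            phi_i = np.kron(Q[:, i].conj()[None, :], np.eye(2)) @ psi0
            v = (V0, V1)[i][:, correct_j]          # effect giving the correct guess
            prob += 0.5 * np.abs(v.conj() @ phi_i) ** 2
    return prob

for phi in np.linspace(0, 2 * np.pi, 9):
    expected = 0.5 + np.abs(1 - np.exp(1j * phi)) / 4   # Eq. (15)
    assert np.isclose(ideal_success_probability(u_phi(phi), *v0_v1(phi), psi0), expected)
```

The exact agreement here only reflects the algebra above; on real hardware the empirically estimated probability deviates from Eq. (15) due to statistical fluctuations and device noise, and that gap is precisely what the benchmark quantifies.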
The PyQBench library also contains a readily available implementation of all necessary components needed to run discrimination experiments for parametrized Fourier family of measurements, defined previously in Section 4. However, if one only wishes to use this particular family of measurements in their benchmarks, then using PyQBench as a command line tool might be more straightforward. PyQBench's command line interface allows running the benchmarking process without writing Python code. The configuration of CLI is done by YAML [35] files describing the benchmark to be performed and the description of the backend on which the benchmark should be run. Notably, the YAML configuration files are reusable. The same benchmark can be used with different backends and vice versa. The following section describes important architectural decisions taken when creating PyQBench, and how they affect the end-user experience. ### Software Architecture #### 5.2.1 Overview of the software structure As already described, PyQBench can be used both as a library and a CLI. Both functionalities are implemented as a part of qbench Python package. The exposed CLI tool is also named qbench. For brevity, we do not discuss the exact structure of the package here, and instead refer an interested reader to the source code available at GitHub [36] or at the reference manual [37]. PyQBench can be installed from official Python Package Index (PyPI) by running pip install pyqbench. In a properly configured Python environment the installation process should also make the qbench command available to the user without a need for further configuration. #### 5.2.2 Integration with hardware providers and software simulators PyQBench is built around the Qiskit [11] ecosystem. Hence, both the CLI tool and the qbench library can use any Qiskit-compatible backend. This includes, IBM Q backends (available by default in Qiskit) and Amazon Braket devices and simulators (available through qiskit-braket-provider package [38; 39]). When using PyQBench as library, instances of Qiskit backends can be passed to functions that expect them as parameters. However, in CLI mode, the user has to provide a YAML file describing the backend. An example of such file can be found in Section 6, and the detailed description of the expected format can be found at PyQBench's documentation. #### 5.2.3 Command Line Interface The Command Line Interface (CLI) of PyQBench has nested structure. The general form of the CLI invocation is shown in listing 1. ``` qbench<benchmark-type><command><parameters> ``` Currently, PyQBench's CLI supports only one type of benchmark (discrimination of parametrized Fourier family of measurements), but we decided on structuring the CLI in a hierarchical fashion to allow for future extensions. Thus, the only accepted value of <benchmark-type> is disc-fourier. The qbench disc-fourier command has four subcommands: * benchmark: run benchmarks. This creates either a result YAML file containing the measurements or an intermediate YAML file for asynchronous experiments. * status: query status of experiments submitted for given benchmark. This command is only valid for asynchronous experiments. * resolve: query the results of asynchronously submitted experiments and write the result YAML file. The output of this command is almost identical to the result obtained from synchronous experiments. * tabulate: interpret the results of a benchmark and summarize them in the CSV file. 
We present usage of each of the above commands later in section 6. #### 5.2.4 Asynchronous vs. synchronous execution PyQBench's CLI can be used in synchronous and asynchronous modes. The mode of execution is defined in the YAML file describing the backend (see Section 6 for an example of this configuration). We decided to couple the mode of execution to the backend description because some backends cannot work in asynchronous mode. When running qbench disc-fourier benchmark in asynchronous mode, the PyQBench submits all the circuits needed to perform a benchmark and then writes an intermediate YAML file containing metadata of submitted experiments. In particular, this metadata contains information on correlating submitted job identifiers with particular circuits. The intermediate file can be used to query the status of the submitted jobs or to resolve them, i.e. to wait for their completion and get the measurement outcomes. In synchronous mode, PyQBench first submits all jobs required to run the benchmark and then immediately waits for their completion. The advantage of this approach is that no separate invocation of qbench command is needed to actually download the measurement outcomes. The downside, however, is that if the script is interrupted while the command is running, the intermediate results will be lost. Therefore, we recommend using asynchronous mode whenever possible. ## 6 Illustrative examples In this section, we present two examples demonstrating the usage of PyQBench. In the first example, we show how to implement a discrimination scheme for a user-defined measurement and possible ways of using this scheme with qbench library. The second example demonstrates the usage of the CLI. We show how to prepare the input files for the benchmark and how to run it using the qbench tool. ### Using user-defined measurement with qbench package In this example, we will demonstrate how qbench package can be used with user-defined measurement. For this purpose, we will use \(U=H\) (the Hadamard gate). The detailed calculations that lead to the particular form of the discriminator and final measurements can be found in B. The explicit formula for discriminator in this example reads: \[|\psi_{0}\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle), \tag{16}\] with final measurements being equal to \[V_{0}=\left(\begin{array}{cc}\alpha&-\beta\\ \beta&\alpha\end{array}\right), \tag{17}\] and \[V_{1}=\left(\begin{array}{cc}-\beta&\alpha\\ \alpha&\beta\end{array}\right), \tag{18}\] where \[\alpha=\frac{\sqrt{2-\sqrt{2}}}{2}=\cos\left(\frac{3}{8}\pi\right), \tag{19}\] \[\beta=\frac{\sqrt{2+\sqrt{2}}}{2}=\sin\left(\frac{3}{8}\pi\right). \tag{20}\] To use the above benchmarking scheme in PyQBench, we first need to construct circuits that can be executed by actual hardware. To this end, we need to represent each of the unitaries as a sequence of standard gates, keeping in mind that quantum circuits start execution from the \(|00\rangle\) state. The circuit taking \(|00\rangle\) to the Bell state \(|\psi_{0}\rangle\) comprises the Hadamard gate followed by CNOT gate on both qubits (see Fig. 6). For \(V_{0}\) and \(V_{1}\) observe that \(V_{0}=\mathrm{RY}\left(\frac{3}{4}\pi\right)\), where \(\mathrm{RY}\) is rotation gate around the \(Y\) axis defined by \[\mathrm{RY}(\theta)=\left(\begin{array}{cc}\cos\frac{\theta}{2}&-\sin\frac{ \theta}{2}\\ \sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{array}\right). \tag{21}\] To obtain \(V_{1}\) we need only to swap the columns, i.e. 
\[V_{1}=\mathrm{RY}\left(\frac{3}{4}\pi\right)\mathrm{X}\,. \tag{22}\]

Finally, the optimal probability of correct discrimination is equal to

\[p_{\mathrm{succ}}(\mathcal{P}_{U},\mathcal{P}_{\mathbf{1}})=\frac{1}{2}+\frac{\sqrt{2}}{4}. \tag{23}\]

We will now demonstrate how to implement this theoretical scheme in PyQBench. For this example we will use the Qiskit Aer simulator [40]. First, we import the necessary functions and classes from PyQBench and Qiskit. We also import numpy for the definition of the np.pi constant and the exponential function. The exact purpose of the imported functions will be described at the point of their usage.

```
import numpy as np
from qiskit import QuantumCircuit, Aer
from qbench.schemes.postselection import benchmark_using_postselection
from qbench.schemes.direct_sum import benchmark_using_direct_sum
```
Listing 2: Imports needed for running benchmarking example

To implement the discrimination scheme in PyQBench, we need to define all the necessary components as Qiskit instructions. We can do so by constructing a circuit object acting on qubits 0 and 1 and then converting it using the to_instruction() method.

```
def state_prep():
    circuit = QuantumCircuit(2)
    circuit.h(0)
    circuit.cnot(0, 1)
    return circuit.to_instruction()

def u_dag():
    circuit = QuantumCircuit(1)
    circuit.h(0)
    return circuit.to_instruction()

def v0_dag():
    circuit = QuantumCircuit(1)
    circuit.ry(-np.pi * 3 / 4, 0)
    return circuit.to_instruction()

def v1_dag():
    circuit = QuantumCircuit(1)
    circuit.ry(-np.pi * 3 / 4, 0)
    circuit.x(0)
    return circuit.to_instruction()

def v0_v1_direct_sum_dag():
    circuit = QuantumCircuit(2)
    circuit.ry(-np.pi * 3 / 4, 0)
    circuit.cnot(0, 1)
    return circuit.to_instruction()
```

Figure 6: Decomposition of the Bell state \(|\psi_{0}\rangle\).

We now construct a backend object, which in this case is an instance of the Aer simulator.

```
simulator = Aer.get_backend("aer_simulator")
```
Listing 4: Defining a backend

In the simplest scenario, when one does not want to tweak execution details and simply wishes to run the experiment on a given backend, all that is required is to run the benchmark_using_postselection or benchmark_using_direct_sum function, depending on the user's preference.

```
postselection_result = benchmark_using_postselection(
    backend=simulator,
    target=0,
    ancilla=1,
    state_preparation=state_prep(),
    u_dag=u_dag(),
    v0_dag=v0_dag(),
    v1_dag=v1_dag(),
    num_shots_per_measurement=10000,
)
```
Listing 5: Simulation benchmark by using postselection

```
direct_sum_result = benchmark_using_direct_sum(
    backend=simulator,
    target=1,
    ancilla=2,
    state_preparation=state_prep(),
    u_dag=u_dag(),
    v0_v1_direct_sum_dag=v0_v1_direct_sum_dag(),
    num_shots_per_measurement=10000,
)
```
Listing 6: Simulation benchmark by using direct sum

The postselection_result and direct_sum_result variables now contain the empirical probabilities of correct discrimination. We can compare them to the theoretical value and compute the absolute error.

```
p_succ = (2 + np.sqrt(2)) / 4
print(f"Analytical p_succ = {p_succ}")
print(
    f"Postselection: p_succ = {postselection_result}, abs. error = {p_succ - postselection_result}"
)
print(f"Direct sum: p_succ = {direct_sum_result}, abs. error = {p_succ - direct_sum_result}")
```
Listing 7: Examining the benchmark results

In the example presented above we used functions that automate the whole process - from the circuit assembly, through running the simulations, to interpreting the results. But what if we want more control over some parts of this process?
One possibility would be to add some additional parameters to the benchmark_using_xyz functions, but this approach is not scalable. Moreover, anticipating all possible use cases is impossible. Therefore, we decided on another approach. PyQBench provides functions performing:

1. Assembly of the circuits needed for the experiment, provided the components discussed above.
2. Interpretation of the obtained measurements.

The difference between the two approaches is illustrated in the diagrams in Fig. 7. For the rest of this example we focus only on the postselection case, as the direct sum case is analogous. We continue by importing two more functions from PyQBench.

```
from qbench.schemes.postselection import (
    assemble_postselection_circuits,
    compute_probabilities_from_postselection_measurements,
)

circuits = assemble_postselection_circuits(
    target=0,
    ancilla=1,
    state_preparation=state_prep(),
    u_dag=u_dag(),
    v0_dag=v0_dag(),
    v1_dag=v1_dag(),
)
```
Listing 8: Assembling circuits

Recall that for a postselection scheme we have two possible choices of the "unknown" measurement and two possible choices of a final measurement, which gives a total of four circuits needed to run the benchmark. The function assemble_postselection_circuits creates all four circuits and places them in a dictionary with keys "id_v0", "id_v1", "u_v0", "u_v1". We will now run our circuits using noisy and noiseless simulation. We start by creating a noise model using Qiskit.

```
from qiskit.providers.aer import noise

error = noise.ReadoutError([[0.75, 0.25], [0.8, 0.2]])
noise_model = noise.NoiseModel()
noise_model.add_readout_error(error, [0])
noise_model.add_readout_error(error, [1])
```
Listing 9: Adding a noise model

Once we have our noise model ready, we can execute the circuits with and without noise. To this end, we will use Qiskit's execute function. One caveat is that we have to keep track of which measurements correspond to which circuit. We do so by fixing an ordering on the keys in the circuits dictionary.

Figure 7: Differences between simplified (top) and user-controlled (bottom) execution of benchmarks in PyQBench. Compared to simplified benchmarking, in user-controlled benchmarks the user has direct access to the circuits being run, and hence can alter them (e.g. by adding noise) and/or choose the parameters used for executing them on the backend.

```
from qiskit import execute

keys_ordering = ["id_v0", "id_v1", "u_v0", "u_v1"]
all_circuits = [circuits[key] for key in keys_ordering]

counts_noisy = execute(
    all_circuits,
    backend=simulator,
    noise_model=noise_model,
    shots=10000
).result().get_counts()

counts_noiseless = execute(
    all_circuits,
    backend=simulator,
    shots=10000
).result().get_counts()
```
Listing 10: Running circuits

Finally, we use the measurement counts to compute the discrimination probabilities using the compute_probabilities_from_postselection_measurements function.

```
prob_succ_noiseless = compute_probabilities_from_postselection_measurements(
    id_v0_counts=counts_noiseless[0],
    id_v1_counts=counts_noiseless[1],
    u_v0_counts=counts_noiseless[2],
    u_v1_counts=counts_noiseless[3],
)

prob_succ_noisy = compute_probabilities_from_postselection_measurements(
    id_v0_counts=counts_noisy[0],
    id_v1_counts=counts_noisy[1],
    u_v0_counts=counts_noisy[2],
    u_v1_counts=counts_noisy[3],
)
```
Listing 11: Computation of probabilities

We can now examine the results. As an example, in one of our runs, we obtained prob_succ_noiseless = 0.8524401115559386 and prob_succ_noisy = 0.5017958400693446.
As expected, for noisy simulations, the result lies further away from the target value of 0.8535533905932737. This concludes our example. In the next section, we will show how to use PyQBench's CLI. ### Using qbench CLI Using PyQBench as a library allows one to conduct a two-qubits benchmark with arbitrary von Neumann measurement. However, as discussed in the previous guide, it requires writing some amount of code. For a Fourier parametrized family of measurements, PyQBench offers a simplified way of conducting benchmarks using a Command Line Interface (CLI). The workflow with PyQBench's CLI can be summarized as the following list of steps:: 1. Preparing configuration files describing the backend and the experiment scenario. 2. Submitting/running experiments. Depending on the experiment scenario, execution can be synchronous, or asynchronous. 3. (optional) Checking the status of the submitted jobs if the execution is asynchronous. 4. Resolving asynchronous jobs into the actual measurement outcomes. 5. Converting obtained measurement outcomes into tabulated form. #### 6.2.1 Preparing configuration files The configuration of PyQBench CLI is driven by YAML files. The first configuration file describes the experiment scenario to be executed. The second file describes the backend. Typically, this backend will correspond to the physical device to be benchmarked, but for testing purposes one might as well use any other Qiskit-compatible backend including simulators. Let us first describe the experiment configuration file, which might look as follow. ``` type:discrimination-fourier qubits: -target:0 -ancilla:1 -target:1 -ancilla:2 angles: ``` start: 0 stop: 2 * pi num_steps: 3 gateset: ibmq method: direct_sum num_shots: 100 ``` The experiment file contains the following fields: * type: a string describing the type of the experiment. Currently, the only option of type is discrimination-fourier. * qubits: a list enumerating pairs of qubits on which the experiment should be run. For configuration in listing 12, the benchmark will run on two pairs of qubits. The first pair is 0 and 1, and the second one is 1 and 2. We decided to describe a pair by using target and ancilla keys rather than using a plain list to emphasize that the role of qubits in the experiment is not symmetric. * angles: an object describing the range of angles for Fourier parameterized family. The described range is always uniform, starts at the start, ends at stop and contains num_steps points, including both start and stop. The start and stop can be arithmetic expressions using pi literal. For instance, the range defined in listing 12 contains three points: 0, \(\pi\) and \(2\pi\). * gateset: a string describing the set of gates used in the decomposition of circuits in the experiment. The PyQBench contains explicit implementations of circuits The possible options are [ibmq, lucy, rigetti], corresponding to decompositions compatible with IBM Q devices, OQC Lucy device, and Rigetti devices. Alternatively, one might wish to turn off the decomposition by using a special value generic. However, for this to work a backend used for the experiment must natively implement all the gates needed for the experiment, as described in 4. * method: a string, either postselection or direct_sum determining which implementation of the conditional measurement is used. * num_shots: an integer defines how many shots are performed in the experiment for a particular angle, qubit pair and circuit. 
Note that if one wishes to compute the total number of shots in the experiment, it is necessary to take into account that the postselection method uses twice as many circuits as the direct_sum method. The second configuration file describes the backend. We decided to decouple the experiment and the backend files because it facilitates their reuse. For instance, the same experiment file can be used to run benchmarks on multiple backends, and the same backend description file can be used with multiple experiments. Different Qiskit backends typically require different data for their initialization. Hence, there are multiple possible formats of the backend configuration files understood by PyQBench. We refer the interested reader to the PyQBench's documentation. Below we describe an example YAML file describing IBM Q backend named Quito. ``` name:ibmq_quito asynchronous:false provider: hub:ibm-q group:open project:main ``` Listing 13: IBMQ backend IBMQ backends typically require an access token to IBM Quantum Experience. Since it would be unsafe to store it in plain text, the token has to be configured separately in IBMQ_TOKEN environmental variable. #### 6.2.2 Remarks on using the asynchronous flag For backends supporting asynchronous execution, the asynchronous setting can be configured to toggle it. For asynchronous execution to work, the following conditions have to be met: * Jobs returned by the backend have unique job_id. * Jobs are retrievable from the backend using the backend.retrieve_job method, even from another process (e.g. if the original process running the experiment has finished). Since PyQBench cannot determine if the job retrieval works for a given backend, it is the user's responsibility to ensure that this is the case before setting asynchronous to true. #### 6.2.3 Running the experiment and collecting measurements data After preparing YAML files defining experiment and backend, running the benchmark can be launched by using the following command line invocation: ``` qbenchdisc-fourierbenchmarkexperiment_file.ymlbackend_file.yml The output file will be printed to stdout. Optionally, the - -output OUTPUT parameter might be provided to write the output to the OUTPUT file instead. ``` qbenchdisc-fourierbenchmarkexperiment_file.ymlbackend_file.yml --outputasync_results.yml ``` The result of running the above command can be twofold: * If backend is asynchronous, the output will contain intermediate data containing, amongst others, job_ids correlated with the circuit they correspond to. * If the backend is synchronous, the output will contain measurement outcomes (bitstrings) for each of the circuits run. For synchronous experiment, the part of output looks similar to the one below. The whole YAML file can be seen in E. ``` data: -target:0 - ancilla:1 phi:0.0 - results_per_circuit: - name:id histogram:{'00':28, '01':26, '10':21, '11':25} mitigation_info: target:{prob_meas0_prep1:0.052200000000000024, prob_meas1_prep0:0.0172} ancilla:{prob_meas0_prep1:0.05900000000000005, prob_meas1_prep0:0.0202} mitigated_histogram:{'00':0.2637212373658018, '01':0.25865061319892463, '10':0.2067279352110304, '11':0.2709002142242433} ``` The data includes target, ancilla, phi, and results_per_circuit. The first three pieces of information have already been described. 
The last data results_per_circuit gives us the following additional information: * name: the information which measurement is used during experiment, either string "u" for \(\mathcal{P}_{U}\) or string "id" for \(\mathcal{P}_{\mathbf{1}}\). In this example we consider \(\mathcal{P}_{\mathbf{1}}\). * histogram: the dictionary with measurements' outcomes. The keys represent possible bitstrings, whereas the values are the number of occurrences. * mitigation_info: for some backends (notably for backends corresponding to IBM Q devices), backends.properties().qubits contains information that might be used for error mitigation using the MThree method [41; 42]. If this info is available, it will be stored in the mitigation_info field, otherwise this field will be absent. * mitigated_histogram: the histogram with measurements' outcomes after the error mitigation. #### 6.2.4 (Optional) Getting status of asynchronous jobs PyQBench provides also a helper command that will fetch the statuses of asynchronous jobs. The command is: ``` qbenchdisc-fourierstatusasync_results.yml ``` and it will display dictionary with histogram of statuses. #### 6.2.5 Resolving asynchronous jobs For asynchronous experiments, the stored intermediate data has to be resolved in actual measurements' outcomes. The following command will wait until all jobs are completed and then write a result file. ``` qbenchdisc-fourierresolveasync-results.ymlresolved.yml ``` The resolved results, stored in resolved.yml, would look just like if the experiment was run synchronously. Therefore, the final results will look the same no matter in which mode the benchmark was run, and hence in both cases the final output file is suitable for being an input for the command computing the discrimination probabilities. #### 6.2.6 Computing probabilities As a last step in the processing workflow, the results file has to be passed to tabulate command: ``` qbenchdisc-fouriertabulateresults.ymlresults.csv ``` A sample CSV file is provided below: ## 7 Impact With the surge of availability of quantum computing architectures in recent years it becomes increasingly difficult to keep track of their relative performance. To make this case even more difficult, various providers give access to different figures of merit for their architectures. Our package allows the user to test various architectures, available through qiskit and Amazon BraKet using problems with simple operational interpretation. We provide one example built-in in the package. Furthermore, we provide a powerful tool for the users to extend the range of available problems in a way that suits their needs. Due to this possibility of extension, the users are able to test specific aspects of their architecture of interest. For example, if their problem is related to the amount of coherence (the sum of absolute value of off-diagonal elements) of the states present during computation, they are able to quickly prepare a custom experiment, launch it on desired architectures, gather the result, based on which they can decide which specific architecture they should use. Finally, we provide the source code of PyQBench on GitHub [36] under an open source license which will allow users to utilize and extend our package in their specific applications. ## 8 Conclusions In this study, we develop a Python library PyQBench, an innovative open-source framework for benchmarking gate-based quantum computers. 
PyQBench can benchmark NISQ devices by verifying their capability of discriminating between two von Neumann measurements. PyQBench offers a simplified, ready-to-use, command line interface (CLI) for running benchmarks using a predefined parameterized Fourier family of measurements. For \begin{table} \begin{tabular}{|c c c c c|} \hline target & ancilla & phi & ideal\_prob & disc\_prob & mit\_disc\_prob \\ \hline \hline 0 & 1 & 0 & 0.5 & 0.46 & 0.45 \\ \hline 0 & 1 & 3.14 & 1 & 0.95 & 0.98 \\ \hline 0 & 1 & 6.28 & 0.5 & 0.57 & 0.58 \\ \hline 1 & 2 & 0 & 0.5 & 0.57 & 0.57 \\ \hline 1 & 2 & 3.14 & 1 & 0.88 & 0.94 \\ \hline 1 & 2 & 6.28 & 0.5 & 0.55 & 0.56 \\ \hline \end{tabular} \end{table} Table 2: The resulting CSV file contains table with columns target, ancilla, phi, ideal_prob, disc_prob and, optionally, mit_disc_prob. Each row in the table describes results for a tuple of (target, ancilla, phi). The reference optimal value of discrimination probability is present in ideal_prob column, whereas the obtained, empirical discrimination probability can be found in the disc_prob column. The mit_disc_prob column contains empirical discrimination probability after applying the Mthree error mitigation [41; 42], if it was applied. more advanced scenarios, PyQBench offers a way of employing user-defined measurements instead of predefined ones. ## 9 Conflict of Interest We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its out- come. ## Acknowledgements This work is supported by the project "Near-term quantum computers Challenges, optimal implementations and applications" under Grant Number POIR.04.04.00-00-17C1/18-00, which is carried out within the Team-Net programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund. PL is also a holder of European Union scholarship through the European Social Fund, grant InterPOWER (POWR.03.05.00-00-Z305).
2301.03386
Need of 6G for the Metaverse Realization
The concept of the Metaverse aims to bring a fully-fledged extended reality environment to provide next generation applications and services. Development of the Metaverse is backed by many technologies, including, 5G, artificial intelligence, edge computing and extended reality. The advent of 6G is envisaged to mark a significant milestone in the development of the Metaverse, facilitating near-zero-latency, a plethora of new services and upgraded real-world infrastructure. This paper establishes the advantages of providing the Metaverse services over 6G along with an overview of the demanded technical requirements. The paper provides an insight to the concepts of the Metaverse and the envisaged technical capabilities of 6G mobile networks. Then, the technical aspects covering 6G for the development of the Metaverse, ranging from validating digital assets, interoperability, and efficient user interaction in the Metaverse to related security and privacy aspects are elaborated. Subsequently, the role of 6G technologies towards enabling the Metaverse, including artificial intelligence, blockchain, open radio access networks, edge computing, cloudification and internet of everything. The paper also presents 6G integration challenges and outlines ongoing projects towards developing the Metaverse technologies to facilitate the Metaverse applications and services.
Bartlomiej Siniarski, Chamitha De Alwis, Gokul Yenduri, Thien Huynh-The, GÜrkan GÜr, Thippa Reddy Gadekallu, Madhusanka Liyanage
2022-12-28T05:13:09Z
http://arxiv.org/abs/2301.03386v1
# Need of 6G for the Metaverse Realization ###### Abstract The concept of the Metaverse aims to bring a fully-fledged extended reality environment to provide next generation applications and services. Development of the Metaverse is backed by many technologies, including, 5G, artificial intelligence, edge computing and extended reality. The advent of 6G is envisaged to mark a significant milestone in the development of the Metaverse, facilitating near-zero-latency, a plethora of new services and upgraded real-world infrastructure. This paper establishes the advantages of providing the Metaverse services over 6G along with an overview of the demanded technical requirements. The paper provides an insight to the concepts of the Metaverse and the envisaged technical capabilities of 6G mobile networks. Then, the technical aspects covering 6G for the development of the Metaverse, ranging from validating digital assets, interoperability, and efficient user interaction in the Metaverse to related security and privacy aspects are elaborated. Subsequently, the role of 6G technologies towards enabling the Metaverse, including artificial intelligence, blockchain, open radio access networks, edge computing, cloudification and internet of everything. The paper also presents 6G integration challenges and outlines ongoing projects towards developing the Metaverse technologies to facilitate the Metaverse applications and services. Metaverse, 6G, AI, Blockchain, Edge Computing, Security, Privacy, vertical applications. ## I Introduction The term 'Metaverse' has been coined to further facilitate the digital transformation in every aspect of our physical lives [1]. The Metaverse is a virtual world where you can live a synchronous life through your avatar. The concept is similar to an online game, however, instead of shooting targets or driving cars, users will be engaged in real-life activities. These activities could include attending meetings, catching up with friends, attending music festivals, going door to door selling digital collectables, or buying and selling land, apartments or assets. Virtual interactive worlds or early Metaverses have already been introduced primarily in video games with releases such as Fortnite, Minecraft, Decentralized, Ifland. The list isn't extensive and users are gravitating toward other Metaverse ecosystems that are emerging today. The Metaverse embraces a social interaction accelerated through a virtual environment and driven by novel technologies such as Web 3.0, 5G, Artificial Intelligence (AI) and Extended Reality (XR). The XR - which includes everything from Virtual Reality (VR) to Mixed Reality (MR) to Augmented Reality (AR) and haptics - have enormous potential to transform both industry and society. The widespread adoption of XR was slowed down recently by a number of issues including limited processing power, storage and battery life of small head-mounted displays (HMDs). The 5G made it possible to overcome some of these challenges by offloading a portion of XR processing to the mobile network edge. In addition to this, the 5G QoS framework makes it possible to establish QoS flows that provide optimized network treatment for specific traffic flows, in addition to the default QoS flow used for mobile broadband (MBB). 
Such additional QoS flows can be established either using 5GC QoS-exposure application programming interfaces to communicate service requirements or by traffic detection together with pre-provisioned service requirements, such as relying on standardized 5G QoS identifier characteristics. Although the Metaverses have the potential to be transformational for both business and society, widespread adoption has previously been hindered by issues such as heat generation and the limited processing power, storage, and battery life of small form factor head-mounted devices. The time-critical communication capabilities in 5G make it possible to overcome only some of these challenges by offloading XR processing to the mobile network edge. By evolving the already existing 5G or B5G networks, mobile network operators are in an excellent position to enable the realization of the Metaverse on a large scale. The 6G aims to achieve high spectrum and energy efficiency, low latency, and massive connection due to the exponential growth of the Internet of Things (IoT) devices. 6G will also effectively link the physical, and digital worlds by providing seamless and ubiquitous services such as extreme-scale environmental monitoring and control, virtual reality/virtual navigation, telemedicine, digital sensing, and robotics. This will result in a network that connects us to one another, to information, to knowledge, and to purpose. As a result, 6G networks will enhance the efficiency of technologies such as computer vision, blockchain, AI, the IoT, robotics, and user interfaces which are critical for the metaverse realization. In summary, 6G will enhance every feature of the 5G network that benefits the user to improve areas such as smart cities, farming, manufacturing, and robots. 6G will provide enhanced productivity, capabilities, and better user experiences. The main use of 6G in the Metaverse is summarized below: **Near-zero-latency**: In virtual interaction, 6G will continuously provide users with a near-zero-latency sensory interconnection experience, such as the user's virtual movement in the Metaverse, virtual meetings, virtual paintings, and other interactive, immersive holographic experiences. **New services**: 6G is the main driver of multiple new service models. For example, 6G communication technology provides users with precise service models in autonomous driving, industrial control, e-health, Internet robots, and autonomous systems, bringing a more convenient lifestyle. **Upgraded real-world infrastructure available for use in the Metaverses**: 6G infrastructure mainly includes information infrastructure, fusion infrastructure, and innovation infrastructure. In particular, the 6G communication system integrates infrastructure such as ground, UAV, and satellite Internet. 6G also features high bandwidth, low latency, strong reliability, and global coverage. ### _Motivation_ The main motivation of this paper is to realize if mobile network operators can enable large-scale XR and at the same time further development of Metaverses by introducing time-critical communication capabilities in 6G networks. The 5G networks already contribute to considerable improvement in data rate, latency, and packet loss since the last network generation (4G) and users already enjoy comfortable viewing experiences. However, as the resolution of video increases from 4K to 8K and 12K/24K for 3D video and the number of worldwide users increases, the 5G network will not be sufficient to support many use cases. 
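To see why the jump from 4K to 8K and 12K/24K strains per-user capacity, the following back-of-envelope Python sketch is illustrative; every number in it (frame rate, bit depth, compression ratio and the assumed 100 Mbit/s sustained per-user 5G throughput) is our own assumption rather than a figure from the referenced studies, so it should be read as an order-of-magnitude argument only.

```
# Rough, illustrative per-user bitrate estimate for immersive video streaming.
RESOLUTIONS = {            # assumed pixels per frame (full panorama)
    "4K":  3840 * 2160,
    "8K":  7680 * 4320,
    "12K": 11520 * 6480,
    "24K": 23040 * 12960,
}
FPS = 60                   # assumed frame rate
BITS_PER_PIXEL = 12        # assumed chroma-subsampled colour depth
COMPRESSION = 100          # assumed codec compression ratio (HEVC-class)
ASSUMED_5G_PER_USER = 100e6    # bit/s, assumed sustained per-user throughput

for name, pixels in RESOLUTIONS.items():
    bitrate = pixels * FPS * BITS_PER_PIXEL / COMPRESSION
    print(f"{name}: ~{bitrate / 1e6:.0f} Mbit/s "
          f"({bitrate / ASSUMED_5G_PER_USER:.1f}x the assumed 5G per-user budget)")
```

The exact figures depend heavily on the codec and on whether only the field of view is streamed; the point is simply that per-user demand grows by well over an order of magnitude across these resolution steps.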
Some of the main cloud and network providers are defining the evolution of the service experience into the fair-experience, comfortable experience, and ideal-experience phases [2], where each has its own network KPI requirement to be met. Table 1 summarizes those KPI requirements based on different use cases envisaged to be a part of future metaverses. In this work, we aim to establish and explain the main advantages of providing the Metaverse services over 6G and provide an overview of the technical requirements. Furthermore, we aim to establish what role will 6G play in the Metaverse operation and if the envisaged architecture of 6G will be capable of supporting the upcoming technology portrayed by the tech industry. ### _Related Surveys and Contributions_ Our work is exclusively focused on the networking aspects of the Metaverse and the role that 6G will play in the Metaverse deployment. Though there are some Metaverse-focused surveys we found it is lacking a comprehensive, and detailed discussion on the role of B5G/6G technologies as indicated by Table 2. The table also includes the limitations of the related works in the context of technical challenges, security and privacy, and research directions, which we have already addressed in this paper. The surveys [3] and [4] investigate technical aspects and security threats comprehensively. However, those papers are not focused on the future networks and the role of 6G in the Metaverse specifically. The surveys [5], [1] and [6] include an interesting view on the potential use of the Metaverse in different domains and clearly define network requirements. The limitations in [5], [1] and [6] include the lack of coverage of future network aspects and the discussion on the security and privacy issues is weak. Surveys [7] and [8] discuss implementation challenges and analyze the fusion of 6G-enabled edge with the Metaverse, however, the security issues and research directions are only partially covered. Therefore, we contribute to addressing this gap in our work on the comprehensive discussion on 6G for the Metaverse. ### _Paper Organization_ The rest of this paper is organized as follows. Introduction and discussion of the role of 6G networks in the Metaverse are presented in Section I. Section II covers the expected improvements from 5G to 6G and the impact it will have on the Metaverses. Section III investigates the state-of-the-art solutions provided by 6G for the Metaverse from technical perspective, followed by Section IV that discusses in detail how different 6G technologies will help to achieve the Metaverse aims. Section V identifies expected 6G challenges that would have to be approached before the introduction of Metaverses to wider community. Finally, Section VI provides an overview of related research projects. ## II 66 and the Metaverse: Preliminaries The preliminary introduction to 6G and the Metaverse is presented in this section, followed by the role of 6G in the Metaverse. ### _Preliminary to 6G_ Since the middle of 2019, commercial 5G mobile networks have been standardized and deployed globally, with significant coverage in some countries. Numerous new applications and use cases are being developed, placing existing networks' capabilities to the test. The capacity of current 5G networks to handle the Internet of Everything (IoE), holographic telepresence, collaborative robotics, and deep-sea and space tourism is limited [9]. 
This has prompted researchers to reconsider and work toward the development of the next generation of mobile communications networks called the sixth-generation of mobile networks 6G. Each time mobile communication technology is upgraded and iterated, its performance metrics improve by a factor of ten to hundred times over the preceding generation [10]. Researchers from all over the world propose AI/machine learning (ML), quantum communication/quantum machine learning (QML), blockchain, tera-hertz and millimetre wave communication, tactile Internet, non-orthogonal multiple access (NOMA), small cell communication, fog/edge computing, etc. as the key technologies for the realisation of 6G communications. 6G aims to achieve high spectrum and energy efficiency, low latency, and massive connection due to the exponential growth of the IoT devices. 6G will make feasible intelligent traffic, environmental monitoring and control, virtual reality/virtual navigation, telemedicine, digital sensing, high definition (HD), and full HD video transmission in connected drones and robotics. 6G will also effectively link the physical, and digital worlds. This will result in a network that connects us to one another, to information, to knowledge, and to purpose. 6G wireless networks operate in the terahertz band, with a peak rate of 1T b/s and a network with ultra-reliable and low-latency communication (URLLC) of less than 1 ms, considerably improving the overall quality of experience (QoE) for consumers [11]. 6G has a high positioning accuracy of 1 m outdoors and 10 cm indoors [12] which also improves positioning accuracy of deep-sea and space tourism. 6G utilises endogenous security technology to increase its resistance to unknown security threats [13]. As a result, 6G networks can enhance the efficiency of technologies such as computer vision, blockchain, AI, the IoT, robotics, and user interfaces [14]. To summarize, 6G will enhance every feature of the 5G network that benefits the user. 6G will improve areas such as smart cities, farming, manufacturing, and robots. 6G will provide enhanced productivity, capabilities, and better user experiences. Improved and expanded functionality is an in \begin{table} \begin{tabular}{p{113.8pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}} \hline \hline **Type of interaction /sec case** & **Network KPI requirement** & \multicolumn{2}{c|}{**Fail-experience**} & \multicolumn{1}{p{56.9pt}|}{**Confirable-experience**} & \multicolumn{1}{p{56.9pt}}{**Ideal-experience**} \\ \hline & & In the four-experience phase, most current i-clock, and the internal screen resolution in 2K to 4K. & In the comfortable-experience phase, most current i-clock, the internal screen resolution in 2K or 24K. & In the dual-experience phase, most current i-clock, the internal screen resolution in 2K or 24K. & In the dual-experience phase, most current i-clock, the internal screen resolution in 2K or 24K. 
Improved and expanded functionality is an inevitability over successive generations. Even with 6G, this will be the case. 6G will improve upon 5G by optimising performance and lowering costs to increase adoption. Information management and consumption will be simplified with the advent of 6G's new human-machine interfaces: the touchscreen interface of today will instead be controlled by voice instructions, gestures, or even brain signals. The comparison of 5G and 6G features is depicted in Table 3.

| Features | 5G | 6G |
| --- | --- | --- |
| Data rate | 1 Gbps to 20 Gbps | 1 Tbps |
| Application types | Enhanced Mobile Broadband, Ultra-Reliable Low Latency Communications, Massive Machine Type Communications | Massive Broadband Reliable Low Latency Communication, Massive-URLLC, Human-Centric Services |
| Device types | Smartphones, drones, and sensors | DLT devices, BCI and XR equipment, CRAS, and smart implants |
| Frequency band | Sub-6 GHz and mmWave for fixed access | Sub-6 GHz and mmWave for mobile access, exploration of THz bands, non-RF bands |
| Latency | 5 ms | <1 ms |
| Architecture | Dense sub-6 GHz small BSs with umbrella macro BSs | Cell-free smart surfaces at high frequencies, temporary hotspots provided by drone-mounted BSs, trials of tiny THz cells |
| Spectral and energy efficiency | 10x in bps/Hz/m² | 1000x in bps/Hz/m² |
| Traffic capacity | 10 Mbps/m² | 1 to 10 Gbps/m² |
| Reliability | 10⁻⁵ | 10⁻⁹ |
| Localization precision | 10 cm | 1 cm in 3D |
| User experience | 50 Mbps | 10 Gbps |
| Mobility | 500 km/h | 1000 km/h |
| Connection density | 10⁶ devices/km² | 10⁷ devices/km² |

TABLE III: The comparison of 5G and 6G features

### _Preliminary to the Metaverse_

The Metaverse is a network of three-dimensional virtual environments dedicated to social interaction, frequently depicted in futurism and science-fiction films. The worldwide virtual environment will be made possible by the use of VR and AR devices [15]. The term "Metaverse" is not entirely unfamiliar in the technological world: Neal Stephenson coined it in 1992, and his science-fiction novel Snow Crash envisioned an online universe in which people may explore and escape the physical world using digital avatars [16]. Decades later, major technology firms like Meta, Axie Infinity, The Sandbox, and Decentraland have begun developing their own versions of a futuristic Metaverse. An overview of the enabling technologies, services, and technical requirements is depicted in Fig. 2.

Figure 2: Preliminary to the Metaverse

TABLE II: Comparison of related surveys in terms of technical aspects and challenges, security and privacy issues, the role of 6G in the Metaverse, research directions, remarks, and limitations.
#### Enabling Technologies of the Metaverse

The immersive experience of the Metaverse will be enabled by cutting-edge technologies such as blockchain, AR and XR, AI, and the IoT.

* Blockchain: Blockchain technology enables decentralised and transparent digital proofs of ownership, collectibility, value transfer, governance, accessibility, and interoperability in the Metaverse [17]. Blockchain also enables individuals to exchange assets while working and interacting in the Metaverse.
* Extended reality: XR enables the creation of 3D computer-rendered virtual environments in the Metaverse and allows users to interact with these virtual goods through head-tracking devices or physical controls [18]. As XR technology matures, it will broaden the Metaverse experience by including physical simulations using XR equipment. Users will then have the ability to sense, hear, and interact with people from all around the world.
* Artificial intelligence: AI will allow users of the Metaverse to construct incredibly realistic avatars and have multilingual accessibility. AI will help people make better decisions in the Metaverse, and a better human-computer interface will also be provided by AI [19]. AI can also help detect, prevent, and recover from cyberattacks in the Metaverse.
* Internet of things: IoT will enable the Metaverse to map data from real life and merge it into virtual reality [20]. The data provided by IoT devices to the Metaverse will help professionals solve real-world problems, and the Metaverse, with the help of IoT, will support users in making real-time, data-driven decisions with a minimum need for training and computation.
* Edge computing: Edge computing enables mobility and a borderless Metaverse for the users [21]. It improves the availability of data in the Metaverse by bringing data closer to end consumers instead of retrieving and storing it at remote data-centre locations, and it helps transfer data with ultra-reduced latency so that users can make quick and effective decisions (a rough latency comparison is sketched after this list).
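The latency benefit of the edge computing item above can be pictured with a back-of-the-envelope round-trip estimate. The sketch below compares serving a request from a nearby edge node versus a distant cloud region; the distances, hop counts, per-hop delays, and processing times are illustrative assumptions, and only the fibre propagation speed (roughly 200 km per millisecond) is a physical constant.

```python
def one_way_delay_ms(distance_km: float, hops: int, per_hop_ms: float = 0.3,
                     propagation_km_per_ms: float = 200.0) -> float:
    """Very rough one-way network delay: fibre propagation plus per-hop switching/queuing."""
    return distance_km / propagation_km_per_ms + hops * per_hop_ms

def round_trip_ms(distance_km: float, hops: int, processing_ms: float) -> float:
    """Round-trip time including server-side processing."""
    return 2 * one_way_delay_ms(distance_km, hops) + processing_ms

# Assumed scenario: edge node 20 km away over 3 hops, cloud region 2000 km away over 12 hops.
edge_rtt = round_trip_ms(20, hops=3, processing_ms=2.0)
cloud_rtt = round_trip_ms(2000, hops=12, processing_ms=2.0)
print(f"edge RTT  ~ {edge_rtt:.1f} ms")   # about 4 ms
print(f"cloud RTT ~ {cloud_rtt:.1f} ms")  # about 29 ms
```

Under these assumptions, only the edge path stays within the single-digit-millisecond budget that interactive Metaverse rendering and haptics are generally expected to need.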
#### Applications of the Metaverse

The Metaverse has made its way into many sectors, capturing the enthusiasm of entrepreneurs across the world. It will have a huge impact on applications such as healthcare, real estate, manufacturing, tourism, entertainment, and shopping.

* Healthcare: Smart healthcare has contributed to resolving several healthcare difficulties, including linking patients to doctors situated throughout the world during the COVID-19 pandemic. This paved the way for the application of the Metaverse in healthcare, facilitated by medical IoT, VR, and AR [22]. The Metaverse gives users control over how the virtual and physical worlds interact, which enhances doctors' ability to provide consistent and customised patient care. Through the use of VR technology, the Metaverse can aid in remote health monitoring, clinical data collection, and improved robotic surgery, while 3D immersive technology will enable surgeons to practise through simulations that will raise their success rate in the real world.
* Real Estate: The Metaverse allows organisations to establish retail and experience centres on its virtual land [23]. Rather than downloading many applications, users can access the Metaverse, which contains all currently available applications. As a result, the value of virtual land will increase. Property ownership in the Metaverse is limitless, and owners are free to use their virtual holdings. Digital property owners can make, run, lease, and build billboards for advertising.
* Manufacturing: Manufacturers can create digital factories in the Metaverse to assist in organising production and effective usage of machinery. This allows simulation of human-machine interaction throughout the manufacturing process [24]. As a result, firms can use virtual production systems to train new employees and staff on how to use them in the real world, which would boost the manufacturing of products with a very low error rate. The metaverse also allows mass personalization of the product and allows the user to track the product from its development to delivery, which will improve the trust of the users in the organization. * Tourism: The Metaverse has the potential to create the most immersive experiences ever seen in the tourism sector. The Metaverse allows hotel chains, tourism boards, online travel agencies, and other businesses to advertise their services [25]. Users can virtually visit those locations and decide whether or not to visit them in person. They can go through two distinct locations without leaving their homes, comparing and evaluating places through the use of 3D imagery. The Metaverse will give users an experience that will be better than any kind of communication that exists in the present day, including video and audio interaction. * Entertainment: The Metaverse will completely revolutionise entertainment with its rich 3D environment and avatars. Entertainment's growth is highly linked to the development of VR in the Metaverse. The Metaverse-based entertainment, including movies and video games, can be enjoyed in a virtual world that users can access from the comfort and privacy of their home [26]. It also allows users to attend virtual concerts and sporting events from first-row seats or to ride virtual roller coasters at theme parks. * Shopping: Customer experiences in the Metaverse will evolve constantly as a result of XR technologies, and organisations selling products in metamalls will have more creative freedom to express themselves and attract customers than they do in traditional shopping. These spaces will encompass much more than the basic services seen on the majority of e-commerce sites today, including user engagement, avatar customization, event attendance, and skill acquisition [27]. Furthermore, the products sold in the Metaverse will include both physical and virtual items. Consumers may feel and touch the object with the use of sensors, which will completely alter the traditional buying experience. Additionally, customers can purchase things on the go while engaged in real-world activities. #### 3.2.3 Technical Requirements of the Metaverse Privacy, security, storage, and interoperability are the important technical requirements of the Metaverse. * Privacy: The Metaverse is a social platform that employs interactive technology such as VR, AR, and AI that requires sensitive user data. Since behavioral-learning and recommender systems collect vast quantities of personal information, they pose a threat to the privacy of the Metaverse users [28]. Therefore, the use of such technologies poses a substantial risk to the privacy of users' data. The Metaverse must guarantee privacy protection for such sensitive information, and users must have complete control over their data, which will increase their trust in the Metaverse. Even though blockchain technology can help protect privacy in the Metaverse, there are no specific rules designed for privacy protection in the Metaverse, which makes it a critical requirement. 
* Security: In the Metaverse, attackers and AI bots can and will emerge from any location and at any time. The Metaverse networks should have a high level of security, and related protocols to incorporate continuous awareness into these networks [5]. In addition to existing passwords, multi-factor authentication, enhanced firewalls, and advanced threat detection technologies, the Metaverse must be incorporated with higher transparency and analysis to detect anomalies and uncover malicious activities to maintain user security. Data must be secured and safeguarded during transmission and storage. To assure the security of the Metaverse in the future, it is vital to draw on and build upon information from the past. * Storage: The Metaverse is a collection of technologies. It is a huge concept which involves the simultaneous integration of multiple technologies. The list includes high-performance networks, sophisticated computing and sensors, hardware, AR/VR, AI, 3D modelling, blockchain, IoT, and many other technologies. The data produced from these technologies and their related application will be enormous [29]. The formation of the Metaverse itself necessitates voluminous data storage. Decentralised storage based on blockchain technology can be used to store this massive amount of data. This storage distributes data to numerous independent network nodes using open-source applications and algorithms. It also improves the privacy of data, the redundancy of data backups, and the transparency of data in the Metaverse. * Interoperability: Interoperability across services, technology, and virtual worlds is a crucial aspect of the Metaverse [30]. A cross-chain protocol is an optimal approach for maintaining interoperability between diverse Metaverse services, technologies, and configurations. Among other protocols, this one permits the exchange of assets like avatars, non-fungible tokens, and currency. To make the Metaverse more interoperable, different devices that use the same technology need to follow the same rules and standards. ## III 6G for the Metaverse: Technical Perspective This section investigates the state-of-the-art solutions provided by 6G for the Metaverse from the technical perspectives, including validation of digital assets, cross-platform integration, efficient support of AI, privacy and security, high-speed data connection, content creation and storage, user interactivity, low latency communication, and computer vision. ### 6G for Validation of Digital assets in the Metaverse Non-fungible Token (NFT) is one of the key components of a digital asset in the Metaverse. A visual asset, such as a virtual building, can be represented by an NFT that can be represented as a digital asset with unique data coding. When a purchaser buys an NFT, a private digital key password will be generated, that can certify that the purchaser owns a particular digital asset. Through this private key, the owner of the NFT can sell or transfer the digital asset to others [31]. The blockchain associated with the specific Metaverse will record the NFT for a digital asset in the Metaverse. For instance, the Ethereum blockchain stores "Decentraland Metaverse", which is highly popular. Ethereum records the digital asset transactions of the NFT for Decentraland [32]. Digital assets in the Metaverse can be created in the form of NFTs by the users. 
These digital assets can be anything ranging from virtual goods to digital art to virtual real estate, which is minted into the NFTs that are securely stored on the blockchain. The owners of these digital assets can see these digital assets which are in the form of NFTs in the form of crypto for purchasing other digital assets in the Metaverse [33]. Content creators and artists can afford to have an opportunity to monetize their assets uniquely due to NFTs and blockchain. For instance, artists need not depend on auction houses or galleries for selling their art. The artists can sell their art directly to the customer in the form of an NFT that allows them to keep the majority of the profits. The artists can even receive royalties whenever their art is sold to a new customer. Like art, the other digital assets also can be traded through NFTs in the Metaverse [34]. The process of creating NFTs and transferring them from one virtual world to another requires a network that is highly reliable and secure. Digital assets in the Metaverse, represented by NFTs are verified and tracked on the blockchain. Every NFT will have a unique transaction hash that makes it non-replicable. All the transactional data related to the NFTs collected by the blockchain and are stored in blocks, that forms a blockchain. The information stored in the blockchain is stored forever and can be viewed and verified by the public. Verification of the digital assets in the Metaverse that has AR and other MR technologies incorporated, needs significant amount of bandwidth to create a more immersive experience add also to reduce the load times. The validation and verification of the digital assets in the blockchain incurs heavy computation in the blockchain, which needs significant bandwidth so that the users can see the results in near real-time, as depicted in Fig. 4. The transactions between the different entities in the Metaverse are also powered by the consensus mechanism of the blockchain, which requires huge amounts of data transfer between nodes. This creates a requirement for a network that is both transparent and capable of real-time communication. These challenges faced during the creation, transfer, and validation of digital assets in the Metaverse can be solved by 6G due to its low latency, reliability, transparency, and high-speed connectivity [35]. ### 6G for Cross Platform Integration/interoperability in the Metaverse One of the hurdles in realizing the full potential of the Metaverse is the lack of interoperability. Lack of interoperability [36, 37] is a major hurdle in a mass adaption of the Metaverse that makes the navigation of the users free from one Metaverse application to the other challenging. The Metaverse should mimic the interoperability that is experienced in the physical world. For instance, in the real/physical world, we can take physical assets/objects from one place to another easily. The users in the Metaverse too should be able to navigate seamlessly and freely to other Metaverses. This is possible through interoperability that can form a global interconnected Metaverse where various Metaverses are integrated across the platforms as experienced in the real world [38]. Realization of interoperability in the Metaverse is a significant challenge as heavy objects such as digital avatars, 3D holograms etc. have to be navigated across in feature-rich Metaverse in near real-time. It requires a communication infrastructure with high bandwidth and low latency. 
6G, with its high bandwidth and ultra-reliable low-latency communication infrastructure, can solve the issue of seamless communication in the Metaverse, as depicted in Fig. 5. With the help of supporting technologies like ORAN and ZSM, the 6G network can be the common platform that provides an interoperable infrastructure for multiple Metaverses. Network slicing, software-defined networking, symbiotic radio, and network function virtualization are the 6G techniques that promote network interoperability and agility in the Metaverse. Intelligent collaboration between diverse wireless signals is supported by symbiotic radio. SDN/NFV offers open interfaces that facilitate interoperability between several Metaverses and help produce network slices for any vertical application, such as gaming and shopping, over the common physical infrastructure shared among different Metaverses [39].

Figure 3: 6G for the Metaverse: Technical Perspective
Figure 4: 6G for Validation of Digital Assets in the Metaverse
Figure 5: 6G for Cross Platform Integration/Interoperability in the Metaverse

### 6G for Efficient Support of AI in the Metaverse

The Metaverse is a virtual world where users will play games, interact with each other and with 3D objects, and build things in the virtual world. VR and AR, along with blockchain and AI, are the key enabling technologies in realizing the Metaverse. The applications of AI in the Metaverse include speech processing, content analysis, computer vision, etc. [3]. These applications of AI can be used to help build important components of the Metaverse, as discussed below:

**Avatars:** Avatars are one of the important and interesting concepts of the Metaverse, where people in the physical world create a digital avatar in the virtual world. People would like to get creative in the virtual world and see themselves in a different way, which may not be possible in the physical world. They can change their clothing, hairstyle, and body language, which is not their regular norm in the real world. AI plays a major role in helping users design their avatars in the virtual world: it can be used to analyze 3D scans or user images to create accurate, realistic, and innovative avatars [40]. Some organizations, such as Ready Player Me, are making use of AI to create avatars for the Metaverse.

**Digital Humans:** In the Metaverse, 3D chatbots are termed digital humans. Digital humans respond and react to the actions of humans in the virtual world. They are usually non-player characters, for instance characters in a virtual-reality game whose actions and responses depend on a set of rules or automated scripts. They try to understand what users are communicating by listening to and observing them. Human-like interactions and conversations can happen in the Metaverse between humans and digital humans through body language and speech recognition [41]. AI plays a significant role in the successful implementation of digital humans in the Metaverse: some of their key functionalities, such as speech recognition, body-language identification, and object detection, can be realized through AI [6].

**Language Processing:** In the Metaverse, users from across the globe can communicate and interact easily without language barriers. This is made possible with the help of AI. AI can break a human language such as English into a format that can be read by machines, then analyze the sentences and respond to the users in their own language [42].
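As a toy illustration of the language-processing step described above (turning text into a machine-readable form and detecting the user's language before responding), the sketch below guesses a language from stop-word overlap. This is a word-list heuristic for illustration only; a production virtual assistant would rely on trained NLP models rather than anything this simple.

```python
# Minimal stop-word-overlap language guesser (toy sketch, not a production NLP model).
STOPWORDS = {
    "english": {"the", "and", "is", "in", "to", "of"},
    "spanish": {"el", "la", "y", "es", "en", "de"},
    "french":  {"le", "la", "et", "est", "en", "de"},
}

def guess_language(text: str) -> str:
    """Return the language whose stop-word set overlaps most with the input tokens."""
    tokens = set(text.lower().split())
    scores = {lang: len(tokens & words) for lang, words in STOPWORDS.items()}
    return max(scores, key=scores.get)

print(guess_language("the avatar is in the virtual world"))    # english
print(guess_language("el avatar es parte de la experiencia"))  # spanish
```

Once the language is identified, the assistant can route the utterance to the appropriate understanding and response models, which is exactly the kind of per-user, low-latency inference that benefits from edge-hosted AI.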
From the above discussion, it is obvious that AI plays a significant role in the realization of some of the key features of the Metaverse. In the Metaverse, huge volumes of heterogeneous big data will be generated at a very fast rate. 6G, with characteristics such as a fast communication infrastructure and near real-time processing, can help process and analyze this big data to uncover the patterns in it, training the AI/ML algorithms in near real time to make quick decisions and predictions through which the various components of the Metaverse can communicate easily.

### 6G for High Speed Data Connection in the Metaverse

The wide adoption of AR and VR technologies is the key to the transition to the Metaverse. Data usage is expected to increase to roughly 20 times today's levels as the Metaverse takes off. To realize the full potential of the Metaverse, with real-time AR and VR experiences and truly immersive 3D content, end-users should be able to access high-speed data connections that can deliver data at speeds of approximately 1 Gbps [43]. Some of the key requirements that will be needed to realize the true potential of the Metaverse are as follows:

* To create virtual-reality worlds in real time, a high-speed data connection is required.
* The communication infrastructure should support high-speed transmission in near real time with very low latency, typically below 10 milliseconds.
* The existing 4K video resolution may not be sufficient to convey the pixels needed for creating immersive worlds. Higher-resolution videos have to be supported by the data carriers.
* Next-generation video compression techniques that can compress and decompress huge data files in the Metaverse in real time are the need of the hour.

With its promise of high bandwidth and URLLC [44], 6G is a key enabling technology for meeting the bandwidth requirements of the Metaverse. The use of Edge AI-enabled 6G can also help Metaverse applications and devices address these issues. Edge AI is the combination of edge computing and AI to run machine learning tasks directly on connected edge devices. Edge AI computes and processes data locally, which helps the Metaverse devices be efficient and responsive in their communication. This also reduces the amount of data sent from the Metaverse devices to the cloud, thereby saving a huge amount of bandwidth.

### 6G for Efficient User Interaction in the Metaverse

The Metaverse enables the interaction between real-world entities and virtual objects. It is a digital environment that incorporates social networking, real estate, online gaming, AR, VR, and cryptocurrencies. In the Metaverse, with the help of virtual objects, sounds, and other sensory input, AR tries to enhance the user's perception of the real world [5]. Each time a user enters the Metaverse, the objects around them undergo a dynamic transformation based on the requirements.

Figure 6: 6G for Efficient User Interaction in the Metaverse

Everything in the Metaverse is constantly changing, which indicates the dynamic nature of the Metaverse. Changes to a physical object are mirrored in its virtual counterpart in the Metaverse because of their digital twins, which are linked to real-world objects. People can also change objects by interacting with them.
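The mirroring of physical objects into their virtual counterparts described above can be pictured as a simple state-synchronisation loop. The sketch below uses an assumed, minimal data model (it is not an actual Metaverse engine API) in which sensor updates from a physical object are applied to its digital twin.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Minimal digital twin: mirrors the last reported state of a physical object."""
    object_id: str
    state: dict = field(default_factory=dict)
    version: int = 0

    def apply_sensor_update(self, update: dict) -> None:
        # In a real deployment this update would arrive over a 6G/URLLC link.
        self.state.update(update)
        self.version += 1

door = DigitalTwin("factory-door-17")
door.apply_sensor_update({"angle_deg": 35, "locked": False})
door.apply_sensor_update({"angle_deg": 90})
print(door.version, door.state)  # 2 {'angle_deg': 90, 'locked': False}
```

The interesting engineering question is not the data structure itself but how frequently and how reliably such updates can be pushed, which is where the latency and reliability targets of 6G come in.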
Instead of just looking at digital objects in the Metaverse, users will be able to experience a place and interact with them [45]. The creation of new objects will require complex inputs and will demand high-quality user interaction with the objects in the Metaverse. The Metaverse poses three crucial challenges for effective user interaction, as depicted in Fig. 6:

**Interacting with existing objects:** Users' physical interactions with these virtual worlds are an important consideration [46]. For the Metaverse to persist, this is a fundamental challenge that must be overcome. When users are unable to control the interaction, they will stop using it immediately. When a user is completely immersed in a virtual world and finds themselves unable to perform a task that they could do in the real world, they become frustrated and annoyed.

**Modifying existing objects:** As technology gets better and the real world keeps changing, Metaverse objects will need to be modified to make them seem more real [47]. Realistic objects need more precise modelling algorithms, just like real faces. Even in the Metaverse, where scenes and avatars are always changing and interacting, objects have to be updated continuously to reach this level of realism.

**Creation of new virtual objects:** The Metaverse is a virtual 3D universe comprised of virtual 3D objects [48]. The Metaverse requires the creation of immersive experiences based on real-world artefacts to accomplish its objective of combining the digital and physical worlds. In the Metaverse, many digital objects will need constant sensor inputs from their physical counterparts to produce this realistic immersive experience for the users. The Metaverse will also enable its users to create virtual objects by providing them with various tools and applications. As a result, it creates a huge requirement for bandwidth, which is a challenge to achieve with present technology.

From the above discussion, it is obvious that efficient user interaction plays a significant role in the creation, interaction, and modification of digital objects in the Metaverse. This requires massive input from real-world objects. 6G's URLLC and real-time processing abilities will aid in building a highly immersive 3D environment in the Metaverse.

### 6G for Low Latency Communication in the Metaverse

Low latency communication is the capability of the communication network to deliver large quantities of data with minimal delay and high accuracy. Such networks are designed to support operations that require access to rapidly changing data in real time. Advanced technologies like self-driving cars, holographic telepresence, remote surgery, deep-sea and space tourism, and other AR and VR innovations are becoming part of the Metaverse [49]. For instance, we have become accustomed to virtual communication using Zoom, Skype, Microsoft Teams, and other platforms; future developments in VR and AR are well on their way to making an office where people can talk to each other in a fully immersive way. This integration of advanced technologies into the Metaverse creates a huge demand for next-generation networks with enhanced bandwidth and reduced latency. The present network infrastructure cannot provide the bandwidth and latency required for the Metaverse and its applications. The capacity of current 5G networks to handle the IoE, holographic telepresence, collaborative robotics, and deep-sea and space tourism is limited.
These applications require multiple terabytes of bandwidth as they depend on real-time inputs from the real world. From this discussion, it is clear that the Metaverse necessitates the highest network positioning accuracy and multiple terabytes of bandwidth. The 6G network, with advancements like greater use of the distributed radio access network (RAN) and the terahertz (THz) spectrum to increase capacity and improve spectrum sharing, will provide the effective, low-latency communication required for the Metaverse [50].

### 6G for Computer Vision in the Metaverse

Computer vision is the study of how computers perceive and interpret digital images and videos. It encompasses all activities performed by biological vision systems, including seeing or sensing a visual signal, interpreting what is being seen, and extracting complicated information in a form usable by other processes [51]. Using sensors, computers, and machine learning algorithms, this multidisciplinary field replicates and automates essential parts of human vision systems. The objective of computer vision is to develop artificial intelligence systems that can see and understand their surroundings.

Figure 7: 6G for Computer Vision in the Metaverse

In the Metaverse, computer vision plays an important role in enabling humans to experience the virtual environment, as depicted in Fig. 7. Through the use of digital avatars, VR and computer vision provide a near-to-lifelike experience in the Metaverse [52]. In order to connect to this virtual world, the user needs to use XR devices, which are built on the foundation of computer vision. XR applications rely heavily on computer vision: visual information in the form of digital images or videos is processed, analyzed, and interpreted with its help, which supports effective decision-making in the Metaverse. As a result of computer vision, VR and AR environments can be built that are more accurate, trustworthy, and user-friendly than their real-world counterparts. Human position tracking is a computer vision challenge that tries to determine where people are located in an environment that is constantly changing. In the Metaverse, the healthcare, military, construction, manufacturing, education, and retail sectors will rely largely on computer vision. For example, doctors can improve surgical processes and study data from 3D scans in real time using computer vision, which will assist them in detecting, diagnosing, and treating potential diseases and enable them to examine patients from anywhere in the world [53]. Computer vision in the Metaverse will evolve at an accelerated rate, and even 5G cannot keep up with the rapidly evolving technological requirements of the Metaverse's computer vision capabilities. Computer vision requires the continual collaboration of heterogeneous devices to provide immersive experiences for the users, which requires uninterrupted network service and symmetric uploading and downloading speeds so that users can quickly upload their own content while concurrently downloading the content of others. 6G supports a higher number of device connections, which is crucial for computer vision in the Metaverse to deliver its fully immersive services to customers [54]. The independent frequency, higher data transmission rates, and large coverage of 6G will enhance the QoS of computer vision in the Metaverse.
### _6G for high transaction integration scalability_ To date, the Metaverse implementations used a centralized cloud-based approach for avatar physics emulation and graphical rendering. The centralized design is unfavourable as it suffers from several drawbacks caused by the long latency required for cloud access. Further deployments of Metaverses will also bring scalability issues to the physical layer due to the increased number of computing tasks mainly generated by extremely demanding applications. The traditionally deployed centralized architectures are unlikely to support a large number of Metaverses and users, so the introduction of de-centralized Metaverse systems including frameworks and protocols is inevitable. There are several approaches that can be taken, starting with leveraging Mobile Edge Computing (MEC) technology. For example, [55] proposed the blockchain-based MEC architecture, where base stations allocate their computation and communication resources for providing video streaming and the use of a series of smart contracts enables a self-organized video transcoding and delivery service without a centralized controller. Using the MEC more efficiently will not fulfil the requirements in full, so the decentralized architecture will have to further distribute the communication and computational cost among different nodes present in the virtual space. The concept of de-centralizing the Metaverse applications was presented by authors of Solipsis [56] - a system that allows the adaptive streaming of 3D models including avatars, sites and objects in a completely decentralized fashion. In general, to overcome challenges related to a high number of transactions, massive resource demands and scalability concerns a novel framework should be proposed to address those emerging challenges for the development of future Metaverses. In such framework, the Metaverse Service Provider (MSP), which is a service provider that offers applications or services such as games, conferences or concepts should be able to get paid for provided services and in addition to this, the MSP should be allowed to negotiate with the Metaverse User (MU) to use MUs computational resources in return for discounts or rewards. The blockchain, which is provided by the MSP can contain all interactions between the MSP and MU in terms of transactions. The MetaChain [57] describes a similar concept that could serve basis for future deployments. In this particular proposal, the blockchain shards are used to allow the MUs to contribute their computational resources to the Metaverse application provided by the MSP. This is done in exchange for digital assets such as tokens or service access. With this approach, more users can be attracted to a particular Metaverse. However, attracting users to contribute resources is going to be particularly challenging. The reason is that the service provider will not be able to directly allocate the user resources to the shards. Sharding is so far one of the most practical solutions to achieve a scale-out system where the processing, storage, and computing can be conducted in parallel. As such, the capacity and throughput being linearly proportional to the number of participating nodes or the number of shards become possible, while preserving decentralization and security. The consideration has to be taken when creating shard-based systems as users (by human nature) will aim to maximize their profits and concentrate resources on the shards that pay more. 
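To illustrate the scale-out behaviour and the incentive problem described in the preceding paragraph, the sketch below assumes that shards process transactions in parallel and that self-interested users split their contributed resources in proportion to per-shard rewards. The numbers and the proportional-allocation rule are illustrative assumptions, not MetaChain's actual protocol [57].

```python
def allocate_resources(total_units: float, shard_rewards: list[float]) -> list[float]:
    """Self-interested users split resources in proportion to per-shard reward rates."""
    total_reward = sum(shard_rewards)
    return [total_units * r / total_reward for r in shard_rewards]

def system_throughput(allocations: list[float], tps_per_unit: float = 50.0) -> float:
    """Shards work in parallel, so total throughput grows with the allocated resources."""
    return sum(a * tps_per_unit for a in allocations)

rewards = [1.0, 1.0, 3.0]               # the third shard pays more ...
alloc = allocate_resources(100, rewards)
print(alloc)                             # [20.0, 20.0, 60.0]  ... so it attracts most resources
print(system_throughput(alloc))          # 5000.0 transactions/s in total
```

The toy model makes the two points of the paragraph visible at once: total throughput scales with the contributed resources, but a naive reward scheme skews those resources toward the best-paying shards, leaving the others under-provisioned.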
Nevertheless, whichever form such a framework takes, a pay-per-share protocol is required to off-load computational workload onto the Metaverse user devices.

### 6G for Security and Privacy Protection in the Metaverse

Metaverses should offer their users an extraordinary immersive experience in virtual environments, such as entertainment games and smart cities, using their enabling technologies. The Metaverse can track users' physical actions and physiological responses and may expose confidential information about their habits and physiological nature to third parties. If hackers get their hands on such sensitive information, it could lead to harassment and theft of digital assets, which could make users lose faith in the security and privacy of the Metaverse. These issues can be addressed by utilizing privacy-protection technologies like "Private Copy" and "Clone Cloud", as depicted in Fig. 8. The creation of private copies and clone clouds depends on high connectivity and continuous integration with the Metaverse environment. The edge intelligence facilitated by 6G can support the needs of these technologies in the Metaverse. The use of a blockchain-based digital twin wireless network and an edge computing federated learning architecture can further enhance the users' privacy and data security [8]. Together with 6G, AI can optimize connectivity while also enabling traffic prediction and improving security in the Metaverse. To avoid information breaches, physical-layer communication may use a machine learning-based antenna design. Machine learning and quantum encryption can also be used to protect the security of communication devices in the Metaverse. The Metaverse's security may be further increased by using early warning systems and AI-enabled 6G to identify network anomalies. The use of distributed and federated AI in a 6G network also eliminates the necessity for data sharing across the Metaverse devices, which preserves the privacy of the users.

Figure 8: 6G for Security and Privacy Protection

## IV Role of 6G Technologies for the Metaverse

6G will play a key role in the Metaverse operation since such an environment requires pervasive connectivity for full-fledged and omnipresent Metaverse immersion. Essentially, very high bitrates and ultra-low delay are crucial for a satisfactory Metaverse experience. An important factor in this performance is the smart management of connectivity resources and services, a scalable infrastructure, and very low latency communications. Therefore, Edge AI and cloud infrastructure are necessary for efficient and performant handling of the relevant use cases in the Metaverse. Edge AI is an important enabler since it facilitates AI-driven, optimized network management and minimizes delay with distributed and close-to-the-user computing paradigms. This technology will be compounded with the AI-native design of 6G, in which AI is embedded for numerous functions ranging from physical-layer control to service management. Furthermore, the flexibility and scalability required for the network and service environment call for moving towards cloud-native technologies, which can also form telco clouds for a more efficient and scalable Metaverse infrastructure in the backend. In the cyber-physical domain, another aspect of the Metaverse in which 6G will play a key role is IoE and robotics. Additionally, 6G will have the essential toolbox to enable AR/VR, which is critical since the Metaverse will be the main vessel for the AR/VR experience. An appropriate immersive experience in the Metaverse will be possible with those technologies enabled by 6G communication and computation functions.
As a transversal technology similar to AI, blockchain can also support the distributed and open nature of the Metaverse and enable the transferability of digital assets, which will be an important capability for the Metaverse use cases. A depiction of these technologies and their roles is provided in Fig. 9, and a summary of all related works is presented in Table 4.

Figure 9: 6G key technologies and their roles for the Metaverse.

### _AI_

#### Introduction

Based on the combination of many advanced technologies, the Metaverse should be built to convey a groundbreaking immersive experience to users, and AI has played a vital role in the foundation and development of the Metaverse in numerous aspects, including core services and applications. Besides the responsibility of ensuring the reliability of the Metaverse's infrastructure, AI can help developers design and build a smarter and more beautiful virtual world, and it further allows users to achieve hyperreal creations using built-in tools. In 6G systems, numerous challenging tasks and applications can be solved and enabled by advanced ML algorithms with different learning mechanisms (i.e., supervised learning, unsupervised learning, and reinforcement learning) to achieve high performance and low latency. In particular, DL, with its advantage of effectively learning complex patterns from large and messy practical datasets, will be the key technology to polish many aspects of the Metaverse, from the intelligence of AI agents and virtual assistants (a.k.a. chatbots) to the visual quality of 3D worlds [58]. Indeed, the presence of AI in the Metaverse can be realized in the interactions between a user (represented by an avatar) and other objects (e.g., non-player characters) by automatically analyzing sensory data for multiple tasks, such as speech recognition and understanding, facial expression analysis, body movement tracking, and gesture recognition. Besides, AI can be applied to protect users' identities and their digital assets from cyberattacks, especially in the scenario in which the Metaverse is built with a cross-chain bridge.

#### How 6G AI can help, and on which features

In the Metaverse, natural language processing (NLP) plays an important role in deploying intelligent virtual assistants (including chatbots) [59], which helps the Metaverse comprehensively understand what users are typing and saying, from simple sentences to long and complicated conversations, in order to smooth user interaction accordingly. Empowered by AI with ML and DL algorithms, chatbots can immediately respond to the users and adapt to an environment with reinforcement learning to consolidate operation and improve the performance of the overall virtual assistant system [60]. In the NLP domain, language modelling aims to predict linguistic components in sentences and paragraphs by mining syntactic and semantic relations of words and phrases, and it is commonly developed for machine translation and text recommendation. Several advanced language modelling methods have exploited DL with RNN, LSTM, and CNN architectures to improve the overall system efficiency and have addressed many fundamental tasks [61], such as identifying long-term dependencies in long sentences in complicated scenarios, and recognizing hyphenated words, misspelt words, suffixes, and prefixes.
Especially, language modelling should be taken into consideration for different popular languages, such as English, Chinese, Spanish, and French [62, 63], to attract as many users as possible from all over the world to join the Metaverse. Some advanced structures in deep networks, such as bidirectional LSTM, bidirectional gated recurrent units (GRU), and channel-wise attention connections, have been leveraged to deal with challenging problems such as sentiment analysis, question type classification, and answer identification across multiple sentences [64, 65], which accordingly improved the readability, interpretation, and rationality of virtual assistant agents. Some other specific AI-based NLP tasks (e.g., context retrieval, semantic annotation, and named entity recognition) can be considered to uplift text-based and speech-based user interactive experiences in the Metaverse. Commercial headset devices with VR/XR technology have been designed to bring 3D viewing experiences to users, including high video quality (i.e., high resolution and high frame rate) and wonderful wearing comfort thanks to the advancement of AI. In [66], an eye fixation prediction method was introduced for gaze-based applications (e.g., video rendering and content visualization), in which a DL framework with hierarchical CNN architectures was exploited to process different data types, including VR images, gaze data, and head data collected by wearable sensors. Some recent works have studied advanced ML and DL algorithms to precisely identify the periodic behaviours of VR gear users (e.g., of gaming controllers and head-mounted displays) for automatic identity authentication and health issue detection [67]. Some deep networks with CNN architectures have been designed to assess the quality of images/videos (e.g., color saturation, brightness, resolution, and frame rate) displayed on the screen of VR devices, and then automatically adjust screen settings to optimize the visualization based on video content and user conditions [68]. Along with the VR/XR technologies, computer vision is one of the most important sectors for building a beautiful virtual world in the Metaverse and making it more intelligent from the user's viewpoint, thanks to the adoption of AI, and especially DL, in recent years [69; 70; 71]. Many sophisticated CNN architectures have been designed for fundamental image processing and computer vision tasks, such as object detection, semantic segmentation, and scene understanding [72; 73]. For instance, atrous convolution was introduced by DeepLab [74] for semantic segmentation to capture more meaningful features by enlarging the receptive field of the kernels, enhancing the learning efficiency of a deep network while keeping the network size small. Static and dynamic objects can be detected and located in the virtual worlds accurately by several recently advanced DL models to provide useful information to users and to be synthesized for higher-level tasks like scene understanding and detailed captioning. Some image/video quality distortion problems, such as blurring, noise, and frame corruption, can be addressed effectively by AI technology to guarantee the high-class visual perception of a user when experiencing the Metaverse [75].
In addition, the activities, including single actions and interactions, of users and non-player characters in the Metaverse can be automatically detected and recognized by AI-powered human pose estimation and action recognition methods [76]. Some convolutional networks have exploited cutting-edge layer structures, such as dense connections, skip connections, and attention connections, to estimate complex human poses and classify group activities while handling other practical challenges like varying viewpoints, object occlusion, and complicated actions. For example, generative statistical models and hybrid LSTM-CNN architectures are suggested in [77] to precisely examine pose transitions in the spatiotemporal domain, thus increasing the accuracy of action recognition and activity classification. To protect the Metaverse from different cyberattacks, and especially to protect users' digital goods and cryptocurrency assets, many advanced ML algorithms and DL models can be deployed in multiple layers (e.g., the network and services layers) of the Metaverse's platform for intrusion detection [78; 79; 80], in which various malicious attacks can be automatically and accurately detected and classified to immediately provide an efficient security solution. In [81], a holistic security method with sufficient assurability and explainability was proposed to quickly and sequentially detect abnormalities and time-dependent abnormal events in IoT systems and software-defined networks, in which zero-bias neural networks are transformed into performance-assured binary abnormality detectors to increase detection accuracy while presenting the lowest latency under false alarm constraints. In the effort to exploit DL denoising autoencoders (DAE) for many fusion security problems, many variants of the DAE, including the stacked autoencoder, stacked sparse autoencoder, stacked noise autoencoder, stacked contractive autoencoder, and deep belief network, were benchmarked for performance comparison on different practical intrusion detection datasets [82]. Reinforcement learning (RL), with its capability of learning environmental factors to adapt learnable parameters, was also exploited to deal with different types of cyberattacks (such as malware, denial-of-service attacks, and man-in-the-middle attacks) [83]. In addition, privacy preservation in the Metaverse should be uplifted comprehensively with the help of AI to ensure that there are no leakage risks or threats to users' big data in the virtual world. For instance, a privacy-aware and asynchronous DL-based method was introduced in [84] to maintain the confidentiality of data among different collaborative data collection sites. In [85], an optimal centralized privacy-preserving aggregate mobility data release mechanism was proposed to minimize data and information leakage, in which deep RL models and the Asynchronous Advantage Actor-Critic algorithm are combined to optimize the privacy-preserving method. The above-mentioned privacy-preserving DL-based methods can be recommended for the Metaverse to combat information leakage threats and adversarial attacks effectively.

Figure 10: The roles of AI for the development and advancement of the Metaverse.

#### Summary

In the Metaverse, AI has provided a plentiful foundation for development in numerous aspects and has helped to construct a more beautiful virtual world with intelligent and secured services, thus bringing a wonderful experience to users.
Several advanced ML algorithms and DL architectures have been deployed to take care of the comfortableness of VR users, and the interaction between users with virtual assistants, and automatically provide useful information about the virtual worlds to users. Besides some popular domains like NLP and computer vision, AI has great potential for deployment in other sectors: protecting users' digital assets from hackers, early detecting intrusions for data security and privacy preservation, improving the performance of URLLC with intelligent MEC, enhancing the intelligence of AI agents in real-time strategy and fighting games, and analyzing mental state with the brain-computer interface as illustrated in Fig. 10. Although some advanced ML and DL models can conduct a high performance in many detection and classification tasks, they represent black boxes that lack the capability of explainability and interpretability. Therefore, there remains room for AI research and development in the Metaverse. ### _Blockchain_ 1) Introduction In the Metaverse, data privacy and virtual asset (e.g., cryptocurrency and NFT) security of users should be guaranteed as the top priority. In this context, blockchain technology represents a promising solution with many unique features at once, for example, decentralization, transparency, and immutability. Fundamentally, blockchain is an innovative technology that permanently records transactions in a decentralized and public database so-called a ledger [86]. Although all transactions are transparent (i.e., being available to check by anyone), the decentralized recording system of blockchain is very difficult to fool or control. Some blockchains like Ethereum and Solana are programmable through smart contracts with different consensus mechanisms, such as proof-of-work and proof-of-stake, which can meet high-security requirements of e-commerce platforms and enable the revolution of the digital ecosystem in the Metaverse, especially supporting virtual asset payment and trading activities. A smart contract on the blockchain could be used to establish the ownership of any digital object, such as artwork and music, over NFT specialized by unique and nonreplaceable (i.e., no one else can claim the ownership of that digital product on the blockchain even if they have a copy version on computers). The role of blockchain in the Metaverse relies on ensuring data privacy and security, enabling seamless and secured data sharing, data interoperability and integrity with some common applications and services, such as decentralized finance and NFT market [87]. Besides that, blockchain allows digital goods to be tradable safely in a virtual world and enables the connection of physical objects to the Metaverse over NFTs. Notably, if two virtual worlds are interoperable, the blockchain has to authenticate the proof of ownership of digital goods in both virtual worlds. Indeed, blockchain bridges the real world and the virtual world besides playing as the gateway for users to access the Metaverse. #### 3.2.2 How 6G BC can help, on which features Data acquisition is one of the most fundamental processes to build the virtual world in the Metaverse, which collects big data from different modalities. Notably, the sensitive data collected from users to train AI models for several special modules (such as decision-making of virtual assistant, recommendation system, digital product development, and automated market maker) in the Metaverse should be secure. 
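Before turning to the specific blockchain-based acquisition schemes surveyed next, the following minimal sketch shows the tamper-evidence idea such systems build on: each recorded data batch carries the hash of the previous record, so any later modification breaks the chain. This is an illustrative toy, not the design of any cited system.

```python
import hashlib, json, time

def _hash(record: dict) -> str:
    """Deterministic SHA-256 digest of a record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class AcquisitionLedger:
    """Toy append-only, hash-chained log for acquired data batches."""
    def __init__(self):
        self.chain = [{"index": 0, "payload": "genesis", "prev_hash": "0" * 64}]

    def append(self, payload: dict) -> dict:
        record = {"index": len(self.chain), "timestamp": time.time(),
                  "payload": payload, "prev_hash": _hash(self.chain[-1])}
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every link; a single altered record breaks the chain."""
        return all(self.chain[i]["prev_hash"] == _hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = AcquisitionLedger()
ledger.append({"sensor": "hmd-42", "samples": 512})
ledger.append({"sensor": "glove-7", "samples": 256})
print(ledger.verify())                          # True
ledger.chain[1]["payload"]["samples"] = 9999    # tamper with a stored batch
print(ledger.verify())                          # False
```

Real deployments add consensus, incentives, and smart contracts on top of this basic chaining, which is exactly what the works discussed below contribute.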
For secure and large-scale environment data acquisition, the work in [88] proposed a blockchain-based system which is specialized by one valuation layer to assess the quality of acquired data, one consensus layer to encourage and incentivize high-quality data acquisition, and one ledger layer to record transactions and qualified environmental data. In [89], a blockchain-based efficient data collection and secure data sharing mechanism was introduced for reliable industrial IoT systems. This mechanism has exploited the Ethereum blockchain to maximize the amount of acquired data and the deep reinforcement learning algorithm to obtain highly secure and reliable shared data. To guarantee the users' privacy in crowdsourcing systems, Li _et al._[90] designed a blockchain-based decentralized framework for data collection and sharing. There were three standard smart contracts on blockchain executed for the whole process of data acquisition to achieve such crowdsourcing information as task posting, receiving, and assignment. The proposed method was implemented and verified on an Ethereum test network with real-world data, which demonstrated usability, feasibility, and scalability to be suitable for distributed crowdsourcing systems. Although blockchain technology can ensure highly secure and reliable data supplied to the Metaverse, its drawback is low latency due to the complicated and distributed nature of processing transactions with smart contracts and consensus mechanisms like PoW. Besides, the high transaction fee is also a realistic barrier for a low-income user to experience the Metaverse. In a large-scale Metaverse platform, data storage should be taken into consideration seriously because of the high velocity, big volume, and complicated variety of big data from a plentiful number of applications and services deployed in virtual worlds [91]. There exist many underlying risks, such as leakage, tampering, and loss if the Metaverse is built on a platform with centralized storage systems. Some sensitive data like biometric login data of the user (e.g., face and touch identification on iPhone) can become the target of cyberattacks to steal virtual assets. To overcome the above-mentioned issues of centralized systems, the work in [92] proposed a large-scale secured IoT data storage scheme by exploiting blockchain miners to manipulate IoT data stored in distributed hash tables (DHTs). In the blockchain system, a certifiateless cryptography scheme was applied to reduce redundancy in traditional public key infrastructure and authenticate IoT devices, where the generated public key pairs are broadcasted to all devices with verification done by the blockchain miners. In [93], the time-series data in the Metaverse was stored in a locality-aware auditable decentralized storage ecosystem that was designed and managed thanks to the advancement of blockchain technology. Some data storage systems with recovery functionality have been developed to effectively address multiple problems, such as low integrity, high cost, and easy tempering. Liang et al. [94] introduced a secure blockchain-based data storage scheme, wherein the incoming data packets are verified with smart contract and consensus mechanism, and then checked to early detect any threats before being stored on a decentralized system. Notably, when distortion occurs to the stored data, multiple nodes in the blockchain network can repair it successfully. 
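The repair behaviour mentioned at the end of the previous paragraph, where distorted data is restored by other nodes, can be pictured with the small replication sketch below. It is an illustrative toy using simple majority-of-replicas repair, not the design of the storage systems cited above.

```python
import hashlib
from collections import Counter

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ReplicatedStore:
    """Toy store keeping each object on several nodes; corrupted copies are repaired by majority vote."""
    def __init__(self, num_nodes: int = 3):
        self.nodes = [dict() for _ in range(num_nodes)]

    def put(self, key: str, data: bytes) -> None:
        for node in self.nodes:
            node[key] = data

    def repair(self, key: str) -> bytes:
        copies = [node[key] for node in self.nodes]
        majority, _ = Counter(copies).most_common(1)[0]
        for node in self.nodes:                    # overwrite any copy that disagrees
            if digest(node[key]) != digest(majority):
                node[key] = majority
        return majority

store = ReplicatedStore()
store.put("avatar-mesh-001", b"original bytes")
store.nodes[1]["avatar-mesh-001"] = b"corrupted!"  # distortion on one node
print(store.repair("avatar-mesh-001"))             # b'original bytes'
```

Blockchain-based storage schemes replace this naive majority vote with cryptographic verification and consensus among untrusted nodes, but the recovery intuition is the same.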
As a mixture of numerous digital realms, the Metaverse demands manipulating and processing big data acquired from incompatible infrastructures for different purposes, in which the standardizations of data for different applications and services in the virtual worlds are dissimilar. This reveals a serious concern about data interoperability when expanding the Metaverse with an interconnection capability among different virtual worlds. To ensure the interoperability between different virtual worlds in the Metaverse, building a cross-chain protocol or an inter-blockchain bridge becomes a promising solution in many specific domains like healthcare and e-commerce [95, 96, 97]. A blockchain bridge is a protocol connecting two economically and technologically separate blockchains (such as Bitcoin, Ethereum, Avalanche, Solana and Polygon) for interactions, and acts like a physical bridge linking the ecosystem of one blockchain with another. As a result, blockchain bridges enable what is called interoperability, meaning that digital assets and data hosted in Metaverses built on different chains can interact with each other [98]. Besides, blockchain bridges allow users to access new protocols on other chains and encourage collaboration between developers from different blockchains, thus promoting a virtual economy in the Metaverse. A novel blockchain framework, namely BiiMED, was introduced in [95] to uplift the data interoperability and integrity in electronic health records (EHR) sharing systems. The proposed framework facilitated sharing medical data on EHR systems between different medical providers and healthcare institutions with a decentralized trusted third-party auditor for interoperation validation.

Figure 11: The roles of blockchain for ensuring the security and privacy of data acquisition, data sharing, data storage, and data interoperability in the Metaverse.

Some recent cross-chain protocols [99, 100] have been introduced to interconnect multiple blockchains for secure data utilization and management while obtaining full interoperability. In [99], a cross-chain interactive decentralized access model was designed with a gateway to read the information on multiple chains and route cross-chain transactions, and a notary network with an interplanetary file system and BigchainDB to verify and confirm each transaction based on a voting mechanism. Such cross-chain protocols allow users to easily buy, sell, and trade virtual assets among different digital worlds without any intermediate tools, and consequently encourage the adoption of the Metaverse. Along with interoperability, data integrity has also received much attention in the Metaverse, where blockchain technology has been considered to verify and protect data integrity in decentralized cloud computing systems [101, 102]. In the Metaverse, a user can freely interact and trade virtual goods (including cryptocurrency and other virtual assets like NFTs) with a virtual assistant and other users via decentralized exchanges (DEXs) integrated into the Metaverse to promote the development of decentralized finance (DeFi). As an ecosystem of financial applications built on blockchain networks, DeFi enables easy access to financial services, facilitates traditional financial systems, and has a modular framework with interoperability with public blockchains.
Recently, GameFi, a fusion of the words game and finance, refers to play-to-earn blockchain-based games with economic incentives for players, which are being developed and integrated in the Metaverse. A GameFi ecosystem uses cryptocurrency, NFTs, and blockchain technology to create a virtual gaming environment, where various GameFi ecosystems built on different chains can be involved in the Metaverse owing to chain bridges. In this context, many privacy issues arise (e.g., the leakage of user identity and other personal information that can be stolen for illegal purposes), which can be effectively handled by blockchain technology thanks to its immutability [103]. In a blockchain-powered Metaverse, third-party intermediaries are not permitted to manipulate the data of other parties. In [104], a blockchain-enabled crowdsourcing approach was proposed to deal with privacy preservation in mobile environments, where users can access the Metaverse using mobile devices. In secure 5G and 6G communication networks [105], blockchain was exploited to minimize privacy breaches by completely integrating authentication mechanisms with blockchain-based identification systems.

#### 3.2.3 Summary

With the distinctive features of decentralization, immutability, and transparency, blockchain technology has promoted the development and advancement of the Metaverse, where it plays an important role in any Metaverse platform with great contributions in many technical aspects, including data acquisition, data storage, data interoperability, and privacy preservation. Besides ensuring the privacy of sensitive information and the security of trading activities (e.g., buying/selling cryptocurrency, NFTs, and other virtual assets), blockchain has shown great potential to revolutionize the user's immersive experience, boost economic growth, and attract new users to the Metaverse via numerous blockchain-aided applications and services supplied in the virtual worlds. However, several challenging issues remain in concurrently attaining security, scalability, and decentralization when the Metaverse must serve a huge number of users and a rapidly increasing number of transactions. Consequently, many research topics to optimize blockchain for the Metaverse should be continuously explored in the future, such as consensus algorithms, blockchain interoperability, smart contracts, and network management.

### Edge Computing and Edge AI

#### 3.2.1 Introduction

The Metaverse is envisaged to map and simulate all our daily life activities in cyberspace at a huge scale while enriching such mapping with an immersive and interactive user experience. Cyber-physical and digital twin applications will also be integrated with the Metaverse application to offer realistic cyber representations of the physical world. In the ICT infrastructure, there will be the Metaverse engine, which performs the computations required to run virtual universe simulations, carrying out computationally heavy tasks such as collision detection in the virtual universe and computation of 3D physics, as well as other aspects of the virtual universe that demand high computational power [106]. The Metaverse is striving to connect billions of users and create a shared world where virtual and reality merge [8].
Therefore, users interact in the physical-virtual world with diversified information, identities, and modes, under requirements of ultra-low latency, massive resource demands, interoperability between applications, and security and privacy [107]. The promised Metaverse operation will require extremely low latency with highly elastic and omnipresent compute and storage resources. The latency and processing challenge for the Metaverse is in line with what is expected from the 6G edge computing realization: for the Metaverse extended-reality computations to be offloaded, the entire process must be shortened so that input from the user device, a network trip, processing by the service, a return network trip, and drawing the output on the user device fit in the 20 ms taken by a single network trip today [108]. Cloud-based processing for Metaverse operation can be unfavourable as it suffers from several drawbacks caused by the long latency required for cloud access, such as low-quality visualization in XR. 6G enables real-time, ubiquitous, and ultra-reliable communications for massive Metaverse devices with support for device mobility, which can reach 1020 Gbps [109]. To this end, Fog Computing [110] and Mobile Edge Computing [111] have proven effective in tackling the issues faced by cloud-based systems by moving the heavy computational load near the end-user and distributing it among edge devices; such an approach can significantly reduce latency and optimize system performance. Furthermore, there is a cost benefit: such an approach would drive down the cost of XR devices and allow mass adoption. Verizon has estimated that any more than 20 ms of motion-to-photon (total stack) latency causes many users to become nauseated; for comparison, well-built wireline broadband networks today typically have 20 ms of network latency alone, and typical LTE latencies are 3x higher. Therefore, edge computing is an important technology. For instance, Zhang et al. [112] introduced MEC into the Metaverse to improve the quality of users' experience. Xu et al. [7] discussed the potential of AI, edge computing, and blockchain for ubiquitous, seamless access to the Metaverse. Similarly, Lim et al. [113] present the infrastructural architecture required for the Metaverse with a special focus on the convergence of edge intelligence and the infrastructure layer of the Metaverse. 6G-enabled edge intelligence opens up a new era of the Internet of Everything and makes it possible to interconnect people, devices, and the cloud anytime, anywhere. In this context, industry and academia have developed a new learning paradigm, Edge Artificial Intelligence (Edge AI) [114], which allows AI models to be deployed on devices and perform real-time data processing and model inference. 6G mobile communication technology provides edge AI with lower latency, a more stable network connection, and a more secure network architecture. Edge AI with 6G is expected to be applied to solve problems such as high bandwidth and high connection density in the Metaverse. However, the Metaverse still faces many challenges, such as user privacy, network latency, and resource allocation issues. Moreover, the Metaverse places higher demands on the current edge AI architecture. As mentioned above, 6G edge intelligence has the advantages of low latency, computation offloading, and high performance [115].
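As a back-of-the-envelope illustration of the motion-to-photon constraint discussed above, the short sketch below adds up the stages of an offloaded XR frame and checks whether the 20 ms budget is met for a cloud versus an edge placement; all of the stage timings are invented for the example rather than measured.

```python
# Illustrative motion-to-photon budget check for offloaded XR rendering.
# All figures are assumptions for the sake of the example, not measurements.

BUDGET_MS = 20.0

def motion_to_photon_ms(network_rtt_ms: float, render_ms: float,
                        capture_ms: float = 2.0, display_ms: float = 4.0) -> float:
    """Capture on device + round trip to the compute site + remote rendering + display scan-out."""
    return capture_ms + network_rtt_ms + render_ms + display_ms

placements = {
    "remote cloud": {"network_rtt_ms": 40.0, "render_ms": 6.0},
    "edge (MEC)":   {"network_rtt_ms": 4.0,  "render_ms": 8.0},
}

for name, p in placements.items():
    total = motion_to_photon_ms(**p)
    status = "OK" if total <= BUDGET_MS else "exceeds budget"
    print(f"{name:12s}: {total:5.1f} ms ({status})")
```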
Overall, the application of 6G-oriented edge intelligence has the benefits of balanced data storage, efficient data transmission, and high reliability.

#### 4.2.2 How 6G edge computing and Edge AI can help the Metaverse

As noted above, a high-speed and low-latency network connection and ubiquitous access to services are an important foundation for improving the user experience in the Metaverse. Otherwise, issues such as visual jitter or delay and other undesirable phenomena might lead to subpar performance of the Metaverse. In that regard, to reduce network latency, an incentive mechanism framework for VR services was proposed in [116], which uses perceived quality as a criterion for measuring immersive experience and effectively evaluates the immersive experience in the Metaverse. [117] presents a novel MEC-based mobile VR delivery framework that is able to cache parts of the fields of view (FOVs) in advance and compute certain post-processing procedures on demand at the mobile VR device. Jiang et al. [118] found that coded distributed computing (CDC) can improve the latency problem in the Metaverse and proposed a CDC and dual-blockchain distributed collaborative computing framework. However, shortages of computing, communication, and storage will seriously affect the user's immersive experience. For the resource allocation problem, a new blockchain-based framework called Metachain was proposed in [87], which uses Stackelberg game theory analysis to propose an incentive mechanism, i.e., users obtain corresponding rewards by providing resources to blockchain shards. Based on the intelligent 6G edge network, a machine learning framework was proposed in [119] for decentralized learning and coordination of edge nodes to improve resource allocation strategies.

For Edge AI and its applications in 6G, various challenges are being investigated by the research community. The Edge AI paradigm and its applications still have the following issues that need to be optimized [8]:

- High latency: Since edge AI generally involves thousands of remote devices and needs to transmit and process massive amounts of data [120, 121], the high latency of the current network environment has always been one of the bottlenecks hindering the wide application of edge AI [122, 123].
- Fragile stability: In edge AI, the training of large-scale models often requires powerful computing capacity and stable network connections, especially the training of large language models [124]. However, the current network environment is only suitable for the training of small-scale models [125], because the fragility of the network connection leads to the failure of large-scale model training.
- Low security: The current network architecture no longer meets the security needs of the thousands of remote devices connecting to cloud servers today [120]. Furthermore, the openness of the network further challenges the security of the current network architecture.

These issues are expected to be exacerbated with the utilization of Edge AI in 6G for Metaverse applications. For instance, in [8], Chang et al. propose a self-balancing federated-learning-based Metaverse framework to address the statistical heterogeneity faced by edge-cloud architectures. Besides, in [126], Lu et al. proposed a blockchain-based digital twin wireless network (DTWN) edge computing federated learning framework to solve the problem of user privacy and data security.
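Since several of the works above rely on federated learning at the network edge, the following is a minimal federated-averaging sketch in Python/NumPy on synthetic data; it is a generic illustration of the paradigm, not the self-balancing scheme of [8] or the DTWN design of [126].

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """A client refines the global weights on its private data (plain gradient descent)."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Synthetic "clients": each edge device holds a private shard of a linear task y = X @ w_true.
w_true = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 80):  # heterogeneous shard sizes
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.05 * rng.normal(size=n)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    # Server aggregates: weighted average of local models, no raw data ever leaves a client.
    w_global = np.average(updates, axis=0, weights=sizes)

print("estimated weights:", np.round(w_global, 3))  # should approach [2, -1]
```

The point of the pattern is that only model updates, never raw user data, leave the edge devices, which is what makes it attractive for privacy-sensitive Metaverse services.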
#### 4.2.3 Summary

The integration of edge computing and the realization of edge AI in 6G will provide various capabilities as well as challenges for the Metaverse. The key benefit is the latency minimization needed for a superb Metaverse user experience and pervasive services. Similarly, the inherent Edge AI support in 6G will also serve the Metaverse with smart edge services, leading to better Metaverse services and device simplicity and flexibility. However, the potential benefits of 6G edge technologies should be supported with relevant research for improving aspects such as smart resource allocation, security, and privacy-preserving AI techniques.

### _6G Open RAN_

#### 1 Introduction

The Radio Access Network (RAN) is a very important component of a mobile communication system that links individual devices like mobile phones or terminals to other parts of the network using cellular radio. The RAN coordinates resource management in the cellular network across the radio sites. The RAN can send the signal from a mobile device that is connected wirelessly to the core/backbone network to several endpoints in the wireless network, thereby enabling the signal to travel along with the traffic generated from other networks. A RAN will typically comprise base stations that can transmit and receive signals to and from mobile devices in the network. The signals are then digitized in the RAN base station and connected to the network. The RAN contains radio units (RUs), distributed units (DUs), a centralised unit (CU), and the RAN Intelligent Controller (RIC) for the creation of an effective mobile communication system. The RAN is very important for meeting the low-latency and high-speed internet connectivity requirements of real-time applications [127]. The RAN requires manual intervention if any network issues arise in software or connected devices. Pinpointing the cause and origin of these issues in the network is difficult for mitigation experts, as the RAN is black-box in nature. The process involved in mitigating these network issues requires significant cost and time, subsequently affecting the overall quality of the network. This necessitates the creation of an open, intelligent, virtualised, and fully automated interoperable RAN for next-generation 6G networks [128]. Open RAN (ORAN) is one such technology that integrates AI to optimize radio resources and also automates the management and operation of the infrastructure. 6G ORAN integrated with AI can be used to implement Self-Organizing Network (SON) and Radio Resource Management (RRM) solutions that improve network coverage, capacity, handover, and interference. They could also be used to increase the spectral efficiency of massive MIMO systems by optimising their performance. AI/ML can also enhance the user experience through VoLTE/video quality optimization, terminal anomaly detection, and other Quality of Service/Quality of Experience (QoS/QoE)-based use cases [129]. The use of ORAN gives mobile network operators (MNOs) the flexibility to provide 6G connectivity in a cost-effective, secure, and energy-efficient way. The openness of ORAN also gives MNOs the unique ability to let all vendors share the RAN functionalities [130]. As a result, it avoids vendor lock-in by replacing vendor-proprietary interfaces with a fully disaggregated RAN based on open standards.
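As a toy illustration of the kind of RRM/SON logic that an AI-assisted RAN controller could host, the sketch below forecasts per-cell load with a simple moving average and flags overloaded cells for load balancing; the thresholds, load figures and class names are invented for the example, and real RIC applications rely on standardized interfaces and far richer models.

```python
from collections import deque

class CellLoadMonitor:
    """Keeps a sliding window of utilization samples per cell and flags overload."""

    def __init__(self, window: int = 5, threshold: float = 0.8):
        self.window = window
        self.threshold = threshold
        self.samples = {}  # cell_id -> recent utilization samples in [0, 1]

    def report(self, cell_id: str, utilization: float) -> None:
        self.samples.setdefault(cell_id, deque(maxlen=self.window)).append(utilization)

    def forecast(self, cell_id: str) -> float:
        hist = self.samples.get(cell_id, [])
        return sum(hist) / len(hist) if hist else 0.0

    def cells_to_offload(self):
        return [c for c in self.samples if self.forecast(c) > self.threshold]

monitor = CellLoadMonitor()
for util in (0.70, 0.85, 0.90, 0.88, 0.92):
    monitor.report("cell-A", util)
for util in (0.30, 0.35, 0.32, 0.28, 0.31):
    monitor.report("cell-B", util)

print(monitor.cells_to_offload())  # ['cell-A'] -> candidate for handover/load balancing
```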
#### 2 How 6G Open RAN can help the Metaverse

Users navigate the Metaverse frequently with the help of technologies such as AI, XR, digital twins, IoT, etc. As a consequence, the Metaverse demands continuous connectivity with sensors, hardware devices, and many other peripherals for providing high-quality and immersive services to the user. Any disruption in the network connectivity of these devices will cause the users extreme discomfort and make them feel that the surroundings are out of their control [131]. The standards for the Metaverse are substantially more demanding than those for the vast majority of internet applications in the present day. The current capacity of MNOs to handle the network requirements of devices connected to the Metaverse is rather questionable. This presents a challenge in the adoption of the Metaverse. To solve these issues, ORAN in 6G is a potential solution. ORAN in 6G, with its AI, automation, and fully disaggregated open standards, will enable the Metaverse in a cost-effective, secure, and energy-efficient way.

Let us consider the application of the Metaverse in the healthcare domain. The Metaverse allows healthcare professionals to have better interactions with patients who are in different geographic locations, such as viewing a three-dimensional model of the human body while discussing diagnoses and treatments. This would allow doctors to simulate the effect of a proposed treatment on a patient's body before its application, creating a more personal and informative experience than is currently possible with two-dimensional images displayed on a screen. VR, AR, and MR technologies are currently being used for medical training and surgical procedures. These enabling technologies of the Metaverse demand reliable connectivity. If any software or hardware failure occurs in the network at the time of a medical intervention, it will lead to serious catastrophic situations. ORAN in 6G enables devices to rely on multiple MNOs, which ensures that the medical devices connected to the Metaverse have much more reliable connectivity. The remote medical surgeries supported by the Metaverse require real-time insights. The network supporting these devices must recover quickly from related issues and failures. ORAN in 6G will provide the Metaverse with zero-touch network and service management capabilities that automatically resolve raised network issues faster than the traditional RAN. The vital-sign monitoring devices connected to the Metaverse require a low-latency and cost-efficient network. These devices will greatly benefit from the ORAN service management and orchestration platform in 6G, an intelligent automation platform that applies automation to reduce the complexity of networks, improve network performance, and enhance the customer experience in the Metaverse while minimizing ORAN operational costs, as depicted in Fig. 12.

In the Metaverse, the possibilities of what can be created and purchased are nearly limitless. Users can purchase avatar skins, hairstyles, clothing, and accessories, as well as virtual land and property. Cryptocurrency and digital wallets will play a role in Metaverse payments. Blockchain-based cryptocurrencies and crypto wallets are required to store and transport digital assets purchased in the Metaverse, as well as to move them between virtual worlds.
Digital wallets will be an alternative payment method that enables users to purchase digital goods securely. Thus, the number of transactions occurring in the Metaverse will be limitless. Any breach or critical update to the network will interrupt or halt these transactions and may affect the QoS/QoE of the customer in the Metaverse. ORAN in 6G will be less dependent on hardware, which reduces the risk associated with automated upgrades or isolated security breaches. The enhanced modularity available with open interfaces makes it easier for operators to serve the Metaverse towards continuous integration/continuous delivery of services. Every trade or purchase that occurs in the Metaverse is recorded as a transaction, which results in huge network traffic because the data has to be stored in multiple peers. ORAN in 6G helps the Metaverse with better traffic management and also determines where to send traffic across the network. ORAN in 6G with AI enables the Metaverse to predict network conditions, such as congestion, so the controller can find an optimal path to send traffic over. This provides the users of the Metaverse with valuable insights about the network.

#### 4.4.3 Summary

ORAN in 6G, with features like openness, better security, enhanced resource sharing, improved traffic management, and zero-touch network and service management, provides the Metaverse with a network that is fast, reliable, cost-effective, automated, and intelligent. This will help the Metaverse applications and services to be real-time. ORAN in 6G will provide the users in the Metaverse with high-quality immersive experiences. Issues related to network software updates or threats will not affect the transactions in the Metaverse, as ORAN in 6G is secured and depends less on hardware compared to the traditional RAN. ORAN in 6G allows AI to easily analyze the network and provide valuable insights for the Metaverse to persist. Though ORAN in 6G provides better network capabilities to the Metaverse, it still faces challenges related to widespread adoption, technical support difficulties, system integration problems, and security risks.

### 6G Cloudification and Cloud-Native Technologies

#### 4.4.1 Introduction

A key aspect of 6G networks will be the cloud-native design of the overall ecosystem. With the actual realization of the Metaverse, the cloud, infrastructure, and telecom companies will have to provide a fully immersive Metaverse experience, challenging servers with a compute task 100x harder than that of an AAA game server hosting Fortnite today, and telecom access networks facing 24x more traffic in a decade [108]. To address these compute and storage requirements, state-of-the-art Metaverse architectures rely on a cloud-based approach for core Metaverse functions such as avatar physics emulation and graphics rendering computation. Specifically, XR places extraordinary demands on networks; with the native cloud design of 6G networks, an experience that would otherwise require on-board computation capability can be delivered on simpler, lighter, and cheaper end-user devices, eliminating the dependency on external computing devices, as long as computationally intensive tasks can be offloaded to a cloud computing instance. The Metaverse leads to a clear need for cloud computing, considering the amount of storage and processing required to support a virtual-reality universe: compute, storage, and network loads [132]. As more performance and detail are demanded, remote cloud-based computers will become a necessary, cost-effective way to solve that problem.
Cloud computing technologies will be heavily exploited in two dimensions. First, by the Metaverse providers themselves, whether built with private data centres or managed services; due to their advantages, these compute- and graphics-intensive systems will be built on public cloud providers. Another option is to provide on-demand access to compute and storage using pay-as-you-go models, which can be done by public cloud providers with points of presence distributed globally. However, there is also the latency dimension: navigating the Metaverse smoothly through VR technology depends mainly on the network latency. As VR technologies are delay-sensitive and require very short latency, communication with the Metaverse servers plays a pivotal role, which leads to telco clouds, where this concept is embedded in the telco network itself. For example, the validation of Non-Fungible Token (NFT) trading transactions requires tremendous computational power. This challenge is also valid for other Metaverse applications, such as the data processing of digital twin applications or AI-enabled services like storytelling and recommendation services that empower the virtual universe simulation [106]. Current state-of-the-art Metaverse implementations perform the computation on the cloud, which may limit the simulation capacity and increase access latency. Moreover, there are several independent and fragmented Metaverses that rely on different hardware and software technologies. Since the service providers would like to exploit the advantages of controlling users' Metaverse data in their servers, we may end up with isolated Metaverses rather than a universal Metaverse. Additionally, due to capacity limitations, the number of users that can access each region may be limited by the cloud service provider's local computational and communication capacity. Such limitations defeat the purpose of a virtual world, which is supposed to accommodate avatars as much as the real-world location can physically accommodate people. Mobility support is also crucial since the Metaverse will be a pervasive experience. The cloud can also help there, as proposed by [106]. In this context, they propose a distributed architecture that can achieve a universal Metaverse and solves the computational bottleneck. The advantage of the layered architecture is twofold. Firstly, the users control their data, which enables organizations to access a universal Metaverse rather than multiple separated Metaverses. Secondly, the computational bottleneck is resolved by distributing the computational cost of heavy tasks.

Figure 12. The role of 6G Open RAN for the development and advancement of the Metaverse.

#### 4.2.2 How 6G cloudification can help the Metaverse

The real-time interactive nature and high demands on data storage, streaming rates, and processing power of Metaverse applications will accelerate the merging of the cloud into the network, leading to highly distributed, tightly integrated compute- and data-intensive networks becoming universal compute platforms for next-generation digital experiences [133]. For instance, Google Stadia [134] and Nvidia GeForce Now [135] offload such rendering tasks to a remote compute cloud, allowing the highest level of quality on weaker devices such as smartphones, at the price of being less latency- and loss-tolerant (to provide satisfying responsiveness to inputs). To an even greater extent than AAA video games, VR and MR are highly computationally intensive.
#### 4.2.3 Summary Cloud computing technologies and their adoption by telecom operators as telco clouds and cloud-native design in 6G have important implications for the Metaverse. First, they allow elastic Metaverse services which can be dynamically deployed and provisioned. Moreover, the Metaverse is expected to be a federated entity where different service providers, applications and users are present. Cloud computing enables such an environment where different Metaverse apps can easily reside together and integrate. Moreover, efficiency gains via consolidation and infrastructure sharing is possible. 6G clouds can support ultra-scalable compute storage for spatiotemporal changes in the Metaverse services. However, the trade-off between latency and cloud centralization is an important research topic [133]. ### 6G IoE #### 4.2.1 Introduction The growth of IoT applications results in increasing the number of IoT devices, which is expected to grow up to 24 billion by 2030 [136]. Furthermore, the total IoT market will also grow up to USD 1.5 trillion in 2030. The dawn of Internet of Everything (IoE) is envisaged to expand the IoT paradigm to weave a hyper-connected network of not only things but also data, people, and processes [137]. Therefore, IoE is expected to integrate "Everything" for connecting, identifying, monitoring, and making intelligent decisions towards realizing new applications and services. IoE will connect many ecosystems involving heterogeneous sensors, actuators, user equipment, data types, services, and applications [138]. Numerous heterogeneous sensors in IoE can obtain data related to various parameters ranging from location, speed, acceleration, temperature, ambient light, humidity and air pressure to biologists. This sensory information is paramount for the functionality of the Metaverse as real-world information provides inputs to form and update the virtual space and allow interactions between the real world and the virtual world. Furthermore, Human-Computer Interaction (HCI) can provide more flexible ways to access the Metaverse through human sensing (e.g. gesture recognition) [5]. Numerous cameras can capture video sequences from multiple angles to recognize human activities through advanced AI-enabled computer vision algorithms. In addition, the captured audio-visual information can be used to predict human emotions with the aid of smart wearables. These smart wearables can also capture data that are useful to obtain health metrics, such as heart rate, oxygen saturation level, body temperature, and electrocardiogram (ECG). 6G provides the ubiquitous, uninterruptible, ultra-high reliable/available and massive low-latency communication demanded by IoE [5, 137]. In addition, the edge-6G capabilities of 6G can process massive amounts of data collected from IoE devices to provide meaningful information for 6G applications and services. The integration of 6G and IoE will have the potential to enable many services, including the internet of medical things, smart healthcare, robotics, industry 5.0, smart grids, smart cities, and body area networks [137]. The superior connectivity offered through 6G with features such as, near real-time connectivity, extreme data rates, access to powerful computing resources at the network edge, and massive machine-type communication under strict delay constraints between heterogeneous sensory devices will facilitate the smooth operation of the Metaverse services and applications [139, 140]. 
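As a minimal illustration of the human-sensing pipeline mentioned above (wearable or ambient sensor streams turned into features for an ML classifier), the following Python sketch trains a gesture detector on synthetic accelerometer windows; the features, data and task are assumptions made purely for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

def accel_window(gesture: bool, n: int = 64) -> np.ndarray:
    """Synthetic one-axis accelerometer window: a waving gesture adds a periodic component."""
    signal = rng.normal(0.0, 0.05, size=n)                      # sensor noise
    if gesture:
        signal += 0.8 * np.sin(np.linspace(0, 4 * np.pi, n))    # arm motion
    return signal

def features(window: np.ndarray) -> np.ndarray:
    """Tiny feature vector: energy, peak-to-peak amplitude, zero-crossing count."""
    return np.array([
        float(np.mean(window ** 2)),
        float(window.max() - window.min()),
        float(np.sum(np.diff(np.sign(window)) != 0)),
    ])

y = np.repeat([0, 1], 300)
X = np.array([features(accel_window(bool(label))) for label in y])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("gesture-recognition accuracy:", round(clf.score(X_te, y_te), 3))
```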
#### 6.1.2 How 6G IOE can help the Metaverse 6G IoE plays an important role towards enabling the Metaverse by supporting an extremely large number of users, sensors, and devices to connect and communicate seamlessly with extremely high data rates, ultra-low delays, and jitters [137]. In addition, the data obtained through heterogeneous IoE devices can be processed using AI and ML through powerful Multi-access Edge Computing (MEC) resources in envisaged 6G networks. For instance, [141] discusses the expansion of IoE and how a multitude of sensors will enable the Extended Reality (XR) applications in the Metaverse. This work also explores the convergence of AI, MEC, Robots, and Distributed Ledger Technologies, such as blockchain, towards expanding the horizons of IoT towards IoE and beyond to provide a beyond smartphone experience. The proposed multisensory architecture is capable of integrating ubiquitous and pervasive computing towards enhancing human perception through advanced XR experiences. This is performed by utilizing wearables and nearby network resources in the 6G era. Hence, the dawn and the evolution of IoE will facilitate cross-reality environments, such as the Metaverse that can fuse real and virtual worlds with networked humans, avatars, and robots. In addition, 6G IoE enables "wireless sensing" to sense the behavior of surrounding humans and the environment [5]. The functionality of IoT is expanded from simply networking a large number of devices towards sensing the wireless network. Various wireless signals including Wireless Fidelity (WiFi), Zigbee, Bluetooth, and Radio-Frequency IDentification (RFID) are used as sensing mediums through analyzing the signal variation (e.g. signal blocking, signal reflection, and signal scattering) caused by surrounding humans and objects [142]. These variations may change signal properties, such as phase, frequency and amplitude, which can be inferred through parameters including Received Signal Strength (RSS), Channel State Information (CSI), and Doppler shift. Together with signal preprocessing techniques, such as filtering and de-noising to minimize the effect of signal interference and noise, changes in the environment can be recognized by identifying distinguishable unique features owing to ML models. The accuracy of such predictions can be enhanced through the widespread of mmWave and MIMO technologies. In addition, an Integrated Sensing and Communication (ISAC) system, where communication systems and IoE hardware are jointly designed can improve the accuracy of wireless sensing while enhancing spectrum efficiency and minimizing hardware implementation cost [143]. However, modelling such systems, providing real-time access to powerful computational resources for data processing through advanced AI and ML schemes, and providing real-time ultra-low latency communication with seamless coverage requires beyond 5G network capabilities that are expected to be facilitated by emerging 6G networks. #### 6.1.3 Summary The evolution of IoT towards IoE with the dawn of 6G provides seamless connectivity, extreme data rates, ultra-low latency and ultra-high reliable/available communication, and real-time access to powerful Edge-AI-enabled computational resources to facilitate the Metaverse applications. 6G IoE also facilitates advanced wireless sensing with mmWave and MIMO technologies. 
The development of ISAC, harnessing the extreme communication capabilities and Edge-AI processing of 6G networks, can further improve the capabilities of 6G IoE and enable emerging Metaverse applications. The 6G IoE features that enable the Metaverse applications are illustrated in Fig. 13.

Figure 13. 6G IoE for the Metaverse.

### 6.6 Other 6G Technologies

#### 6.6.1 Extended Reality

Extended Reality (XR) combines Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) to blur the border between physical and virtual worlds, with wearables supporting human-machine interactions with real and computer-generated environments [137]. 6G is capable of facilitating the massive low-latency communication and extremely high data rates demanded by XR applications. Together with Edge-AI capabilities, 6G can facilitate seamless 3C (computing, caching and communication) services for XR applications. Many sensors are used for data collection on user location, orientation and movements. XR enables telepresence towards facilitating various aspects of human life, such as work, education, shopping, healthcare, tourism, and entertainment [144]. For instance, [145] explores how XR impacts the six dimensions of workload defined by the NASA Task Load Index (NASA-TLX), namely mental demand, physical demand, temporal demand, performance, effort, and frustration, as well as the overall workload in the retail sector. The results of the study indicate that although VR alone did not have a significant impact on the various dimensions of workload, XR had a significant impact on performing shopping-related tasks. In addition, [146] presents how users can actively engage with 3D content to stimulate healthy behaviour using XR in the Metaverse. This work discusses how XR can be effectively used in the Metaverse to address long-term health issues and challenges. Accordingly, XR can be identified as an important enabler to provide services efficiently using the Metaverse. However, challenges such as limitations in physical and cognitive resources, lack of experience with VR environments, and difficulties in using existing XR devices for prolonged periods need to be addressed towards utilizing XR for the Metaverse applications in future 6G networks.

#### 6.6.2 Digital Twins

The Metaverse applications demand next-generation networks that facilitate the high processing capabilities they require. These can be provided through the edge-AI capabilities of emerging 6G networks. Digital Twins (DT) can be an important enabler of the cloud-native network paradigm, which can efficiently support the Metaverse [147]. DTs act as a digital representation of humans and things in cyberspace. Cybertwins can provide a multitude of services for the Metaverse, including acting as a communication assistant, logging network behavior, and owning digital assets, in a flexible, scalable, secure and reliable manner. 6G IoE can play a key role towards facilitating DTs. In [148], the authors discuss how to utilize a cloud network operating system that can work distributively in a real-time multi-agent platform to allocate 3C resources, which are considered to be integral components of envisaged 6G networks [137]. In addition, the Metaverse applications demand 6G networks that support intelligent and fully autonomous operation. In response, [149] proposes a Digital Twin Edge Network (DITEN).
DITEN is able to combine Multi-access Edge Computing (MEC) with DT to improve network throughput, enhance network security, and reduce the cost of 3C services. DITEN continuously monitors the network status for DT modelling, updating and deployment, and performs tasks such as routing and resource management efficiently to enable applications such as the Metaverse. However, there are several open issues and challenges, including high-precision DT modelling, DT migration for mobility, and ensuring security and privacy.

#### 6.6.3 Space-Air-Ground Integrated Network (SAGIN)

Global sensing and seamless connectivity are paramount to providing uninterrupted access to the Metaverse applications through 6G networks. However, ground networks alone are not capable of providing ubiquitous connectivity to the Metaverse applications in a reliable and cost-efficient fashion [149]. This is particularly evident in mountain areas and in disaster situations. As a solution, Non-Terrestrial Networks (NTN) towards 3D networking are proposed with 6G networks [137]. NTN provides 3D network coverage and backhauling by integrating Unmanned Aerial Vehicles (UAVs), satellites, balloons and High Altitude Platform (HAP) stations [150]. 3D networking expands the NTN paradigm by incorporating space, underground, and underwater communication [151]. For instance, project 3GPP TR 38.811 intends to support non-terrestrial networks by considering the architecture and channel models across satellite, air access, and terrestrial cellular networks [137]. In addition, multi-dimensional networks named Space-Air-Ground Integrated Networks (SAGIN) envisage the deep integration of space nodes (e.g. satellites), air nodes (e.g. UAVs, drones, air balloons), and terrestrial network nodes (e.g. 5G and beyond network nodes) towards providing seamless connectivity [5]. However, seamless inter-operation and resource management among multiple types of networks require unified access methods and network standards towards facilitating seamless connectivity for the Metaverse applications.

## 5 6G Integration Challenges

In this section, we present the challenges raised by limited backwards compatibility with existing devices, lack of standards, accountability, resilience & privacy preservation, energy inefficiency, and radio design & carrier bandwidths while integrating 6G with the Metaverse.

### _Limited Backwards Compatibility with Existing Devices_

#### 5.1.1 Introduction to issues

Effective communication in the Metaverse requires compatibility with previous-generation networks such as 4G and 5G. Although some Metaverse applications can operate on existing devices with current network capabilities, with the deployment of 6G these devices could become worthless.

#### 5.1.2 Possible solutions

A potential solution to address this issue is backward compatibility of the 6G network with existing devices, which enables the addition of high-capacity communication in the Metaverse and also delivers faster data rates for applications requiring real-time processing and integration. The 6G networks should support the features of previous generations of communications, like the 5G network, for some time, enabling a progressive migration of the Metaverse devices and lowering the overall cost of 6G and Metaverse integration. In order to evaluate backward compatibility, mobile operators need to consider how the 5G and 6G core networks are connected and work on the 3GPP standard accordingly.
### _Lack of Standards_

#### 5.2.1 Introduction to issues

There is a concern among users about the Metaverse's potential legal consequences. If a problem arises, there is no agreed-upon policy framework or set of standards for the integration of 6G with the Metaverse. Any problem with the integration of these technologies will affect the trust in and the capabilities of the 6G networks and the Metaverse.

#### 5.2.2 Possible solutions

These challenges may be resolved by establishing a forum involving service providers, researchers, and legal counsel to develop standards and policy frameworks that address concerns about user ethics, safety, and privacy while integrating 6G with the Metaverse. The users should be provided with complete control and transparency over their data transmitted over 6G networks, which ensures their privacy in the Metaverse. As a consequence, this will raise the bar for the 6G communication networks and the Metaverse, which will increase trust among the users. For example, though ORAN is not yet fully functional, it has an alliance focusing on the integration issues of multiple service providers, which will enhance the bandwidth availability and security of the overall networks.

### _Accountability, Resilience and Privacy Preservation_

#### 5.3.1 Introduction to issues

The functionalities across the 6G-integrated Metaverse will be mostly automated based on decisions made by AI. Any misclassification made by these decisions that cannot be traced because of the black-box nature of AI will have a direct effect on the accountability of the 6G-integrated Metaverse.

#### 5.3.2 Possible solutions

Explainable AI (XAI) is a promising solution for this issue, which allows us to understand the misclassification issues and improve trust in the decisions made in the 6G-integrated Metaverse. The usage of XAI will aid in pinpointing the problem's cause, assist the Metaverse's administrators in understanding the issue, and motivate them to prevent a recurrence; this enhances the transparency of auditing issues related to the 6G-integrated Metaverse. Additionally, existing and newly proposed AI algorithms need to be analysed considering their accountability, resilience and privacy preservation capabilities within the context of future networks.

### _Energy Inefficiency_

#### 5.4.1 Introduction to issues

The integration of processing, communication, sensing, and control capabilities inside a 6G network enables a seamless transition between the virtual and physical worlds, consequently contributing to the realisation of the Metaverse. To support the requirements of the Metaverse, the cellular capacity should be increased on top of the existing network infrastructure. This will require 6G to deploy more small cells and even micro-cells in the network. This increases technological and network complexity and will further strain the energy efficiency and sustainability of the Metaverse.

#### 5.4.2 Possible solutions

The integration of AI with 6G will address the issues of energy efficiency and network complexity, opening the door to a sustainable Metaverse ecosystem. The use of Zero-touch network & Service Management (ZSM) in 6G provides an intelligent network for the Metaverse by enabling effective data access and cross-domain data exposure, permitting operational data to be maintained apart from the management applications. This will also improve the reliability of communication in the Metaverse.
**TABLE IV: Summary of related works**

**6G for the Metaverse - technical perspective**

| Ref. | Validation of digital assets | Cross-platform integration and interoperability | Efficient support of AI | High-speed data connection | Low-latency communication | Computer vision | High transaction and integration privacy |
|---|---|---|---|---|---|---|---|
| 7 | | | | | | x | x |
| 30-34 | x | | | | | x | x |
| 35-38 | | x | | | | | x |
| 39-41 | | | x | | | | |
| 42-47 | | | x | x | x | | |
| 48-49 | | | | x | x | x | |
| 50-53 | | | x | | x | x | |
| 54-56 | | | x | | | x | |

**The role of 6G technologies for the Metaverse**

| Ref. | AI | Blockchain | Edge | OpenRAN | Cloud | IoE/IoT | XR | Digital Twin |
|---|---|---|---|---|---|---|---|---|
| 57-84 | x | | | | | | x | |
| 85-104 | | x | x | | | | | |
| 105-125 | x | | x | x | x | | | |
| 126-130 | | | | x | | | | |
| 131-134 | | | | | x | | | |
| 135-142 | | | x | | x | x | x | |
| 2, 143-148 | x | x | x | | x | x | x | |

### _Radio Design and Carrier Bandwidths_

#### 5.5.1 Introduction to issues

One of the main goals of 6G is to achieve Tb/s data rates, which requires large bandwidths (10-100 GHz of spectrum for THz bands) and, in turn, the aggregation of a large number of carriers to create the larger bandwidth. Designing radios that work at sub-THz bands presents a significant challenge to industry and research due to the complexity of the associated RF circuits. Finding the right balance in terms of transceiver efficiency, power generation, heat dissipation and cost is critical for the successful adoption of radios in sub-THz bands.

#### 5.5.2 Possible solutions

6G should provide more bandwidth and lower latency to improve the overall connectivity of the Metaverse. On 6G networks, there should be a 10- to 100-fold reduction in latency and an increase in bandwidth capacity for the users of the Metaverse to have the best immersive experiences. Every piece of networking hardware must have its materials, component manufacture, and antenna architecture modified. To comply with the 6G standard, base station operations must change. 6G should depend on tightly focused, packaged radio signals rather than omnidirectional radio channels. Moreover, tightly focused radio signals use less energy, offer higher transceiver efficiency, dissipate less heat and cost less.

## 6 Metaverse Projects

This section provides an overview of research projects and developments that are already underway towards realizing the Metaverse by harnessing the extreme network capabilities of envisioned B5G and 6G mobile networks.

### _Meta_

Meta, formerly known as Facebook, is presently working on combining social media with VR and AR towards realizing the Metaverse for users to work, play and interact with other users online [152]. This is possible due to the extreme mobile broadband capabilities, near-zero latency, extreme reliability and availability, and network intelligence of emerging mobile networks. Users can join the Metaverse using VR headsets. The envisaged applications will range from connecting people, education, training, healthcare and the workplace to gaming. For instance, education technologies are expected to broaden their horizons from platforms where information is passively absorbed to learning by doing and experiencing through 3D immersion.
In addition, Meta is working on building the Metaverse responsibly, ensuring safe, secure, and transparent operation. Meta has also launched the Meta Immersive Learning Academy and Research Fund to collaborate in building a solid and interoperable Metaverse platform. In addition, their Spark AR platform enables the creation and sharing of AR experiences through their apps and devices. Furthermore, Meta is working on building economic opportunities in the Metaverse to maintain and thrive in a digital economy in the future.

### _VR Hive_

VR Hive [153] aims to transform e-learning through VR from the comfort of home or the workplace. This project aims to design and develop a fully immersive learning platform over 6G mobile networks featuring the Metaverse that can be used to provide education, training, holographic telepresence, and real-time communication. These features will be provided through the extreme network capabilities of emerging 6G networks, such as near real-time ultra-reliable communication with ultra-low latency and edge intelligence. Relevant infrastructure and network-aware immersive and adaptive environments will be developed to facilitate education through the range of products offered through VR Hive.

### _6G Life_

6G Life [154] aims to facilitate the envisaged digital transformation, where 6G mobile networks will play a significant role in this revolution. The project not only aims to develop the digital infrastructure and high-performance computing platforms but also concentrates on the political and social issues that must be addressed to realize future 6G applications. Realizing 6G applications will require diverse communication capabilities, including human-machine interaction in virtual worlds such as the Metaverse. The project aims to provide innovative solutions in the areas of scalable communication, flexible software concepts, and adaptive hardware platforms. The four key aspects considered by the project are latency, resilience, security and sustainability. The research work, including both basic and applied research, is mainly performed considering Industry 4.0/5.0 and intelligent healthcare applications.

### _Decentraland_

Decentraland [155] is a decentralized virtual world where users can create objects, trade and interact with others. It also allows users to control policies on the operation of the virtual world. Decentraland operates as a Decentralized Autonomous Organization (DAO), where it owns the smart contracts and assets of the virtual land and estate contracts, the wearables and other devices, and the marketplace to trade virtual assets. These developments can be realized through the capabilities of emerging 6G mobile networks, where extreme mobile connectivity will facilitate seamless access to the virtual world. Furthermore, blockchain operation and smart contract execution will be enabled through the edge computing capabilities of the 6G networks. Similar projects, such as Sandbox [156], Axie Infinity [157], and Illuvium [158], also envisage harnessing the capabilities of blockchain and emerging mobile networks towards realizing the Metaverse.

### _Luxembourg Metaverse_

The Luxembourg Metaverse [159] project aims to build a digital twin of an area of Luxembourg City. These digital twins can be explored by the public and the industry to provide multiple working opportunities.
The Luxembourg 5G-6G network digital twin aims to enable seamless and highly capable network connectivity to facilitate real-time services, banking on emerging communication networks such as beyond-5G and 6G. This project will also raise awareness of the advantages and applications of the Metaverse among the public and the industry. Furthermore, the project expects to optimise and secure the Metaverse deployments while integrating the latest network developments in a cost-effective and cost-efficient manner. The 6G technological directions explored by the 6G Metaverse projects presented in this section are tabulated in TABLE V.

## VII Conclusion

This paper presents the role of 6G towards realizing the Metaverse applications and services. The paper presents the role of 6G technologies in the immersive, smart, scalable and secure realization of the Metaverse. Furthermore, the paper presents how various 6G capabilities play a key role towards the realization of the Metaverse, including the role of 6G for cross-platform integration, efficient support for AI, high-speed data connectivity, efficient user interaction, low-latency communication, computer vision, high transaction integration, and security and privacy protection. Finally, the integration challenges of 6G with the Metaverse are elaborated, while providing several research directions towards realizing the Metaverse owing to the capabilities of future 6G networks.
2305.19629
Measuring and Predicting the Quality of a Join for Data Discovery
We study the problem of discovering joinable datasets at scale. We approach the problem from a learning perspective relying on profiles. These are succinct representations that capture the underlying characteristics of the schemata and data values of datasets, which can be efficiently extracted in a distributed and parallel fashion. Profiles are then compared, to predict the quality of a join operation among a pair of attributes from different datasets. In contrast to the state-of-the-art, we define a novel notion of join quality that relies on a metric considering both the containment and cardinality proportion between join candidate attributes. We implement our approach in a system called NextiaJD, and present experiments to show the predictive performance and computational efficiency of our method. Our experiments show that NextiaJD obtains greater predictive performance to that of hash-based methods while we are able to scale-up to larger volumes of data.
Sergi Nadal, Raquel Panadero, Javier Flores, Oscar Romero
2023-05-31T07:54:47Z
http://arxiv.org/abs/2305.19629v1
# Measuring and Predicting the Quality of a Join for Data Discovery ###### Abstract. We study the problem of discovering joinable datasets at scale. We approach the problem from a learning perspective relying on profiles. These are succinct representations that capture the underlying characteristics of the schema and data values of datasets, which can be efficiently extracted in a distributed and parallel fashion. Profiles are then compared, to predict the quality of a join operation among a pair of attributes from different datasets. In contrast to the state-of-the-art, we define a novel notion of join quality that relies on a metric considering both the containment and cardinality proportion between join candidate attributes. We implement our approach in a system called NexiaID, and present experiments to show the predictive performance and computational efficiency of our method. Our experiments show that NexiaID obtains greater predictive performance to that of hash-based methods while we are able to scale-up to larger volumes of data. **Artifact Availability:** The source code, data, and/or other artifacts are available at [https://www.essi.upc.edu/dim/NexiaID/](https://www.essi.upc.edu/dim/NexiaID/). ## 1. Introduction Data discovery is the broad process of navigating a large set of data sources in order to find relevant datasets and meaningful relationships among them (Sergi et al., 2016; Chen et al., 2017). Discovery and integration of datasets is nowadays a largely manual and arduous task that consumes up to 80% of a data scientists' time (Kang et al., 2017). This only gets aggravated by the proliferation of large repositories of heterogeneous data, such as _data lakes_(Kang et al., 2017) or open data-related initiatives (Kang et al., 2017). Due to the unprecedented large-scale volumes of heterogeneous data sources, manual data discovery becomes an unfeasible task that calls for automation (Nadal et al., 2018). In this paper, we focus on the problem of discovering joinable attributes among datasets in a data lake. As an illustrative example of the challenges we face, take the reference dataset (\(D_{ref}\)) depicted in Table 1. Assume we have a collection of other datasets available in the same data lake such as those depicted in Table 3. In such setting, we aim at finding joinable combinations of pairs of attributes from the reference dataset to all the rest. A first observation is that purely schema-based methods, such as LogMap (Kang et al., 2017), would fail to propose the combination \(D_{ref}.Country=D_{1}.X\) due to the lack of embedded semantics in the schema of \(D_{1}\). Thus, we must also take into account the data values. Note, however, that checking only data values might result in proposing the combination \(D_{ref}.Schengen=D_{2}.Discount\), which is definitely not relevant for analysis. Furthermore, given positive pairs (i.e., likely to be meaningfully joinable), such as \(D_{ref}.Country=D_{1}.X\) and \(D_{ref}.Country=D_{2}.Country\), there should be a clear criteria to rank them (i.e., suggest which one is _better_). Ranking is relevant in data lake scenarios, where independent data files must be crossed. Most of the times, in such scenarios, the cardinality of such files is not excessively large, but their order (i.e., number of columns / attributes) tend to be. Consequently, current approaches tend to propose too many joinable pairs of attributes, which is overwhelming for the end-user validating them. 
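To make the ranking difficulty concrete, the short sketch below computes the containment of two candidate pairs over toy value sets; since Table 3 is not reproduced here, the candidate columns \(D_{1}.X\) and \(D_{2}.Discount\) are filled with invented values. A purely value-based criterion scores the spurious Schengen/Discount pair at least as high as the genuinely joinable Country pair.

```python
def containment(a, b):
    """Inclusion coefficient C(A, B) = |A intersect B| / |A| over distinct values."""
    a, b = set(a), set(b)
    return len(a & b) / len(a) if a else 0.0

# Reference attributes (from Table 1) and invented candidate attributes.
d_ref_country = ["Mexico", "Spain", "United States", "France"]
d_ref_schengen = ["N", "Y", "N", "Y"]

d1_x = ["Spain", "France", "Mexico", "Germany"]   # country names in an unnamed column
d2_discount = ["Y", "N", "N", "Y", "Y"]           # yes/no flags from an unrelated domain

print(containment(d_ref_country, d1_x))          # 0.75 -> semantically joinable pair
print(containment(d_ref_schengen, d2_discount))  # 1.0  -> high overlap, but a false positive
```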
The problem of finding joinable attributes among datasets is nowadays a topic of major interest for the data management community (Chen et al., 2017; Chen et al., 2017). We distinguish three approaches: _comparison by value, comparison by hash and comparison by profile_. Table 2 overviews recent contributions. Comparison by value relies on auxiliary data structures such as inverted indices or dictionaries to minimize the lookup cost. Alternatively, comparison by hash expects that the signatures of values under locality-sensitive hashing schemes will collide in the same bucket, also employing index structures for efficient lookups under a given similarity threshold. Comparison by profile methods leverage profiles extracted from datasets and their attributes, which are used to predict whether a pair of attributes will join.

**Table 1.** \(D_{ref}\) - Happiness score per country in 2019.

| Country | Happiness score | Schengen |
| --- | --- | --- |
| Mexico | 6.595 | N |
| Spain | 6.354 | Y |
| United States | 6.892 | N |
| France | 6.592 | Y |

**Table 2.** Overview of approaches by technique, arranged according to search accuracy and algorithmic complexity (from exact but expensive, on the left, to approximate but efficient, on the right).

| Comparison by value | Comparison by hash | Comparison by profile |
| --- | --- | --- |
| Exact, expensive | Approximate | Approximate, efficient |

### Data discovery at scale

Unfortunately, as we experimentally show in Section 6, the state-of-the-art in data discovery does not meet the expectations for large-scale scenarios. Unlike traditional relational databases, these are characterized by _a)_ a wide heterogeneity among datasets (e.g., large differences in the number of attributes and / or their cardinalities); _b)_ a massive number of datasets; and _c)_ the presence of a variety of topics, or domains. Overall, these distinguishing features, which we discuss as follows, render current solutions ineffective due to their inability to scale up and the low quality of the rankings they provide.

**Inability to scale-up.** Solutions that yield exact results (i.e., comparison by value) quickly suffer from scalability problems. Indeed, assuming that the sets of values for a pair of attributes \(A\), \(B\) are maintained in memory as dictionaries, the complexity of computing their containment (i.e., the inclusion coefficient) is \(O(min(|A|,|B|))\). Alternatively, those solutions that discover joinable pairs with a bounded error (i.e., comparison by hash) require the construction and maintenance of index structures for efficient lookup. This is a task that becomes highly demanding in terms of computing resources on large-scale datasets. In fact, as we have empirically observed and discuss in Section 6, the available implementations of comparison by hash fail to handle datasets of a few GBs overall. Another drawback of such approaches is that the estimation of similarity measures like containment is highly imprecise when the cardinality (i.e., the number of distinct values) of an attribute is comparatively larger than the other's [26], which is a common characteristic in real-world large-scale applications.
As a result, the precision of current approaches is highly affected due to the large number of false positives, and worsened in large-scale settings.

**Low quality in rankings.** Comparison by hash and profile solutions aim at predicting set-based measures such as the inclusion coefficient (i.e., containment), denoted \(C\), or the Jaccard index, denoted \(J\), using locality-sensitive hashing techniques such as MinHash [6] or random projection [8] to determine joinability among pairs of attributes [5]. Hence, a pair of attributes will be ranked higher if the estimated overlap of their instance sets is also high. Note, however, that such a _syntactic_ definition does not discern pairs of attributes from different domains (e.g., there exist several musical bands that use city or state names), leading to a large number of false positives. While it might be feasible to manually discern such false positives on a handful of datasets, this task is unattainable at scale. In order to showcase the detrimental impact of using such measures to determine joinability, we designed an experiment collecting 138 datasets from open repositories such as Kaggle and OpenML1. Precisely, we devised a heterogeneous collection of datasets covering different topics, which yielded a total of 110,378 candidate pairs of textual attributes, where 4,404 of those have a containment higher than or equal to 0.1. We then manually labeled such pairs as either _semantic_ or _syntactic_, distinguishing whether a pair of attributes share common values and, respectively, do or do not refer to the same concept in a shared domain. Such ground truth is publicly available in the paper's companion website.

Footnote 1: Repository available at [https://mydisk.cs.upc.edu/s/GetwMfT2vsGqbX](https://mydisk.cs.upc.edu/s/GetwMfT2vsGqbX)

As shown in Figure 1, even for high values of \(C\) and \(J\), the number of _syntactic_ pairs, and thus false positives, represents a substantial part. Then, in Table 4, we show the performance metrics (i.e., precision, recall and F-score) when considering different threshold values over \(C\) and \(J\) to discern syntactic (i.e., below the threshold) and semantic pairs (i.e., above the threshold) on the ground truth. We can observe that, overall, \(C\) has a lower precision than \(J\), indicating that it has a higher false positive rate. In contrast, \(J\) has a lower recall than \(C\), indicating that it has a higher false negative rate. Yet, overall, we can observe that in terms of F-score, which denotes the accuracy of the classifier, both metrics clearly fail at the task of identifying semantic pairs of joinable attributes.
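The experiment just described boils down to thresholding a set-overlap coefficient and scoring the resulting binary classifier against the manual labels. A minimal sketch of that scoring step (illustrative Python, not the code used to produce Table 4) could look as follows:

```python
# Illustrative sketch: score a thresholded overlap coefficient (containment
# or Jaccard) against manually labeled candidate pairs.
def prf_at_threshold(scores, labels, threshold):
    """scores: one coefficient per candidate pair; labels: 1 = semantic, 0 = syntactic."""
    preds = [s >= threshold for s in scores]
    tp = sum(1 for p, l in zip(preds, labels) if p and l)
    fp = sum(1 for p, l in zip(preds, labels) if p and not l)
    fn = sum(1 for p, l in zip(preds, labels) if not p and l)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f_score

# e.g., prf_at_threshold(containment_scores, labels, 0.5) yields one row of Table 4.
```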
**Table 4.** Performance metrics of using different thresholds over \(C\) and \(J\) to identify semantic pairs. \(P\), \(R\) and \(F\) denote, respectively, precision, recall and F-score.

| Threshold | \(P\) (\(C\)) | \(R\) (\(C\)) | \(F\) (\(C\)) | \(P\) (\(J\)) | \(R\) (\(J\)) | \(F\) (\(J\)) |
| --- | --- | --- | --- | --- | --- | --- |
| > 0.5 | 0.59 | 0.72 | 0.65 | 0.74 | 0.39 | 0.51 |
| > 0.6 | 0.67 | 0.56 | 0.61 | 0.79 | 0.29 | 0.43 |
| > 0.7 | 0.64 | 0.44 | 0.52 | 0.78 | 0.20 | 0.32 |
| > 0.8 | 0.63 | 0.38 | 0.47 | 0.75 | 0.17 | 0.28 |
| > 0.9 | 0.61 | 0.30 | 0.40 | 0.80 | 0.16 | 0.26 |
| = 1.0 | 0.60 | 0.25 | 0.35 | 0.75 | 0.11 | 0.20 |

Figure 1. Proportion of syntactic and semantic pairs for different ranges of containment (left) and Jaccard (right) values. Labels in each bar denote the count of pairs in the range.

**Table 3.** Candidate datasets available in the data lake (excerpt: \(D_{1}\), with attributes \(X\), \(Y\) and \(Z\)).

| \(X\) | \(Y\) | \(Z\) |
| --- | --- | --- |
| Spain | 47M | 2020 |
| United States | 330M | 2020 |
| Mexico | 123M | 2020 |
| Germany | 83M | 2020 |

### Computing and accurately predicting the quality of a join

The discussion above highlights the limitations of current data discovery approaches over large-scale scenarios. Indeed, the first challenge lies in the definition of a similarity measure that prioritizes pairs of attributes with a large overlap and shared domains, as an indicator of a semantic relationship. The second challenge is that of efficiently computing such a measure at scale. As previously discussed, value- and hash-based data discovery approaches do not scale well. Alternatively, comparison by profile methods are a better fit, since they rely on the detection of similarities or discrepancies between profiles. Working with summaries instead of data values is much more efficient from a complexity point of view.
Yet, despite the clear performance benefits of profile-based approaches, there is nowadays a large gap in the trade-off regarding the quality of their results mainly due to the adoption of rather basic profiles (e.g. (Kang et al., 2017)) that do not accurately describe the underlying data or representative profiles (e.g. (Kang et al., 2017)) that are used to discover a binary class (e.g. joinable or non-joinable). In order to overcome these issues, in this paper, we extend our previous vision paper (Kang et al., 2017) and propose a novel approach to data discovery which aims to cover the gap generated by the low predictive performance of profile-based methods, as well as the limited precision and scalability of hash-based systems on large data lakes. We, first, propose a novel metric to measure the quality of a join. Opposite to the state-of-the-art, mostly focused on containment or Jaccard distance, we also consider the cardinality proportion between attributes as an indicator of a higher join quality. This allows us to get rid of a substantial amount of false positives, reducing the number of pairs to analyze. This is specially relevant in large-scale settings, where as shown in Figure 1, the number of candidate pairs is too large to manually disregard false positives. Second, we propose a novel learning-based method based on profiles to discover joinable attributes for large-scale data lakes. Our assumptions apply to scenarios where data is denormalized and file formats embed tabular data (i.e., not nested). We rely on state-of-the-art relational data profiling techniques (Beng et al., 2015) to compute informative profiles for datasets. This task, which can be done offline and parallelized over distributed computing frameworks (e.g., Apache Spark), allows us to extract and model the underlying characteristics of attributes. Next, profiles are compared in order to predict their expected join quality. We show that our method is generalizable and that proposes a meaningful ranking of pairs of attributes based on the predicted join quality. We further show that our model is generalizable for data lake-alike settings. **Contributions.** We summarize our contributions as follows: * We introduce a quantitative metric for join quality, which considers containment and cardinality proportion between attributes. * We learn a highly accurate and general model to predict and efficiently rank candidate pairs of joinable attributes. * We extensively evaluate our approach to show it is scalable and outperforms the current state-of-the-art, yielding higher predictive performance results. **Outline.** The rest of the paper is structured as follows. We discuss related work and introduce the formal background, respectively in Sections 2 and 3. Next, in Section 4 we present the definition of the proposed quality metric, while Section 5 shows our approach to predicting it. Section 6 presents exhaustive experiments to showcase the effectiveness and scalability of our approach. We finally conclude our paper and present future work in Section 7. ## 2. Related Work In this section, we survey related work for each category identified in Table 2. **Comparison by value.** SilkMoth (SilkMoth, 2017) proposes a method to generate signatures from a subset of attribute tokens. To select an optimal subset, it uses an heuristic. Such signatures are used in an inverted index to prune the search space. 
Then, a verification step is required on the remaining candidates to discard those that do not hold for a certain similarity measure. This approach supports edit distance and Jaccard coefficient as similarity measures. It assumes all signatures fit in memory. JOSIE (SilkMoth, 2017) proposes to optimize the number of comparisons by scanning only the required values. Tokens are extracted from each attribute to create a dictionary and an inverted index. A ranked list is built from the \(k\) most relevant candidate tables with highest containment, where attributes ranked at the top will have a larger number of common values. PPJoin (Kang et al., 2017) performs a different optimization by using prefix filtering to avoid computing similarity values for all possible values. This reduces the number of comparisons and hence improve efficiency. However, an inverted index requires a large space in memory. This approach proposes a similarity measure which combines tokens and characters. **Comparison by hash.** MinHash (Han et al., 2017) uses the minwise hash function from the LSH collection, with a collision probability equal to the Jaccard similarity. This requires, for every value, to compute the MinHash signature \(K\) times, where \(K\)'s magnitude is in the hundreds. This approach has a major limitation on performance, as well as a bias towards small sets introduced by the Jaccard similarity. To overcome this, under the observation that MinHash can be optimized providing a Jaccard threshold, LSH Ensemble (Shen et al., 2016) proposes to use containment similarity and convert it to a Jaccard threshold. It focuses on finding attributes with a high containment similarity, that is, to cover as many values as possible. For efficient indexing, LSH Ensemble partitions the sets according to the set size. GB-KMV (SilkMoh et al., 2017) aims to reduce the number of false positives generated by LSH Ensemble. Further, it considers that additional information (e.g., attribute cardinalities and value frequencies) to offer better performance in estimating the containment similarity. Another approach that aims to tackle the drawbacks of MinHash is Lazo (Kang et al., 2017). Here, the Jaccard similarity is redefined to consider set cardinalities, which allows to estimate the containment similarity. Instead of computing \(K\) times a hash function, Lazo implements the One Permutation Hashing (OPH) technique, hashing data values only once. A distinguishable feature of Lazo is that rehashing the entire dataset collection is not required when a new one is introduced. Aurum (Kang et al., 2017), represents relations between datasets and their attributes in a graph data structure (i.e., the _Enterprise Knowledge Graph_). In this graph, attribute nodes are related if their hashing signatures, generated from their instances, are similar in an LSH index. To determine similarity it uses two metrics: Jaccard (i.e., MinHash similarity) and cosine (i.e., TF-IDF). Finally, the approach presented by D3L (Dalalal et al., 2017), also employs LSH indexes generated from four kind of features (i.e., attribute names, values, formats and domain distributions) as well as word embeddings generated from the values. Hence, attribute joinability is based on the composition of the similarity of each of such five features. **Comparison by profile.** LSD (Kang et al., 2015) proposes a multi-strategy learning approach to automatically find related attributes among XML files. 
It applies multiple learner modules, where each module exploits different kind of information, either from schema or data values. Such predictions are combined to weigh each learner. LSD also exploits domain integrity constraints and user feedback. FlexMatcher (Marcher, 2015) extends LSD with more data types. A relevant aspect is that it considers pattern classifiers to filter data values. A limitation is that every time a discovery process is to be performed it requires to train new models providing a training sample of attributes that might join with the specific attribute. A different approach is SIMUBC (Kang et al., 2015), which aims to detect pairs of attributes sharing common values. SIMUBC extracts 28 kinds of metadata from attributes such as tokens, phonetic values or representatives. Such metadata are used to train Random Forest and Multilayer Perceptron models to predict whether two attributes are join candidates. To improve performance, weights are assigned to each model to compute the final prediction. A limitation of this work is that it requires to train the models each time a data discovery process is started. Then, PEXESO (Pexes et al., 2017) presents an approach to create high dimensional vectors (i.e., embeddings) from each record of a column. Then, attributes can be efficiently compared to each other via such embeddings. A major limitation of PEXESO is that it requires indexing in memory the embeddings of the complete data lake. To alleviate this issue, the paper presents partitioning techniques. The approach presented by DLN (Dalalal et al., 2017) is that of building a ML model to find join relationships from Microsoft's data lake Cosmos. The paper argues that two metadata-based features suffice to build a high quality model. On the one hand, the first feature is an embedding-enhanced column name similarity, for which they use word embeddings trained on software domain data and calculate the cosine similarity. On the other hand, the second feature is the column-name uniqueness, where the maximum ITF (inverse term frequency) of all individual tokens is used. Unfortunately, the paper does not provide reproducibility information or ground truth to compare with. Finally, WarpGate (Pexes et al., 2017), is a prototype system that targets data discovery over cloud data warehouses by applying an embedding approach. These are built from columns with the objective that joinable columns will be closer in the higher dimensional embedding space. One of the limitations of WarpGate is, however, the runtime complexity of building such embedding spaces for large datasets. ## 3. Preliminaries Here, we introduce the formal background of our approach. ### Measuring the quality of a join In this subsection, we fix the data model and formalize metrics for join quality. **Data repositories and datasets.** A data repository \(\mathcal{D}\) is a finite nonempty set of dataset names \(\{D_{1},\ldots,D_{m}\}\), where each \(D_{i}\) has a fixed arity \(n_{i}\). Let \(A\) be a set of attribute names, then each \(D_{i}\in\mathcal{D}\) is associated to a tuple of attributes denoted by \(att(D_{i})\). Henceforth, we will assume that \(\forall i,j:i\neq j\to att(D_{i})\cap att(D_{j})=\emptyset\) (i.e., relations do not share attribute names), which if required can be done prefixing attribute names with their relation name. Then, we use \(att(\mathcal{D})\) to refer to the set \(\{att(D_{1})\cup\ldots\cup att(D_{m})\}\). 
Then, let \(V\) be a set of values, a tuple \(t\) in \(D_{i}\) is a function \(t:att(D_{i})\to V\). For any dataset \(D_{i}\), \(tuples(D_{i})\) denotes the set of all tuples of \(D_{i}\). **Joinable pairs.** Given two distinct datasets \(D_{a},D_{b}\) and a pair of attributes \(\langle a,b\rangle\), such that \(a\in att(D_{a})\) and \(b\in att(D_{b})\) and value-sets \(A\) and \(B\), we say the pair \(\langle a,b\rangle\) is _syntactically joinable_ if \(A\cap B\neq\emptyset\). Following the definition from (Kang et al., 2015), we also say that such pair of attributes is _semantically joinable_ if they are _syntactically joinable_ and there exists a function \(h:A\to B\) denoting semantic equivalence between attributes (i.e., both refer to the same concept). In practice, attributes with a semantic relationship also have a syntactic one. When this is not satisfied, as happens for the pair _Country_ (in Table 1) and _Nation_ (in Table 3c), we refer to this relationship as _semantic non-syntactic_. **Quantifiable measures for joinability.** A quantifiable way to define that a pair of attributes are joinable is by using set-based coefficients (i.e., coefficients over well-defined collections of distinct values). As earlier discussed, two of the most commonly used coefficients are the inclusion coefficient (\(C(A,B)\)) and Jaccard coefficient (\(J(A,B)\)), which are formalized as: \[C(A,B)=\frac{|A\cap B|}{|A|}\qquad\qquad J(A,B)=\frac{|A\cap B|}{|A\cup B|}\] Note Jaccard similarity is symmetric, thus it can be biased towards smaller sets. Oppositely, containment measures the relative size of the intersection of two sets over the size of one. Hence, such measure is asymmetric. **Join quality metric.** A join quality metric is a function \(Q:(\mathcal{A},\mathcal{B})\rightarrow\mathbb{R}\) from the set of all sets of values \(\mathcal{A}\) and \(\mathcal{B}\) to the set of real numbers, such that, for any set of values \(A,B,C,D\) it holds that \(Q(A,B)>\mathcal{Q}(C,D)\) if the pair \(\langle A,B\rangle\) is semantically joinable and the pair \((C,D)\) is syntactically joinable. Note this generalization allows to include containment and Jaccard as join quality functions, yet it does not consider the possibility to rank joinable pairs of the same kind. This is due to the fact that there is no consensus in the literature on which metric is best. Hence, one of the contributions of this paper is on the proposal of a novel metric to determine the quality of a join. ### Predicting the quality of a join Since the computation of a join quality measure \(\mathcal{Q}\) might be unattainable at scale, we also consider the join discovery problem as a predictive task. **Profiles.** A unary profile \(P_{u}\) for an attribute \(A\), referred as \(P_{u}(A)\) is a set of meta-features \(\{m_{1},\ldots,m_{n}\}\). Each \(m_{i}\) is a summary or statistic about the structure or content of \(A\) (e.g., number of distinct values). We also consider binary profiles, which are meta-features that denote characteristics of a relationship between pairs of attributes. Hence, we define a binary profile \(P_{b}\) for a pair of attributes \(A,B\), denoted \(P_{b}(A,B)\), as a set of meta-features (e.g., Levenshtein distance between attribute names). 
**Join quality prediction function.** A join quality prediction function is a function \(\mathcal{P}:(\overline{P_{u}},\overline{P_{u}}^{\prime},\overline{P_{b}}) \rightarrow\mathbb{R}\) from a triple defined by the set of all unary profiles, from both the reference and candidate attributes, and the set of all binary profiles, to the set of real numbers, such that, for any set of values \(A,B,C,D\) if \(\mathcal{Q}(A,B)>\mathcal{Q}(C,D)\) then \(\mathcal{P}(P_{u}(A),P_{u}(B),P_{b}(A,B))>\mathcal{P}(P_{u}(C),P_{u}(D),P_{b}(C,D))\). **Problem statement.** We now formalize the predictive join discovery problem. The goal is to discover a ranking (i.e., a partially-ordered set) of equi-join predicates based on their predicted join quality. **Definition 3.1** (Discovery-by-attribute).: Let \(A_{q}\) be a query attribute, \(D_{ref}\) a reference dataset where \(A_{q}\in att(D_{ref})\), and \(\mathcal{D}\) a data repository where \(D_{ref}\notin\mathcal{D}\); obtain a partially-ordered set of joinable pairs \(R\) of the form \(R=\{(A_{q},A_{1}),\ldots,(A_{q},A_{n})\}\), where \(A_{1},\ldots,A_{n}\in att(\mathcal{D})\) such that \(\forall(A_{q},A_{i}),\langle A_{q},A_{j}\rangle\in R:(A_{q},A_{i})>\langle A_ {q},A_{j}\rangle\implies\mathcal{P}(P_{u}(A_{q}),P_{u}(A_{i}),P_{b}(A_{q},A_{i }))\geq\mathcal{P}(P_{u}(A_{q}),P_{u}(A_{j}),\)\(P_{b}(A_{q},A_{j}))\). The remainder of the paper is devoted to _a)_ present a novel instantiation of join quality metric (see Section 4), and _b)_ present an approach to instantiate the join quality prediction function (see Section 5). ## 4. A novel metric to measure the quality of a join Here, we define a novel metric to determine the quality of a join. ### On the cardinality proportion's role Unlike the state-of-the-art, which mainly uses containment and Jaccard similarities to decide the degree of joinability among pairs of attributes, we aim to define a metric to measure the expected join quality. As shown in Table 4, containment yields better results to determine the joinability of a pair attributes with respect to Jaccard. Yet, we make the observation that datasets on a data lake do not relate to each other as in a relational database. In such scenarios, it is common to find datasets with few data values in common. In order to exemplify this idea, let us consider the datasets depicted in Table 5. In this example, the reference dataset \(D_{ref}\) might be joined with any of the two candidate datasets \(D_{1}\) (at the EU level) and \(D_{2}\) (worldwide). Current approaches would propose both as positive pairs, since they yield the same containment. However, we aim at distinguishing the join quality between them and use their _cardinality proportion_ for that purpose, which is defined by the following expression: \[K(A,B)=\frac{min(|A|,|B|)}{max(|A|,|B|)}\] Let us, then consider the following cardinalities corresponding to the city attributes (which are the only relevant ones to join): \(|City|=\) 8,124, \(|Unit|=\) 20,000 and \(|Name|=\) 54,500, respectively belonging to \(D_{ref}\), \(D_{1}\) and \(D_{2}\). We use the cardinality proportion as a measure to infer whether their data granularities are similar. In this sense, the joinable attribute in \(D_{2}\) is much larger than that in \(|D_{ref}|\) and yields a worse proportion compared to \(D_{1}\), and thus should be ranked lower. 
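For concreteness, the three set-based coefficients discussed so far can be sketched in a few lines of Python (illustrative code, not the NextiaJD implementation):

```python
# Illustrative sketch of the set-based coefficients: containment C,
# Jaccard J and cardinality proportion K.
def containment(a: set, b: set) -> float:
    return len(a & b) / len(a) if a else 0.0

def jaccard(a: set, b: set) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def cardinality_proportion(a: set, b: set) -> float:
    if not a or not b:
        return 0.0
    return min(len(a), len(b)) / max(len(a), len(b))

# With the cardinalities of the example above (|City| = 8,124, |Unit| = 20,000,
# |Name| = 54,500), K alone already separates the two candidates:
# 8124 / 20000 = 0.41 versus 8124 / 54500 = 0.15 (approximately).
```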
Importantly, we assume these datasets store independently generated events, and such a big difference in their cardinality indicates that they embed different semantics or sit at different granularity levels. In general, such situations are a source of false positives for current solutions, especially when considering small tables.

### Definition of an empirical metric

We then follow an empirical approach to define the join quality metric. That is, from a set of quantifications drawn from a sample, we aim to derive a measure that can generalize to a population (Kang et al., 2017). In our setting, from the manually-labeled ground truth used to conduct the experiment depicted in Figure 1, we observe how the containment and cardinality proportion values relate for both syntactically and semantically-joinable pairs. Indeed, as shown in Figure 2, the rationale that the cardinality proportion is a valid complement to containment for discerning false positives holds. As observed, most of the syntactically-joinable pairs have a value of \(C<0.5\), and for those above such threshold most lie below \(K=0.5\). In other words, we can identify semantically-joinable pairs when both \(C\) and \(K\) are close to 1. From such observations in the ground truth, a naive approach to discern syntactically and semantically-joinable pairs would be that expressed by the following expression, which yields 1 if a pair is semantically-joinable and 0 otherwise:

\[Q(A,B)=\begin{cases}1,&\text{if }C(A,B)\geq\frac{1}{2}\text{ and }K(A,B)\geq\frac{1}{2}\\ 0,&\text{otherwise}\end{cases}\]

Yet this metric is still limited, as is the case for the other ones in the state-of-the-art, in its ability to rank pairs that are of the same kind. We hence generalize and propose a multi-class metric to determine the quality of a join based on multiple quality levels \(L\) (i.e., degrees of joinability), as defined by the following expression:

\[Q(A,B,L)=\max\left\{\,i\in[0,\ldots,L]\;\middle|\;C(A,B)\geq 1-\tfrac{i}{L}\;\wedge\;K(A,B)\geq\tfrac{1}{2^{i}}\,\right\}\]

Figure 2. Distribution of syntactically and semantically-joinable pairs in the ground truth over \(C\) and \(K\).

The intuition of \(Q(A,B,L)\) is that of defining equally-sized buckets for \(C(A,B)\) and constraining them using \(K(A,B)\). Figure 3 depicts the areas defined by such quality metric for the case of \(L=2\) (which is equivalent to the \(Q(A,B)\) introduced earlier) and \(L=4\). The latter uses richer labels, in this case denoted as _Low, Medium, Good_, and _High_ for the different levels of quality, respectively 0.25, 0.5, 0.75 and 1 (note that we ignore the value 0 in the chart). Hence, a pair labeled _High_ will always be ranked higher than one labeled _Good_. The interpretation of such a metric is that of measuring the quality of the join's output from \(A\)'s perspective under equi-join semantics (i.e., under the semantics of left-semijoin conjunctive queries). That is, how much the number of elements in \(A\) will be reduced after joining with \(B\). Take again the example from Table 5 and consider the containment values \(C(City,Unit)=0.8\) and \(C(City,Name)=0.95\), and the cardinality proportion values \(K(City,Unit)=0.40\) and \(K(City,Name)=0.15\). Note that, although the containment is very high in both cases, the constraint on cardinality proportions allows ranking the first pair higher, denoting a more interesting join result.
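The leveled metric can be read as a cascade of paired lower bounds on containment and cardinality proportion. The following sketch makes that reading concrete; the bucket bounds are illustrative and not necessarily the exact ones behind Figure 3:

```python
# Illustrative cascade of (containment, cardinality proportion) thresholds;
# each level maps to a quality value, mimicking the Low/Medium/Good/High labels.
LEVELS = [
    (0.75, 0.5,    1.00),  # "High"   (bounds are illustrative)
    (0.50, 0.25,   0.75),  # "Good"
    (0.25, 0.125,  0.50),  # "Medium"
    (0.10, 0.0625, 0.25),  # "Low"
]

def discrete_join_quality(c: float, k: float) -> float:
    for c_min, k_min, quality in LEVELS:
        if c >= c_min and k >= k_min:
            return quality
    return 0.0

# Table 5 example: discrete_join_quality(0.80, 0.40) = 0.75 ranks above
# discrete_join_quality(0.95, 0.15) = 0.50, as discussed in the text.
```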
To showcase the benefit of considering the cardinality proportion to complement the containment, consider the following extreme case, which is not uncommon in large-scale automated data discovery scenarios. Consider the two datasets depicted in Table 6, the former (\(D_{s}\)) listing the opening hours of stores and the latter (\(D_{m}\)) movies and their directors. Let us assume \(|Store|=3\) and that \(|Movie|\) is above a million movies. Solutions exclusively considering containment would qualify the pair \(\langle D_{s}.Store,D_{m}.Movie\rangle\) as a high quality join, given the \(2/3\) containment (which would be even higher if we consider approximate joins). Yet, this is clearly a false positive. Considering the cardinality proportion, our quality metric would penalize its ranking and assign a low value to this candidate pair. Note that the Jaccard index is able to deal with this case, yet, as shown in Table 4, it generally has a high false negative rate, making it suboptimal.

### A continuous metric for join quality

Despite the ability of \(Q(A,B,L)\) to assign quality levels beyond binary ones, the output of such a metric is discrete, and thus the rankings it generates are bounded by \(L\). In order to overcome this issue, we aim to generalize such discrete metric into a continuous one \(Q(A,B)\) in the range \([0,1]\). The approach we follow is that of plotting the empirical distribution function (_edf_) of \(Q(A,B,L)\) for some value of \(L\), and then fitting a continuous probability distribution to it. Empirical distributions are functions that describe a sample of observations for a given variable, while probability distributions are functions that yield the probability of different possible values for a variable. We distinguish between probability density functions (_pdf_), which yield the probability that a random variable takes on a certain value, and cumulative distribution functions (_cdf_), which yield the probability that a random variable takes on a value less than or equal to a certain value. We are precisely interested in the latter. The challenge is to determine what distribution function best fits our metric.

**Fitting a Gaussian distribution.** The best-known probability distribution is the Gaussian distribution, also known as the normal distribution, \(\mathcal{N}(\mu,\sigma^{2})\). The _pdf_ of the normal distribution is defined by the following expression:

\[pdf(x;\mu,\sigma^{2})=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{\frac{-(x-\mu)^{2}}{2\sigma^{2}}}\]

The _cdf_ of the normal distribution is defined as:

\[cdf(x;\mu,\sigma^{2})=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\frac{x-\mu}{\sigma}}e^{-\frac{t^{2}}{2}}\,dt\]

**Table 6.** \(D_{s}\), store schedules (left), and \(D_{m}\), movies and their directors (right).

| Store | Open | Close |
| --- | --- | --- |
| Chicago | 8am | 18pm |
| Casablanca | 9:30am | 20pm |
| Paris | 9am | 18pm |

| Movie | Director |
| --- | --- |
| An American in Paris | G. Gershwin |
| Casablanca | M. Curtiz |
| Chicago | R. Marshall |
| ... | ... |

Figure 3. Areas identified by the quality metric for \(L=2\) (left) and \(L=4\) (right).

**Table 5.** A reference dataset (\(D_{ref}\)) and two candidate datasets to be joined.
\(D_{1}\) is curated with extensive data at the European level, while \(D_{2}\) is curated at the worldwide level with fewer details.

Yet, since we are working with a two-dimensional function for \(C\) and \(K\), we must consider the case of the multivariate normal distribution. Assuming \(C\) and \(K\) are independent, the _cdf_ truncated in the range \([a,b]\) for a two-dimensional Gaussian over \(C\) and \(K\) (i.e., \(cdf_{CK}(c,k)\)) is equal to the product of the individual _cdf_s of \(C\) and \(K\) (i.e., \(cdf_{C}(c)\,cdf_{K}(k)\)), which is defined as:

\[cdf_{CK}(c,k)=\frac{\Phi\left(\frac{c-\mu_{C}}{\sigma_{C}}\right)-\Phi\left(\frac{a-\mu_{C}}{\sigma_{C}}\right)}{\Phi\left(\frac{b-\mu_{C}}{\sigma_{C}}\right)-\Phi\left(\frac{a-\mu_{C}}{\sigma_{C}}\right)}\cdot\frac{\Phi\left(\frac{k-\mu_{K}}{\sigma_{K}}\right)-\Phi\left(\frac{a-\mu_{K}}{\sigma_{K}}\right)}{\Phi\left(\frac{b-\mu_{K}}{\sigma_{K}}\right)-\Phi\left(\frac{a-\mu_{K}}{\sigma_{K}}\right)}\]

where \(\Phi(x)\) is the univariate _cdf_ of the standard normal distribution, defined by the following expression:

\[\Phi(x)=\frac{1}{2}\left(1+\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right)=\frac{1}{2}\left(1+\frac{2}{\sqrt{\pi}}\int_{0}^{\frac{x}{\sqrt{2}}}e^{-t^{2}}\,dt\right)\]

Hence, the challenge reduces to finding the mean values \(\mu_{C}\) and \(\mu_{K}\), which determine the offset of the distribution, and the covariance matrix \(\Sigma=\left(\begin{smallmatrix}\sigma_{C}^{2}&0\\ 0&\sigma_{K}^{2}\end{smallmatrix}\right)\), which determines the shape of the distribution, such that it best fits the discrete quality function \(Q(A,B,L)\). To do so, we consider the Wasserstein metric, which is a distance function defined over probability distributions. Hence, the goal is to find the optimal values that minimize the Wasserstein distance. Such a task, which can be evaluated in a brute-force manner over a discrete range of values, yielded the following values: \(\mu_{C}=0\), \(\mu_{K}=0.44\), and \(\Sigma=\left(\begin{smallmatrix}0.19&0\\ 0&0.28\end{smallmatrix}\right)\). Figure 4 depicts the resulting fit of the _cdf_ of the normal distribution over \(C\) and over \(K\) using the previously introduced values, superposed on the _cdf_ of \(Q(A,B,4)\).

**Continuous quality metric with strictness levels.** Since the previously presented values for the mean and covariance matrix have been derived from our ground truth, we finally consider the possibility of including a certain degree of variability in the resulting quality metric. To that end, we introduce the _strictness_ score \(s\) and consider three possible values: _relaxed_ (i.e., \(s=0\)), _balanced_ (i.e., \(s=0.25\)) and _strict_ (i.e., \(s=0.5\)). Such a score shifts the value of \(\mu_{C}\) (i.e., the mean of the containment value), which, as can be observed in Figure 3, is the dominating factor in our metric. Hence, the resulting continuous join quality metric \(Q(A,B,s)\) is defined as:

\[Q(A,B,s)=cdf\left(\mu_{C}+s,\Sigma[0][0],0,1,C(A,B)\right)\cdot cdf\left(\mu_{K},\Sigma[1][1],0,1,K(A,B)\right)\]

Figure 5 shows the application of such a metric for the different strictness values considered over our ground truth.

## 5. Predicting the quality of a join

In this section, we describe our approach to predict the join quality metric introduced in Section 4.
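The quantity that Section 5 sets out to predict is exactly this product of truncated-normal _cdf_s. A minimal Python sketch of how \(Q(A,B,s)\) can be computed, assuming the entries of \(\Sigma\) above are variances:

```python
import math

def phi(x):
    # Standard normal CDF.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_cdf(mu, var, lo, hi, x):
    # CDF of a normal(mu, var) truncated to [lo, hi], evaluated at x.
    sigma = math.sqrt(var)
    num = phi((x - mu) / sigma) - phi((lo - mu) / sigma)
    den = phi((hi - mu) / sigma) - phi((lo - mu) / sigma)
    return min(max(num / den, 0.0), 1.0)

# Parameters fitted against the ground truth (Section 4.3); Sigma entries
# are taken to be variances.
MU_C, MU_K = 0.0, 0.44
VAR_C, VAR_K = 0.19, 0.28

def continuous_join_quality(c, k, s=0.25):
    # s is the strictness level: 0 (relaxed), 0.25 (balanced), 0.5 (strict).
    return (truncated_cdf(MU_C + s, VAR_C, 0.0, 1.0, c) *
            truncated_cdf(MU_K, VAR_K, 0.0, 1.0, k))

# For the Table 5 example, continuous_join_quality(0.80, 0.40) is larger
# than continuous_join_quality(0.95, 0.15), i.e. the first pair ranks higher.
```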
### Attribute profiling

Profiles are composed of meta-features that represent the underlying characteristics of attributes. Such profiles are the key ingredient for highly accurate predictions, thus we require an exhaustive summary of attributes. Hence, we base our profiling on state-of-the-art relational data profiling techniques (Bang et al., 2017). We distinguish meta-features corresponding to unary and binary profiles. We further distinguish the former into meta-features modeling cardinalities, value distribution and syntax. A summary of all the implemented meta-features is depicted in Table 7. Although for space reasons it is not included here, we validated by means of a principal component analysis the relevance of all meta-features towards a meaningful profiling of attributes.

**Cardinalities.** These provide a broad view of an attribute. Uniqueness, which is computed by dividing the number of distinct values by the cardinality, allows us to quantify the extent of duplicated values. A uniqueness smaller than 1 indicates that there exist duplicate values, hence we can identify which attributes have high redundancies. We can also detect incompleteness, which is determined by the number of missing values divided by the cardinality. This produces a value in the range \([0,1]\), where values closer to 1 denote that the attribute has a high percentage of missing values. Finally, entropy, also referred to as the _diversity index_, measures the variety of data in an attribute.

**Value distribution.** Here, we exploit information in a fine-grained manner by using a frequency distribution of the attribute values, either by count or percentage. Despite its simplicity, the frequency distribution of column values exposes insightful characteristics, such as how often a value occurs. We compute frequency metrics (e.g., in the form of octiles), and descriptive statistics (e.g., mean, standard deviation, etc.) to characterize the distribution of the data. We also take a sample of the ten most frequent values.

**Syntax.** This category of unary metadata describes the patterns of the data. These meta-features include information regarding the length of values in characters, such as the length of the longest and shortest value, and the average length. We also compute information regarding syntactic consistency, such as format and data type. This helps give meaning to the attribute's content. We also infer the data type of an attribute, in a broad and in a fine-grained manner. Broad data types are generic descriptions such as numeric, alphabetic, alphanumeric, dateTime, non-alphanumeric, etc. We additionally extract the fine-grained type to capture what content the attribute represents. To this end, we use regular expressions that allow us to model usernames, phrases, phones, etc. In order to improve the quality of meta-features in this category, we preprocess values to lowercase and remove accents and special symbols.

Figure 4. Resulting _cdf_s that minimize the Wasserstein distance over the _cdf_ of \(C\) (left) and \(K\) (right) for \(Q(A,B,4)\).

**Binary meta-features.** We also extract meta-features regarding pairs of attributes. We use the Levenshtein distance to obtain the similarity between pairs of attribute names (Levenshtein, 2017). This is normalized by the length of the largest string.

### Comparing profiles

Before comparing profiles, and since attribute meta-features are represented in different magnitudes, we normalize them to guarantee a meaningful comparison.
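For concreteness, the kind of unary profile that feeds this normalization step can be sketched with pandas as follows (illustrative names, covering only a handful of the meta-features in Table 7, and not the actual Spark-based implementation):

```python
import math
import pandas as pd

def unary_profile(col: pd.Series) -> dict:
    # Build a small unary profile for a single attribute (column).
    values = col.dropna().astype(str)
    counts = values.value_counts()  # frequency distribution, sorted descending
    n = len(col)
    return {
        "cardinality": counts.size,
        "uniqueness": counts.size / len(values) if len(values) else 0.0,
        "incompleteness": col.isna().sum() / n if n else 0.0,
        # Shannon entropy of the value distribution (diversity index).
        "entropy": -sum(p * math.log2(p) for p in counts / counts.sum()) if counts.size else 0.0,
        "avg_frequency": counts.mean() if counts.size else 0.0,
        "constancy": counts.iloc[0] / len(values) if counts.size else 0.0,
        "avg_length": values.str.len().mean() if len(values) else 0.0,
        "frequent_words": counts.index[:10].tolist(),
    }
```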
As shown in Table 7, we consider a large amount of meta-features that require normalization. \begin{table} \begin{tabular}{|c|c|l|c|} \hline **Category** & **Meta-feature** & **Description** & **Norm.7** \\ \hline \multirow{4}{*}{Cardinalities} & Cardinality & Number of distinct values within an attribute & Yes \\ \cline{2-4} & Uniqueness & Measures if the attribute contains unique values & No \\ \cline{2-4} & Incompleteness & Measures the number of missing values & No \\ \cline{2-4} & Entropy & Measures the variety of an attribute & Yes \\ \hline \multirow{4}{*}{Value distribution distribution} & Average frequency & The average value of the frequency distribution count & Yes \\ \cline{2-4} & Min frequency & The minimum value of the frequency distribution count & Yes \\ \cline{2-4} & Max frequency & The maximum value of the frequency distribution count & Yes \\ \cline{2-4} & SD frequency & The standard deviation of the frequency distribution count & Yes \\ \cline{2-4} & Octiles & The octiles (quantiles) of the frequency distribution in percentages & No \\ \cline{2-4} & Min perc frequency & The minimum value of the frequency distribution in percentages & No \\ \cline{2-4} & Max perc frequency & The maximum value of the frequency distribution in percentages & No \\ \cline{2-4} & SD per frequency & The standard deviation of the frequency distribution in percentages & No \\ \cline{2-4} & Constancy & Frequency of the most frequent value divided by number of rows & No \\ \cline{2-4} & Frequent words & The 10 most frequent words & No \\ \cline{2-4} & Soundex & The 10 most frequent words in soundex representation & No \\ \hline \multirow{4}{*}{Syntactic} & Data type & The data type of the attribute (i.e., numeric, alphanumeric, alphabetic, & No \\ \cline{2-4} & Data type & nonAlphanumeric, or datetime) & \\ \cline{2-4} & Specific type & The specific type of the attribute (i.e., phone, email, url, ip, username, or phrases) & No \\ \cline{2-4} & Percentage data type & The percentage for each data type detected in the data values & No \\ \cline{2-4} & Percentage specific type & The percentage for each specific type detected in the data values & No \\ \cline{2-4} & Longest string & The number of characters in the longest string & Yes \\ \cline{2-4} & Shortest string & The number of characters in the shortest value in the attribute & Yes \\ \cline{2-4} & Average string & Average length of the strings in term of characters & Yes \\ \cline{2-4} & Number words & The number of words in the attribute & Yes \\ \cline{2-4} & Average words & The average words in the attribute & Yes \\ \cline{2-4} & Min words & The minimum words in the attribute & Yes \\ \cline{2-4} & Max words & The maximum words in the attribute & Yes \\ \cline{2-4} & SD words & The standard deviation in the attribute & Yes \\ \hline \multirow{2}{*}{Pair metadata} & Best containment & The containment score assuming all distinct values are covered & No \\ \cline{2-4} & Flipped containment & Containment assuming all distinct values are covered divided by max cardinality & No \\ \cline{2-4} & Name distance & Measures the difference of two attribute names using Levenshtein distance & No \\ \hline \end{tabular} \end{table} Table 7. Meta-features composing a profile Figure 5. Continuous quality in the ground truth for \(Q(A,B,0)\), \(Q(A,B,0.25)\), and \(Q(A,B,0.5)\) Two common normalization techniques are Min-Max and Z-score. 
The former consists of rescaling data into the range \([0,1]\). This technique, however, is sensitive to outliers, which will lie on the boundaries. Oppositely, Z-score normalization overcomes this issue by rescaling values to have a mean of 0 and a standard deviation of 1. For this reason, we use Z-score to normalize meta-features. The following equation depicts the normalization process, which requires the mean and standard deviation of the metadata, computed from all the values of each attribute to be compared.

\[Z\text{-}score=\frac{(x-\mu)}{\sigma}\]

After normalizing each meta-feature we compute the distances among pairs of attributes. Here, we also compute the binary meta-features. The result of this stage is a set of distance vectors \(D\) where, for each \(D_{i}\), values closer to 0 denote high similarities.

**Training a regression model.** Once the distance vectors are computed, we can train the predictive model. Precisely, the goal is to train a model so that, for a pair of attributes \(A,B\), its prediction (i.e., \(\mathcal{P}(P_{u}(A),P_{u}(B),P_{b}(A,B))\)) is highly correlated with the true join quality (i.e., \(\mathcal{Q}(A,B,s)\)). For that, we fixed the intermediate value of \(s=0.25\), and evaluated several regression models performing a hyperparameter grid search to minimize the error. Precisely, we built models based on: linear regression, ensemble learning, support vector regression, and multilayer perceptrons (MLP). The resulting best model, in terms of the highest coefficient of determination \(R^{2}\), was an MLP with ReLU activation, a single hidden layer of dimension 100 and a value of \(\alpha=0.0001\). This model provided an \(R^{2}=0.8831\).

## 6. Experimental Evaluation

In this section, we present the evaluation of our approach. On the one hand, we evaluate and compare the ability of the model to generalize and discover quality joins with respect to state-of-the-art solutions. We also evaluate and compare its scalability. In order to present transparent experiments and guarantee the reproducibility of results, we created an informative companion website2. There, it is possible to find all necessary resources (i.e., source code, datasets, and detailed instructions) needed to reproduce the presented results.

Footnote 2: [https://www.essi.upc.edu/dim/nestriajid/](https://www.essi.upc.edu/dim/nestriajid/)

We have implemented the profiling phase of NextiaJD as an extension of Apache Spark. The runtime methods are implemented as new operators over the structured data processing library Spark-SQL. We leverage the Catalyst optimizer to efficiently compute the profiles and compare them. The predictive model is available as a _Pickle_, which allows it to be easily adapted in other projects.

### Generalizability of our approach

Since we empirically derive a join quality metric from ground truth, the first question is whether it is applicable to other data lake settings. Thus, the objective of this first experiment is to show the generalizability of our approach. To that end, we perform experiments on datasets independently generated from those selected in our ground truth and assess our metric's performance.

**Methodology.** We consider the GitTables dataset (Kang et al., 2018) as the data repository to evaluate our approach. GitTables is a large-scale corpus of 1M relational tables extracted from CSV files in GitHub repositories. Each table is provided as a Parquet file, which comes with associated metadata. On average, tables have 25 columns and 209 rows.
Of special interest for us are the _semantic annotation types_ each attribute has, which give the probability that a column is similar to a semantic type in a knowledge base (e.g., DBpedia). As described in (Kang et al., 2018), these are annotated using a pretrained FastText model, and the annotation corresponds to the most similar semantic type. We then constructed a ground truth (i.e., calculated our join quality metric with a strictness level \(s=0.25\)) considering those attributes with a semantic similarity equal to 1.0 that have the same semantic type (e.g., clusters of _author_, _city_, _name_, etc.). Since evaluating all possible combinations of attributes over the complete corpus of GitTables is unattainable (i.e., it is quadratic in the total number of attributes), we reduced the search space to the abstraction_tables and dwarf_tables datasets, which are the ones with the highest volume.

**Results.** We evaluated our predictive model on the annotated GitTables ground truth, which yielded a mean squared error (MSE) value of 0.04, and a mean absolute error (MAE) value of 0.13. Since the predictive model was trained from an independently-generated ground truth, we consider these error scores to be highly acceptable. Then, we also evaluated the ability of the predictive model to discern syntactically and semantically-joinable pairs following the same approach as in Table 4. Hence, Table 8 depicts the predictive performance metrics of using different thresholds to determine semantically-joinable pairs over the GitTables dataset. From these results, we can conclude that the precision of our metric is monotonically increasing with higher threshold values, while maintaining a consistently high recall.

**Table 8.** Performance metrics of using different thresholds over \(Q\) to predict semantically-joinable pairs. \(P\), \(R\), \(F\), and \(A\) denote, respectively, precision, recall, F-score, and accuracy.

| Threshold | \(P\) | \(R\) | \(F\) | \(A\) |
| --- | --- | --- | --- | --- |
| \(Q>0.5\) | 0.40 | 0.87 | 0.55 | 0.97 |
| \(Q>0.6\) | 0.49 | 0.85 | 0.62 | 0.98 |
| \(Q>0.7\) | 0.58 | 0.85 | 0.69 | 0.98 |
| \(Q>0.8\) | 0.68 | 0.87 | 0.76 | 0.98 |
| \(Q>0.9\) | 0.76 | 0.88 | 0.82 | 0.99 |

### Comparison with the state-of-the-art (schema matching)

Here, we experimentally compare our quality metric and its associated predictive model.

**Methodology.** In this experiment we rely on the Valentine experimental suite for data discovery (Zhu et al., 2017). Valentine provides the implementation of seven schema matching algorithms (which are the basis for data discovery methods), together with a collection of ground-truth annotated datasets, both manually and automatically annotated. We focus precisely on Valentine's _joinable_ scenario, which is equivalent to our definition of semantically-joinable pairs. We extended Valentine to incorporate new matchers for _a)_ our discrete quality metric with \(L=4\); _b)_ our continuous quality metric with a strictness level of \(s=0.25\); and _c)_ the learned predictive model of _b)_.
Since Valentine's ground truth is labeled with a binary class (i.e., joinable or non-joinable), we considered variants of the previously discussed matchers with different threshold values. We extended Valentine's plots, which are represented as boxplots, with our performance metrics under both instance-based and hybrid scenarios. Since our approach does not consider data transformations to calculate the join quality, we did not consider the scenario where Valentine incorporates noise in the instances and schema, and focused only on the verbatim setting.

**Results.** We first report on the _recall at size of ground truth_ evaluation metric. This metric considers, if the size of the ground truth is \(N\), only the first \(N\) proposed matches for the recall calculation. It thus allows assessing the quality of the ranking a method provides. Figure 6 depicts the obtained results by means of boxplots. On the left-hand side, we encounter the instance-based methods, which are those that compute their rankings based on the data values of each attribute. Hence, here, we present the results for our discrete quality metric (i.e., _DQ_), and the continuous one with three thresholds (i.e., _CQ_). Here, we can observe that the quality of the ranking provided by all our matcher variants is equal to or better than state-of-the-art instance-based methods. This is another indicator of how our proposed metric generalizes, since here it is evaluated on an independently labeled ground truth. Yet, more interestingly, we shift our attention to the right-hand side of Figure 6, depicting the recall at size of ground truth for hybrid approaches (i.e., those that make use of schema and auxiliary data structures previously generated from the instances). Here, we compare the matchers encapsulating our predictive models (i.e., _ML_) with 0.5, 0.6 and 0.7 as thresholds to determine joinability. As can be observed, our predictive approach yields better rankings than most competitors, being on par only with EmbDI (Chen et al., 2017), a method that finds relationships between columns by comparing their previously trained embeddings.

Next, we compare the effectiveness of the approaches by means of the _precision at 50%_ performance metric. This metric computes the precision only over the top 50% of the provided ranking, which allows determining the level of false positives a matcher yields. As shown in Figure 7, both our calculated instance-based and predicted metrics clearly outperform the state-of-the-art in terms of precision results. That is, all pairs returned by our approach are indeed true positives. Surprisingly, approaches such as EmbDI, which have high recall scores, clearly fail at effectively identifying true positive matches. Concerning our approach, the obtained precision results are extremely good, with only a few outliers produced by the predicted metric. After examining Valentine's benchmark, we believe that this is due to the fact that its ground truth datasets lack heterogeneity and are not representative of a data lake environment, as opposed to the GitTables scenario evaluated in Section 6.1.

### Comparison with the state-of-the-art (data discovery)

Here, we provide a comparison with state-of-the-art data discovery systems which are not part of the Valentine suite. Precisely, we compare NextiaJD with Aurum (Navarro et al., 2017), D3L (Chen et al., 2017), and WarpGate (Floran et al., 2017). These are systems whose source code is openly available.
Unfortunately, no other solutions in the realm of approximate data discovery (i.e., those based on hash or profiles) could be considered, due to the fact that _a)_ the code is openly available but cannot be reproduced due to outdated dependencies, or it is hardcoded for specific environments, or _b)_ they assume unrealistic data lake scenarios expecting all datasets to be loaded and pre-processed in a relational database.

Figure 6. Recall at size of ground truth scores provided by Valentine.

**Methodology.** For evaluation purposes, we collected 139 datasets independent from those used for the ground truth. We further divided such datasets into 4 testbeds: extra-small \(\text{XS}\) (\(0-1\) MB), small \(S\) (\(1-100\) MB), medium \(M\) (\(100\) MB \(-1\) GB) and large \(L\) (\(>1\) GB). Respectively, these contain 28, 46, 46, and 19 datasets, and 159, 590, 600, and 331 string attributes. For each testbed, manual ground truth was generated with a quality level according to the quality of the join (from 1 to 4), where those above 3 are considered semantically-joinable. Such testbeds are also available in the paper's companion website. The ability to rank joinable pairs on all the systems was assessed over top-K queries, where we report _Precision@K_ and _Recall@GroundTruth_. The former scores the ratio of true positives over \(K\), and the latter scores the ratio of true positives over the size of the ground truth for each query. To that end, for each candidate pair, we measure the quality metric with a _balanced_ strictness level.

**Results.** We build on the openly available experimental results reported in [10], where Aurum, D3L and WarpGate are assessed over testbeds \(S\) and \(M\), since these are the largest ones in terms of datasets and ground truth defined in this paper. Then, in Figures 8 and 9 we report, respectively, top-K results for such two testbeds (note AU stands for Aurum, WG for WarpGate, and NXjD for NextiaJD). From the obtained results, we can see that NextiaJD consistently outperforms the alternative systems in terms of precision and recall as the value of \(k\) increases. Thus, NextiaJD provides more accurate rankings than the alternatives with respect to the available ground truth. It is important, however, to remark that, due to the lack of community consensus on how to quantify joinability for data discovery, the compared systems do not target the same objective metric. Thus, in this experiment we have followed the definition proposed in this paper, which is motivated to address large-scale data lake scenarios where data are highly denormalized and file formats embed tabular data. On such a metric, NextiaJD outperforms the alternative systems in terms of precision and recall for top-K queries.

Figure 7. Precision at 50% scores provided by Valentine.

Figure 8. Precision and recall scores on testbed \(S\) for state-of-the-art data discovery systems.

Figure 9. Precision and recall scores on testbed \(M\) for state-of-the-art data discovery systems.

### Scalability

As previously discussed, our most intensive task with regard to computational resources is the generation of attribute profiles from datasets. Thus, here we perform stress tests of this component by means of two experiments.

**Methodology.** We generated a 10GB base CSV file with 5 columns and systematically extended it in batches of 10GBs, up to 60GBs. Next, we followed a similar strategy with regard to columns.
We created a 20GB base file that was systematically extended with a new duplicate column each time. The resulting files were stored in a Hadoop HDFS cluster, using the default block size and replication parameters. In order to simulate a realistic large-scale scenario, we also converted each input file to Apache Parquet3 format. Parquet is a specialized hybrid layout that fragments data into row group partitions (i.e., physically-independent columns), while it also embeds numerous statistics to optimize queries. To evaluate the scalability of our approach in terms of distribution, we compute the profiling runtime using \(n\) Spark workers (cf. HDFS datanodes) in the range \(1\ldots 3\). Footnote 3: [https://parquet.apache.org/](https://parquet.apache.org/) **Results.** Figure 10 depicts the profiling runtime for an increasing file size. Regardless of the number of workers and data format used, the runtime scales linearly with the file size. As expected, profiling Parquet files is much more efficient than profiling CSV ones (i.e., an approximate 4x to 5x speed-up), as we can benefit from statistics and compression when computing certain meta-features. As depicted in Figure 11, we can also observe that the profiling runtime scales linearly with the number of columns. Similarly to the previous case, using Parquet significantly speeds up the process, here with a 7x to 8x factor. Finally, in Table 9, we show the average profile size for each of the testbeds presented earlier. The disk usage is proportional to both the number of rows and columns. Although the number of columns is the leading factor for the profile size, the dataset cardinality impacts the size of some meta-features (e.g., frequent words, soundex, etc.). In any case, the profile sizes are reasonable. Thus, they can be precomputed offline and stored together with the dataset as metadata. The only exception to this would be binary meta-features. As a final conclusion, these experiments show that our approach does not introduce any blocking factor hindering parallelism and can fully benefit from it. ## 7. Conclusions and Future Work We have presented a novel learning-based approach for data discovery on large-scale repositories of heterogeneous, independently created datasets. Our work is motivated by (i) the poor predictive performance of current profile-based solutions, and (ii) the inability of hash-based approaches to scale up, as well as their low precision, which is undesirable for large-scale scenarios. In order to overcome these limitations, we propose a scalable method yielding good precision, and grounded on a novel qualitative definition of join quality. We have experimentally shown that our approach outperforms the state-of-the-art data discovery approaches in terms of predictive and runtime performance. As future work, we aim to adapt our approach to detect semantic non-syntactic join relationships (i.e., those requiring some simple transformation on the values before joining). Based on such predictions, the system should be able to propose the required transformations to join. ###### Acknowledgements. The authors are grateful to Tianji Cong for kindly providing the raw experimental results for the experiment reported in Section 6.3. This work was partly supported by the DOGO4ML project, funded by the Spanish Ministerio de Ciencia e Innovacion under project PID2020-117191RB-100 / AEI/10.13039/501100011033.
Sergi Nadal is partly supported by the Spanish Ministerio de Ciencia e Innovacion, as well as the European Union - NextGenerationEU, under project FJC2020-045809-1 / AEI/10.13039/501100011033.
2307.16363
BearingPGA-Net: A Lightweight and Deployable Bearing Fault Diagnosis Network via Decoupled Knowledge Distillation and FPGA Acceleration
Deep learning has achieved remarkable success in the field of bearing fault diagnosis. However, this success comes with larger models and more complex computations, which cannot be transferred into industrial fields requiring models to be of high speed, strong portability, and low power consumption. In this paper, we propose a lightweight and deployable model for bearing fault diagnosis, referred to as BearingPGA-Net, to address these challenges. Firstly, aided by a well-trained large model, we train BearingPGA-Net via decoupled knowledge distillation. Despite its small size, our model demonstrates excellent fault diagnosis performance compared to other lightweight state-of-the-art methods. Secondly, we design an FPGA acceleration scheme for BearingPGA-Net using Verilog. This scheme involves the customized quantization and designing programmable logic gates for each layer of BearingPGA-Net on the FPGA, with an emphasis on parallel computing and module reuse to enhance the computational speed. To the best of our knowledge, this is the first instance of deploying a CNN-based bearing fault diagnosis model on an FPGA. Experimental results reveal that our deployment scheme achieves over 200 times faster diagnosis speed compared to CPU, while achieving a lower-than-0.4\% performance drop in terms of F1, Recall, and Precision score on our independently-collected bearing dataset. Our code is available at \url{https://github.com/asdvfghg/BearingPGA-Net}.
Jing-Xiao Liao, Sheng-Lai Wei, Chen-Long Xie, Tieyong Zeng, Jinwei Sun, Shiping Zhang, Xiaoge Zhang, Feng-Lei Fan
2023-07-31T01:43:38Z
http://arxiv.org/abs/2307.16363v1
BearingPGA-Net: A Lightweight and Deployable Bearing Fault Diagnosis Network via Decoupled Knowledge Distillation and FPGA Acceleration ###### Abstract Deep learning has achieved remarkable success in the field of bearing fault diagnosis. However, this success comes with larger models and more complex computations, which cannot be transferred into industrial fields requiring models to be of high speed, strong portability, and low power consumption. In this paper, we propose a lightweight and deployable model for bearing fault diagnosis, referred to as BearingPGA-Net, to address these challenges. Firstly, aided by a well-trained large model, we train BearingPGA-Net via decoupled knowledge distillation. Despite its small size, our model demonstrates excellent fault diagnosis performance compared to other lightweight state-of-the-art methods. Secondly, we design an FPGA acceleration scheme for BearingPGA-Net using Verilog. This scheme involves the customized quantization and designing programmable logic gates for each layer of BearingPGA-Net on the FPGA, with an emphasis on parallel computing and module reuse to enhance the computational speed. To the best of our knowledge, this is the first instance of deploying a CNN-based bearing fault diagnosis model on an FPGA. Experimental results reveal that our deployment scheme achieves over 200 times faster diagnosis speed compared to CPU, while achieving a lower-than-0.4% performance drop in terms of F1, Recall, and Precision score on our independently-collected bearing dataset. Our code is available at [https://github.com/asdvfghg/BearingPGA-Net](https://github.com/asdvfghg/BearingPGA-Net). Bearing fault diagnosis, deep learning, model compression, field programmable gate array (FPGA) ## I Introduction Rotating machines, including turbines, pumps, compressors, fans, etc., are widely used in industrial fields [1, 2, 3]. Rolling bearings, whose primary roles are to support the whole machine and reduce friction, are crucial components in rotating machines. Statistics from authoritative institutes reveals that bearing failure accounts for approximately 45%-70% of all mechanical faults [4, 5]. Therefore, detecting bearing failure is an essential task in industry and civil fields. A timely and accurate detection can greatly enhance the reliability and efficiency of bearings, thereby reducing the enormous economic loss. Since bearing failures often exhibit unique abnormal vibration [6, 7, 8], the most common approach to diagnosing bearing faults is to install an acceleration sensor on mechanical surfaces to measure vibration signals, and then a diagnosis algorithm is applied to detect faulty signals. Over the past decade, deep learning methods, represented by convolutional neural networks (CNNs) and attention-based models, have been dominating the problem of bearing fault diagnosis [9, 10, 11, 12]. Despite the significant success achieved by deep learning methods, the growing model size renders them impractical in industrial settings, since deploying deep learning models often demands high-performance computers. But a factory usually has multiple rotating machines. For instance, in nuclear power plants, there are numerous rotating machines such as seawater booster pumps and vacuum pumps that require monitoring of their vibration signals [13]. Thus, installing computers for each machine will not only undesirably occupy space but also impede production due to excessive circuit connections. 
In fact, the field programmable gate array (FPGA), a class of low-power-consumption, high-speed, programmable logic devices, is widely used in industrial fields. FPGAs have high-speed signal processing capabilities such as time-frequency analysis, which can enhance the accuracy and real-time performance of fault detection [14, 15, 16]. By leveraging its parallel computing capabilities, an FPGA dramatically increases speed and energy efficiency, achieving an over thousand-fold improvement over a digital signal processor (DSP) in fault detection execution [17]. However, the FPGA suffers from limited storage space and computational resources, which poses a significant challenge for bearing fault diagnosis, as long signal sequences need to be processed in a timely manner. In brief, to deploy fault diagnosis in practical scenarios, we need to address two challenges: designing lightweight but well-performing deep learning models, and deploying these models on embedded hardware such as FPGAs without compromising the performance too much. Along this line, researchers have put forward lightweight deep models for bearing fault diagnosis, aiming to strike a balance between diagnostic accuracy and model complexity [18, 19, 20, 21]. Nevertheless, these lightweight models have not actually been deployed on embedded hardware, and whether the good performance can be delivered as expected remains uncertain. On the other hand, two types of hardware have been utilized to deploy CNNs: System-on-Chip (SoC) FPGAs and Virtex-series FPGAs. The former, exemplified by ZYNQ-series chips which integrate an ARM processor with an FPGA chip, has been employed to deploy a limited number of bearing fault diagnosis methods [22, 23]. These approaches rely on the PYNQ framework, which translates the Python code into FPGA-executed Verilog through the ZYNQ board. While the PYNQ framework reduces development time, the FPGA resources available for CNN acceleration are limited, which restricts the achievable processing speed compared to standalone FPGA boards. As for the latter, some prior researchers have designed FPGA-based accelerators for CNNs and successfully deployed CNNs on Virtex-series FPGAs [24, 25, 26]. However, these FPGAs, capable of deploying multi-layer CNNs, cost ten times more than commonly used mid-range FPGAs (Kintex series). Consequently, widespread deployment is hindered by excessive costs. To resolve the tension between model compactness and deployment, here we propose BearingPGA-Net: a lightweight and deployable bearing fault diagnosis network via decoupled knowledge distillation and FPGA acceleration. First, we apply decoupled knowledge distillation (DKD) to build a lightweight yet well-performing network, named BearingPGA-Net. Knowledge distillation (KD) is a form of model compression that transfers knowledge from a large network (teacher) to a smaller one (student) [27, 28], which is deemed more effective than directly prototyping a small network. Knowledge distillation can be categorized into response-based KD (logit KD) and feature-based KD. Although previous studies have demonstrated the superiority of feature-based KD over logit KD for various tasks [29, 30], we select logit KD. Our student network comprises only a single convolutional layer for deployment, and there are no hidden features for feature-based KD to distill.
Recently, a novel approach called decoupled knowledge distillation has been proposed, which reformulates the traditional logit distillation by decomposing the knowledge distillation loss function into two separate functions. The training of these functions can be flexibly balanced based on the task [31]. The decoupling is a strong booster for the performance of logit KD, and we favorably translate it into our task. Second, we utilize Verilog to implement neural network accelerators. By devising parallel computing and module reuse, we deploy BearingPGA-Net and enhance its computing speed in a Kintex-7 FPGA. Specifically, we construct basic arithmetic modules with basic multiplication and addition units for the convolutional and fully-connected layers. To cater to the requirements of BearingPGA-Net, we fuse a ReLU and Max-pooling layer to further reduce computation. These computational modules are also translatable to other CNN-based tasks. Moreover, we design a tailored layer-by-layer fixed-point quantization scheme for neural networks, ensuring minimal loss of parameter accuracy in FPGA computations while also cutting the number of parameters by half. Notably, our FPGA framework fully leverages the computational resources of the Kintex-7 FPGA, which is a widely used mid-range FPGA in industrial fields. Compared to previous implementations via SoC FPGA and Virtex FPGA, BearingPGA-Net+Kittex FPGA achieves preferable power consumption, model compactness, and high performance, which is highly scalable in real-world fault diagnosis scenarios. In summary, our contributions are threefold: \(\bullet\) We build BearingPGA-Net, a lightweight neural network tailored for bearing fault diagnosis. This network is characterized by a single convolutional layer and is trained via decoupled knowledge distillation. \(\bullet\) We employ fixed-point quantization to compress the parameters of BearingPGA-Net by 50% and propose a CNN accelerators scheme, where we utilize parallel computing and module reuse techniques to fully leverage the computational resources of the Kintex-7 FPGA. \(\bullet\) Compared to lightweight competitors, our proposed method demonstrates exceptional performance in noisy environments, achieving an average F1 score of over 98% on CWRU datasets. Moreover, it offers a smaller model size, with only 2.83K parameters. Notably, our FPGA deployment solution is also translatable to other FPGA boards. ## II Related Works **1) Lightweight CNNs for bearing fault diagnosis.** Some lightweight networks were proposed for deployment in resource-constrained environments. Yao _et al._ introduced the stacked inverted residual convolution neural network (SIRCNN), comprising one convolutional layer and three inverse residual layers [18]. Similarly, Fang _et al._ developed the lightweight efficient feature extraction network (LFEE-NET), which is a feature extraction module conjugated with a lightweight classification module. However, despite the simple structure of the classification network, its feature extraction network is complex [19]. Other lightweight models incorporated a multi-scale model [20] or a self-attention mechanism [21], demonstrating the superior performance in handling few-shot or cross-domain issues. But these networks have not yet been deployed on embedded devices. Therefore, it remains unclear how much performance will be sacrificed when deployed. **2) Bearing fault diagnosis models for FPGAs.** There were only a small number of models successfully deployed on FPGAs. 
An FPGA-based multicore system was proposed for real-time bearing fault diagnosis using acoustic emission (AE) signals [32]. It designed a high-performance multicore architecture including 64 processing units running on a Xilinx Virtex-7 FPGA to support online processing, and using time-frequency analysis and support vector machine (SVM) for diagnosis [32]. Toumi implemented an envelope spectrum and multi-layer perceptron (MLP) structure on a ZYNQ-7000 FPGA, achieving around 90% accuracy in CWRU datasets [22]. Ji _et al._ used the knowledge distillation technique to train a single-layer convolutional neural student network and deployed it into a ZYNQ FPGA through parameter quantization, resulting in an approximate 8% improvement compared to training the network directly [23]. Despite these achievements, the task of deploying fault diagnosis models in FPGAs still have a large room for improvement, as their performance has not yet reached the level of the large model. Our idea is to combine CNNs and signal processing techniques so that it can achieve real-time and high-performance fault diagnosis in industrial fields. ## III Method As depicted in Fig. 1, prototyping BearingPGA-Net follows a two-step pipeline: i) training a lightweight BearingPGA-Net via decoupled knowledge distillation; ii) deploying the BearingPGA-Net into an FPGA. Notably, we devise a layer-by-layer fixed-point quantization method to convert the parameters (weights and bias) of PyTorch's CNN, which are originally in 32-bit floating format, into a 16-bit fixed-point format. Additionally, for online diagnosis, the measured signal is amplified by a signal conditioner and then converted into a digital signal by an Analog-Digital (AD) converter. Subsequently, the FPGA's FIFO (first in first out) performs clock synchronization and buffering, followed by executing convolutional neural network operations. Finally, the diagnosing result is displayed using four LEDs. ### _Training BearingPGA-Net_ The BearingPGA-Net is trained via decoupled knowledge distillation (DKD), which is an approach to training a small model (student) with a well-trained large model (teacher). It forces the student to emulate the output of the teacher. During the training process, the teacher model is trained first. Subsequently, its parameters are freezed to generate the outputs as new labels to train the student model. **1) Constructing teacher and student networks.** For the teacher network, we adopt a widely-used 1D-CNN architecture known as the WDCNN [33], which consists of six CNN blocks and one fully-connected layer. In our implementation, we adjust the number of weights in the fully-connected layer to fit the input data size. Additionally, we design a one-layer student network specifically for FPGA deployment (BearingPGA-Net). This student network comprises a single 1D-CNN layer, a ReLU activation layer and a Max-Pooling layer, followed by a fully-connected layer mapping the latent features to the logits. The structure information of BearingPGA-Net is shown in Tab. I. **2) Decoupled knowledge distillation.** Despite numerous forms of knowledge distillation, our method adopts the response-based KD, which utilizes the teacher model's logits for knowledge transfer. This is because the limited hardware resource in a low-cost FPGA cannot hold two or more convolutional layers. Then, there are no hidden features for feature-based KD to distill. 
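For illustration, the student network described above in Tab. I can be written in a few lines of PyTorch. This is our own sketch, not the released implementation; in particular, the input length of 1024 points (the FFT magnitude of a 2048-point segment) is an assumption inferred from the layer sizes.

```
import torch
import torch.nn as nn

class BearingPGANetSketch(nn.Module):
    # Minimal sketch of the student network in Tab. I; the 1024-point input is assumed.
    def __init__(self, num_classes=10):
        super().__init__()
        # 4 kernels of size 64, stride 8, padding 28 -> feature map of size 4 x 128
        self.conv = nn.Conv1d(1, 4, kernel_size=64, stride=8, padding=28)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool1d(kernel_size=2, stride=2)  # 4 x 128 -> 4 x 64
        self.fc = nn.Linear(4 * 64, num_classes)           # 256 -> 10 logits

    def forward(self, x):                    # x: (batch, 1, 1024)
        z = self.pool(self.relu(self.conv(x)))
        return self.fc(z.flatten(start_dim=1))

model = BearingPGANetSketch()
print(sum(p.numel() for p in model.parameters()))  # 2830 parameters
```

With this configuration the parameter count matches the 2.83K figure reported later for BearingPGA-Net.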
In classical knowledge distillation, soft labels, which are the logits produced by the teacher, are deemed as distilled knowledge [27]. Soft labels are obtained by the softmax function converting the logits \(z_{i}\) of a neural network into the probability \(p_{i}\) of the \(i\)-th class: \[p_{i}=\frac{\exp{(z_{i}/T)}}{\sum_{j=1}^{C}\exp(z_{j}/T)}, \tag{1}\] where \(C\) represents the number of classes, and \(T\in\mathbb{R}^{+}\) serves as the temperature factor. When the temperature \(T\) is higher, the probability distribution over classes becomes smoother. A lower value of \(T\) (where \(T<1\)) sharpens the output, increasing the disparity in probability values of different classes. For classification, the cross-entropy (CE) loss is adopted to measure the difference between the probability of the predicted label \(p\) and the ground truth \(y\): \[\mathcal{L}_{CE}(y,p(z,T))=-{\sum_{i=1}^{C}}y_{i}\log{p_{i}}, \tag{2}\] and KL-Divergence measures the similarity between the probability labels of the teacher \(p^{\mathcal{T}}\) and the student \(p^{\mathcal{S}}\), \[\mathcal{L}_{KL}(p^{\mathcal{T}}(z,T),p^{\mathcal{S}}(z,T))=-{\sum_{i=1}^{C}}p _{i}^{\mathcal{T}}\log{\frac{p_{i}^{\mathcal{S}}}{p_{i}^{\mathcal{T}}}}. \tag{3}\] KD combines two loss functions: \[\mathcal{L}_{KD}=(1-\alpha)\mathcal{L}_{CE}(y,p^{\mathcal{S}})+\alpha T^{2} \mathcal{L}_{KL}(p^{\mathcal{T}}(z,T),p^{\mathcal{S}}(z,T)), \tag{4}\] where \(\alpha\) is the scale factor to reconcile the weights of two loss functions, and \(T^{2}\) keeps two loss functions at the same level of magnitude. Combining \(\mathcal{L}_{KL}\) and \(\mathcal{L}_{KD}\) helps the student get the guidance of the teacher and the feedback from the ground truth. This ensures that the student model learns effectively and reduces the risk of being misled. Fig. 1: The overall framework for prototyping and deploying BearingPGA-Net. \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline Layer & Kernel size & Channel & Output size & Padding & Stride \\ \hline Conv1d & 64x4 & 4 & 128x4 & 28 & 8 \\ \hline ReLU & - & - & 128x4 & - & - \\ \hline MaxPool & 2×1 & 4 & 64x4 & 0 & 2 \\ \hline Linear & (256,10) & - & 10 & - & - \\ \hline \end{tabular} \end{table} TABLE I: The structure parameters of BearingPGA-Net. However, \(\mathcal{L}_{KL}\) implies that all logits from the teacher are transferred to the student on an equal booting. Intuitively, student models should have the ability to filter the knowledge they receive. In other words, knowledge relevant to the current target category should be reinforced, while knowledge far from the target category should be attenuated. The coupling in classical KD harms the effectiveness and flexibility across various tasks. To address this, decoupled knowledge distillation, which factorizes \(\mathcal{L}_{KL}\) into a weighted sum of two terms (the target class and non-target class), was proposed [31]. Let \(p_{t}\) and \(p_{/t}\) be probabilities of the target and non-target classes, respectively. Then, we have \[p_{t}=\frac{\exp{(z_{t}/T)}}{\sum_{j=1}^{C}\exp{(z_{j}/T)}},\ p_{/t}=\frac{ \sum_{i=1,i\neq t}^{C}\exp{(z_{i}/T)}}{\sum_{j=1}^{C}\exp{(z_{j}/T)}}, \tag{5}\] and \[\hat{p}_{t}=\frac{p_{t}}{p_{/t}}=\frac{\exp{(z_{t}/T)}}{\sum_{i=1,i\neq t}^{C} \exp{(z_{i}/T)}}. 
\tag{6}\] Then, \(\mathcal{L}_{KL}\) can be factorized as \[\begin{split}&\mathcal{L}_{KL}(p^{\mathcal{T}}(z,T),p^{\mathcal{S}}(z,T))\\ =&-p_{t}^{\mathcal{T}}\log\frac{p_{t}^{\mathcal{S}}}{p_{t}^{\mathcal{T}}}-\sum_{i=1,i\neq t}^{C}p_{i}^{\mathcal{T}}\log\frac{p_{i}^{\mathcal{S}}}{p_{i}^{\mathcal{T}}}\\ =&\underbrace{-p_{t}^{\mathcal{T}}\log\frac{p_{t}^{\mathcal{S}}}{p_{t}^{\mathcal{T}}}-p_{/t}^{\mathcal{T}}\log\frac{p_{/t}^{\mathcal{S}}}{p_{/t}^{\mathcal{T}}}}_{\mathcal{L}_{KL}^{TCKD}}-p_{/t}^{\mathcal{T}}\underbrace{\sum_{i=1,i\neq t}^{C}\hat{p}_{i}^{\mathcal{T}}\log\frac{\hat{p}_{i}^{\mathcal{S}}}{\hat{p}_{i}^{\mathcal{T}}}}_{\mathcal{L}_{KL}^{NCKD}},\end{split} \tag{7}\] where \(\mathcal{L}_{KL}^{TCKD}\) denotes the KL loss between the teacher's and student's probabilities of the target class, named target class knowledge distillation (TCKD), while \(\mathcal{L}_{KL}^{NCKD}\) denotes the KL loss between the teacher's and student's probabilities of the non-target classes, named non-target class knowledge distillation (NCKD). The derivation can also be found in [31]. Thus, the entire DKD loss function is \[\mathcal{L}_{DKD}=(1-\alpha)\mathcal{L}_{CE}(y,p^{\mathcal{S}})+\alpha T^{2}(\beta\mathcal{L}_{KL}^{TCKD}+\gamma\mathcal{L}_{KL}^{NCKD}), \tag{8}\] where \(p_{t}\) and \(p_{/t}\) are merged into two hyperparameters \(\beta\) and \(\gamma\) to coordinate the contributions of the two terms. In bearing fault diagnosis, encouraging TCKD while suppressing NCKD (\(\beta>\gamma\)) often leads to performance improvement. By optimizing this decoupled loss, the knowledge acquired by the teacher model is more easily imparted to the student model, thereby improving the student network's performance. ### _Deploying BearingPGA-Net into FPGA_ The PYNQ framework can directly convert Python code into FPGA bitstream files, but it is specifically designed for the architecture of a System-on-Chip (SoC) FPGA. This type of FPGA integrates an ARM processor and an FPGA chip into a single package. However, the FPGA in such an SoC has only 50k look-up tables (LUTs), while other FPGAs have over 200k LUTs. As a result, the computation speed of the SoC FPGA is much slower [22, 23] than that of other FPGAs. In contrast, without bells and whistles, we design a simple yet pragmatic CNN acceleration scheme that directly translates the network computation modules into logic circuitry, i.e., the modules are deployed directly on the FPGA chip without any translation through an ARM processor, thereby fully leveraging the FPGA's high-speed computing capabilities. Specifically, we convert the operations of BearingPGA-Net into Verilog and optimize the place and route (P&R) of the logical connection netlist, including logic gates, RAM, and flip-flops, to implement hardware acceleration. The entire process includes register transfer level (RTL) design, simulation and verification, synthesis, implementation (P&R), editing timing constraints, generation of bitstream files, and the FPGA configuration. Notably, our scheme is also scalable to other FPGA chips such as the Virtex-7 series. **1) Fixed-point quantization.** The original network in Python employs 32-bit floating-point numbers for high-accuracy calculations. However, these 32-bit numbers consume a significant amount of memory, which prevents even a single-layer network from being deployed. To address this, we employ fixed-point quantization, which converts the numbers to a 16-bit fixed-point format. In this format, a fixed number of bits is allocated to the sign, the integer part, and the decimal part of each number.
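As an illustration of this conversion (formalized in Eq. (9) below), the following Python sketch quantizes floating-point parameters onto a signed fixed-point grid. It is a simplified stand-in for the deployed Verilog logic, and the (2, 13) integer/decimal split used in the example is only one of the per-layer allocations shown in Fig. 2.

```
import numpy as np

def to_fixed_point(values, int_bits, frac_bits):
    # Snap floats to a grid with 1 sign bit, `int_bits` integer bits and
    # `frac_bits` fractional bits, clipping magnitudes that would overflow.
    scale = 2.0 ** frac_bits
    max_mag = 2.0 ** int_bits - 2.0 ** (-frac_bits)
    quantized = np.round(np.asarray(values) * scale) / scale
    return np.clip(quantized, -max_mag, max_mag)

# Example with a (2, 13) integer/decimal split (1 + 2 + 13 = 16 bits).
weights = np.array([0.7391, -1.2004, 3.9999, -4.5])
print(to_fixed_point(weights, int_bits=2, frac_bits=13))
```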
The quantization reduces the bit-width to half of the original, at the cost of only a minor decrease in performance. The fixed-point number is expressed as \[Q_{X,Y}=\pm\sum_{i=0}^{X-1}b_{i}\cdot 2^{i}+\sum_{j=1}^{Y}b_{X+j}\cdot 2^{-j}, \tag{9}\] where \(Q_{X,Y}\) is a fixed-point number that contains \(X\)-bit integers and \(Y\)-bit decimals, and \(b_{i}\) and \(b_{X+j}\) are the values of the corresponding binary bits. The storage format of fixed-point numbers in an FPGA is \((\pm,b_{i},\cdot,b_{X+j})\). Furthermore, when considering a fixed bit-width, it is crucial to determine the trade-off between the bit-widths of integers and decimals. It is important to have a sufficiently large range to minimize the probability of overflow while ensuring a small enough resolution to reduce the quantization error [34]. In this study, a dynamic fixed-point quantization is assigned to each layer of BearingPGA-Net to preserve the overall accuracy [35, 36]. The specific bit-widths for integers and decimals are determined based on the maximum and minimum values of the parameters in each layer, so different models may have different bit-widths. The allocation of bit-widths of integers and decimals in each layer is illustrated in Fig. 2. Fig. 2: An example of bit-width of integers and decimals in each layer of BearingPGA-Net on FPGA computation, where \((S,X,Y)\) denotes the number of symbol bit, integer bits, and decimal bits, respectively. **2) The overall workflow.** As shown in Fig. 3, parameters are stored in the ROM after quantization, and we implement the BearingPGA-Net accelerator (FFT, convolutional layer, max-pooling conjugated with ReLU, fully-connected layer) using Verilog. The FFT operation is performed by the IP core, and we carefully design logic circuit modules to carry out all operations of BearingPGA-Net. Moreover, some specific modules used in the FPGA are described below: Receiving filter (RF) selector: It splits the input signal into 128 segments of 64 points with a stride of 8, which is equivalent to sliding a convolutional kernel of \(1\times 64\) with a stride of 8 over the input signal. ROM control (Ctrl): It retrieves weight parameters from the ROM for the convolutional and fully-connected layers. For the convolutional layer, it reads the 256 convolutional weight parameters (4 \(\times\) 64 kernels) at a time, and it reads 10 weight parameters per cycle to update the input of the multiplication-accumulation module for the fully-connected layer. In addition, the FPGA directly stores the bias parameters (4 for the convolutional layer and 10 for the fully-connected layer) in registers. Shift module: It shifts the position of the decimal point, which converts the bit-width of integers and decimals from (2,13) to (7,8) to adjust the fixed-point numbers in the downstream fully-connected layer. Classification module: It selects the maximum value among the 10 outputs of the fully-connected layer as the failure category of the bearing. Softmax is omitted in the FPGA deployment. **3) BearingPGA-Net accelerator.** CNN accelerator designs can generally be classified into two groups: computing acceleration and memory system optimization [24]. Since our BearingPGA-Net consists of only one convolutional layer and one fully-connected layer, memory management does not require additional optimization. We focus on optimizing the FPGA acceleration of each layer in BearingPGA-Net. Multiplication-accumulation unit. As FPGA registers cannot directly perform matrix calculations, we design the multiplication-accumulation unit beforehand.
The logic circuit diagram of the multiplication-accumulation unit is shown in Fig. 4, where the accumulation operation is executed in \(N\) cycles, and the result register is used to store the temporary results in each cycle. The expression of the multiplication-accumulation unit is \[y=\sum_{i}^{N}(p_{i}\times q_{i}), \tag{10}\] where \(p_{i}\) and \(q_{i}\) are 16-bit fixed-point numbers. Convolutional layer. Fig. 5 illustrates the implementation of the convolution layer. The signal after passing through the RF selector is divided into segments of \(128\times 64\). Subsequently, the multiplication-accumulation unit uses weights \(w\in\mathbb{R}^{1\times 64}\) stored in the ROM to multiply the signal \(s_{i}\in\mathbb{R}^{1\times 64}\). Each multiplication-accumulation unit runs for 64 cycles and adds the bias \(b\). The equation for this operation is \(z_{i}=\sum_{j=1}^{64}(s_{i,j}\times w_{j})+b\). Moreover, hardware acceleration is achieved using 128 parallel multiplication-accumulation units. This enables each convolutional kernel to complete the full convolution in just 64 cycles. The parallel output is merged into \(\mathbf{z}=\{z_{1},z_{2},\cdots,z_{128}\}\). With 4 convolutional kernels, the layer requires 4 external cycles, totaling 256 cycles. The speedup is 128-fold relative to conventional implementations. ReLU and max-pooling layer. ReLU activation and max-pooling are back-to-back in the BearingPGA-Net. Also, both ReLU and max-pooling functions attempt to find the maximum value between two numbers. Therefore, we fuse them into a single module to save the inference time. Suppose that the input signal is \(\mathbf{x}\in\mathbb{R}^{1\times n}\), the ReLU activation function is \[p_{i}=\max(0,x_{i}),\ i=1,2,\cdots,n, \tag{11}\] followed by the max-pooling layer, where the window size is \(k\) and the stride is \(s\). The output of this layer is \[q_{i}=\max(\{p_{j}\}_{j=i\times s}^{i\times s+k-1}),\ i=1,2,\cdots,\lfloor \frac{n-k}{s}\rfloor+1, \tag{12}\] Fig. 4: The circuit diagram of a multiplication-accumulation unit, which includes a fixed-point multiplier, a fixed-point adder, a reset register, and a result register. Fig. 3: The overall diagram for deploying the BearingPGA-Net into FPGA. where \(\lfloor.\rfloor\) is the floor function to ensure the last pooling window is fully contained within the signal length. We devise a strategy to reduce the amount of computation as Algorithm 1 shows. Two 16-bit numbers, denoted as \(x_{1}\) and \(x_{2}\), are firstly compared in terms of their symbol bit. If both numbers are smaller than 0, the output is set to 0, thus executing the ReLU function directly. On the other hand, if the numbers have opposite signs, the negative number is killed. These operations can save one maximum comparison. Numerical comparisons are only performed when both numbers are positive, in which case the complete ReLU and max-pooling operations are executed. By adopting our method, we are able to save approximately \(2/3\) of LUT resources. ``` 0:\(x_{1},x_{2}\) (\(x[15]\) symbol bit, \(x[14:0]\) number bits) 1:if\(x_{1}[15]>0\) AND \(x_{2}[15]<0\)then 2:return\(x_{1}\) 3:else if\(x_{1}[15]<0\) AND \(x_{2}[15]>0\)then 4:return\(x_{2}\) 5:else if\(x_{1}[15]<0\) AND \(x_{2}[15]<0\)then 6:return\(0\) 7:else if\(x_{1}[15]>0\) AND \(x_{2}[15]>0\)then 8:if\(x_{1}[14:0]>=x_{2}[14:0]\)then 9:return\(x_{1}\) 10:else if\(x_{1}[14:0]<x_{2}[14:0]\)then 11:return\(x_{2}\) 12:endif 13:endif ``` **Algorithm 1** ReLU and Max-pooling Fully-connected layer. 
The fully-connected layer establishes connections between the 256 outputs of the ReLU and max-pooling fusion layer and the final 10 outputs. Unlike on computers, the fully-connected layer in FPGAs consumes fewer resources, as there is no need to slide windows. For the fully-connected layer, we reuse the multiplication-accumulation units. Initially, the size of the weights in the fully-connected layer is \(256\times 10\). To simplify the computations, we divide this matrix into 256 segments, each consisting of 10 weights. Consequently, 10 multiplication-accumulation units perform parallel computation over 256 cycles to generate the output. The implementation for this design is illustrated in Fig. 6. In summary, our FPGA accelerator offers two main advantages: i) Parallelism: multiplication-accumulation units are calculated simultaneously, which greatly boosts computational efficiency. ii) Module reuse: due to the constraint of limited FPGA computing resources, which cannot parallelize 1024 units concurrently, multiplication-accumulation units are reused to strike a balance between resources and speed. ## IV Experiments ### Datasets Descriptions **1) CWRU dataset.** This widely-used dataset is curated by the Case Western Reserve University Bearing Data Center, and consists of two deep groove ball bearings mounted on the fan-end (FE) and drive-end (DE) of the electric motor. Electro-discharge machining is used to inject single-point defects with diameters of 7, 14, and 21 mils into the outer race, inner race, and ball of both bearings, respectively. As a result, there are ten categories in this dataset, including nine types of faulty bearings and one healthy bearing. Specifically, the motor shaft is subjected to four levels of load (0HP, 1HP, 2HP, 3HP, where HP denotes horsepower), which slightly affects the motor speed (1797 r/min, 1772 r/min, 1750 r/min, 1730 r/min). Vibration data are collected at 12 kHz and 48 kHz. In this study, we analyze the vibration signals collected at 12 kHz on the DE side of the motor. **2) HIT dataset.** The bearing fault test is conducted in the MIIT Key Laboratory of Aerospace Bearing Technology and Equipment at Harbin Institute of Technology (HIT). \begin{table} \begin{tabular}{|l|l|l|l|} \hline Label & Faulty Mode & Label & Faulty Mode \\ \hline 1 & Health & 6 & OR (Moderate) \\ \hline 2 & Ball cracking (Minor) & 7 & OR (Severe) \\ \hline 3 & Ball cracking (Moderate) & 8 & IR (Minor) \\ \hline 4 & Ball cracking (Severe) & 9 & IR (Moderate) \\ \hline 5 & OR cracking (Minor) & 10 & IR (Severe) \\ \hline \end{tabular} \end{table} TABLE II: Ten healthy statuses in our HIT dataset. OR and IR denote that the faults appear in the outer race and inner race, respectively. Fig. 5: The implementation diagram of the convolutional layer. Fig. 6: The implementation diagram of the fully-connected layer. Fig. 7 shows the bearing test rig and the faulty bearings used in the experiment. We utilize HC7003 angular contact ball bearings, which are designed for high-speed rotating machines. The accelerometer is directly attached to each bearing to collect vibration signals of the bearings. Similar to the CWRU dataset, we injected defects at the outer race (OR), inner race (IR), and ball at three severity levels (minor, moderate, severe). Tab. II presents the ten healthy statuses of bearings in our dataset. For the test, a constant motor speed of 1800 r/min is set, and vibration signals are acquired using the NI USB-6002 device at a sampling rate of 12 kHz.
Each bearing vibration is recorded for 47 seconds, resulting in 561,152 data points per category. Our dataset is more challenging than the CWRU dataset because the bearing faults in our dataset are cracks of the same size but varying depths whose vibration signals between different faults exhibit more similarity. ### _Experimental Configurations_ **1) Data preprocessing.** First, the raw long signal is divided into segments of 2,048 points and standardized. Next, the Fast Fourier Transform (FFT) is applied to convert the signal from the time domain to the frequency domain. Previous research has experimentally demonstrated that employing signal processing techniques such as FFT and wavelet transform, can enhance the performance of shallow neural networks [37]. Because the characteristics of non-stationary vibration signals are more pronounced in the frequency or time-frequency domain than the time domain, this compensates for the constrained feature extraction capability of shallow networks. Specifically, the starting point of the segment of 2,048 points is randomly chosen, and the stride is set to 28 to resample the raw signal. All classes of signals are sampled 1,000 times, resulting in a total of 10,000 samples. Then, the dataset is randomly divided into training, validation, and testing sets with a ratio of 2:1:1. Next, the Gaussian noise is added to the signals to simulate noise in the industrial fields and also evaluate different models' performance in noisy experiments. The signal-to-noise ratio (SNR) is calculated as \(\text{SNR}=10\log 10(P_{s}/P_{n})\), where \(P_{s}\) and \(P_{n}\) are the average power of the signal and noise, respectively. Lastly, the signals are standardized using z-score standardization. Notably, each signal \(\mathbf{x}\) is transformed as \(\mathbf{x}^{\prime}=(\mathbf{x}-\mu)/\sigma\), where \(\mu\) and \(\sigma\) are the mean and standard deviation of the signal \(\mathbf{x}\), respectively. For the CWRU dataset, the SNR ranges from -6dB to 2dB in 2dB intervals. For our dataset, the SNR ranges from 0dB to 8dB in 2dB intervals. As shown in Fig. 8, the signal amplitude in our dataset is lower than in the CWRU dataset. Therefore, even relatively low-intensity noise (0dB SNR) can overshadow the signal. **2) Baselines.** We compare our method against four state-of-the-art lightweight baselines: wide deep convolutional neural networks (WDCNN) [33], lightweight efficient feature extraction networks (LEFE-Net) [38], lightweight transformers with convolutional embeddings and linear self-attention (CLFormer) [19], and knowledge distillation-based student convolutional neural networks (KDSCNN) [23]. All these models are published in flagship journals of this field in recent years. As summarized in Tab. III, our model enjoys the smallest model size, relatively low computational complexity, and the second shortest inference time compared to its competitors. Although our model has slightly higher computational complexity and inference time than KDSCNN due to the introduced FFT, it has a substantially smaller number of parameters (2.83K vs 5.89K), while the model size is the first priority for deployment on FPGAs. **3) Implementation settings.** All experiments are conducted in Windows 10 with Intel(R) Core(TM) 11th Gen i5-1135G7 at 2.4GHz CPU and one NVIDIA RTX 3080 8GB GPU. Our code is written in Python 3.10 with PyTorch 2.1.0. 
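Since the code base is PyTorch, a compact sketch of the decoupled objective from Eq. (8) may help connect the hyperparameters searched below to the individual loss terms. The function is our own illustration rather than the released implementation; tensor shapes, reductions, and the example hyperparameter values are assumptions.

```
import torch
import torch.nn.functional as F

def dkd_loss(student_logits, teacher_logits, target, alpha, beta, gamma, T):
    # Cross-entropy with the ground truth, Eq. (2); logits: (N, C), target: (N,)
    ce = F.cross_entropy(student_logits, target)

    C = student_logits.size(1)
    mask = F.one_hot(target, num_classes=C).bool()

    # Target vs. non-target binary distributions for the TCKD term, Eq. (5)
    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    def binary(p):
        pt = p.masked_select(mask).unsqueeze(1)   # probability of the target class
        return torch.cat([pt, 1.0 - pt], dim=1)   # [p_t, p_{/t}]
    tckd = F.kl_div(binary(p_s).log(), binary(p_t), reduction="batchmean")

    # Distributions re-normalized over the non-target classes for the NCKD term, Eq. (6)
    nt_s = student_logits[~mask].view(-1, C - 1)
    nt_t = teacher_logits[~mask].view(-1, C - 1)
    nckd = F.kl_div(F.log_softmax(nt_s / T, dim=1),
                    F.softmax(nt_t / T, dim=1), reduction="batchmean")

    # Decoupled objective, Eq. (8)
    return (1 - alpha) * ce + alpha * (T ** 2) * (beta * tckd + gamma * nckd)

# Hypothetical call; the values are illustrative, not the tuned settings.
# loss = dkd_loss(s_logits, t_logits, labels, alpha=0.2, beta=4.0, gamma=1.0, T=3.0)
```

Setting \(\beta>\gamma\) in such a sketch emphasizes the target-class term, in line with the observation above that encouraging TCKD while suppressing NCKD tends to improve performance.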
For all compared models, we use stochastic gradient descent (SGD) [39] as the optimizer with momentum=0.9 and Cosine Annealing LR [40] as the learning rate scheduler. \begin{table} \begin{tabular}{|l|c|c|c|} \hline Method & \#Parameters & \#FLOPs & Inference time \\ \hline LEFE-Net & 73.95K & 14.7M & 22.4540s \\ \hline WDCNN & 66.79K & 1.61M & 1.6327s \\ \hline CLFormer & 4.95K & 0.63M & 0.7697s \\ \hline KDSCNN & 5.89K & 70.66K & 0.2219s \\ \hline BearingPGA-Net & 2.83K & 78.34K & 0.6431s \\ \hline \end{tabular} \end{table} TABLE III: The properties of compared models. #FLOPs denotes floating point operations. Time is the elapsed time to infer 1000 samples on an NVIDIA GeForce RTX 3080 GPU. Fig. 8: Comparison of two datasets under noise. Fig. 7: The test rig of faulty bearings to collect data. The training epochs are set to 75, and we use grid search to find the best hyperparameters. The batch size is searched over \([32,64,128,256]\) and the learning rate over \([10^{-4},3\times 10^{-4},10^{-3},3\times 10^{-3},0.01,0.03,0.1,0.3]\). In KD-based methods, we search \(T\) from \([1,2,3,4,5,6,7,8,9]\), and \(\alpha\) from \([0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]\), while in our model we additionally have to search \(\beta\) and \(\gamma\). We configure \([0,0.2,0.5,1,2,4]\) for \(\beta\) and \([1,2,4,8,10]\) for \(\gamma\), respectively. **4) Evaluation metrics.** We use the F1 score to validate the performance of the proposed method. The metric is defined as \[\mathrm{F1~{}score}=\frac{1}{\mathrm{C}}\mathrm{\sum}_{i=1}^{\mathrm{C}}\frac{2\mathrm{TP}_{i}}{2\mathrm{TP}_{i}+\mathrm{FP}_{i}+\mathrm{FN}_{i}},\] where \(\mathrm{TP}_{i}\), \(\mathrm{FP}_{i}\), and \(\mathrm{FN}_{i}\) are the numbers of true positives, false positives, and false negatives for the \(i\)-th class, and \(\mathrm{C}\) denotes the number of classes. All results are the average of ten runs. **5) FPGA description.** We utilize the ALINX 7325B FPGA as the hardware deployment board, as shown in Fig. 9. This board incorporates the Kintex-7 XC7K325T FPGA chip, which is widely used in the industrial field due to its optimal balance between price and performance-per-watt. The clock frequency is set to 100MHz, and we utilize four LEDs as indicators of the fault diagnosis result, where the ten categories are encoded as 4-bit binary numbers. The specifications of the FPGA chip are summarized in Tab. IV. ### Classification Results Here, we compare the performance of the student model (BearingPGA-Net) with its competitors. **1) Results on the CWRU and HIT datasets.** On the CWRU dataset, as shown in Tab. V, BearingPGA-Net achieves the top F1 scores in the 0HP and 1HP conditions and the second-best scores in the 2HP and 3HP conditions. Moreover, our method maintains an over 95% F1 score at the -6dB noise level, whereas KDSCNN performs the worst on the CWRU-0HP dataset at -6dB noise (79.95%). Although WDCNN performs comparably, its number of parameters is approximately 22 times larger than ours. These results demonstrate BearingPGA-Net's strong diagnosis performance despite its small parameter size. On the HIT dataset, as Tab. VI shows, our method achieves the highest average F1 score in noisy scenarios. Although LEFE-Net slightly outperforms ours on clean signals, the difference is only 1%, and LEFE-Net has a substantially larger model size. Furthermore, the performance of the other models (CLFormer: 76.37%, KDSCNN: 74.98%) falls behind ours (83.30%) by a large margin. Overall, our model is the best performer on this dataset.
**2) Model compression results.** We compare the compression performance of DKD. Firstly, Tab. VII shows the properties of the teacher and student models. DKD effectively compresses the teacher model, reducing the model parameters by 17 times, the FLOPs by 10.6 times, and the inference time by 2.45 times. Secondly, Tab. VIII compares the performance of the models. Clearly, DKD enhances the performance of the student model, though it remains slightly below the teacher (with an average F1 score approximately 1% lower). Comparing the student model with and without DKD, distillation provides considerable improvements, especially in noisy environments. On CWRU-3HP, DKD improves the average F1 score by 3.7%. Notably, DKD yields 8.23% and 8.46% gains at -6dB on CWRU-2HP and at 0dB on the HIT dataset, respectively. These results demonstrate that DKD can successfully transfer knowledge to shallow models, despite substantial compression. We further analyze the sensitivity of DKD to these hyperparameters. The results are presented in Tab. X. Firstly, the parameter of temperature (\(T\)) does not appear to be highly influential. While a larger value of \(T\) leads to slightly better performance, the improvement is only about 1%. Secondly, the analysis shows that all three scale parameters have a significant impact. In particular, setting \(\alpha\) to 0.2 yields superior results. Once \(\alpha\) surpasses 0.3, there is a noticeable decline in performance, with an approximately 64% drop in F1 score. Unlike \(\alpha\), \(\beta\) exhibits less sensitivity, but increasing its value still leads to improved results. Conversely, a lower value (below 2) is recommended for the parameter \(\gamma\). The aforementioned phenomenon exemplifies the characteristics of DKD. Regarding the parameter \(\alpha\), it is utilized to regulate both the \(\mathcal{L}_{CE}\) and \(\mathcal{L}_{KL}\) losses. A lower value of \(\alpha\) implies that the student network predominantly learns from the ground truth, while any knowledge transmitted by the teacher model serves merely as supplementary information. Next, \(\beta\) and \(\gamma\) are employed to control the TCKD and NCKD losses, respectively. Consistent with the findings of the ablation experiments, a larger value for \(\beta\) and a smaller value for \(\gamma\) yield optimal results, as they compel the model to effectively harness the knowledge pertaining to the target classes. **3) Effects of quantization.** Tab. XI reveals that the quantized model demonstrates a lower-than-0.4% performance drop in terms of the F1, Recall, and Precision scores relative to the original model. Additionally, the parameters in the quantized model are reduced by more than half. These findings highlight the effectiveness of the designed quantization method and convolution architecture for FPGA deployment. Furthermore, the confusion matrices of the FPGA and PyTorch models are compared. As illustrated in Fig. 10, the FPGA model exhibits \(>18\) errors in classifying ball faults (B0-B2), \(<4\) errors for inner race faults (IR0-IR2), and 1 additional error in healthy bearings, compared to the PyTorch results. When deployed on the FPGA, BearingPGA-Net generates only a total of 24 extra errors compared to the original version across 2500 samples. **4) Resource analysis.** The resource usage for the entire network is summarized in Tab. XII. The LUT resources and block RAM (BRAM) resources exhibit utilization rates of 74.40% and 58.20%, respectively, indicating that BearingPGA-Net makes good use of the FPGA's computational and memory resources.
Next, we analyze the resource consumption specifically within the LUT, which is the key computational resource in an FPGA. As illustrated in Fig. 11, the convolution and FFT modules account for the majority of resources in the LUT, constituting approximately 50% and 23%, respectively. This observation underscores the limited computational resources available on a Kintex-7 FPGA, _i.e._, a single convolutional layer alone consumes a half LUT resource. Moreover, we record the model inference time and power consumption on both the CPU and the FPGA, where the Python project is deployed on a portable Intel NUC Mini PC. As presented in Tab. XIII, the FPGA demonstrates an inference time that is approximately \(1/201\) of the CPU's and a power consumption that is nearly \(1/42\) of the CPU. This significant reduction is attributed to the flexibility of FPGAs in processing CNN operations to enhance efficiency. Additionally, FPGA employs parallel computing techniques, with the advantages of low power consumption and accelerated computing rates. Such low power consumption also makes online bearing fault monitoring and diagnosis possible. ## V Conclusions In this paper, we have proposed a lightweight and deployable CNN, referred to as BearingPGA-Net. By integrating FFT and DKD, BearingPGA-Net outperforms other state-of-the-art lightweight models when signals are noisy. Additionally, we have fully utilized the computational resources of the Kintex-7 FPGA and designed CNN accelerating scheme. The parallelism and model reuse significantly improves the running speed of our method, while the customized parameter quantization greatly alleviates the performance drop. Our deployment scheme has achieved over 200 times faster diagnosis speed compared to CPU and maintained over 97% \begin{table} \begin{tabular}{|l|c c c c c|} \hline \(T\) & 1.5 & 2 & 2.5 & 3 & 3.5 \\ \hline F1(\%) & 95.28 & 94.85 & 95.20 & **96.19** & 96.18 \\ \hline \(\alpha\) & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 \\ \hline F1(\%) & 92.49 & **96.19** & 32.13 & 32.25 & 32.19 \\ \hline \(\beta\) & 0.2 & 0.5 & 1 & 2 & 4 \\ \hline F1(\%) & 89.49 & 91.08 & 92.84 & 95.07 & **96.20** \\ \hline \(\gamma\) & 0.2 & 0.5 & 1 & 2 & 4 \\ \hline F1(\%) & **96.41** & 95.58 & 96.19 & 96.30 & 32.09 \\ \hline \end{tabular} \end{table} TABLE X: The hyperparameter sensitivity in DKD on the CWRU-0HP dataset with -6dB noise. \begin{table} \begin{tabular}{|l|c c c|} \hline Hardware & Inference time & Power Consumption \\ \hline Intel [email protected] & 1160us & 28W \\ \hline Kintex-7 XC7K325T@100MHz & 5.77us & 0.67W \\ \hline \end{tabular} \end{table} TABLE XIII: The inference time and power consumption of BearingPGA-Net on CPU and FPGA for 1000 samples. \begin{table} \begin{tabular}{|l|c c c|} \hline Resources & Used resources & Available resources & Resource occupancy \\ \hline LUT & 151637 & 203800 & 74.40\% \\ LUTRAM & 936 & 64000 & 1.46\% \\ FF & 180099 & 407600 & 44.19\% \\ BRAM & 259 & 445 & 58.20\% \\ DSP & 185 & 840 & 22.02\% \\ IO & 6 & 500 & 1.20\% \\ BUFG & 3 & 32 & 9.38\% \\ PLL & 1 & 10 & 10.00\% \\ \hline \end{tabular} \end{table} TABLE XII: The resource report of Kintex-7 XC7K325T FPGA Fig. 11: The LUT consumption with respect to each module. \begin{table} \begin{tabular}{|l|c c c c|} \hline & \#Parameters & F1 (\%) & Recall (\%) & Precision (\%) \\ \hline PyTorch & 2.83K & 97.39 & 97.40 & 97.57 \\ \hline FPGA & 1.42K & 97.12 & 97.34 & 97.12 \\ \hline \end{tabular} \end{table} TABLE XI: Comparison of 32-bit Pytorch and 16-bit FPGA models on HIT dataset. Fig. 
10: The confusion matrices of results produced by models before and after quantization on the HIT dataset, where B, IR, OR denote faults at the ball, the inner race, and the outer race, respectively, and H denotes the healthy bearing. classification F1 score on the HIT bearing dataset. Finally, our model introduces a new approach to realizing online bearing fault diagnosis.
2305.07417
Recent results from DANSS
DANSS is a solid state scintillator neutrino spectrometer placed at a small distance from the commercial nuclear reactor of Kalininskaya NPP. The distance from the detector to the center of the reactor core can be changed online in the range 10.9-12.9 m. This fact together with a very high neutrino counting rate (more than 5000 events per day) and low background makes DANSS an ideal detector to search for neutrino oscillations in $1~eV^2 \Delta m^2$ range. We report the results based on the statistics of 6 million events, obtained between April 2016 and March 2022. The results include limits in the short range oscillation parameter space, fuel evolution studies and the bump in the neutrino spectrum. The talk will also cover our plans of the detector upgrade.
Igor Alekseev
2022-12-21T11:30:32Z
http://arxiv.org/abs/2305.07417v1
# Recent results from DANSS ###### Abstract: DANSS is a solid state scintillator neutrino spectrometer placed at a small distance from the commercial nuclear reactor of Kalininskaya NPP. The distance from the detector to the center of the reactor core can be changed online in the range 10.9-12.9 m. This fact together with a very high neutrino counting rate (more than 5000 events per day) and low background makes DANSS an ideal detector to search for neutrino oscillations in 1 eV\({}^{2}\Delta m^{2}\) range. We report the results based on the statistics of 6 million events, obtained between April 2016 and March 2022. The results include limits in the short range oscillation parameter space, fuel evolution studies and the bump in the neutrino spectrum. The talk will also cover our plans of the detector upgrade. Details of the DANSS detector and the physics results of the first year of its operation can be found elsewhere [1, 2], as well as the results obtained by the previous year [3, 4]. This short contribution concentrates on our progress during the last year. Our progress in the accumulation of inverse beta-decay (IBD) statistics is illustrated in fig. 1. We now have data for 3 full fuel campaigns and 4 reactor-off periods. Important progress has been made in the understanding of our calibration. In addition to the reactions already used for calibration purposes, the delayed event spectrum is also analyzed as a calibration source (fig. 2, left). The calibration set now includes \({}^{12}\)B decays from two reactions induced by atmospheric muons, \(n+^{12}\)C\(\rightarrow^{12}\)B\(+p\) and \(\mu^{-}\) capture by \({}^{12}\)C; \({}^{22}\)Na and \({}^{60}\)Co radioactive sources; neutrons from \({}^{248}\)Cm fission and IBD; and stopped muon decays. The \({}^{12}\)B decay data are used to set the scale, because the behavior of the produced electron is the most similar to that of the IBD positron we need to measure, and these data are accumulated uniformly during the run. Additional smearing is no longer added to the Monte-Carlo simulations to achieve a good reproduction of the experimental data. The scale of all the sources in the calibration set agrees within \(\pm 0.2\%\), with the exception of \({}^{22}\)Na, which has an offset of 1.8%. The problem could be contamination of the sample with \({}^{26}\)Al, which has a slightly different decay energy. We keep a 2% systematic error in the energy scale until we find a solution to this problem. Continuous reactor monitoring during 3 full fuel campaigns allows us to study the evolution of the counting rate and the neutrino spectrum with the change in the fuel composition. The dependence of the rate on the \({}^{239}\)Pu fission fraction is shown in fig. 2 (middle). Our data demonstrate a slope slightly steeper than the slope coming from MC simulations with the HM model [5, 6], while the Daya Bay data show a less steep slope [7]. We also have new results in the light sterile neutrino search. After the new data were included in the analysis, the 90% confidence level limit on the sterile neutrino parameters in the region of \(\Delta m^{2}\sim 0.9\) eV\({}^{2}\) became as stringent as \(\sin^{2}2\theta<0.004\) (Gaussian CLs method), but two points, both with \(\Delta\chi^{2}\sim-10\), manifested themselves. A Feldman-Cousins approach was used to obtain their significance. Figure 1: IBD statistics accumulation during 6 years of DANSS operation. The result is shown in fig. 2 (right). The dark blue area corresponds to the \(3\sigma\) limit. A much more conservative \(3\sigma\) limit from the Gaussian CLs method is shown by the red line for comparison.
The best-fit point has a significance of \(2.35\sigma\), which is much less than needed to claim an indication of the existence of a 4th neutrino. The collaboration appreciates the permanent assistance of the KNPP administration and the Radiation Safety Department staff. This work is supported by the Ministry of Science and Higher Education of the Russian Federation under Contract No. 075-15-2020-778.
2309.08397
Topological Exploration using Segmented Map with Keyframe Contribution in Subterranean Environments
Existing exploration algorithms mainly generate frontiers using random sampling or motion primitive methods within a specific sensor range or search space. However, frontiers generated within constrained spaces lead to back-and-forth maneuvers in large-scale environments, thereby diminishing exploration efficiency. To address this issue, we propose a method that utilizes a 3D dense map to generate Segmented Exploration Regions (SERs) and generate frontiers from a global-scale perspective. In particular, this paper presents a novel topological map generation approach that fully utilizes Line-of-Sight (LOS) features of LiDAR sensor points to enhance exploration efficiency inside large-scale subterranean environments. Our topological map contains the contributions of keyframes that generate each SER, enabling rapid exploration through a switch between local path planning and global path planning to each frontier. The proposed method achieved higher explored volume generation than the state-of-the-art algorithm in a large-scale simulation environment and demonstrated a 62% improvement in explored volume increment performance. For validation, we conducted field tests using UAVs in real subterranean environments, demonstrating the efficiency and speed of our method.
Boseong Kim, Hyunki Seong, D. Hyunchul Shim
2023-09-15T13:47:18Z
http://arxiv.org/abs/2309.08397v1
# Topological Exploration using Segmented Map with ###### Abstract Existing exploration algorithms mainly generate frontiers using random sampling or motion primitive methods within a specific sensor range or search space. However, frontiers generated within constrained spaces lead to back-and-forth maneuvers in large-scale environments, thereby diminishing exploration efficiency. To address this issue, we propose a method that utilizes a 3D dense map to generate Segmented Exploration Regions (SERs) and generate frontiers from a global-scale perspective. In particular, this paper presents a novel topological map generation approach that fully utilizes Line-of-Sight (LOS) features of LiDAR sensor points to enhance exploration efficiency inside large-scale subterranean environments. Our topological map contains the contributions of keyframes that generate each SER, enabling rapid exploration through a switch between local path planning and global path planning to each frontier. The proposed method achieved higher explored volume generation than the state-of-the-art algorithm in a large-scale simulation environment and demonstrated a 62% improvement in explored volume increment performance. For validation, we conducted field tests using UAVs in real subterranean environments, demonstrating the efficiency and speed of our method. ## I Introduction Utilizing unmanned vehicles in subterranean environments for exploration is of paramount importance in the field of robotics, with the objective of reducing human casualties and minimizing property damage. Studies on exploration have significantly evolved over the past five years, greatly influenced by the DARPA Subterranean Challenge [1, 2, 3]. These studies have primarily proposed algorithms based on LiDAR sensors that can operate effectively in visually degraded, large-scale environments with dust or darkness. Among various platforms, UAVs have garnered significant attention due to their high mobility, enabling them to achieve greater exploration efficiency. UAVs possess the advantage of being able to operate independent of terrain, making them suitable for flexible mission execution in environments such as stairs, cliffs, or various irregular terrains [4, 5, 6], which makes them a preferred platform for exploration. However, despite these advantages, UAVs still face several challenges such as limitations in weight, size, and flight duration. For instance, solving three-dimensional perspective problems within onboard computers with limited computational capabilities and developing efficient exploration algorithms to cover as much area as possible within the given time constraints are demanded. To address these issues, we propose a novel topological exploration method, as shown in Fig. 1, that segments the regions from a global-scale perspective and fully leverages the Line-of-Sight (LOS) feature of LiDAR sensors. In more detail, the proposed method utilizes segmented regions and the LiDAR keyframes that have contributed to the generation of these regions to determine the execution of local path planning and global path planning. This reduces unnecessary back-and-forth maneuvers of the UAV during exploration, thereby increasing exploration efficiency. We compared our proposed method with the state-of-the-art algorithm GB planner2 [3] in a large-scale simulation environment. Furthermore, we demonstrated the high exploration efficiency and speed of the proposed method through field tests using UAVs in real subterranean environments. 
The primary contributions of this paper are as follows: 1. Rapid exploration within large-scale 3D environments through frontier that generated from a global-scale perspective using Segmented Exploration Regions (SERs). 2. Efficient path planning using a novel topological map that takes into account the relationship between SERs and the most contributed LiDAR keyframes. 3. Demonstration of the practicality and superiority of the proposed method through large-scale simulation environments and field tests in real subterranean environments. Fig. 1: An instance of the proposed segmented map-based topological exploration algorithm using a UAV in a subterranean environment. ## II Related Works Autonomous robotic exploration has been approached in various ways. A common method involves the use of frontiers [7, 8, 9, 10], which detect boundaries between known and unknown areas. [7] initiated the frontier-based scheme, directing the robot to the nearest frontier in a greedy manner. [8] refined the greedy method, implementing high-speed traversal with minimal velocity changes. In [9], an information structure was presented that includes frontier cells and viewpoints for hierarchical exploration planning. Similarly, [10] introduced a hierarchical framework based on viewpoints and cuboid subspace centroids. Sampling-based methods explore uncharted spaces by randomly generating viewpoints [11, 12, 13], RRT nodes [14, 15, 16], or motion primitives [17]. They are predominantly inspired by the RRT-based algorithms [18], emphasizing efficient exploration of complex environments. [11] is the early work that employs "next best views" (NBVs) to maximize coverage of unknown volumetric spaces. [12] enhances the NBV planner to address the curse of dimensionality by reducing sampling space and incorporating a history of explored areas. In contrast, [17] generates a sequence of motion primitives via randomized control space sampling. Recent research has increasingly focused on large-scale subterranean environments. Notably, the field of underground exploration has seen a surge in interest due to the DARPA Subterranean Challenge [19]. To explore large-scale, multi-branched spaces, topology [20, 21] and graph-based approaches [22, 3] have been proposed for representing exploration regions. In a recent effort, [21] employs convex polyhedra to separate 3D regions, aiding the selection of local exploration directions. This approach leverages the separated regions to generate frontier points and select local exploration directions. The graph-based planners [22], on the other hand, utilize a rapidly-exploring random graph to identify collision-free local paths that maximize the exploration gain. These methodologies are further enhanced by incorporating global path planning, which assists homing operations [1, 22] and re-positioning [21, 3] of exploring robots. While the results are promising, these random sampling-based methods generate a redundant graph representation in a local 3D space and require intensive computation, affecting the efficiency of exploration planning. In this study, we introduce a novel frontier generation scheme designed from a global-scale perspective to minimize redundancies, such as back-and-forth maneuvers, thereby enhancing the exploration performance. We also present an exploration strategy that employs the keyframe contribution to facilitate efficient and rapid exploration in large-scale environments. 
## III Problem Statement When launching robots for exploration in unknown areas, LiDAR-based localization (LiDAR Inertial Odometry or SLAM) is imperative. The entirety of map points \(V\subset\mathbb{R}^{3}\), generated from LiDAR-based localization, can be partitioned into the explored region \(V_{\text{cover}}\) and the unexplored region \(V_{\text{uncover}}\), utilizing the keyframes \(K\subset\mathbb{R}^{3}\) representing the map-generating positions and the coverage \(\zeta_{coverage}\) (\(V=V_{\text{cover}}\cup V_{\text{uncover}}\)). When the map points generated from the \(i\)-th keyframe \(K_{i}\in\mathbb{R}^{3}\) are \(Z_{i}\subset\mathbb{R}^{3}\), then \(V\) can be expressed as \(V=\{Z_{i}\}_{i\in\{0,1,2,\cdots,k\}}\), and \(V_{\text{cover}}\) consists of all \(\{x,y,z\}\) points in \(V\) whose Euclidean distance from an element of \(K\) is within \(\zeta_{coverage}\). The primary objective of the proposed exploration algorithm is to reduce \(V_{\text{uncover}}\), ultimately satisfying \(V=V_{\text{cover}}\). ## IV Proposed Approach ### _Segmented Exploration Regions (SERs)_ For efficient exploration in large-scale underground environments, we divide \(V_{\text{uncover}}\) into multiple Segmented Exploration Regions (SERs) \(V_{\text{SERs}}\subset V_{\text{uncover}}\), using Euclidean distance clustering techniques as shown in Fig. 2. Unlike existing methods that generate frontiers within a specific sensor range, the proposed exploration method divides the three-dimensional space into segments based on the geometric characteristics of \(V_{\text{uncover}}\), generating frontiers at a global scale. \(V_{\text{SERs}}=V_{\text{SER}}^{0}\cup V_{\text{SER}}^{1}\cup V_{\text{SER}}^{2}\cup\cdots\cup V_{\text{SER}}^{k}\) is utilized for the frontier generation, and the centroid of each \(V_{\text{SER}}^{j}\subset\mathbb{R}^{3}\) is considered as the frontier \(g^{j}\in\mathbb{R}^{3}\) that should be reached to cover \(V_{\text{SER}}^{j}\) during exploration. As exploration progresses and \(K\), \(V\), \(V_{\text{cover}}\), and \(V_{\text{uncover}}\) are updated, \(V_{\text{SERs}}\) is also updated in real time. As described in Algorithm 1, the generation of SERs is carried out when the map is updated (when a new keyframe is generated) in LiDAR-based localization. The updated map is down-sampled for computational efficiency and is divided into \(V_{\text{cover}}\) and \(V_{\text{uncover}}\) by comparing it with the keyframes generated so far. For \(V_{\text{uncover}}\), we apply Euclidean distance clustering techniques to generate \(V_{\text{SERs}}\). Finally, for each \(V_{\text{SER}}^{j}\), the corresponding frontier point \(g^{j}\) is generated, which is considered as a candidate for the robot to reach in order to reduce \(V_{\text{uncover}}\). Fig. 2: An overview of Segmented Exploration Regions (SERs) generation. The black lines represent 3D map points \(V\) generated from LiDAR keyframes \(K\), and the blue squares denote explored regions \(V_{\text{cover}}\) included in the coverage \(\zeta_{coverage}\). SERs are generated by applying Euclidean distance clustering to the unexplored regions \(V_{\text{uncover}}\), with squares of the same color indicating the same SER. The frontier corresponding to each SER is represented by a star shape of the same color. The reason keyframe generation serves as the criterion for SERs generation lies in the need to systematically manage all generated map points during exploration, pairing them with corresponding \(K_{i}\) and \(Z_{i}\).
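As a concrete illustration of this step, a minimal Python sketch of the map partitioning and SER/frontier extraction could look as follows. The function names, the scipy/numpy helpers, the flood-fill clustering used as a stand-in for Euclidean distance clustering, and the parameter values are illustrative assumptions rather than the authors' implementation (Algorithm 1 itself is not reproduced in this text).

```python
import numpy as np
from scipy.spatial import cKDTree


def partition_map(points, keyframes, coverage):
    """Split the map points V into V_cover / V_uncover by the distance
    from every point to its nearest keyframe in K."""
    dist, _ = cKDTree(keyframes).query(points)
    covered = dist <= coverage
    return points[covered], points[~covered]


def euclidean_clusters(points, tolerance, min_size=10):
    """Greedy flood-fill clustering: points closer than `tolerance` end up
    in the same cluster; each surviving cluster is one SER."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    n_clusters = 0
    for seed in range(len(points)):
        if labels[seed] >= 0:
            continue
        labels[seed] = n_clusters
        stack = [seed]
        while stack:
            idx = stack.pop()
            for nb in tree.query_ball_point(points[idx], tolerance):
                if labels[nb] < 0:
                    labels[nb] = n_clusters
                    stack.append(nb)
        n_clusters += 1
    clusters = [points[labels == c] for c in range(n_clusters)]
    return [c for c in clusters if len(c) >= min_size]


# Example usage (V and K are (N,3) and (M,3) arrays from the LiDAR localization):
# V_cover, V_uncover = partition_map(V, K, coverage=7.0)
# sers = euclidean_clusters(V_uncover, tolerance=2.0)
# frontiers = [ser.mean(axis=0) for ser in sers]   # centroid of each SER = frontier g^j
```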
Given the inherent nature of LiDAR sensors, \(Z_{i}\) maintain a Line-of-Sight (LOS) trajectory from \(K_{i}\), a pivotal element for the generation of frontier edge, which we aim to explain in Section IV-C. ### _Real-time Graph Generation with LiDAR Keyframes_ When a new keyframe is generated, we generate edges that account for connectivity between each pair of nodes (keyframes) using \(K\) and \(V\). The graph \(G\), composed of nodes and edges, serves as the foundation for the robot's global path planning within the large-scale environment and is updated in real-time during exploration. The edge between the \(i\)-th node \(K_{i}\) and the \(j\)-th node \(K_{j}\) is determined by performing collision checks with the sub-map \(V_{s}\subset V\). \(V_{s}\) is represented by a set of \(Z\) extracted from \(k\) keyframes \(K^{k}=\{K_{k_{0}},K_{k_{1}},\cdots,K_{k_{k-1}}\}\) using the K-nearest neighbors (KNN) algorithm. Similar to the \(V_{\text{SERs}}\) generation, the generation of the graph \(G\) is performed whenever a new keyframe is generated. The proposed graph generation method not only considers connectivity between \(K_{t-1}\) and \(K_{t}\) but also involves examining the connectivity with the \(k\) nearest nodes. This approach is essential for efficient global path planning in complex environments with multiple branches, characteristic of large-scale scenarios. When a new keyframe \(K_{t}\) is generated, it is added to the keyframe array \(K\), and its corresponding \(Z_{t}\) is added to the map point array \(V\). Subsequently, using the KNN algorithm, the indices of \(k\) nearest keyframes are extracted, and a sub-map \(V_{s}\) for collision checking is swiftly generated by leveraging the features of paired \(K\) and \(V\). Collision checking between two nodes and sub-map points is conducted along line segments and sub-map points. If the vectors from each node to sub-map points form an acute angle with the line segment connecting the nodes, collision checking is performed using the Euclidean distance between the point and the line segment. Otherwise, collision checking is carried out using the Euclidean distance between the point and the node forming an obtuse angle. ### _Frontier Graph Generation with Keyframe Contribution_ In this section, we present a method for generating a graph between frontiers \(g=\{g^{j,j\in{0,1,\cdots,l}}\}\) and specific node \(K_{i}\) based on the contribution of keyframes. In Section IV-A, we explained how \(V_{\text{SERs}}\) are generated for frontiers. These frontiers are generated from \(V_{\text{SERs}}\) by applying Euclidean distance clustering to \(V_{\text{uncover}}\), making them a result of a global-scale perspective. If we know which keyframes contributed to the points composing each \(V_{\text{SER}}^{j}\), we can extract the keyframe that had the most significant contribution to a particular frontier. As previously mentioned, our exploration method is based on the LOS characteristic of LiDAR sensor points. \(V_{\text{SER}}^{j}\) are constituted by points generated from multiple keyframes \(K_{\text{SER}}^{j}\subset K\) but are generated based on geometric features of 3D dense map. Therefore, if a specific keyframe most contributed to \(V_{\text{SER}}^{j}\), we consider LOS to that frontier \(g^{j}\), even if not all points in that \(V_{\text{SER}}^{j}\) were generated by the same keyframe. 
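A small sketch of this contribution bookkeeping is given below. It assumes that every down-sampled map point carries the index of the keyframe that generated it (the pairing of \(K_{i}\) and \(Z_{i}\) mentioned above, formalized as the keyframe-centric map in the next subsection); the helper names are illustrative, not the authors' code.

```python
import numpy as np


def most_contributing_keyframe(ser_point_ids, point_keyframe_ids):
    """Given the indices of the points that form one SER and, for every map point,
    the id of the keyframe that generated it, return the keyframe id (K_Highest)
    that contributed the largest number of points to this SER."""
    ids, counts = np.unique(point_keyframe_ids[ser_point_ids], return_counts=True)
    return int(ids[np.argmax(counts)])


def frontier_graph_edges(ser_point_indices, frontiers, point_keyframe_ids):
    """One edge of the frontier graph G_F per SER: (K_Highest, g^j).
    No collision check is performed, since LOS from K_Highest is assumed."""
    return [
        (most_contributing_keyframe(idx, point_keyframe_ids), g)
        for idx, g in zip(ser_point_indices, frontiers)
    ]
```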
Because we manage \(K\) and \(V\) as pairs, we introduce a novel map representation named the keyframe-centric map \(V_{\text{key}}=\{V_{\text{key}}^{i}\}\). \(V_{\text{key}}\subset\mathbb{R}^{6}\) includes each \(Z_{i}\) that constitutes \(V\) along with its corresponding \(K_{i}\). The \(j\)-th point within the map points generated from the \(i\)-th keyframe can be denoted as \(Z_{i,j}\in\mathbb{R}^{3}\), and from the perspective of \(V_{\text{key}}\), the corresponding point can be represented as \(V_{\text{key}}^{i,j}=\{Z_{i,j},K_{i}\}\). The generation of frontier edges based on keyframe contributions is detailed in Algorithm 2. The generation of the frontier graph \(G_{\text{F}}\) begins by updating, in parallel, the keyframe-centric map \(V_{\text{key}}\) with the newly generated \(Z_{t}\) and \(K_{t}\). Following this, Algorithm 1 in Section IV-A is applied to generate \(g\) and \(V_{\text{SERs}}\). For each \(V_{\text{SER}}^{j}\) comprising \(V_{\text{SERs}}\), the keyframe that contributes the most to the generation of that \(V_{\text{SER}}^{j}\) is extracted. As shown in Fig. 3, for all \(\{x,y,z\}\) points forming \(V_{\text{SER}}^{j}\), we concurrently search for and extract the corresponding \(V_{\text{key}}^{i,j}\) from \(V_{\text{key}}\), which is generated in parallel, along with the keyframe information, obtaining all keyframes \(K_{\text{SER}}^{j}\) that generated \(V_{\text{SER}}^{j}\). Finally, the keyframe \(K_{\text{Highest}}\) with the highest contribution within \(K_{\text{SER}}^{j}\) is extracted to construct the graph edge between the frontier \(g^{j}\) generated from \(V_{\text{SER}}^{j}\) and \(K_{\text{Highest}}\). It is important to note that \(G\), described in Section IV-B, is utilized for the robot's global path planning, necessitating collision checks. However, \(G_{\text{F}}\) is employed only to specify frontiers under the assumption of LOS from specific nodes, and thus collision checks are not performed. ### _Global Path Planning within Topological Map_ In Sections IV-B and IV-C, we described the generation process of \(G\) and \(G_{\text{F}}\). In this section, we describe efficient global path planning for exploration in large-scale 3D environments characterized by multiple branches, utilizing the topological map composed of \(G\) and \(G_{\text{F}}\). When a new keyframe \(K_{t}\) is generated, \(V_{\text{SERs}}\), \(g\), \(G\), and \(G_{\text{F}}\) are updated. First, for each frontier \(g^{j}\) generated from each \(V_{\text{SER}}^{j}\), we use the proposed exploration score to identify the frontier \(g^{j_{\text{best}}}\) with the highest score. We then prioritize the exploration of \(g^{j_{\text{best}}}\). The proposed exploration score can be expressed as \[\textbf{ExplorationScore}(g^{j})=\frac{w_{\text{Vol}}\,\textbf{Volume}(V_{\text{SER}}^{j})}{w_{\text{Dir}}\,\textbf{Direction}(\psi_{t},\psi_{g^{j}})\cdot w_{\text{Dis}}\,\textbf{Distance}(K_{t},g^{j})}. \tag{1}\] As shown in (1), the proposed exploration score is composed of distance, volume, and direction factors. \(\textbf{Distance}(K_{t},g^{j})\) represents the distance between the current node \(K_{t}\) and the frontier \(g^{j}\), with a higher score computed for closer frontiers, emphasizing proximity. Secondly, \(\textbf{Volume}(V_{\text{SER}}^{j})\) represents the number of points constituting \(V_{\text{SER}}^{j}\), and a higher score is computed for \(g^{j}\) generated with a greater number of points in \(V_{\text{SER}}^{j}\).
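A direct transcription of Eq. (1) into Python is sketched below; the direction factor appearing in the denominator is discussed in the next paragraph. The default weight values, the yaw-difference wrapping, and the small \(\epsilon\) added to avoid division by zero are illustrative assumptions and are not specified in the original description.

```python
import numpy as np


def exploration_score(g, ser_points, k_t, yaw_t,
                      w_dis=1.0, w_vol=1.0, w_dir=1.0, eps=1e-6):
    """Eq. (1): reward SERs with many points, penalise frontiers that are far
    away or require a large change of heading from the current keyframe K_t."""
    g, k_t = np.asarray(g, float), np.asarray(k_t, float)
    distance = np.linalg.norm(g - k_t)
    volume = len(ser_points)                                  # |V_SER^j|
    yaw_to_g = np.arctan2(g[1] - k_t[1], g[0] - k_t[0])
    direction = abs((yaw_to_g - yaw_t + np.pi) % (2 * np.pi) - np.pi)
    return (w_vol * volume) / ((w_dir * direction + eps) * (w_dis * distance + eps))


def select_best_frontier(frontiers, sers, k_t, yaw_t):
    """Return the index j_best of the frontier with the highest score."""
    scores = [exploration_score(g, s, k_t, yaw_t) for g, s in zip(frontiers, sers)]
    return int(np.argmax(scores))
```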
This design aims to efficiently map as large a volume as possible within a limited time. Lastly, \(\textbf{Direction}(\psi_{t},\psi_{g^{j}})\) represents the difference between the current robot's yaw and the direction towards \(g^{j}\). A higher score is computed for frontiers with a minimal direction difference, aiming to reduce back-and-forth maneuvers during exploration and enhance exploration efficiency. Note that \(w_{\text{Dis}}\), \(w_{\text{Vol}}\), and \(w_{\text{Dir}}\) represent the weights for the distance, volume, and direction factors, respectively. Fig. 4: Global path planning results within the proposed topological map. If the SER that generated the selected frontier does not include any points of \(Z_{t}\) generated by the current keyframe \(K_{t}\), global path planning is performed using \(G\) and \(G_{\text{F}}\). Fig. 3: Comparison between the SERs map \(V_{\text{SERs}}\) and the keyframe-centric map \(V_{\text{key}}\). Each SER visible in the SERs view is generated based on the geometric features of \(V_{\text{uncover}}\). The points visible in the keyframe-centric view represent the keyframes (colored) that generated those points. The frontier graph \(G_{\text{F}}\) is generated using the keyframe \(K_{\text{Highest}}\) that made the most significant contribution to the generation of each SER and its corresponding frontier. Using the proposed exploration score, once the frontier \(g^{j_{\text{best}}}\) with the highest score is selected, we proceed to decide whether to perform global path planning or local path planning towards \(g^{j_{\text{best}}}\). Fortunately, the proposed exploration algorithm leverages the keyframe-centric map \(V_{\text{key}}\) to determine which keyframes contributed to generating the selected \(V_{\text{SER}}^{j_{\text{best}}}\), allowing us to ascertain whether \(V_{\text{SER}}^{j_{\text{best}}}\) contains any point from the \(Z_{t}\) generated from the \(K_{t}\). If it does, considering the LOS from \(K_{t}\) to \(V_{\text{SER}}^{j_{\text{best}}}\), exploration towards \(g^{j_{\text{best}}}\) is conducted using local path planning. On the other hand, if \(V_{\text{SER}}^{j_{\text{best}}}\) does not contain any point from \(Z_{t}\), Non-Line-of-Sight (NLOS) is considered, leading to global path planning towards \(g^{j_{\text{best}}}\) within the topological map composed of \(G\) and \(G_{\text{F}}\). Global path planning using the topological map is detailed in Algorithm 3, and the result is shown in Fig. 4. ## V Experiments ### _Simulation Based Evaluation_ In this section, the performance of our method was compared with the state-of-the-art GB planner2 [3] algorithm in the 3D cave simulation environment provided by the DARPA Subterranean virtual competition, using a UAV. To evaluate the performance, two factors, the explored volume increment per second \((m^{3}/s)\) and the explored volume over time \((m^{3})\), were compared, and the results are shown in Fig. 5. In the simulation environment, both algorithms used a LiDAR sensor with a Horizontal Field of View (HFOV) of 360\({}^{\circ}\), a Vertical Field of View (VFOV) of 45\({}^{\circ}\), and a maximum sensor range of 80\(m\) with 32 channels. The proposed method had a coverage \(\zeta_{coverage}\) of 15\(m\), a down-sampling parameter \(v_{\text{down}}\) of 5\(m\), and \(k\) set to 10. A motion primitive-based planning algorithm was used for our local path planning, and both exploration algorithms had a maximum flight speed of \(2m/s\). As shown in Fig.
5, the proposed method exhibits fewer back-and-forth maneuvers based on our novel planning strategy (detailed in Section. IV-D) and outperformed the GB planner2 in both factors. Also, the proposed method exhibits smoother exploration paths compared to GB planner2, achieved through frontier generation from a global-scale perspective and topological exploration based on keyframe contributions. The proposed method generates a larger volume within the same time compared to GB planner2, demonstrating a 62% improvement in median value of the volume increment. ### _Experimental Evaluation_ For a more comprehensive analysis of the proposed method, we conducted field tests using UAVs in a real subterranean environment. The performance of the proposed method varies depending on the specification of LiDAR sensors, as it relies on the geometric features of the 3D dense map generated during the exploration. Therefore, we utilized two aerial platforms equipped with different LiDAR sensors to perform separate explorations within the same environment and subsequently analyzed their performance. #### V-B1 Hardware and Software Setup Two aerial platforms, as shown in Fig. 6, the USRG Quad and the USRG X8, were used. The USRG Quad is an 8-inch quadcopter platform equipped with an Ouster OS1 32-channel LiDAR with an Fig. 5: Comparison of exploration performance between the proposed method and GB planner2 using UAV. (a) Comparison of exploration trajectory between the proposed method (green) and GB planner2 (red). The inset figure on the right illustrate the results of global path planning triggered within the topological map through our exploration planning strategy, shown as the green path. (b) Quantitative comparison of explored volume over time and the explored volume increment. Fig. 6: The two custom aerial platforms used for field testing, the USRG Quad and the USRG X8. HFOV of 360\({}^{\circ}\), VFOV of 45\({}^{\circ}\), and an effective range of 90\(m\). The USRG X8 is a 5-inch coaxial octocopter platform equipped with an Ouster OS Dome 128-channel LiDAR with an HFOV of 360\({}^{\circ}\), VFOV of 180\({}^{\circ}\), and an effective range of 20\(m\). Both platforms used an Intel NUC computer with a 6-core i7-10710U CPU for real-time onboard computation, and motion primitive-based planning algorithms were employed for local path planning. Localization was achieved using our previous work [23], LiDAR Inertial Odometry, which provides keyframes, corresponding LiDAR scans, and UAV positions. During the field tests, the maximum flight speed was set to 1.5\(m/s\) for the USRG Quad, and 1.2\(m/s\) for the USRG X8, and both platforms were equipped with LED for providing visual information in dark environments. #### V-C2 Segmented map-based Topological Exploration The proposed method for the field test was set up with both UAVs having a coverage \(\zeta_{coverage}\) of 7\(m\), a down-sampling parameter of 2\(m\), and \(k\) set to 10. Both UAVs started exploration from the entrance of an underground bunker and performed exploration until their batteries were exhausted. The exploration results for USRG Quad and USRG X8 are shown in Fig. 7, while the exploration performance is shown in Fig. 8. USRG Quad covered approximately 354\(m\) by flying for 320\(s\) at an average speed of 1.1\(m/s\), while USRG X8 covered approximately 168\(m\) by flying for 210\(s\) at an average speed of 0.8\(m/s\). The inset figures labeled as 'Explored area' in Fig. 
7 represent the maps (white map) generated by the two UAVs inside the underground bunker (red map). USRG Quad covered approximately 79% of the entire map, while USRG X8 covered approximately 43%. Additionally, the inset figures marked with a star shape show the flight view of the two UAVs from corresponding positions. The quantitative exploration performance based on the maximum range of the LiDAR sensor is shown in Fig. 8. As shown in Fig. 8, on a median value basis, USRG Quad had a volume increment of 118\(m^{3}/s\), while the USRG X8 had a 51\(m^{3}/s\). Through the field test using two UAVs, the proposed method has demonstrated the ability to achieve fast and efficient exploration in a real subterranean environment by leveraging our novel keyframe contribution-based topological map and exploration planning strategy. ## VI Conclusions In this paper, we proposed a topological exploration algorithm using keyframe contribution. Unlike existing methods that generate frontiers within a specific sensor range or search space, our method generates frontiers from a global-scale perspective, enhancing exploration efficiency in large-scale environments. Using the keyframe-centric map and SERs map, we generate a frontier graph by considering which keyframe contributes the most when frontiers are generated. Finally, by using data structures pairing keyframes with their corresponding scans, we design planning strategies for exploring specific frontiers, enabling rapid exploration. The proposed method surpasses the state-of-the-art GB planner2 algorithm in a large-scale cave simulation environment, showcasing a 62% improvement in the median value of volume increment. Moreover, it has demonstrated rapid and efficient exploration performance in real subterranean environments through field tests. In the future, our goal is to extend the proposed method to multi-robot or heterogeneous robot applications in various unstructured environments. Fig. 8: Comparing the quantitative exploration performance of two UAVs equipped with different LiDAR sensors. The USRG Quad equipped with a LiDAR sensor with a longer maximum range, demonstrates higher exploration efficiency compared to the USRG X8. Fig. 7: Exploration results using USRG Quad and USRG X8 in the subterranean environment located in South Korea. The white map, orange lines, light blue lines, and colored points represent the 3D dense map, graph edges, frontier graph edges, and frontiers generated during the exploration, respectively.
2310.20578
Fault-Tolerant Operation of Bosonic Qubits with Discrete-Variable Ancillae
Fault-tolerant quantum computation with bosonic qubits often necessitates the use of noisy discrete-variable ancillae. In this work, we establish a comprehensive and practical fault-tolerance framework for such a hybrid system and synthesize it with fault-tolerant protocols by combining bosonic quantum error correction (QEC) and advanced quantum control techniques. We introduce essential building blocks of error-corrected gadgets by leveraging ancilla-assisted bosonic operations using a generalized variant of path-independent quantum control (GPI). Using these building blocks, we construct a universal set of error-corrected gadgets that tolerate a single photon loss and an arbitrary ancilla fault for four-legged cat qubits. Notably, our construction only requires dispersive coupling between bosonic modes and ancillae, as well as beam-splitter coupling between bosonic modes, both of which have been experimentally demonstrated with strong strengths and high accuracy. Moreover, each error-corrected bosonic qubit is only comprised of a single bosonic mode and a three-level ancilla, featuring the hardware efficiency of bosonic QEC in the full fault-tolerant setting. We numerically demonstrate the feasibility of our schemes using current experimental parameters in the circuit-QED platform. Finally, we present a hardware-efficient architecture for fault-tolerant quantum computing by concatenating the four-legged cat qubits with an outer qubit code utilizing only beam-splitter couplings. Our estimates suggest that the overall noise threshold can be reached using existing hardware. These developed fault-tolerant schemes extend beyond their applicability to four-legged cat qubits and can be adapted for other rotation-symmetrical codes, offering a promising avenue toward scalable and robust quantum computation with bosonic qubits.
Qian Xu, Pei Zeng, Daohong Xu, Liang Jiang
2023-10-31T16:13:04Z
http://arxiv.org/abs/2310.20578v1
# Fault-Tolerant Operation of Bosonic Qubits with Discrete-Variable Ancillae ###### Abstract Fault-tolerant quantum computation with bosonic qubits often necessitates the use of noisy discrete-variable ancillae. In this work, we establish a comprehensive and practical fault-tolerance framework for such a hybrid system and synthesize it with fault-tolerant protocols by combining bosonic quantum error correction (QEC) and advanced quantum control techniques. We introduce essential building blocks of error-corrected gadgets by leveraging ancilla-assisted bosonic operations using a generalized variant of path-independent quantum control (GPI). Using these building blocks, we construct a universal set of error-corrected gadgets that tolerate a single photon loss and an arbitrary ancilla fault for four-legged cat qubits. Notably, our construction only requires dispersive coupling between bosonic modes and ancillae, as well as beam-splitter coupling between bosonic modes, both of which have been experimentally demonstrated with strong strengths and high accuracy. Moreover, each error-corrected bosonic qubit is only comprised of a single bosonic mode and a three-level ancilla, featuring the hardware efficiency of bosonic QEC in the full fault-tolerant setting. We numerically demonstrate the feasibility of our schemes using current experimental parameters in the circuit-QED platform. Finally, we present a hardware-efficient architecture for fault-tolerant quantum computing by concatenating the four-legged cat qubits with an outer qubit code utilizing only beam-splitter couplings. Our estimates suggest that the overall noise threshold can be reached using existing hardware. These developed fault-tolerant schemes extend beyond their applicability to four-legged cat qubits and can be adapted for other rotation-symmetrical codes, offering a promising avenue toward scalable and robust quantum computation with bosonic qubits. ## I Introduction Quantum error correction (QEC) enables reliable quantum information processing [1, 2, 3]. However, paradigmatic QEC schemes, particularly those employing surface codes with physical qubits [4, 5, 6, 7], suffer from huge resource overhead [8, 9]. This resource-intensive nature creates a substantial gap between the theoretical potential of fault tolerance and the capabilities of current noisy intermediate-scale quantum (NISQ) [10] devices. Encoding quantum information into bosonic systems [11, 12, 13, 14, 15] by leveraging their infinite-dimensional Hilbert spaces offers a promising avenue to reduce the overhead of QEC [16, 17, 18, 19, 20, 21]. While robust quantum memories based on single-mode bosonic codes have been experimentally demonstrated with improved memory lifetime [22, 23, 24], realizing error-corrected operations on these bosonic qubits remains a formidable task. One of the primary complexities stems from the weak non-linear interactions inherent in bosonic modes, necessitating the use of discrete-variable ancillae in systems such as circuit quantum electrodynamics (circuit QED) platform [25, 26]. However, a significant challenge arises in this hybrid system, as errors in the ancillae tend to propagate back to the bosonic mode, potentially compromising the encoded quantum information [27]. To address this issue, several methods have been developed to maintain precise control over the bosonic mode even in the presence of noisy ancillary systems [28, 29, 30]. 
Nevertheless, a comprehensive fault-tolerance framework for this hybrid system, along with guidelines for constructing fully fault-tolerant protocols using advanced quantum control concepts, remains conspicuously absent. Consequently, while universal error-detection operations on bosonic qubits have been constructed [31, 32] and demonstrated [33], achieving a complete set of error-corrected operations has remained a significant challenge. In this work, we bridge this gap by introducing a fault-tolerance framework tailored to the hybrid system composed of bosonic data modes and discrete-variable ancillae. Inspired by concatenated qubit codes [34], we identify essential properties for gadgets encoded in bosonic codes (referred to as "level-1" gadgets) in Sec. III. These properties play a crucial role in determining the fault tolerance of a level-1 circuit, where the overall failure probability must be suppressed to a certain order of the physical error rate. Furthermore, we demonstrate how the defined fault tolerance can be achieved through the integration of bosonic QEC with compatible quantum control techniques. Specifically, in Secs. IV and V, we establish a connection between a generalized version of path-independent control [30] (referred to as GPI) and fault tolerance, highlighting the importance of GPI operations as fundamental building blocks for error-corrected gadgets. As an application of these fault-tolerant tools, in Sec. VI, we construct universal error-corrected gadgets using GPI operations for the four-legged cat qubit [35, 14, 36]. These gadgets can tolerate a single photon loss and an arbitrary ancilla fault, while only relying on dispersive coupling between bosonic modes and ancillae [29, 37, 38] and beam-splitter (BS) coupling between bosonic modes [39, 40]. Importantly, these coupling mechanisms have been experimentally demonstrated with strong coupling strengths. Each level-1 logical qubit, encoded in a four-legged cat code, utilizes only a single bosonic mode and a three-level ancilla, featuring the hardware efficiency of bosonic QEC. We numerically demonstrate the second-order error suppression for the level-1 gadgets. Moreover, we show that using a teleportation gadget that pumps energy into the system and suppresses phase-rotation errors, a robust cat-encoding memory is feasible even in the presence of finite \(\chi\) mismatches in the circuit-QEC platform with current experimental parameters [29]. Finally in Sec. VII, we present a practical and hardware-efficient architecture for fault-tolerant quantum computing by concatenating the four-legged cat qubits with an outer qubit code. While we primarily focus on the four-legged cat code throughout this work, we discuss in Sec. VIII that the fault-tolerant schemes developed herein can be readily adapted to other rotation-symmetric bosonic codes [41]. ## II System description and error model We first introduce some notations. We denote \([k]:=\{1,2,\cdots,k\}\) as the set of integers from \(1\) to \(k\). We denote \(\left[\int_{t_{k}}dt_{h}\right]_{h\in[k]}:=\int_{t_{k}}d_{t_{k}}\int_{t_{k-1}} dt_{k-1}\cdots\int_{t_{1}}dt_{1}\) as the multiple integral over variables in \(\{t_{h}\}_{h\in[k]}\), and similarly \(\left[\sum_{a_{k}}\right]_{h\in[k]}:=\sum_{a_{k}}\sum_{a_{k-1}}\cdots\sum_{a_ {1}}\) as the sum over variables in \(\{a_{h}\}_{h\in[k]}\). We denote \(A\propto B\) if there exists some \(c\in\mathbb{C}\) such that \(A=cB\). We denote \(\mathcal{T}\) as the time ordering operator. 
### Preliminaries #### ii.1.1 Bosonic codes Single-mode bosonic error-correcting codes encode logical information into a subspace of the infinite-dimensional Hilbert space of an oscillator. Among them, the four-legged cat code [35, 14, 36] encodes a single logical qubit and has codewords \[\ket{\mu_{L}}=c_{\mu}\left[\ket{\alpha}+\ket{-\alpha}+(-1)^{\mu}(\ket{i \alpha}+\ket{-i\alpha})\right], \tag{1}\] where \(\mu=0/1\), \(\ket{\gamma}\) denotes a coherent state with an amplitude \(\gamma\in\mathbb{C}\), and \(c_{\mu}=1/(2\sqrt{2\exp(-|\alpha|^{2})(\cosh|\alpha|^{2}+(-1)^{\mu}\cos|\alpha |^{2})})\) are normalization constants. Given any quantum code encoding a single logical qubit, we denote \(P_{c}:=\ket{0_{L}}\bra{0_{L}}+\ket{1_{L}}\bra{1_{L}}\) as the projection onto the codespace, and \(\bar{X}_{c},\bar{Y}_{c},\bar{Z}_{c}\) the logical \(X\)-, \(Y\)-, \(Z\)-Pauli operators respectively. The capability of an error-correcting code to correct a given set of errors \(\mathbf{E}\) is given by the Knill-Laflamme (KL) condition [42]: \(P_{c}E_{l}^{\dagger}E_{j}P_{c}\propto P_{c}\) for any \(E_{i},E_{j}\in\mathbf{E}\). More specifically, we can evaluate the \(2\times 2\) QEC matrix \(\epsilon_{jk}^{c}\) for any pair of errors \(E_{j},E_{k}\)[36]: \[P_{c}E_{j}^{\dagger}E_{k}P_{c}=\epsilon_{jk}^{c}, \tag{2}\] where \(\epsilon_{jk}^{c}\) can be parametrized as \(\epsilon_{jk}^{c}=c_{jk}^{c}P_{c}+x_{jk}^{c}\bar{X}_{c}+y_{jk}^{c}\bar{Y}_{c}+ z_{jk}^{c}\bar{Z}_{c}\), where \(c_{jk}^{c},x_{jk}^{c},y_{jk}^{c},z_{jk}^{c}\in\mathbb{C}\). The KL condition is satisfied if \(x_{jk}^{c}=y_{jk}^{c}=z_{jk}^{c}=0\) for any \(j\) and \(k\). Consider the four-legged code and an error set containing a single-photon loss \(\mathbf{E}=\{I,a\}\), where \(a\) denotes the annihilation operator. First, we have \(P_{c}aP_{c}=0\), indicating that a single-photon loss is perfectly detectable. Second, \[P_{c}a^{\dagger}aP_{c}=\bar{n}P_{c}+\frac{\delta n}{2}\bar{Z}_{c}, \tag{3}\] where \(\bar{n}:=(\bra{0_{L}}a^{\dagger}a\ket{0_{L}}+\bra{1_{L}}a^{\dagger}a\ket{1_{ L}})/2\) denotes the mean photon number and \(\delta n:=\bra{0_{L}}a^{\dagger}a\ket{0_{L}}-\bra{1_{L}}a^{\dagger}a\ket{1_{L}}\) denotes the photon number difference between the two codewords. For an arbitrary \(\alpha\), \(\delta n\neq 0\), indicating the a single photon loss is not perfectly correctable. However, \(\delta n=O(e^{-2\alpha^{2}})\) as \(\alpha\gg 1\) and a single-photon loss is approximately correctable for large \(\alpha\). Furthermore, \(\delta n=0\) is exactly satisfied at a discrete set of finite \(\alpha\)[43], which we refer to as sweet spots. Similarly, one can show that for a continuous set of phase-rotation errors \(\mathbf{R}=\{e^{i\theta a^{\dagger}a}\}_{\theta\in[-\theta_{m},\theta_{m}]}\), the KL condition is approximately satisfied for large \(\alpha\) if \(\theta_{m}<\pi/4\)[41]. First, \(P_{c}e^{-i\theta_{1}a^{\dagger}a}e^{i\theta_{2}a^{\dagger}a}P_{c}=c_{12}P_{c}+ z_{12}\bar{Z}_{c}\) for any \(\theta_{1},\theta_{2}\in\mathbf{R}\) since \(e^{i(\theta_{2}-\theta_{1})a^{\dagger}a}\) preserves the photon number. 
Next, \[z_{12} =\left(\bra{+_{L}}e^{i(\theta_{2}-\theta_{1})a^{\dagger}a}\ket{-_{ L}}+\bra{-_{L}}e^{i(\theta_{2}-\theta_{1})a^{\dagger}a}\ket{+_{L}}\right)/2 \tag{4}\] \[\approx\left(\bra{i\alpha|\alpha e^{i(\theta_{2}-\theta_{1})}}+ \bra{-i\alpha|\alpha e^{i(\theta_{2}-\theta_{1})}}\right)/2+h.c.,\] where the the approximation utilizes that \(\ket{+_{L}}\approx(\ket{\alpha}+\ket{-\alpha})/\sqrt{2}\) and \(\ket{-_{L}}\approx(\ket{i\alpha}+\ket{-i\alpha})/\sqrt{2}\) for large \(\alpha\). Obviously, \(z_{12}\to 0\) as \(\alpha\gg 1\) as long as \(|\theta_{2}-\theta_{1}|\neq\pi/2\), which holds if \(\theta_{m}<\pi/4\). To conclude, the four-legged cat code can approximately correct a single photon loss and a continuous set of phase rotations with amplitude smaller than \(\pi/4\) (for large \(\alpha\)). In fact, cat codes serve as numerically optimized codes for certain regimes of a bosonic channel with both loss and dephasing errors [44]. #### ii.1.2 Open quantum system and Markovian quantum evolution A noisy Markovian evolution of a quantum system is described by a Lindblad master equation: \[\frac{d\rho}{dt}=\mathcal{L}(t)\rho=-i[H(t),\rho]+(\sum_{j}\mathcal{D}[\sqrt{ \gamma_{j}}J_{j}])\rho, \tag{5}\] where \(H(t)\) is the system Hamiltonian and \(\mathcal{D}[O]=O\bullet O^{\dagger}-\frac{1}{2}\{O^{\dagger}O,\bullet\}\) is a Lindblad superoperator associated with a jump operator \(O\), and \(\gamma_{j}\) is the jump rate for the error \(J_{j}\). Denote \(H_{\text{eff}}(t):=H(t)-\frac{i}{2}\sum_{j}\gamma_{j}J_{j}^{\dagger}J_{j}\) as the effective Hamiltonian, and \(\mathcal{S}:=\sum_{j}\gamma_{j}J_{j}\bullet J_{j}^{\dagger}\) as the superoperator describing quantum jumps. The Lindbladian \(\mathcal{L}(t)\) can be rewritten as \(\mathcal{L}(t)=-i[H_{\text{eff}}(t),\bullet]+\mathcal{S}\). Then, the joint quantum channel, given by the time integral of Eq. (5), admits a Dyson expansion with respect to \(\mathcal{S}\)[30]: \[\rho(t)=\mathcal{G}(t,0)\rho(0)=\sum_{q=0}^{\infty}\mathcal{G}_{q}(t,0)\rho(0), \tag{6}\] where \(\mathcal{G}_{0}(t,0)=\mathcal{W}(t,0):=W(t,0)\bullet W^{\dagger}(t,0)\), with \(W(t,0):=\mathcal{T}\exp\left[-i\int_{0}^{t}H_{\text{eff}}\left(t^{\prime} \right)dt^{\prime}\right]\), describes evolution without any quantum jumps, and \[\mathcal{G}_{q}(t)=\left[\int_{t_{h}=0}^{t}dt_{h}\right]_{h\in[q]} \mathcal{T}\big{(}\mathcal{W}(t,t_{q})\mathcal{S}\cdots \tag{7}\] \[\mathcal{S}\mathcal{W}(t_{2},t_{1})\mathcal{S}\mathcal{W}(t_{1},0 )\big{)},\] where \(\mathcal{G}_{q}\) (\(q\geq 1\)) describes the evaluation with \(q\) quantum jumps. We refer to such an expansion as the jump expansion and and \(\mathcal{G}^{[n]}:=\sum_{q=0}^{n}\mathcal{G}_{q}\) as the \(n\)-th order truncation of \(\mathcal{G}\) under the jump expansion. For quantum error correction, we care about the expansion of the channel \(\mathcal{G}\) in terms of the small noise parameter \(p:=\gamma_{i}t\) given an evolution time \(t\) (here, we have assumed equal noise rates for simplicity): \[\mathcal{G}(t,0)=\sum_{q^{\prime}}p^{q^{\prime}}\mathcal{G}_{q^{\prime}}^{ \prime}(t,0). \tag{8}\] Such an expansion can be obtained by Dyson expanding \(\mathcal{G}\) with respect to the full Lindblad superoperators of the noise (\(\sum_{j}\mathcal{D}[\sqrt{\gamma_{j}}J_{j}]\)) in Eq. (5), instead of their quantum-jump components \(\mathcal{S}\). 
We refer to such an expansion of \(\mathcal{G}\) as its noise expansion, and \(\mathcal{G}^{\prime[n]}:=\sum_{q^{\prime}=0}^{n}\mathcal{G}_{q^{\prime}}^{\prime}\) as the \(n\)-th order truncation of \(\mathcal{G}\) under the noise expansion. Observe that \(\mathcal{G}^{[n]}=\mathcal{G}^{\prime[n]}+O(p^{n+1})\), i.e. the \(n\)-th order truncations of a channel \(\mathcal{G}\) in terms of its jump expansion and its noise expansion are equivalent up to \(n\)-th order in \(p\). Since \(\mathcal{G}^{[n]}\) is easier to evaluate for the purpose of this work, we will mostly consider the jump expansion of channels. A nice property of a channel's jump expansion is that it is automatically in a Kraus form: \[\mathcal{G}(t,0)=\sum_{q=0}^{\infty}\left[\int_{t_{h}=0}^{t}dt_{h}\right]_{h\in[q]}\left[\sum_{j_{h}=1}^{N}\right]_{h\in[q]}G_{q}(\{t_{h}\},\{j_{h}\})\bullet G_{q}^{\dagger}(\{t_{h}\},\{j_{h}\}), \tag{9}\] where \[G_{q}(\{t_{h}\},\{j_{h}\}):=\mathcal{T}\,W(t,t_{q})E_{j_{q}}\cdots E_{j_{2}}W(t_{2},t_{1})E_{j_{1}}W(t_{1},0). \tag{10}\] One can, therefore, view \(G_{q}(\{t_{h}\},\{j_{h}\})\) as a Kraus operator of the channel with discrete indices \(q,\{j_{h}\}\) and continuous indices \(\{t_{h}\}\). ### General setup As shown in Fig. 1(a), we consider gadgets for some bosonic code consisting of a sequence of ancilla-assisted operations (AAOs). For each AAO, a \(d_{A}\geq 2\) ancilla \(A\) is initialized in some initial state \(\ket{i}_{A}\), interacts with the bosonic mode \(C\) for some time \(T\), and is finally measured in some basis \(\mathcal{B}_{A}\). We consider continuous Markovian interactions between \(A\) and \(C\), which are described by the Lindblad master equation in Eq. (5) with a Hamiltonian \(H_{AC}(t)\) that acts on the joint system, a set of bosonic jump operators \(\{\sqrt{\kappa_{i}}F_{i}\}\), and a set of ancilla jump operators \(\{\sqrt{\gamma_{j}}J_{j}\}\). We allow adaptively choosing the later AAOs using the earlier ancilla measurement outcomes. Note that a direct operation on the bosonic mode can be viewed as a trivial AAO with the ancilla being initialized in some state \(\ket{i}\), idling for the evolution time, and measured in \(\ket{i}\) without any errors. Such an AAOs-composed bosonic gadget forms a channel \(\mathcal{N}\) on the bosonic mode, which can be decomposed as \(\mathcal{N}=\mathcal{N}_{n}\circ\mathcal{N}_{0}\), where \(\mathcal{N}_{0}\) is the target bosonic operation and \(\mathcal{N}_{n}=\sum_{k}N_{k}\bullet N_{k}^{\dagger}\) is a data noise channel represented by a set of noise Kraus operators \(\{N_{k}\}\). Fault-tolerance essentially concerns the relation between the physical channels \(\{\mathcal{G}\}\) and the resultant bosonic channel \(\mathcal{N}\). More specifically, we need to study how the noise in \(\mathcal{G}\), which we refer to as faults, propagates to the data errors \(\{N_{k}\}\) in \(\mathcal{N}_{n}\). We will need to quantify the size of the faults and the data errors and design AAOs such that small faults only propagate to small data errors. We represent a physical channel \(\mathcal{G}\) that contains no more than \(n\) faults by its \(n\)-th order truncation \(\mathcal{G}^{[n]}\). To quantify the size of the data bosonic errors, we need to specify a bosonic error-correcting code and an error basis. In this work, we will primarily focus on the cat codes [14; 35; 45] and a basis we term the loss-dephasing basis, which is closely related to photon loss and bosonic dephasing errors. Figure 1: (a) Illustration of a level-1 bosonic gadget consisting of a sequence of ancilla-assisted operations. For each AAO, the ancilla is initialized to some state \(\ket{i}\) and measured in some basis \(\mathcal{B}_{A}\). The later AAOs can be performed adaptively using the earlier ancilla measurement outcomes. (b) Illustration of the AAO, GPI and PI operations. As a special case of AAO, the GPI operations with bosonic QEC can handle bosonic errors induced by relevant ancilla faults. The previous PI operations [30] can be regarded as a special GPI without bosonic QEC, which are designed to avoid any introduction of bosonic errors due to relevant ancilla faults. ### Loss-dephasing basis and error metrics Typical bosonic errors include excitation loss (\(a\)), heating (\(a^{\dagger}\)), and bosonic dephasing (\(a^{\dagger}a\)). For such errors, a natural basis to use is \(\{e^{-i\theta a^{\dagger}a}a^{k},a^{\dagger k^{\prime}}e^{i\theta^{\prime}a^{\dagger}a}\}_{k,k^{\prime}\in\mathbb{N};\theta,\theta^{\prime}\in(-\pi,\pi]}\), which is a complete basis for all single-mode bosonic operators. Neglecting heating errors \(a^{\dagger}\), which are typically small [29; 31], the relevant errors are then spanned by \(\{E_{k}(\theta):=e^{-i\theta a^{\dagger}a}a^{k}\}_{k\in\mathbb{N},\theta\in(-\pi,\pi]}\), which we refer to as the loss-dephasing basis. A four-legged cat code can only correct errors \(E_{k}(\theta)\) with small \(k\) and \(|\theta|\) (see Sec. II.1.1). This motivates us to quantify the size of \(E_{k}(\theta)\) as \(|E_{k}(\theta)|_{w}:=(k,|\theta|)\in\mathbb{N}\times[0,\pi]\). We compare the sizes of two errors by introducing a partial order with respect to the proper cone \(R_{+}^{2}:=[0,\infty)\times[0,\infty)\), i.e. \(|E_{k^{\prime}}(\theta^{\prime})|_{w}\geq|E_{k}(\theta)|_{w}\leftrightarrow(k^{\prime}-k,|\theta^{\prime}|-|\theta|)\in R_{+}^{2}\). We say that a bosonic noise channel \(\mathcal{N}_{n}\) contains at most \((k,\theta)\) errors if all its Kraus operators have size at most \((k,\theta)\), and a state \(|\phi^{\prime}\rangle\) is at most \((k,\theta)\) far from a target state \(|\phi\rangle\) if there exists a \(\mathcal{N}_{n}\) containing at most \((k,\theta)\) errors such that \(|\phi^{\prime}\rangle\) is in the support of \(\mathcal{N}_{n}(|\phi\rangle\,\langle\phi|)\). With this quantification of error sizes, for \(\alpha\gg 1\), the four-legged cat can correct errors \(|E_{k}(\theta)|_{w}\leq(1,\pi/4)\)[36]. Fig. 2(a) illustrates the 2-dimensional error space indicated by the number of photon losses and the dephasing angle. ## III Fault-tolerance In this section, we formalize a fault-tolerance framework for the hybrid system with bosonic modes and discrete-variable ancillae in the context of concatenated codes [34]. Since the single-mode cat code alone cannot arbitrarily suppress logical errors, one needs to concatenate it with an outer qubit code for fault-tolerant quantum computing. That is, we will have three levels of gadgets. The level-0 gadgets are the physical operations; the level-1 gadgets are encoded in the cat code and the level-2 gadgets are encoded in the outer qubit code. A quantum circuit is fault-tolerantly executed using level-2 gadgets, and each level-2 gadget is executed using a level-1 circuit with multiple level-1 gadgets.
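Before turning to the fault-tolerance definitions, the loss-dephasing error metric of the preceding subsection can be illustrated numerically. The following self-contained numpy sketch builds the four-legged cat codewords of Eq. (1) in a truncated Fock space, forms loss-dephasing errors \(E_{k}(\theta)\), and evaluates the QEC-matrix decomposition of Eq. (2); the truncation dimension, the value of \(\alpha\), and the sampled errors are illustrative choices and not taken from the text.

```python
import numpy as np
from math import factorial

DIM, ALPHA = 60, 2.0          # Fock truncation and cat amplitude (illustrative)


def coherent(beta):
    n = np.arange(DIM)
    fac = np.array([float(factorial(k)) for k in n])
    return np.exp(-abs(beta) ** 2 / 2) * beta ** n / np.sqrt(fac)


def normalized(v):
    return v / np.linalg.norm(v)

# four-legged cat codewords: |mu_L> ~ |a> + |-a> + (-1)^mu (|ia> + |-ia>)
zero_L = normalized(coherent(ALPHA) + coherent(-ALPHA) + coherent(1j * ALPHA) + coherent(-1j * ALPHA))
one_L = normalized(coherent(ALPHA) + coherent(-ALPHA) - coherent(1j * ALPHA) - coherent(-1j * ALPHA))

# annihilation operator and loss-dephasing errors E_k(theta) = e^{-i theta n} a^k
a = np.diag(np.sqrt(np.arange(1, DIM)), 1).astype(complex)


def E(k, theta):
    return np.diag(np.exp(-1j * theta * np.arange(DIM))) @ np.linalg.matrix_power(a, k)

# code projector and logical Paulis on the code space
P = np.outer(zero_L, zero_L.conj()) + np.outer(one_L, one_L.conj())
Z = np.outer(zero_L, zero_L.conj()) - np.outer(one_L, one_L.conj())
X = np.outer(zero_L, one_L.conj()) + np.outer(one_L, zero_L.conj())
Y = 1j * np.outer(one_L, zero_L.conj()) - 1j * np.outer(zero_L, one_L.conj())


def qec_components(Ei, Ej):
    """Decompose P Ei^dag Ej P = c P + x X + y Y + z Z (Eq. (2));
    the KL condition requires x, y, z to (approximately) vanish."""
    M = P @ Ei.conj().T @ Ej @ P
    return tuple(np.trace(M @ O) / 2 for O in (P, X, Y, Z))

# a single loss and small phase rotations are approximately correctable:
for Ei, Ej in [(E(0, 0), E(1, 0)), (E(1, 0), E(1, 0)), (E(0, 0.1), E(0, -0.1))]:
    c, x, y, z = qec_components(Ei, Ej)
    print(np.round(c, 4), np.abs([x, y, z]))
```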
In order for each level-2 gadget (or equivalent, a level-1 circuit) to be executed with a failure rate \(O(p^{t+1})\), which suppresses the physical error rate \(p\) to certain order \(t+1\), the level-1 gadgets suffice to satisfy the following properties: First, there exists a function \(f:\mathbb{N}\rightarrow\mathbb{N}\times[0,\pi]\) that satisfies: 1. \(f(m_{1})\leq f(m_{2})\leftrightarrow m_{1}\leq m_{2}\) if \(m_{1},m_{2}\leq t\). 2. \(f(m_{1}+m_{2})=f(m_{1})+f(m_{2})\) if \(m_{1}+m_{2}\leq t\). Roughly speaking, \(f(m)\) constraints the maximal size of data errors that \(a\) faults during a protocol can propagate to. For instance, for a bosonic code that can correct phase rotations smaller than \(\theta_{\max}\), we will choose \(f(m)=(m,m\theta_{0}\mod\pi)\) for some \(\theta_{0}\in[0,\theta_{\max}/t]\), which constraints that \(m\) faults can propagate to at most \(m\) photon losses and phase rotations of size at most \(m\theta_{0}\). We illustrate such an error propagation constrained by \(f\) in Fig. 2(b). Given \(f\) and \(t\), we then define \(t\)-FT fault-tolerant gadgets, including gate, error correction, state preparation, and measurement, for the hybrid system by generalizing the definitions in Ref. [34] for qubits. We remark that, the following FT definitions are related to the choice of the function \(f\). **Definition 1** (\(t\)-FT gate).: _A gate is \(t\)-FT if it satisfies: For an input codeword that has an error of size \((k,\theta)\), if at most \(m\) faults occur during the gate with \((k,\theta)+f(m)\leq f(t)\), the output state is at most \((k,\theta)+f(m)\) far from the codespace; Furthermore, ideally decoding the output state gives the same codeword as first ideally decoding the input state and then applying the ideal gate._ Note that this condition for the gate corresponds to the combination of Property 2 and Property 3 of Ref. [34]. **Definition 2** (\(t\)-FT Qec).: _A QEC gadget is \(t\)-FT if it satisfies:_ 1. _For an input codeword with an error of size_ \((k,\theta)\)_, if at most_ \(m\) _faults occur during the protocol with_ \((k,\theta)+f(m)\leq f(t)\)_, ideally decoding the output state gives the same codeword as ideally decoding the input state._ Figure 2: Illustration of bosonic error decomposition and the error propagation function \(f(m)\). (a) The bosonic loss-dephasing error can be expanded by the basis \(E_{k}(\theta)\). By defining a partial order of the size \(E_{k}(\theta)\), the bosonic error \(E_{k}(\theta)\) with at most \((k,\theta_{m})\) error can be illustrated as the green region in the plot. Here \(k=2\). (b) Suppose \(m\) faults occur during the gate implementation. To capture the propagation of faults to the final bosonic error, we introduce a function \(f(m)=(m,m\theta_{0}\mod\pi)\) as a upper bound of the induced final loss and dephasing errors. _._ 2. _For at most_ \(m\) _faults during the protocol with_ \(f(m)\leq f(t)\)_, no matter the size of the error on the input state, the output state is at most_ \(f(m)\)_-far from the code space._ Note that conditions (i) and (ii) correspond to Properties 1 and 0 of Ref. [34], respectively. 
State preparation and measurement are special cases of FT gates: **Definition 3** (\(t\)-FT state preparation).: _A state-preparation gadget is \(1\)-FT if it satisfies: If at most \(m\leq t\) faults occur during the protocol, the output state is at most \(f(m)\)-far from the target state; Furthermore, ideally decoding the output state gives the ideal target state._ **Definition 4** (\(t\)-FT measurement).: _A measurement gadget is \(t\)-FT if it satisfies: For an input codeword that has an error of size \((k,\theta)\), if at most \(m\) faults occur during the gate with \((k,\theta)+f(m)\leq f(t)\), the measurement is equivalent to applying the ideal measurement to the input codeword._ Based on the definition of the \(t\)-FT gadgets, we have the following proposition. **Proposition 1**.: _Using \(t\)-FT level-\(1\) gadgets, any level-\(1\) circuit has a failure rate \(O(p^{t+1})\), where \(p\in[0,1)\) is the physical error rate, i.e., the probability that one fault happens in the gadget._ Proof.: We follow the extended-rectangle formalism in Ref. [34]. Without loss of generality, we consider an ideal quantum circuit in Fig. 3(e). Here we take the single-qubit level-\(1\) circuit as an example. In practice, we realize this circuit using the noisy \(t\)-FT level-\(1\) gadgets shown in Fig. 3(a). To analyze the fault-tolerance property of this circuit, we draw several dashed boxes to cover the whole circuit. The dashed boxes are called extended rectangles exRec. For a quantum gate, an extended rectangle exRec (a dashed box in Fig. 3(a)) is a composition of a front EC gadget, a gate and a rear EC gadget, i.e. \(\text{exRec}=\text{EC}\circ\text{Ga}\circ\text{EC}\). We say that any operation Op is \(t\)-good if it contains no more than \(t\) faults. In what follows, we show that if all the dashed boxes in Fig. 3(a) are \(t\)-good, we can reduce the noisy circuit to the ideal circuit following the stream in Fig. 3. To this end, we introduce the ideal decoder ID (the blue triangles in Fig. 3 and 4), which performs an ideal recovery given a bosonic code. We also introduce a \((k,\theta)\)-filter \([(k,\theta)]\)F (the orange thin rectangles in Fig. 4) which performs a projection onto the space spanned by all states that can be obtained by acting on a codeword with an error no larger than \((k,\theta)\). First of all, we notice that if the last box in Fig. 3(a) is \(t\)-good, then based on the definition of \(t\)-FT QEC and measurement, we can equivalently convert the circuit in Fig. 3(a) to (b). Then, we follow the procedures in Fig. 4 to reduce the extended gadgets of quantum gates to the ideal gadgets: Denote the faults occur in the front EC gadget, the gate gadget and the rear EC gadget to be \(s,r,s^{\prime}\), respectively. Since the dashed box is \(t\)-good, we have \(s+r+s^{\prime}\leq t\). Fig. 4(a) and (b) are equivalent due to the second requirement of FT QEC in Def. 2; (b) and (c) are equivalent due to the first requirement of the FT gate in Def. 1; (c) and (d) are equivalent due to the first requirement of FT QEC in Def. 2; (d) and (e) are equivalent due to the second requirement of the FT gate in Def. 1. Then we can transform the circuit from Fig. 3(b) to (d) using the reduction in Fig. 4. Finally, we use the property of FT state preparation to reduce Fig. 3(d) to (e). The argument is similar to the ones for the extended gadgets of quantum gates in Fig. 4. In our gadget set-up in Sec. II.2, errors represented by Figure 3: Reduction of a FT level-\(1\) circuit to the ideal circuit. 
Figure 4: Reduction of the extended rectangular to an ideal gadget. quantum jump happens independently in the level-1 gadgets. Consider a level-1 circuit composed of many \(t\)-FT level-1 gadgets that can be grouped into extented rectangles (see e.g., Fig. 3(a)). If there are at most \(t\) quantum errors in each extended rectangle, we can convert it to an ideal gadget. In that case, only when more than \(t\) errors occur in the same extended rectangles at the same time can one logical error happen, which owns a probability of \(O(p^{t+1})\). In the following, we focus on constructing FT level-1 bosonic gadgets that satisfy the above definitions by integrating bosonic quantum error correction and quantum controls. More specifically, given a bosonic code \(\mathcal{C}\) that can correct loss and phase-rotation errors, e.g. the cat code, we try to design error-corrected \(\mathcal{C}\)-encoded gadgets by carefully engineering the Hamiltonian of their composing AAOs so that physical faults propagate controllably to data errors. An analogous example in the context of qubit fault-tolerance is the use of transversal gates [46], which guarantees that a single circuit fault only propagates to a single data error per code block. However, this quantum control task is more sophisticated when involving bosonic modes as we need to consider complicated continuous evolution in a larger Hilbert space. In order for a level-1 gadget to be FT, it has to tolerate both bosonic faults and ancilla faults. Tolerance against bosonic faults can be achieved relatively easily by using the error-transparency control [47], or more generally, the error-closure control [48]. Tolerance against ancilla errors is usually harder to achieve since some DV ancilla errors tend to propagate uncontrollably and a small ancilla fault could lead to a catastrophic error in the bosonic mode. Fortunately, path-independent control [49; 29; 30] has been proposed for controlling the ancilla faults propagation to the bosonic mode. However, the previously defined PI condition [30] is more stringent than what is required by fault tolerance. Thus in the next section, we will generalize the PI control, relax its requirement, and rigorously connect its generalized version to fault-tolerance analyzed in this section. ## IV Generalized path-independent operations We first review the PI control proposed in Ref. [30]. Again, we consider a Markovian interaction between a bosonic mode \(C\) and a \(d\geq 2\)-level ancilla \(A\) described by Eq. (5), where only the ancilla noises are considered, i.e. \[\frac{d\rho}{dt}=-i[H_{AC}(t),\rho]+\sum_{j}\mathcal{D}[\sqrt{\gamma_{j}}J_{j}]\rho \tag{11}\] where \(J_{j}\) are some jump operators acting only on the ancillary system. The ancilla is initialized in some initial state \(\ket{i}_{A}\), and measured in a certain basis \(\{\ket{r}_{A}\}\) after an interaction time \(T\). Let \(\mathcal{G}(T)\) denote the joint channel generated by the Lindblad master equation in Eq. (11) for a time \(T\). With a slight abuse of notation, we may neglect the subscripts \(A\) or \(C\) labeling the ancilla or the bosonic mode for states or operators without ambiguity. We denote \(\bra{\langle r|\mathcal{G}|i\rangle}:=\bra{r}\mathcal{G}(\ket{i}\bra{i}\otimes \bullet)\ket{r}\) as the (unnormalized) channel on the bosonic mode conditioned on the initial and final ancilla state \(\ket{i}\) and \(\ket{r}\)[49]. A PI gate is defined as follows. 
**Definition 5** (PI gate).: _An ancilla-assisted gate \(\mathcal{G}(T)\) is PI in an ancilla basis \(\mathcal{B}_{A}\) with an ancilla initial state \(\ket{i}\) if for any \(\ket{r}\in\mathcal{B}_{A}\),_ \[\bra{\langle r|\mathcal{G}(T)|i\rangle}\propto U_{ri}\bullet U_{ri}^{\dagger}, \tag{12}\] _where \(U_{ri}\) is some \(r\)-dependent unitary on the bosonic mode._ The PI condition in Eq. (12) implies that each conditional channel does not contain any errors (it is a unitary channel without information loss) propagated from the ancilla, although the unconditional channel might. In other words, no information is lost to the environment by performing such a noisy operation if the ancilla measurement outcome is accessible. See Fig. 5 for an illustration of such a PI gate. Note that the PI condition in Eq. (12) is for the joint channel, which could be hard to evaluate directly. As such, Ref. [49] provided an easy-to-evaluate algebraic condition for the Hamiltonian \(H_{AC}(t)\) and the jump operators \(\{J_{j}\}\) in order for \(\mathcal{G}\) to satisfy Eq. (12), which we present in Appendix A. The PI definition in Def. 5 considers an infinite number of ancilla-faults since it examines the full \(\mathcal{G}(T)\). In practice, when the ancilla noises are small, by correcting only a finite number of ancillary faults, we can suppress the real logical error rates to a higher order. As such, we define the following finite-order PI gate that only concerns a finite truncation of \(\mathcal{G}(T)\): **Definition 6** (Finite-order PI gate).: _An ancilla-assisted gate is \(n\)-PI in an ancilla basis \(\mathcal{B}_{A}\) with an ancilla initial state \(\ket{i}\) if for any \(\ket{r}\in\mathcal{B}_{A}\) and any \(k\leq n\),_ \[\bra{\langle r|\mathcal{G}^{[k]}(T)|i\rangle}\propto U_{ri}\bullet U_{ri}^{ \dagger}, \tag{13}\] _where \(U_{ri}\) is some \(r\)-dependent unitary on the bosonic mode._ In Appendix A, we present an algebraic condition for the Hamiltonian and jump operators in order for \(\mathcal{G}\) to satisfy Eq. (13). The PI condition, even with a finite order, is demanding since it requires the conditional channels to be exactly unitary channels and thus allows no error propagation at all. However, observe that if the bosonic mode is protected by some bosonic codes, fault-tolerance can still be achieved even if we allow error propagations, as long as the propagated errors are small and correctable. Based on this observation, we generalize the PI control and present a less stringent condition that, nevertheless, still complies with the idea of fault-tolerance: **Definition 7** (GPI operation).: _Given a single-mode bosonic code with a codespace projection \(P_{c}\), we say that an ancilla-assisted operation is \(n\)-th order generalized path-independent (GPI) in an ancilla basis \(\mathcal{B}_{A}\) with an initial ancilla state \(\ket{i}\) if for any \(\ket{r}\in\mathcal{B}_{A}\) and \(k\leq n\),_ \[\bra{\bra{r}\mathcal{G}^{[k]}(T)\ket{i}}\propto(\sum_{s}K_{ri}^{s}\bullet K_{ri }^{s\dagger}), \tag{14}\] _where \(\{K_{ri}^{s}\}_{s}\) satisfies the KL condition, i.e. \(P_{c}K_{ri}^{s\dagger}K_{ri}^{s^{\prime}}P_{c}\propto P_{c}\) for any \(K_{ri}^{s},K_{ri}^{s^{\prime}}\in\{K_{ri}^{s}\}_{s}\)._ Note that any conditional channel \(\bra{\bra{r}\mathcal{G}^{[k]}(T)\ket{i}}\) can be written in the form of Eq. (14), with a set of \((r,i)\)-dependent Kraus operators \(\{K_{ri}^{s}\}_{s}\). 
The condition that \(\{K_{ri}^{s}\}_{s}\) satisfies the KL condition implies that the conditional channel \(\bra{\bra{r}\mathcal{G}^{[k]}(T)\ket{i}}\) contains only correctable errors. The GPI condition generalizes from the PI condition in Def. 6 from the following two aspects. First, the GPI condition considers any operation (any CPTP map) to the bosonic mode as a target, while the PI condition only considers unitary gates. Second, the GPI condition allows finite propagation of ancilla faults to the bosonic mode for each conditional channel, as long as the propagated errors are correctable by the bosonic code. See Fig. 1(b) for an illustration of the relation between ancilla-assisted operations, GPI operations and PI operations. In Appendix A, we present an algebraic condition for GPI operations again by only examining the Hamiltonian and jump operators. Note that we directly present the GPI condition in the finite-order form, which is of practical relevance. ### GPI examples Here, we provide two examples of GPI operations for the four-legged cat code. #### iv.1.1 GPI SNAP gate with a three-level \(\chi\)-mismatched ancilla As an example, we consider the photon-number selective phase (SNAP) gate [37] in circuit-QED systems. In the rotating frame, a three-level ancilla with levels is dispersively coupled to a bosonic mode via the Hamiltonian \[H_{0}=-(\chi_{f}\ket{f}\bra{f}+\chi_{e}\ket{e}\bra{e})\otimes a^{\dagger}a, \tag{15}\] and the ancilla is frequency-selectively driven between \(\ket{g}\) and \(\ket{f}\) states: \[H_{c}(t)=\Omega\sum_{n=0}^{N}e^{-i(n\chi_{f}t-\phi_{n})}\ket{f}\bra{g}+h.c., \tag{16}\] where \(\vec{\phi}:=(\phi_{0},\phi_{1},\cdots,\phi_{N})\) is some phase vector that we can choose. We consider ancilla jump operators \(\{J_{1}=\sqrt{\gamma}\ket{e}\bra{f},J_{2}=\sqrt{\gamma}\ket{g}\bra{e},J_{3}= \sqrt{\gamma}\sum_{s\in\{e,f\}}\Delta_{s}\ket{s}\bra{s}\}\), where \(J_{1}\) describes the ancilla decays from \(\ket{f}\) to \(\ket{e}\), \(J_{2}\) the ancilla decay from \(\ket{e}\) to \(\ket{g}\), and \(J_{3}\) an ancilla dephasing with arbitrary phases \(\Delta_{e},\Delta_{f}\in\mathbb{C}\). We will use this error model throughout the paper whenever using a three-level ancilla. In the interaction picture associated with \(H_{0}\), the system Hamiltonian reads \[\tilde{H}=\Omega\left[\ket{f}\bra{g}\otimes S(\vec{\phi})+h.c.\right], \tag{17}\] where \(S(\vec{\phi}):=\sum_{n=1}^{N}e^{i\phi_{n}}\ket{n}\bra{n}\) applies a number dependent phase shift \(\vec{\phi}\) to the bosonic mode. Note that we have performed the rotating wave approximation by assuming \(\Omega\ll\chi_{f}\). Denote \(\Delta\chi:=\chi_{f}-\chi_{e}\) as the \(\chi\) mismatch. The ancilla jump operators transform to \(\tilde{J}_{1}(t)=\sqrt{\gamma}|e\rangle\langle f|\otimes e^{-i\Delta\chi ta^{ \dagger}a},\tilde{J}_{2}(t)=\sqrt{\gamma}|g\rangle\langle e|\otimes e^{-i\chi_ {e}ta^{\dagger}a}\), and \(\tilde{J}_{3}=J_{3}\). We initialize the ancilla in \(\ket{g}\) and let the system evolve for a time \(T=\pi/2\Omega\), and measure the ancilla in the \(\{\ket{g},\ket{e},\ket{f}\}\) basis. In the absence of errors, the ancilla will end up in \(\ket{f}\) while the central system undergoes the target number-dependent phase shifts \(S(\vec{\phi})\), i.e. \(\bra{f}e^{-i\tilde{H}_{e}T}|g\rangle=S(\vec{\phi})\). 
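For concreteness, the ideal action above can be checked directly in a truncated Fock space. The following is a minimal numerical sketch, not part of the paper's analysis; it assumes Python with NumPy/SciPy, an arbitrary illustrative phase profile \(\phi_{n}\), and units in which \(\Omega=1\). It builds the interaction-picture Hamiltonian of Eq. (17), evolves for \(T=\pi/2\Omega\) with no ancilla errors, and verifies that the bosonic operator conditioned on finding the ancilla in \(\ket{f}\) is proportional to \(S(\vec{\phi})\).

```python
# Sketch: zeroth-order action of the SNAP construction in Eq. (17).
# Assumptions: truncated Fock space, illustrative phase profile, Omega = 1.
import numpy as np
from scipy.linalg import expm

Nc = 25                                     # Fock cutoff (assumption)
Omega = 1.0                                 # drive strength (sets the units)
T = np.pi / (2 * Omega)                     # gate time T = pi / (2 Omega)

phi = 0.3 * np.arange(Nc)                   # any phase profile phi_n works here
S = np.diag(np.exp(1j * phi))               # S(phi) = sum_n e^{i phi_n} |n><n|

g, e, f = (np.eye(3)[:, [k]] for k in range(3))   # ancilla levels |g>, |e>, |f>

# Interaction-picture Hamiltonian, Eq. (17): H = Omega (|f><g| (x) S + h.c.)
H = Omega * (np.kron(f @ g.T, S) + np.kron(g @ f.T, S.conj().T))
U = expm(-1j * H * T)

# Bosonic operator conditioned on the ancilla ending in |f> (no ancilla errors)
B_fg = np.kron(f, np.eye(Nc)).conj().T @ U @ np.kron(g, np.eye(Nc))

# Up to an overall factor of -i at Omega*T = pi/2, this equals S(phi)
print(np.allclose(B_fg, -1j * S))           # expected: True
```

A single ancilla decay injected at some time \(t\) during this evolution would instead leave the extra rotation \(e^{-i\Delta\chi ta^{\dagger}a}\) on the bosonic mode, which is precisely the content of the first-order conditional channels discussed next.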
With ancilla errors, we can explicitly write down the truncated conditional channels (in the interaction picture) up to the first order: \[\bra{\bra{g}\tilde{\mathcal{G}}^{[1]}(T)\ket{g}}\propto\mathcal{I},\] \[\bra{\bra{f}\tilde{\mathcal{G}}^{[1]}(T)\ket{g}}\propto S(\vec{\phi}) \bullet S^{\dagger}(\vec{\phi}),\] \[\bra{\bra{e}\tilde{\mathcal{G}}^{[1]}(T)\ket{g}}\propto\int_{t=0}^ {T}dt\ e^{-i\Delta\chi ta^{\dagger}a}S(\vec{\phi})\bullet S^{\dagger}(\vec{ \phi})e^{i\Delta\chi ta^{\dagger}a}. \tag{18}\] If there is no \(\chi\)-mismatch, i.e. \(\Delta\chi=0\), this gate is strictly a 1-PI gate (see Eq. (13)); If there is a finite \(\chi\)-mismatch, the gate is no longer PI. Nevertheless, for a bosonic code that can correct phase rotations in the range \([-\theta_{m}/2,\theta_{m}/2]\) (e.g. \(\theta_{m}=\pi/2\) for the four-legged cat), the gate is still a 1-GPI gate if \(\Delta\chi T\leq\theta_{m}\) (see Eq. (14)). Figure 5: Illustration of a PI gate. Given an ancilla initial state \(\ket{i}\) and a measurement basis \(\mathcal{B}_{A}\), the bosonic mode undergoes a \(r\)-dependent unitary \(U_{ri}\) for any ancilla measurement outcome \(\ket{r}\in\mathcal{B}_{A}\), independent of the different system paths (see e.g. the green and orange curves, where an ancilla relaxation happens for the green curve). In Appendix A, we show that one can verify the GPI property of this SNAP gate more easily without calculating the conditional channels by checking a set of algebraic conditions for the Hamiltonian and jump operators. Also, in Appendix C, we present another GPI SNAP scheme using a qutrit and an extra flag qubit, which can tolerate even larger \(\chi\)-mismatch \(\Delta\chi T\). #### iv.2.2 GPI parity measurement with a three-level \(\chi\)-mismatched ancilla As another example of GPI operations, we consider the parity measurement for correcting photon loss errors [38] using a three-level \(\chi\)-mismatched ancilla. The system Hamiltonian (in the rotating frame) is \(H_{0}=-(\chi_{e}|e\rangle\langle e|+\chi_{f}|f\rangle\langle f|)\otimes a^{ \dagger}a\) (without ancilla drives). We denote \(|\pm\rangle\) as \((|g\rangle\pm|f\rangle)/\sqrt{2}\). The ancilla is initialized in \(|+\rangle\) and measured in the basis \(\{\ket{+},\ket{-},\ket{e}\}\). In the absence of ancilla errors, the operation performs a projection onto the even (odd) subspace of the bosonic mode conditioned on the ancilla measured in \(\ket{+}\) (\(\ket{-}\)): \[\begin{split}\langle\langle+|\mathcal{G}^{[0]}|+\rangle\rangle &=P_{+}\bullet P_{+},\\ \langle\langle-|\mathcal{G}^{[0]}|+\rangle\rangle& =P_{-}\bullet P_{-},\end{split} \tag{19}\] where \(P_{\pm}:=(I\pm e^{-i\pi a^{\dagger}a})/2\) is the projection on the even/odd parity subspace of the bosonic mode. In the presence of ancilla errors \(\{J_{1}=\sqrt{\gamma}\ket{e}\bra{f}\), \(J_{2}=\sqrt{\gamma}\ket{g}\bra{e}\), \(J_{3}=\sqrt{\gamma}\sum_{s\in\{e,f\}}\Delta_{s}\ket{s}\bra{s}\), we move to the interaction picture associated with \(H_{0}\). Now, the system Hamiltonian is \(0\) and the ancilla jump operators read \(\tilde{J}_{1}(t)=\sqrt{\gamma}|e\rangle\langle f|\otimes e^{-i\Delta\chi t^{ \dagger}a},\tilde{J}_{2}(t)=\sqrt{\gamma}|g\rangle\langle e|\otimes e^{-i\chi_ {e}t^{\dagger}a},\) and \(\tilde{J}_{3}=J_{3}=\sqrt{\gamma}\sum_{s\in\{e,f\}}\Delta_{s}\ket{s}\bra{s}\), same to those in the previous SNAP example. Here, without loss of generality, we set \(\Delta_{f}=-1\). We can calculate the the noise expansion of the joint channel up to the first order (see Eq. 
(9)): \[\begin{split}\tilde{\mathcal{G}}^{[1]}(T)&=W(T,0) \bullet W^{\dagger}(T,0)+\gamma\int_{t=0}^{T}G_{1}(t,1)\bullet G_{1}^{\dagger} (t,1)\\ &+\gamma\int_{t=0}^{T}G_{1}(t,3)\bullet G_{1}^{\dagger}(t,3),\end{split} \tag{20}\] where \(W(t_{2},t_{1}):=\exp\left[-iH_{\text{eff}}(t_{2}-t_{1})\right]\) with \(H_{\text{eff}}:=-\frac{i}{2}\sum_{j=1}^{3}\tilde{J}_{j}^{\dagger}\tilde{J}_{j }=-\frac{i}{2}\gamma[(1+\left|\Delta_{e}\right|^{2})\ket{e}\bra{e}+2\ket{f} \bra{f}]\), and \(G_{1}(t,j):=W(T,t)\tilde{J}_{j}(t)W(t,0)\). Note that we have dropped the term associated with the first-order quantum jump with \(\tilde{J}_{2}\), which is zero when the ancilla starts from \(\ket{+}\). Going back to the lab frame, the truncated channel is \(\mathcal{G}^{[1]}(T)=\tilde{\mathcal{G}}^{[1]}(T)\circ[U_{0}(T)\bullet U_{0}^ {\dagger}(T)]\), where \(U_{0}(T):=e^{-iH_{0}T}\). Then we can calculate the truncated conditional channels: \[\begin{split}\langle\langle+|\mathcal{G}^{[1]}|+\rangle\rangle& =[(1-\frac{p}{2})P_{+}+\frac{p}{2}P_{-}]\bullet[(1-\frac{p}{2})P _{+}+\frac{p}{2}P_{-}],\\ &+pP_{-}\bullet P_{-}+O(p^{2})\\ \langle-|\mathcal{G}^{[1]}|+\rangle\rangle&=[(1-\frac{p}{2 })P_{-}+\frac{p}{2}P_{+}]\bullet[(1-\frac{p}{2})P_{-}+\frac{p}{2}P_{+}],\\ &+pP_{+}\bullet P_{+}+O(p^{2})\\ \langle\langle e|\mathcal{G}^{[1]}|+\rangle\rangle&= \frac{p}{2T}\int_{t=0}^{T}dte^{-i(\Delta\chi t+\pi)a^{\dagger}a}\bullet e^{i( \Delta\chi t+\pi)a^{\dagger}a}\\ &+O(p^{2}),\end{split} \tag{21}\] where \(p:=\gamma T\). For a four-legged cat with \(\alpha\gg 1\), Eq. (21) satisfies the GPI condition as long as \(\Delta\chi T<\pi/2\). Note that the first two terms in Eq. (21) imply that one might obtain wrong parity measurement outcomes with a probability \(O(p)\) if the ancilla is measured in \(|\pm\rangle\). Such effective measurement errors can be suppressed to the second order by repeating the above parity measurement three times and performing majority voting, which will be discussed in the next section when we rigorously consider fault-tolerance. ## V Connection between GPI and Fault-tolerance In this section, we establish the connection between GPI quantum control and fault-tolerance defined in Sec. III. Let the bosonic mode be encoded in some bosonic code with a code projection \(P_{c}\). **Proposition 2**.: _Each AAO contained in a \(t\)-FT level-\(1\) gadget with an ancilla initial state \(\ket{i}\) and an ancilla measurement basis \(\mathcal{B}_{A}\) has to be \(t\)-GPI with respect to \(\ket{i}\), \(\mathcal{B}_{A}\), and the code projection \(P_{c}\)._ Proof.: Any \(t\)-FT gadget requires that if any \(m\leq t\) faults occur during the protocol, the output is guaranteed to be correct. However, if one AAO is not \(t\)-GPI, there exists an ancilla measurement outcome \(r\), conditioned on which the bosonic channel (see Eq. (14)) contains non-correctable errors. As a result, the final output can no longer be guaranteed to be correct. Conversely, we can combine pieces of \(t\)-GPI operations to get a \(t^{\prime}\leq t\)-FT gadgets, as shown in Fig. 1. In order to make \(t^{\prime}=t\), there are extra requirements for the AAOs, which are typically easier to satisfy than GPI. Instead of making rigorous statements about these requirements for generic gadgets, we will make case studies when constructing concrete FT gadgets. 
Nevertheless, we comment on some high-level ideas used to design the extra ingredients that can be combined with GPI to achieve fault tolerance here: (i) Operations are error-transparent/error-closure against bosonic errors (see Appendix B); (ii) The propagation from ancilla faults to bosonic errors is linear; (iii) There exists at least one ancilla state \(|r\rangle\) such that the ideal conditional bosonic channel \(\langle\langle r|\mathcal{G}^{[0]}|i\rangle\rangle\) gives the target operation. As the first example, we construct a 1-FT \(Z\)-axis rotation \(Z(\theta)\) for the four-legged cat using the GPI SNAP gate presented in Sec. IV.1.1. To implement a \(Z(\theta)\) gate, we choose a GPI SNAP gate with \(\Delta\chi T<\pi/2\) and \[S(\vec{\theta})=P_{0}+P_{3}+e^{i\theta}(P_{2}+P_{1}), \tag{22}\] where \(P_{j}:=\sum_{i=0}|4i+j\rangle\,\langle 4i+j|\). We consider the same ancilla jump errors as those presented in Sec. IV.1.1. In addition, we consider a single jump operator \(a\) representing a single photon loss for the bosonic mode. Then we implement the 1-FT \(Z(\theta)\) gate based on Algorithm 1 below. The 3-level ancilla basis is denoted by \(|g\rangle\), \(|e\rangle\) and \(|f\rangle\) according to Eq. (15).

```
1   \(o\gets e\)   // \(o\) records the ancilla measurement outcome
2   while \(o\neq f\) do
3       Prepare the ancilla in the \(|g\rangle\) state, apply the GPI SNAP gate with \(S(\vec{\theta})\) in Eq. (22) for a time \(T=\pi/2\Omega\), and measure the ancilla in the \(|g\rangle,|e\rangle,|f\rangle\) basis with an outcome \(o\in\{g,e,f\}\).
4       if \(o=e\) then
5           Apply a phase rotation \(e^{i\Delta\chi Ta^{\dagger}a/2}\) to the bosonic mode.
```
**Algorithm 1** 1-FT \(Z(\theta)\) gate

Now, we verify that the above protocol satisfies the definition of a 1-FT gate in Def. 1. Here, we choose \(f(m)=(m,m\Delta\chi T/2)\) with \(\Delta\chi T/2<\pi/4\). Suppose the input error is of size \((k,\theta_{0})\) and there are \(m\) faults during the protocol. There are two cases where \((k,\theta_{0})+f(m)\leq f(1)\). First, \(m=0\) and \((k,\theta_{0})\leq(1,\Delta\chi T/2)\). Obviously, the gate is error-transparent to the phase rotation \(e^{-i\theta a^{\dagger}a}\), i.e. it simply commutes through the gate and remains at the output, since it commutes with the system Hamiltonian (see Eq. (15) and (16)). Moreover, as shown in Appendix B.1, the gate is also error-transparent to a single photon loss \(a\) when using the form of \(S(\vec{\phi})\) in Eq. (22). Therefore, the input \((k\leq 1,\theta\leq\Delta\chi T/2)\) error simply remains at the output and stays correctable. Second, \(m=1\) and \((k,\theta)=(0,0)\). In this case, either an ancilla dephasing, an ancilla decay, or a single photon loss occurs during the protocol. A single ancilla dephasing might leave the ancilla in \(|g\rangle\) instead of \(|f\rangle\) but does not add any error to the bosonic mode; a single ancilla decay from \(|f\rangle\) to \(|e\rangle\) only causes a correctable phase rotation with an angle \(|\delta\theta|\leq\Delta\chi T/2<\pi/4\)[50]; a single-photon loss simply remains at the output and stays correctable. As the second example, we construct a 1-FT QEC protocol for correcting a single-photon loss. Note that we will present a full EC gadget correcting both photon loss and dephasing errors in the next section. The protocol utilizes the 1-GPI parity measurement presented in Sec. IV.1.2, with a \(\chi\) mismatch \(\Delta\chi T<\pi/2\).
```
1   \(o_{i}\gets e\) for \(i\in\{1,2,3\}\)   // \(\{o_{i}\}_{i\in[3]}\) record three consecutive parity measurement outcomes
2   for \(i\gets 1\) to \(3\) do
3       while \(o_{i}=e\) do
4           Prepare the ancilla in the \(|+\rangle\) state, apply the dispersive coupling for a time \(T=\pi/\chi_{f}\), and measure the ancilla in the \(\{|+\rangle,|-\rangle,|e\rangle\}\) basis with a measurement outcome \(o_{i}\).
5           if \(o_{i}=e\) then
6               Apply a phase rotation \(e^{i\Delta\chi Ta^{\dagger}a/2}\) to the bosonic mode.
```
**Algorithm 2** 1-FT photon-loss correction

Now, we verify that the protocol in Alg. 2 satisfies the definition of a 1-FT QEC in Def. 2. Similar to the previous \(Z(\theta)\) gate example, we choose \(f(m)=(m,m\Delta\chi T/2)\). Assume there is an input error of size \((k,0)\) and \(m\) faults occur during the protocol. Note that since we are only correcting single photon losses for now, we assume the input has no dephasing errors. To verify condition (i) in Def. 2, we consider either \(k=1\), \(m=0\) or \(k=0\), \(m=1\). In the former case, the single photon loss can be perfectly corrected and the output has no error; in the latter case, we consider either an ancilla dephasing, an ancilla decay, or a single photon loss. An ancilla dephasing may flip a single parity measurement outcome but does not affect the final majority voting; a single ancilla decay only causes a phase rotation with an angle \(\leq\Delta\chi T/2<\pi/4\), which is a correctable error; a single photon loss during the protocol either gets corrected or goes undetected but remains as a correctable single photon loss at the output. For condition (ii) in Definition 2, one simply observes that a single photon loss error at the input can be detected and then corrected (although a logical error may happen when combined with another photon loss during the protocol), while a single photon loss or an ancilla decay can cause at most an \(f(1)=(1,\Delta\chi T/2)\) error that can go undetected and remains at the output.

## VI Fault-tolerant operations of four-legged cat code

In this section, we focus on the four-legged cat and construct universal 1-FT level-1 gadgets that can correct a single-photon loss and any single ancilla fault, using GPI operations. The universal operation set we consider is \[\mathcal{S}=\{\text{EC},Z(\theta),X(\phi),XX(\delta),\mathcal{P}_{|+\rangle}, \mathcal{M}_{Z},\mathcal{M}_{X}\}, \tag{23}\] including error correction, \(Z\)-axis rotation, \(X\)-axis rotation, \(XX\) rotation (\(\exp(-i\delta XX/2)\)), state preparation in the \(X\) basis, measurement in the \(Z\) basis, and measurement in the \(X\) basis. Essential elements for our construction are the GPI SNAP gate and the GPI parity measurement described in Sec. IV.1.1 and Sec. IV.1.2, respectively. Recall that both of these two operations use a three-level ancilla, which is dispersively coupled to the bosonic mode via \(-(\chi_{e}\ket{e}\bra{e}+\chi_{f}\ket{f}\bra{f})\otimes a^{\dagger}a\), potentially with a \(\chi\) mismatch \(\Delta\chi:=\chi_{f}-\chi_{e}\). Denote the gate time for the SNAP gates as \(T\) and that for a parity measurement as \(T_{P}\). Typically \(T\gg T_{P}\) since the driving strength \(\Omega\) for the SNAP gate (see Eq. (16)) is much smaller than \(\chi_{f}\) in order for the rotating-wave approximation to hold [29]. We choose \(f(m)=(m,m\Delta\chi T/2)\) with \(\Delta\chi T/2<\pi/8\)[51] for proving the fault-tolerance of the gadgets.
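To make the choice \(f(1)=(1,\Delta\chi T/2)\) concrete, the following sketch (illustrative only, not part of the paper's numerics; it assumes NumPy, a truncated Fock space, \(\alpha=2.9\) as later quoted for Fig. 7, and rotation angles up to \(\pi/8\)) checks the Knill–Laflamme condition of the four-legged cat for the error family \(\{a^{k}e^{i\theta a^{\dagger}a}:k\leq 1,|\theta|\leq\pi/8\}\). The residual violations are exponentially small in \(|\alpha|^{2}\), reflecting the approximate nature of the code.

```python
# Sketch: approximate Knill-Laflamme check for the four-legged cat against
# errors {a^k e^{i theta n} : k <= 1, |theta| <= pi/8}. Assumptions: NumPy,
# truncated Fock space, alpha = 2.9 (the sweet spot quoted for Fig. 7).
import numpy as np
from math import factorial

Nc, alpha = 60, 2.9
a = np.diag(np.sqrt(np.arange(1, Nc)), k=1)          # annihilation operator

def coherent(beta):
    v = np.array([beta**k / np.sqrt(factorial(k)) for k in range(Nc)], complex)
    return v / np.linalg.norm(v)

# Codewords: photon number 0 (mod 4) for |0_L>, 2 (mod 4) for |1_L>
zero_L = sum(coherent(p * alpha) for p in (1, -1, 1j, -1j))
one_L = sum(s * coherent(p * alpha) for s, p in ((1, 1), (1, -1), (-1, 1j), (-1, -1j)))
zero_L, one_L = zero_L / np.linalg.norm(zero_L), one_L / np.linalg.norm(one_L)
W = np.column_stack([zero_L, one_L])                 # isometry into the codespace

def rot(theta):                                      # small dephasing-type rotation
    return np.diag(np.exp(1j * theta * np.arange(Nc)))

errors = [rot(th) @ np.linalg.matrix_power(a, k)
          for k in (0, 1) for th in (-np.pi / 8, 0.0, np.pi / 8)]

# KL condition: W^dag E_i^dag E_j W ∝ I_2 for all pairs, up to residuals that
# are exponentially small in |alpha|^2 (the four-legged cat is an approximate code)
worst = 0.0
for Ei in errors:
    for Ej in errors:
        M = W.conj().T @ Ei.conj().T @ Ej @ W
        worst = max(worst, np.linalg.norm(M - np.trace(M) / 2 * np.eye(2)))
print("max KL violation:", worst)                    # should be << 1
```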
Unless specially noted, all the SNAP gates and parity measurement gadgets we use have a \(\chi\) mismatch \(\Delta\chi\). Similar to the previous sections, we consider \(\{a,\ket{e}\bra{f},\ket{g}\bra{e},\sum_{s\in\{e,f\}}\Delta_{s}\ket{s}\bra{s}\}\) as the errors, representing a single photon loss, an ancilla decay from \(\ket{f}\) to \(\ket{e}\), an ancilla decay from \(\ket{e}\) to \(\ket{g}\), and an ancilla dephasing, respectively. ### Z-axis rotation A 1-FT \(Z\)-axis rotation with an arbitrary angle (\(\theta\)) using GPI SNAP gate is presented in Alg. 1 in the previous section. Note that a 1-FT logical gate using strictly PI SNAP gate (with no \(\chi\) mismatch) has been experimentally demonstrated for a small binomial bosonic code [29]. Here, the main difference is that our protocol allows a finite \(\chi\) mismatch. ### X-axis rotation In the large \(\alpha\) limit, a \(X\)-axis rotation is given by \[X(\phi)\approx e^{i\phi}\ket{C_{\alpha}^{+}}\bra{C_{\alpha}^{+}}+\ket{C_{i \alpha}^{+}}, \tag{24}\] where \(\ket{C_{\beta}^{\pm}}:=c_{\beta}^{\pm}(\ket{\beta}\pm\ket{-\beta})\), with \(c_{\beta}^{\pm}\) being normalization constants. We implement \(X(\phi)\) by adding a phase \(\phi\) to the subspace spanned by the two coherent states \(\ket{\alpha}\) and \(\ket{-\alpha}\). As illustrated in Fig. 6(a), we first displace the cavity by \(\alpha\) and apply a phase shift to the vacuum \(S(\vec{\phi})=e^{i\phi}\ket{0}\bra{0}+I-\ket{0}\bra{0}\) using the SNAP gate (see Sec. IV.1.1). Then we displace the cavity by \(-2\alpha\) and apply another \(S\). Finally, we displace the cavity by \(\alpha\) back to the codespace. The joint operation is: \[\begin{split} U_{X}&=D(\alpha)S(\vec{\phi})D(-2 \alpha)S(\vec{\phi})D(\alpha)\\ &=[D(\alpha)S(\vec{\phi})D(\alpha)^{\dagger}][D(-\alpha)S(\vec{ \phi})D(\alpha)^{\dagger}]\\ &\approx e^{i\theta}P_{\pm\alpha}+I-P_{\pm\alpha},\end{split} \tag{25}\] where \(P_{\pm\alpha}:=\ket{\alpha}\bra{\alpha}+\ket{-\alpha}\bra{-\alpha}=\ket{C_{ \alpha}^{+}}\bra{C_{\alpha}^{+}}+\ket{C_{\alpha}^{-}}\bra{C_{\alpha}^{-}}\). We now show that this gate is 1-FT if the \(\chi\)-mismatch during the SNAP gates is zero. Assume there is a \((k,\delta\theta)\) input error and \(m\) faults occur during the gate. Again, for 1-FT gate (see Def. 1), we only need to consider either \((k=0,\delta\theta=0)\), \(m=1\), or \((k\leq 1,\delta\theta\leq\Delta\chi T/2)\), \(m=0\). First, we consider a single fault occurring during \(U_{X}\). A single-photon loss simply commutes through the entire gate since the two SNAP gates \(S\) are error-transparent (see Appendix B) and \(D(\alpha)\) commutes with \(a\) up to a constant. A single-ancilla decay or dephasing during the \(S\) gate does not cause any error to the bosonic mode when assuming perfect \(\chi\) matching. Therefore, a single fault during the gate causes at most a \((1,0)<f(1)\)-error at the output, which is correctable. Second, we consider a \((k\leq 1,\delta\theta\leq\Delta\chi T/2)\) input error \(e^{i\delta\theta a^{\dagger}a}a^{k}\). We first notice that \(U_{X}e^{i\delta\theta a^{\dagger}a}a^{k}P_{c}\propto a^{k}U_{X}e^{i\delta \theta a^{\dagger}a}P_{c}\) since \(U_{X}\) is error-transparent for \(a^{k}\) (see Eq. (25)). Here, \(P_{c}:=\ket{+_{L}}\bra{+_{L}}+\ket{-_{L}}\bra{-_{L}}\approx C_{\alpha}^{+} \bra{C_{\alpha}^{+}}+\ket{C_{i\alpha}^{+}}+\ket{C_{i\alpha}^{+}}\bra{C_{i \alpha}^{+}}\) is the projector onto the code space of the four-legged cat. 
Then we only need to make sure that \(U_{X}\) is also error-transparent to dephasing \(e^{i\delta\theta a^{\dagger}a}\). Let \(E:=U_{X}e^{i\delta\theta a^{\dagger}a}U_{X}^{\dagger}\) be the effective error that \(e^{i\delta\theta a^{\dagger}a}\) propagates to after the gate. \(E\) satisfies \[\begin{split} EP_{c}&=e^{i\delta\theta a^{\dagger}a}P_{ c}+(1-e^{-i\phi})(P_{\pm\alpha}-I)e^{i\delta\theta a^{\dagger}a}\ket{C_{ \alpha}^{+}}\bra{C_{\alpha}^{+}}\\ &+(e^{i\phi}-1)P_{\pm\alpha}e^{i\delta\theta a^{\dagger}a}\ket{C_{ i\alpha}^{+}}\bra{C_{i\alpha}^{+}},\end{split} \tag{26}\] where we can see that \(U_{X}\) is not error-transparent against the dephasing due to the last two terms of Eq. (26). Fortunately, we can make it approximately error-transparent by modifying the SNAP gate: \[S(\vec{\phi})\to e^{i\phi}(P_{[s]})+I-P_{[s]}, \tag{27}\] where \(P_{[s]}:=\sum_{i=0}^{s}\ket{i}\bra{i}\) is the projection onto the \(s\)-neighborhood of vacuum. Then the gate unitary becomes \(U_{X}\to e^{i\phi}P_{\pm\alpha,s}+I-P_{\pm\alpha,s}\), where \(P_{\pm\alpha,s}:=D(\alpha)P_{[s]}D^{\dagger}(\alpha)+D(-\alpha)P_{[s]}D^{ \dagger}(-\alpha)\) is the projection onto a neighborhood of \(\ket{\alpha}\) and \(\ket{-\alpha}\). Now, the effective error for the dephasing error becomes \[\begin{split} EP_{c}&=e^{i\delta\theta a^{\dagger}a}P_{ c}+(1-e^{-i\phi})(P_{\pm\alpha,s}-I)e^{i\delta\theta a^{\dagger}a}\ket{C_{ \alpha}^{+}}\bra{C_{\alpha}^{+}}\\ &+(e^{i\phi}-1)P_{\pm\alpha,s}e^{i\delta\theta a^{\dagger}a}\ket{C_ {i\alpha}^{+}}\bra{C_{i\alpha}^{+}}.\end{split} \tag{28}\] For \(\ket{\delta\theta}\leq\Delta\chi T/2<\pi/8\), we can choose \(s=O(\ket{\alpha^{2}})\) such that the last two terms vanish in the \(\alpha\gg 1\) limit, i.e., \[\begin{split}\langle C_{ae^{i\delta\theta}}^{+}|P_{\pm\alpha,s}|C_ {ae^{i\delta\theta}}^{+}\rangle\to 1,\\ \langle C_{i\alpha e^{i\delta\theta}}^{+}|P_{\pm\alpha,s}|C_{i \alpha e^{i\delta\theta}}^{+}\rangle\to 0.\end{split} \tag{29}\] Then we have \(EP_{c}\approx e^{i\delta\theta a^{\dagger}a}P_{c}\) and the gate is error-transparent to dephasing as well. Note that 1-fault-tolerance can no longer be rigorously attained (even in the larger-\(\alpha\) limit) if using SNAP gates \(S\) with a finite \(\chi\)-mismatch. Taking the second \(S\) gate as an example, and suppose it has a \(\chi\)-mismatch \(\Delta\chi^{\prime}\), a single ancilla decay could cause a \(e^{i\delta\theta^{\prime}a^{\dagger}a}\) phase rotation with \(\ket{\delta\theta^{\prime}}\leq\Delta\chi^{\prime}T/2\) after \(S\), which propagates to \(e^{-i\delta\theta^{\prime}[a^{\dagger}a+\alpha(a+a^{\dagger})+\alpha^{2}]}\) after the later displacement. The extra displacement error after the gate is uncorrectable. Thus a single ancilla fault during the \(X\)-rotation can cause a first-order logical error with a probability \(cp\), where \(c\) is a constant depending on \(\Delta\chi^{\prime}T\). Nevertheless, if \(\Delta\chi^{\prime}T\) is small enough, the coefficient \(c\) can be made comparable or even smaller than \(p\), and we can still achieve good error suppression in practical regimes, as is shown in later numerical results in Fig. 7(a). 
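The action of \(U_{X}\) in Eq. (25), with the modified SNAP of Eq. (27), can also be checked numerically. The sketch below is illustrative rather than a reproduction of the paper's simulations: it assumes NumPy/SciPy, a truncated Fock space, and placeholder values of \(\alpha\), \(\phi\) and the neighborhood size \(s\), and verifies that \(U_{X}\) imprints the phase \(e^{i\phi}\) on \(\ket{C_{\alpha}^{+}}\) while leaving \(\ket{C_{i\alpha}^{+}}\) essentially untouched.

```python
# Sketch: check that U_X = D(a) S D(-2a) S D(a) (Eqs. (25), (27)) imprints
# e^{i phi} on |C_alpha^+> and acts trivially on |C_{i alpha}^+>.
# Assumptions: NumPy/SciPy, truncated Fock space, placeholder alpha, phi, s.
import numpy as np
from math import factorial
from scipy.linalg import expm

Nc, alpha, phi, s = 80, 2.5, 0.7, 3
a = np.diag(np.sqrt(np.arange(1, Nc)), k=1)

def D(beta):                                          # displacement operator
    return expm(beta * a.conj().T - np.conj(beta) * a)

def coherent(beta):
    v = np.array([beta**k / np.sqrt(factorial(k)) for k in range(Nc)], complex)
    return v / np.linalg.norm(v)

def cat_plus(beta):                                   # |C_beta^+> ~ |beta> + |-beta>
    v = coherent(beta) + coherent(-beta)
    return v / np.linalg.norm(v)

# Modified SNAP of Eq. (27): phase e^{i phi} on the s-neighborhood of vacuum
P_s = np.diag((np.arange(Nc) <= s).astype(complex))
S = np.exp(1j * phi) * P_s + (np.eye(Nc) - P_s)

U_X = D(alpha) @ S @ D(-2 * alpha) @ S @ D(alpha)

c_plus, c_iplus = cat_plus(alpha), cat_plus(1j * alpha)
print(c_plus.conj() @ U_X @ c_plus)     # ~ exp(1j * phi): branch supported on |+-alpha>
print(c_iplus.conj() @ U_X @ c_iplus)   # ~ 1: branch supported on |+-i alpha>
```

The small deviation of the second overlap from unity shrinks as \(|\alpha|^{2}\) grows, consistent with the finite-size error floor discussed around Fig. 7(a).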
### XX rotation For large \(\alpha\), the \(XX\) rotation reads \[\begin{split} XX(\delta)&\approx e^{i\delta}(\ket{ C_{\alpha}^{+},C_{\alpha}^{+}}\bra{C_{\alpha}^{+},C_{\alpha}^{+}}+\ket{C_{i \alpha}^{+},C_{i\alpha}^{+}}\bra{C_{i\alpha}^{+},C_{i\alpha}^{+}})\\ &\quad+(\ket{C_{\alpha}^{+},C_{i\alpha}^{+}}\bra{C_{\alpha}^{+},C_{i\alpha}^{+}}+\ket{C_{i\alpha}^{+},C_{\alpha}^{+}}\bra{C_{i\alpha}^{+},C_{ \alpha}^{+}}).\end{split} \tag{30}\] We implement \(XX(\delta)\) by adding a phase \(\delta\) to the subspace spanned by \(\ket{\pm\alpha,\pm\alpha}\) and \(\ket{\pm i\alpha,\pm i\alpha}\). As illustrated in Fig. 6(b), we interfere the two modes through a \(50:50\) beamsplitter, apply the number dependent phase shift \(S(\vec{\delta})=e^{-i\delta}\ket{0}\bra{0}+I-\ket{0}\bra{0}\) to both ports, and then interfere through another \(50:50\) beamsplitter: \[U_{XX}=\text{BS}(\frac{\pi}{2})^{\dagger}(S\otimes S)\text{BS}(\frac{\pi}{2}), \tag{31}\] where \(\text{BS}(\theta):=\exp[\frac{\theta}{2}(ab^{\dagger}-a^{\dagger}b)]\) with \(a\) and \(b\) denoting the annihilation operator of the two involved modes, respectively. To understand the effect of \(U_{XX}\), we consider a coherent-state input \(\ket{\alpha,\beta}\). The first BS interferes the two coherent states: \[\text{BS}\ket{\alpha,\beta}=\ket{(\alpha+\beta)/\sqrt{2},(\alpha-\beta)/\sqrt {2}}, \tag{32}\] We take the approximation \(S\ket{\gamma}\approx e^{-i\delta\openone[\gamma=0]}\ket{\gamma}\), where \(\openone[x]\) is the indicator function. Then the two SNAP gates in Eq. (31) add a nontrivial phase to the R.H.S. of Eq. (32) if \(\alpha=\beta\) or \(\alpha=-\beta\): \[(S\otimes S)\text{BS}\ket{\alpha,\beta}=e^{-i\delta\openone[\alpha=\beta]+ \openone[\alpha=-\beta]}\ket{\frac{\alpha+\beta}{\sqrt{2}},\frac{\alpha-\beta }{\sqrt{2}}}. \tag{33}\] Finally, the last BS restores the input coherent state potentially with an extra phase: \[U_{XX}\ket{\alpha,\beta}=e^{-i\delta\openone[\alpha=\beta]+\openone[\alpha=- \beta]}\ket{\alpha,\beta}. \tag{34}\] We remark that, when \(\alpha\) and \(\beta\) are only chosen from a set of discrete values \(\{\alpha_{i}\}_{i}\) which are well-separated in the phase space, Eq. (34) provides an exact expression of the action of \(U_{XX}\). The rigorous form of \(U_{XX}\) is given in Eq. (101) in Appendix B.3. To conclude, a two-mode coherent state accumulates a nontrivial phase if and only if the two coherent states have matched amplitudes and aligned/anti-aligned phases. Let \(P_{\pm(i)\alpha}\) be the projection onto a four-dimensional subspace spanned by \(\ket{\alpha},\ket{-\alpha},\ket{i\alpha},\ket{-i\alpha}\), we then have \[\begin{split}& P_{\pm(i)\alpha}\otimes P_{\pm(i)\alpha}U_{XX}P_{ \pm(i)\alpha}\otimes P_{\pm(i)\alpha}\\ &=e^{i\delta}(P_{\pm\alpha}\otimes P_{\pm\alpha}+P_{\pm i\alpha} \otimes P_{\pm i\alpha})\\ &\quad\quad+(P_{\pm\alpha}\otimes P_{\pm i\alpha}+P_{\pm i \alpha}\otimes P_{\pm\alpha}).\end{split} \tag{35}\] Note that Eq. (35) implies \(P_{c}^{(AB)}U_{XX}P_{c}^{(AB)}=XX(\theta)\) where \(P_{c}^{AB}=P_{c}^{(A)}\otimes P_{c}^{(B)}\) is the projector onto the collective code space of 4-legged cat on bosonic modes \(A\) and \(B\). Now, we prove this \(XX(\theta)\) gate is 1-FT according to Def. 1. We first consider the case where there is an input error \(e^{i(\delta\theta_{a}a^{\dagger}a+\delta\theta_{b}b^{\dagger}b)}b^{k_{a}}a^{k_ {a}}\) with \(k_{a},k_{b}\leq 1\) and \(\ket{\delta\theta_{a}},\ket{\delta\theta_{b}}\leq\Delta\chi T/2<\pi/8\), but no fault during the gate. 
\(b^{k_{b}}a^{k_{a}}\) simply commutes through the gate when acting on the code space due to the error-transparency form of \(U_{XX}\) in Eq. (35). Similar to the proof for the \(X\)-axis rotation in Eq. (28), one can show that \(U_{XX}\) is also approximately error-transparent to the phase rotation \(e^{i(\delta\theta_{a}a^{\dagger}a+\delta\theta_{b}b^{\dagger}b)}\) by replacing \(S\to e^{-i\delta}(\sum_{i=0}^{s}\ket{i}\bra{i})+I-(\sum_{i=0}^{s}\ket{i}\bra{i})\). We put the proof in Appendix B.3. As a result, the input error commutes through the gate and remains correctable. To complete the proof that \(U_{XX}\) is 1-FT, we also need to show that for a perfect input state and any single fault during \(U_{XX}\), each of the two output modes has an error of size at most \(f(1)=(1,\Delta\chi T/2)\). As shown in Appendix B.3, a single-photon loss during the gate propagates to an error of the form \(c_{1}a+c_{2}b\), where \(c_{1},c_{2}\in\mathbb{C}\), due to the error transparency of the SNAP gates and the error closure of the BSs. By using a \(\chi\)-matched ancilla for each SNAP gate, any single ancilla fault does not propagate to any bosonic data errors. We note that, similar to the \(X\)-axis rotation, the \(XX\) rotation is not strictly 1-FT if there is a finite \(\chi\)-mismatch when executing the SNAP gates, as an induced phase rotation would propagate to uncorrectable errors after the last BS. Nevertheless, as we show numerically in Fig. 7, a high-fidelity \(XX\) rotation can still be realized in practical regimes even with a finite but small \(\chi\)-mismatch.

### X-basis state preparation

The \(+1\) \(X\)-basis eigenstate is a two-legged cat state with an even photon parity, \(\ket{+_{L}}=\ket{C_{\alpha}^{+}}=c_{\alpha}^{+}(\ket{\alpha}+\ket{-\alpha})\). Observe that \(\ket{+_{L}}\propto P_{+}\ket{\alpha}\), i.e. the even-parity projection of a coherent state \(\ket{\alpha}\). Thus, we can prepare the even cat state by first preparing a coherent state \(\ket{\alpha}\), and then performing a non-destructive parity measurement to project it to an even cat state (up to a parity frame update). For 1-FT state preparation, unlike the 1-FT photon-loss correction protocol in Alg. 2, we do not need to repeat the parity measurement three times, since the gadget is allowed to output a state with an error of up to \(f(1)=(1,\Delta\chi T/2)\) when up to a single fault occurs during the parity measurement (see Def. 3). Concretely, we implement the following protocol:

```
1   Prepare the bosonic mode in the coherent state \(\ket{\alpha}\).
2   \(o\gets e\)   // \(o\) records the parity measurement outcome
3   while \(o=e\) do
4       Prepare the ancilla in the \(\ket{+}\) state, apply the dispersive coupling for a time \(T=\pi/\chi_{f}\), and measure the ancilla in the \(\{\ket{+},\ket{-},\ket{e}\}\) basis with a measurement outcome \(o\).
5       if \(o=e\) then
6           Apply a phase rotation \(e^{i\Delta\chi Ta^{\dagger}a/2}\) to the bosonic mode.
7   Apply a parity correction if \(o=-\).
```
**Algorithm 3** \(1\)-FT \(X\)-basis state preparation

Note that the \(X\)-basis state preparation here allows a finite \(\chi\)-mismatch.

### Z-basis measurement

The \(Z\)-basis measurement amounts to measuring the photon number modulo \(4\). In order to obtain the correct logical measurement outcome in the presence of a single-photon loss, as required by Def. 4, we insert a non-destructive parity measurement before each logical \(Z\) measurement. The full FT protocol is presented in Alg. 4.
Note that each modulo-\(4\) photon number measurement \(o_{i,b}\) is conditioned on the parity measurement outcome \(o_{i,a}\), i.e. we distinguish the photon number between \(0\mod 4\) and \(2\mod 4\) for even parity (\(o_{i,a}=+\)) and between \(3\mod 4\) and \(1\mod 4\) for odd parity (\(o_{i,a}=-\)). To verify that the \(1\)-FT measurement condition in Def. 4 holds, one simply observes that a single photon loss before the measurement can be captured by the parity measurements, and any single fault during the measurement protocol can only cause at most one measurement error on one of \(\{o_{i,b}\}_{i=1,2,3}\), and thus does not affect the majority voting. Note that the \(Z\)-basis measurement here can also allow a finite \(\chi\)-mismatch between the ancilla and the bosonic mode, as dephasing errors commute with the measurements.

```
1   for \(i\gets 1\) to \(3\) do
2       \(o_{i,a}\gets e\)
3       while \(o_{i,a}=e\) do
4           Prepare the ancilla in the \(\ket{+}\) state, apply the dispersive coupling for a time \(T=\pi/\chi_{f}\), and measure the ancilla in the \(\{\ket{+},\ket{-},\ket{e}\}\) basis with a measurement outcome \(o_{i,a}\).
5       \(o_{i,b}\gets e\)
6       while \(o_{i,b}=e\) do
7           if \(o_{i,a}=+\) then
8               Prepare the ancilla in the \(\ket{+}\) state, apply the dispersive coupling for a time \(T=\pi/2\chi_{f}\), and measure the ancilla in the \(\{\ket{+},\ket{-},\ket{e}\}\) basis with a measurement outcome \(o_{i,b}\).
9           else
10              Prepare the ancilla in the \(\ket{+}\) state, apply the dispersive coupling for a time \(T=\pi/2\chi_{f}\), apply an ancilla phase rotation \(e^{-i\frac{\pi}{2}\ket{f}\bra{f}}\), and measure the ancilla in the \(\{\ket{+},\ket{-},\ket{e}\}\) basis with a measurement outcome \(o_{i,b}\).
11  Obtain the logical measurement outcome as the majority vote of \(\{o_{i,b}\}_{i=1,2,3}\).
```
**Algorithm 4** \(1\)-FT \(Z\)-basis measurement

### X-basis measurement

The \(X\)-basis measurement amounts to distinguishing the phase of the coherent states modulo \(\pi\). We achieve this by interfering the mode \(a_{i}\) with another mode \(b_{i}\) in a coherent state \(\ket{\alpha}\) through a \(50:50\) beam splitter and measuring whether the two output modes \(a_{o},b_{o}\) have fewer than \(s\) photons. We obtain a logical \(-\) if both modes have more than \(s\) photons and a logical \(+\) otherwise, i.e. we implement the following POVMs:
\[M_{-}=\Big[I_{a_{o}}-\sum_{i=0}^{s}\ket{i}_{a_{o}}\bra{i}\Big]\otimes\Big[I_{b_{o}}-\sum_{i=0}^{s}\ket{i}_{b_{o}}\bra{i}\Big]\approx\Big(I_{a_{i}}-\sum_{i=0}^{s}\ket{\alpha,i}_{a_{i}}\bra{\alpha,i}\Big)\Big(I_{a_{i}}-\sum_{i=0}^{s}\ket{-\alpha,i}_{a_{i}}\bra{-\alpha,i}\Big), \tag{36}\]
\[M_{+}=I-M_{-}\approx\sum_{i=0}^{s}\ket{\alpha,i}_{a_{i}}\bra{\alpha,i}+\sum_{i=0}^{s}\ket{-\alpha,i}_{a_{i}}\bra{-\alpha,i},\]
where each subscript labels the mode that a state or an operator belongs to. Measuring whether one mode has fewer than \(s\) photons can be realized by dispersively coupling it to a qubit, driving the qubit from \(\ket{g}\) to \(\ket{e}\) conditioned on the mode having fewer than \(s\) photons, and measuring the qubit in the \(\ket{g},\ket{e}\) basis. In the interaction picture associated with the dispersive coupling, the Hamiltonian reads \[\tilde{H}_{AC}=\Omega\left(\ket{e}\bra{g}\otimes P_{[s]}+h.c.\right). \tag{37}\] Recall that \(P_{[s]}:=\sum_{i=0}^{s}\ket{i}\bra{i}\).
In the absence of errors, the \(0\)-th order conditional operations are \[\begin{split}\langle\langle e|\mathcal{G}^{[0]}(T)\ket{g}\rangle \rangle&=P_{[s]}\bullet P_{[s]}+O(p),\\ \langle\langle g|\mathcal{G}^{[0]}(T)\ket{g}\rangle\rangle& =(I-P_{[s]})\bullet(I-P_{[s]})+O(p).\end{split} \tag{38}\] A single fault will affect the measurement outcome or cause a bosonic error diagonal in the Fock basis. The former can be tolerated by repeating the above measurement three times and performing majority voting, while the latter simply commutes with the measurements and does not affect the (later) measurement outcome. To check this \(X\)-basis measurement scheme is 1-FT according to Def. 4, we also need to verify that the measurement outcome is correct for any input error \((k,\theta)\leq(1,\Delta\chi T/2)\). First, a single-photon loss does not affect the measurement outcome since \(a\) does not change the phase of any coherent states. Second, a small phase rotation rotates \(\ket{\alpha}\) to \(\ket{\alpha e^{i\theta}}\). Similar to the argument for the \(X\)-axis rotation, the \(X\)-basis measurement outcome is correct as long as the POVM \(M_{+}\) captures \(\ket{\pm\alpha e^{i\theta}}\) but not \(\ket{\pm i\alpha e^{i\theta}}\). ### Error correction To correct both loss and dephasing errors, i.e. data errors with \((k>0,\ket{\theta}>0)\), we employ a Knill-type QEC [52] using a teleportation circuit shown in Fig. 6(c). A fresh ancilla bosonic mode \(b\) is initialized in \(\ket{+}\) state and gets entangled with the data mode \(a\) via a \(XX\) rotation along with singe-mode rotations. The data mode \(a\) is then measured in the \(Z\) basis, where the measurement outcome is used to apply a feedback \(Z\) operation on the \(b\) mode. All the gadgets here are 1-FT using previous constructions. Moreover, they are error-transparent to any input error on the \(a\) mode smaller than \(f(1)=(1,\Delta\chi T/2)\). Therefore, the input data error simply commutes through all the gates and does not propagate to the \(b\) mode. Furthermore, the 1-FT \(Z\)-basis measurement is correct for an error smaller than \(f(1)\). Therefore, such an input error can be corrected by the EC gadget. To verify the 1-FT EC conditions, we need to further show that a single fault during the teleportation circuit only leads to a correctable residual error of size at most \(f(1)\) at the output of the \(b\) mode. Since we are using 1-FT gates, the output for the \(a\) or \(b\) mode (before the \(Z\) measurement) has an error at most \(f(1)\). As shown in Fig. 7(a), we numerically evaluate the average infidelity of the teleportation gadget in Fig. 6(c). In the absence of \(\chi\) mismatch (see the blue curve), we show that it has an error rate that scales as \((\kappa/\Omega)^{2}\), manifesting the fault tolerance of its composing gadgets, which cover the entire \(\mathcal{S}\) other than the \(X\)-basis measurement. There is an error floor in the low \(\kappa/\Omega\) regime, which is exponentially suppressed by \(|\alpha|^{2}\), due to the finite-size imperfection of the \(X\) rotation and the \(XX\) rotation. In the presence of a finite \(\chi\) mismatch, a rigorous second Figure 6: Illustration of the \(X\)-axis rotation (a), \(XX\) rotation (b), and teleportation-based EC (c) in the level-1 gadgets \(\mathcal{S}\) for the four-legged cat. Figure 7: (a) Average infidelities of an error-correction gadget using teleportation in Fig. 
6(c) as a function of \(\gamma/\Omega\) with perfect \(\chi\) matching (blue line) or finite \(\chi\) mismatches (orange line). Here, we use experimental parameters from Ref. [29] for the coherent interaction strengths \(\chi_{f}=2\pi\times 1\)MHz, \(\Omega=0.3\chi_{f}\), \(g_{\text{ss}}=2\chi_{f}\). We consider single-photon loss, ancilla decay from \(f\) to \(e\), ancilla decay from \(e\) to \(g\), and ancilla dephasing \(\mathcal{D}[\ket{e}\bra{e}+2\ket{f}\bra{f}]\) with rates \(\kappa\), \(\gamma_{f\to e}\), \(\gamma_{e\to g}\), and \(\gamma_{\phi}\), respectively. We assume the ancilla error rates are much larger than the cavity loss rate and set \(\gamma_{f\to e}=\gamma_{e\to g}=\gamma\), \(\gamma_{\phi}=\gamma/4\), and \(\kappa=\gamma/10\)[29]. We choose \(\alpha=2.9\), which is a sweet spot for the four-legged cat that minimizes the back action of photon loss [43]. (b) The accumulation of average infidelity and decay of mean photon number \(\langle a^{\dagger}a\rangle\) for 40 rounds of repeated parity measurements (infidelities are shown for every two rounds) followed by teleportation. We use the same coherent parameters \(\chi_{f},\Omega\) and \(g_{\text{BS}}\) as in (a), a finite \(\chi\) mismatch \(\Delta\chi=\Omega/10\), and the experimental error rates from Ref. [29]: \(\kappa=2\)KHz, \(\gamma_{f\to e}=\gamma_{e\to g}=\gamma=20\)KHz and \(\gamma_{\phi}=5\)KHz (with the same ratios between these error rates are in (a)). The teleportation pumps energy into the system and suppresses the random phase rotations caused by \(\Delta\chi\). The three Wigner plots depict the density matrix at the input, before and after the teleportation respectively. order error suppression can no longer be attained due to the induced random phase rotations during the \(X\)- and \(XX\)-rotation gates. However, sufficient error suppression can still be achieved with a finite but small \(\chi\)-mismatch in practically relevant regimes (see the orange and green curves). In practice, where photon loss is typically the predominant error source, repeated parity measurements that correct photon losses (see Alg. 2) suffice for extending the lifetime of the cats. Such a robust memory that reaches the break-even point has been experimentally demonstrated [22]. However, only parity measurements are not enough to protect the cats during long computational tasks as the mean photon number would keep decaying (the parity measurement and gates in \(\mathcal{S}\) are energy-preserving operations that commute with \(a^{\dagger}a\)) due to deterministic energy loss to the environment. We propose to solve this problem by inserting the teleportation gadget periodically in between certain rounds of parity measurements, which pumps energy into the system and restores the amplitude of the cats. Furthermore, the teleportation can suppress the accumulation of random phase rotations if, for example, there is some finite \(\chi\)-mismatch or small cavity dephasing errors \(\kappa_{\phi}\mathcal{D}[a^{\dagger}a]\). We demonstrate such effects numerically in Fig. 7(b). ## VII Concatenated QC and Hardware-Efficient Fault-Tolerance With the set of 1-FT level-1 gadgets in \(\mathcal{S}\), we can concatenate the level-1 logical qubits (four-legged cats) with a level-2 qubit code for arbitrary error suppression. We show such a concatenated architecture in Fig. 8. The basic elements for each level-1 qubit are simply a storage bosonic mode and a three-level ancilla that are dispersively coupled. 
The ancilla is used for the fault-tolerant operation of the bosonic qubit in each storage mode, including state preparation, photon-loss correction, gates, and measurements (see Sec. VI). In addition, a small number of extra bosonic modes shared by neighboring storage modes, which we refer to as reservoir modes, are used to pump energy into the storage modes periodically via teleportation (see Fig. 6(c)). The level-2 QEC requires certain couplings between level-1 qubits. Importantly, we can achieve this by introducing only BS coupling between nearest-neighbor storage bosonic modes (see Fig. 8) for 2D stabilizer codes. The level-2 syndrome-extraction circuits consist of non-destructive measurements of high-weight Pauli operators, featuring a sequence of two-qubit entangling gates such as the CNOT gate. In Fig. 9(a), we show how one can obtain a level-1 CNOT gate using 1-FT single-mode and two-mode rotations in \(\mathcal{S}\). Although the compiled circuit is relatively long, with a depth of 6, we note that one can usually reduce the depth per CNOT gate when considering the full stabilizer measurement circuits. As an example, we can measure a weight-\(n\) \(X\) Pauli operator using a circuit of depth \(2n+4\) (see Fig. 9(b)). We leave the evaluation and optimization of the error rates of level-1 gates, e.g. the CNOT gate, as well as the threshold and resource overhead of concatenated codes, to future work. Nevertheless, we remark that since each CNOT gate (see Fig. 9(a)) uses gadgets similar to those for teleportation (see Fig. 6(c)), and each CNOT gate in a syndrome-extraction circuit (see Fig. 9(b)) has, on average, a depth similar to that of the teleportation, we expect the CNOT gates to have an error rate similar to that of the teleportation shown in Fig. 7(b). Using these rough estimates, a gate error rate below \(10^{-2}\), which corresponds to the gate error threshold for the surface codes, is achievable using the current circuit-QED hardware. To sum up, our construction of \(\mathcal{S}\) in this work enables a practical, hardware-efficient architecture for fault-tolerant quantum computing, which features only one bosonic mode and one qutrit per level-1 logical qubit and only requires low-order physical interactions (dispersive coupling and beam-splitter interaction) that have been experimentally demonstrated. Furthermore, realizing high-fidelity level-1 gadgets with error rates below the threshold requirement of the level-2 codes is promising for near-term experimental demonstrations.

## VIII Discussion

The fault-tolerant gadgets \(\mathcal{S}\) that we develop in Sec. VI for the four-legged cat can be applied to other rotation-symmetric codes [41], whose codewords are invariant under an \(N\)-fold rotation \(R=\exp[i(2\pi/N)a^{\dagger}a]\).

Figure 8: Hardware layout for concatenated 2D codes with four-legged cats. Each level-1 logical qubit (blue box) consists of a storage bosonic mode and a three-level ancilla, which are dispersively coupled. BS coupling between neighboring storage bosonic modes is required for the level-2 QEC. In addition, reservoir modes (with only one shown here as an example) shared between neighboring storage modes are used to pump energy into the system via teleportation (see Fig. 6(c)).
Taking \(N=2\) for example, an arbitrary code with a two-fold rotation symmetry has codewords \[\begin{split}|+_{\Theta}\rangle&\approx\frac{1}{\sqrt{ \mathcal{N}_{+}}}(I+e^{i\pi a^{\dagger}a})|\Theta\rangle,\\ |-_{\Theta}\rangle&\approx\frac{1}{\sqrt{\mathcal{N }_{-}}}(e^{i\pi a^{\dagger}a/2}+e^{i3\pi a^{\dagger}a/2})|\Theta\rangle,\end{split} \tag{39}\] where \(\mathcal{N}_{\pm}\) are normalization constants, and the approximation holds when the base state \(\ket{\Theta}\) is localized in phase space, i.e. \(\bra{\Theta}e^{i\pi a^{\dagger}a/2}\ket{\Theta}\approx 0\). The fault-tolerant gadgets in \(\mathcal{S}\) can be applied to such an arbitrary rotation-symmetric code with a localized base state \(\ket{\Theta}\), except that for the \(X\)-basis state preparation in Alg. 3, we need to replace the initial state with \(\ket{\Theta}\) in the first step. In particular, the \(X\) rotation and the \(XX\) rotation still work since they are based on the phase-space locality of the base state. In Tab. 1, we compare our construction of fault-tolerant gadgets for rotation-symmetric codes that can correct photon losses with those in the literature. In particular, compared to the gadgets in Ref. [41] using bosonic ancillae, our gadgets using qutrit ancillae avoid the need for a nonlinear interaction between bosonic modes and for a phase measurement, both of which are challenging to engineer in practice.

###### Acknowledgements.

We thank Wenlong Ma and Takahiro Tsunoda for helpful discussions. The authors acknowledge support from the ARO (W911NF-23-1-0077), ARO MURI (W911NF-21-1-0325), AFOSR MURI (FA9550-19-1-0399, FA9550-21-1-0209), NSF (OMA-1936118, ERC-1941583, OMA-2137642), NTT Research, Packard Foundation (2020-71479), and the Marshall and Arlene Bennett Family Research Program. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers. The authors are also grateful for the support of the University of Chicago Research Computing Center for assistance with the numerical simulations carried out in this paper.

## Appendix A Algebraic conditions for PI and GPI

In this section, we provide algebraic conditions for PI gates (Def. 5, Def. 6) and GPI gates (the special case of Def. 7 where the target operation is a unitary). Recall that we are considering a Markovian evolution for the joint system of ancilla \(A\) and bosonic mode \(C\), described by the Lindblad master equation in Eq. (11), with a joint Hamiltonian \(H_{AC}(t)\) and a set of ancilla errors \(\{J_{j}\}\). We first provide definitions and properties of a set of structured algebras that we will use. Let \(\mathcal{B}_{A}=\{\ket{m}_{A}\}_{m\in[d_{A}]}\) be an orthonormal basis for a \(d_{A}\)-dimensional ancilla, and \(\mathcal{B}_{C}=\{\ket{n}_{C}\}_{n\in[\infty]}\) be an orthonormal basis for an infinite-dimensional bosonic mode. Let \(\mathbf{M}_{A}\) (\(\mathbf{M}_{C}\)) be the ordinary matrix algebra for the ancilla (bosonic mode). \(\mathbf{M}_{A}\) (\(\mathbf{M}_{C}\)) is a vector space over \(\mathbb{C}\) with a basis \(\{\ket{m}_{A}\bra{n}\}_{m,n\in[d_{A}]}\) (\(\{\ket{m}_{C}\bra{n}\}_{m,n\in[\infty]}\)). In addition, \(\mathbf{M}_{A}\) (similarly for \(\mathbf{M}_{C}\)) is equipped with a multiplication operation \[\ket{a}_{A}\bra{b}\ket{c}_{A}\bra{d}=\delta_{b,c}\ket{a}_{A}\bra{d}, \tag{40}\] for any \(a,b,c,d\in[d_{A}]\).
For any algebra \(\mathbf{M}\), we denote \(\mathbf{M}=\bra{\mathbf{S}}\) for a set \(\mathbf{S}\) if any element \(a\) in \(\mathbf{M}\) can be written as \(a=\sum_{j}c_{j}\alpha_{j}\), where \(c_{j}\in\mathbb{C}\) and \(\alpha_{j}\) is a product of some elements in \(\mathbf{S}\). In other words, \(\mathbf{M}\) is generated by \(\mathbf{S}\). Let \(\mathbf{M}_{AC}=\mathbf{M}_{A}\otimes\mathbf{M}_{C}\) be the matrix algebra for the joint system. We define the reduction of an algebra on the joint system to an algebra only on the ancilla as a surjective map from \(\mathbf{M}_{AC}\) to \(\mathbf{M}_{A}\): **Definition 8** (Reduction of a joint algebra).: _Given any algebra \(\mathbf{H}\subseteq\mathbf{M}_{AC}\) on the joint system and an ancilla basis \(\mathcal{B}_{A}\) we define the reduction of \(\mathbf{a}\) on \(\mathcal{B}_{A}\) as:_ \[\begin{split}\mathbf{H}|_{\mathcal{B}_{A}}:=\langle\{\ket{m} \bra{n}\ket{m},\ket{n}\in\mathcal{B}_{A};\\ \exists h\in\mathbf{H},\bra{m}h\ket{n}\neq 0\}\rangle.\end{split} \tag{41}\] Next, we define a family of subalgebras of \(\mathbf{M}_{AC}\) that satisfy a special property: **Definition 9** (PI matrix algebra. Definition 1 of Ref. [49]).: _Let \(\mathcal{B}_{A}\) be some orthonormal basis for \(A\). We say that a subalgebra \(\mathbf{P}\) of \(\mathbf{M}_{\mathbf{AC}}\) is a PI algebra associated with \(\mathcal{B}_{A}\) if it satisfies:_ 1. \(\mathbf{P}=\langle\{\ket{m}\bra{n}\otimes U_{mn}\}\rangle\) _for some set of_ \((m,n)\in[d_{A}]\times[d_{A}]\) _and_ \((m,n)\)_-dependent unitaries_ \(U_{mn}\in\mathbf{M}_{C}\)_._ 2. \(\mathbf{P}\) _is isomorphic to its reduction on_ \(\mathcal{B}_{C}\) _via the map_ \(\ket{m}\bra{n}\otimes U_{mn}\rightarrow\ket{m}\bra{n}\)_._ Figure 9: Compilation of level-1 CNOT (a) and a stabilizer \(X^{\otimes 2}\) measurement circuit (b) using our constructed 1-FT level-1 gadgets in \(\mathcal{S}\). Note that the second condition posts three requirements on the unitaries \(U_{mn}\): 1. \(U_{ma}U_{bn}=\delta_{a,b}U_{mn}\). 2. \(U_{mn}=U_{nm}^{\dagger}\). 3. \(U_{mm}=I\). These requirements lead to the following properties of operators in a PI algebra: **Proposition 3** (Property of operators lying in a PI algebras).: _Let \(\mathbf{P}=\left\langle\left\{\left|m\right\rangle\left\langle n\right|\otimes U _{mn}\right\}\right\rangle\) be some PI algebra associated with an ancilla basis \(\mathcal{B}_{A}\), and let \(\mathbf{P}|_{\mathcal{B}_{A}}\) be its reduction on \(\mathcal{B}_{A}\). For any operator \(O\in\mathbf{P}\) and \(\left|r\right\rangle,\left|i\right\rangle\in\mathcal{B}_{A}\), we have \(\left\langle r\right|O\left|i\right\rangle\propto U_{ri}\), where \(U_{ri}:=I\) if \(\left|r\right\rangle\left\langle i\right|\notin\mathbf{P}|_{\mathcal{B}_{A}}\)._ Proof.: If \(O\in\mathbf{P}\), we can write \(O\) as \(O=\sum_{m,n}o_{mn}\left|m\right\rangle\left\langle n\right|\otimes U_{mn}\) for some \(o_{mn}\in\mathcal{C}\). If \(\left|r\right\rangle\left\langle i\right|\in\mathbf{P}|_{\mathcal{B}_{A}}\), we have \(\left\langle r\right|O\left|i\right\rangle=o_{ri}U_{ri}\propto U_{ri}\); If \(\left|r\right\rangle\left\langle i\right|\notin\mathbf{P}|_{\mathcal{B}_{A}}\), we have \(\left\langle r\right|O\left|i\right\rangle=0\times I\). Note that Prop. 3 also implies that for any operator \(O=\prod_{i}O_{i}\) that is a product of operators lying in a PI algebra \(\mathbf{P}=\left\langle\left|m\right\rangle\left\langle n\right|\otimes U_{mn}\right\rangle\), we have \(\left\langle r\right|O\left|i\right\rangle\propto U_{ri}\). 
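The defining property of a PI algebra, and the resulting Prop. 3, can be illustrated with a small numerical example. The sketch below is an assumption-laden illustration using NumPy (the diagonal unitary \(V\) standing in for \(U_{01}\) is arbitrary and not taken from the paper): products of elements of the form \(\ket{m}\bra{n}\otimes U_{mn}\) always have ancilla blocks proportional to the corresponding \(U_{ri}\).

```python
# Sketch: a PI algebra for a two-level ancilla (Def. 9) and a check of Prop. 3.
# Assumptions: NumPy; V is an arbitrary diagonal unitary standing in for U_01.
import numpy as np

Nc = 8
V = np.diag(np.exp(1j * 0.4 * np.arange(Nc)))
U = {(0, 0): np.eye(Nc), (1, 1): np.eye(Nc), (0, 1): V, (1, 0): V.conj().T}

def ket(m):
    return np.eye(2)[:, [m]]

def elem(m, n):                       # generator |m><n| (x) U_mn of the PI algebra
    return np.kron(ket(m) @ ket(n).T, U[(m, n)])

# A random product of generators (products stay inside the algebra)
gens = [elem(0, 1), elem(1, 0), elem(0, 0)]
rng = np.random.default_rng(0)
O = np.eye(2 * Nc)
for k in rng.integers(0, len(gens), size=5):
    O = O @ gens[k]

# Prop. 3: every ancilla block <r| O |i> is proportional to U_ri (possibly zero)
for r in (0, 1):
    for i in (0, 1):
        B = np.kron(ket(r), np.eye(Nc)).conj().T @ O @ np.kron(ket(i), np.eye(Nc))
        c = np.trace(U[(r, i)].conj().T @ B) / Nc
        print((r, i), np.allclose(B, c * U[(r, i)]))  # expected: True for all blocks
```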
### PI conditions

Ref. [49] provides a simple algebraic condition for PI gates using PI algebras: **Proposition 4** (Algebraic condition for PI gates; Theorem 1 of Ref. [49]).: _An ancilla-assisted gate is PI (see Def. 5) in an ancilla basis \(\mathcal{B}_{A}\) if the Hamiltonian and all the Lindblad jump operators are all in some PI algebra associated with \(\mathcal{B}_{A}\)._ Note that the condition in Prop. 4 guarantees PI up to infinite order. In the following, we generalize it for finite-order PI gates. First, the effective Hamiltonian \(H_{\mathrm{eff}}=H_{AC}(t)-\frac{i}{2}\sum_{j}J_{j}^{\dagger}J_{j}\) needs to be in some PI algebra, i.e. \(H_{\mathrm{eff}}(t)=\sum_{m,n=1}^{d_{A}}\xi_{mn}(t)\left|m\right\rangle\left\langle n\right|\otimes U_{mn}\) for some \(\xi_{mn}(t)\in\mathbb{C}\) and unitaries \(U_{mn}\). Next, we define an \(n\)-th order path algebra \(\mathbf{p}^{[n]}\) containing all the possible paths contained in the noise expansion of the system dynamics up to \(n\)-th order, an associated \(n\)-th order reachable state set \(\mathbf{S}_{A}^{[n]}\) containing all ancilla states reachable via the \(n\)-th order paths in \(\mathbf{p}^{[n]}\) when starting from \(\left|i\right\rangle\), and an \(n\)-th order error set \(\mathbf{E}^{[n]}\subseteq\{J_{j}\}\) containing all possible errors that can act nontrivially on \(\mathbf{S}_{A}^{[n-1]}\): **Definition 10** (Finite-order path algebras, reachable states, and error sets).: _Given a Hamiltonian \(H_{AC}(t)\) that lies in some PI algebra associated with an ancilla basis \(\mathcal{B}_{A}\), a set of errors \(\{J_{j}\}\), and an initial ancilla state \(\left|i\right\rangle\), we define the zeroth-order path algebra \(\mathbf{p}^{[0]}\) as an algebra that contains all the paths in the effective Hamiltonian \(H_{\mathrm{eff}}(t):=H_{AC}(t)-\frac{i}{2}\sum_{j}J_{j}^{\dagger}J_{j}=\sum_{m,n=1}^{d_{A}}\xi_{mn}(t)\left|m\right\rangle\left\langle n\right|\otimes U_{mn}\):_ \[\mathbf{p}^{[0]}:=\langle\left\{\left|m\right\rangle\left\langle n\right|\otimes U_{mn}\mid\exists t\in[0,T],\xi_{mn}(t)\neq 0\right\}\rangle, \tag{11}\] _Let \(\mathbf{p}^{[0]}|_{\mathcal{B}_{A}}\) be the reduction of \(\mathbf{p}^{[0]}\) on \(\mathcal{B}_{A}\). We define a zeroth-order reachable state set including all states reachable via the zeroth-order paths when starting from \(\left|i\right\rangle\):_ \[\mathbf{S}_{A}^{[0]}:=\{\left|m\right\rangle\in\mathcal{B}_{A}\mid\left|m\right\rangle\left\langle i\right|\in\mathbf{p}^{[0]}|_{\mathcal{B}_{A}}\}, \tag{12}\] _and we define a zeroth-order error set \(\mathbf{E}^{[0]}:=\emptyset\)._ _Then, we define an \(n\geq 1\)-th order path algebra \(\mathbf{p}^{[n]}\) and an \(n\)-th order reachable state set inductively:_ \[\begin{split}\mathbf{E}^{[n]}&=\mathbf{E}^{[n-1]}\cup\{J_{j}\mid\exists\left|m\right\rangle\in\mathbf{S}_{A}^{[n-1]},J_{j}\left|m\right\rangle\neq 0\},\\ \mathbf{p}^{[n]}&=\langle\mathbf{p}^{[n-1]}\cup\mathbf{E}^{[n]}\rangle,\\ \mathbf{S}_{A}^{[n]}&=\{\left|m\right\rangle\mid\left|m\right\rangle\left\langle i\right|\in\mathbf{p}^{[n]}|_{\mathcal{B}_{A}}\}.\end{split} \tag{13}\] **Proposition 5** (Algebraic conditions for finite-order PI gates).: _Given an ancilla-assisted gate \(\mathcal{G}(T)\) generated by a Hamiltonian \(H_{AC}(t)\) and jump errors \(\{J_{j}(t)\}\)._
\(\mathcal{G}(T)\) is n-GPI in an ancilla basis \(\mathcal{B}_{A}\) for an initial ancilla state \(\left|i\right\rangle\) if \(H_{AC}(t)\cup\{J_{j}^{\dagger}J_{j}\}\cup\mathbf{E}^{[n]}\) are in some PI algebra, \begin{table} \begin{tabular}{c c c} \hline \hline Gadgets & Prior schemes & Our scheme \\ \hline \multirow{4}{*}{Error correction} & PI parity measurement [38]; & \multirow{2}{*}{\begin{tabular}{c} MPI parity measurement \\ + one-bit teleportation with a shared ancillary mode. \\ Engineered dissipation [18; 53]. \\ \end{tabular} } \\ & & \\ \hline \multirow{2}{*}{\(Z\)-type gates} & PI SNAP gate [29; 30]; & \multirow{2}{*}{ \begin{tabular}{c} GPI SNAP gate \\ \end{tabular} } \\ & Self-Kerr \((a^{\dagger}a)^{2}\) Hamiltonian [41]. & \\ \hline \multirow{2}{*}{\(X\)-type gates} & Teleported Hadamard gate & \(X\)-axis rotation \\ & with an ancillary bosonic mode [41]. & using cavity displacements and SNAP gates \\ \hline Entangling gate & CZ gate using cross-Kerr \(a^{\dagger}a\otimes b^{\dagger}b\)[41]. & \(XX\) rotation using beam-splitter + SNAP gates. \\ \hline \(X\)-basis measurement & Phase measurement [41]. & Beam splitter + SNAP gates. \\ \hline \end{tabular} \end{table} Table 1: Comparison of different constructions of fault-tolerant gadgets for rotation-symmetrical codes that can correct photon losses. We denote \(Z\)-type gates as those that preserve the photon number (alternatively, those that add photon-number dependent phases), and \(X\)-type gates as those that do not preserve the photon number. _where \(\mathbf{E}^{[n]}\) is the and \(n\)-th order error set constructed from \((H_{AC}(t),\{J_{j}\},\mathcal{B}_{A},|i))\)._ Proof.: Let \(H_{AC}(t)\cup\{J_{j}^{\dagger}J_{j}\}\cup\mathbf{E}^{[n]}\) be in some PI algebra \(\mathbf{P}=\langle\{|m\rangle\,\langle n|\otimes U_{mn}\}\rangle\). The effective Hamiltonian \(H_{\mathrm{eff}}(t)=H_{AC}(t)-\frac{i}{2}J_{j}^{\dagger}J_{j}\in\mathbf{P}\). Furthermore, the non-jump propagator \(W(t_{2},t_{1}):=\mathcal{T}\exp[-\int_{t=t_{1}}^{t_{2}}dtH_{\mathrm{eff}}(t)]\) is also in \(\mathbf{P}\). Let \(\mathcal{I}_{k}:=\{j\mid J_{j}\in\mathbf{E}^{[k]}\}\) be the indices of the \(k\)-th order error set. Now, consider a \(k\)-th order Dyson expansion of \(\mathcal{G}(T)\) (see Eq. (9)): \[\langle\langle r|\mathcal{G}_{k}(T)|i\rangle\rangle= \left[\int_{t_{h}=0}^{T}dt_{h}\right]_{h\in[k]}\left[\sum_{j_{h} \in\mathcal{I}_{k}}\right]_{h\in[k]}\] \[G_{ri}^{k}(\{t_{h},j_{h}\}_{h\in[k]})\bullet G_{ri}^{lk}(\{t_{h },j_{h}\}_{h\in[k]}), \tag{10}\] where \[G_{ri}^{k}(\{t_{h},j_{h}\}_{h\in[k]})= \langle r|\,\mathcal{T}\Bigg{\{}W(T,t_{k}) \tag{11}\] \[\times\prod_{h=1}^{k}K_{j_{h}}(t_{h})W(t_{h},t_{h-1})\Bigg{\}} \left|i\right\rangle,\] with \(t_{0}=0\). Since all the operators in Eq. (11) are in \(\mathbf{P}^{\prime}\), we have \(G_{ri}^{k}(\{t_{h},j_{h}\}_{h\in[k]})\propto U_{ri}\) according to Prop. 3. As such, \(\langle\langle r|\mathcal{G}_{k}(T)|i\rangle\rangle\propto U_{ri}\bullet U_{ri}\). Therefore, \(\langle\langle r|\mathcal{G}^{[k]}(T)|i\rangle\rangle\propto U_{ri}\bullet U_{ri}\) for any \(k\leq n\) and the \(n\)-PI condition in Eq. (13) is satisfied. ### GPI conditions Here, we further relax the condition for finite-order PI in Prop. 5 and provide algebraic conditions for finite-order GPI gates. **Proposition 6** (Algebraic conditions for finite-order GPI gates).: _Given a bosonic code with a code projection \(P_{c}\) and an ancilla-assisted gate generated by a Hamiltonian \(H_{AC}(t)\) and jump errors \(\{J_{j}(t)\}\). 
\(\mathcal{G}(T)\) is \(n\)-GPI in an ancilla basis \(\mathcal{B}_{A}\) for an initial ancilla state \(|i\rangle\) if_ 1. _There exists some PI algebra_ \(\mathbf{P}=\langle\{|m\rangle\,\langle n|\otimes U_{mn}\}\rangle\) _such that_ \(H_{AC}(t)\in\mathbf{P}\)_, and any error_ \(J_{j}(t)\in\mathbf{E}^{[n]}\)_, where_ \(\mathbf{E}^{[n]}\) _is the_ \(n\)_-th order error set constructed from_ \((H_{AC}(t),\{J_{j}\},\mathcal{B}_{A},|i\rangle)\)_, is in the form_ \(J_{j}(t)=\sum_{m,n}\left|m\right\rangle\langle n|\otimes R_{mn}^{j}(t)U_{mn}\)_, where_ \(R_{mn}^{j}(t)\) _are unitaries._ 2. _Let_ \(\xi:=\{R_{mn}^{j}(t)\mid J_{j}\in\mathbf{E}^{[n]};m,n\in[d_{A}];t\in[0,T]\}\)_. Any error_ \(E\in\xi\) _satisfies_ \[[E,U_{mn}]=0.\] (12) 3. _Let_ \(\epsilon:=\{\mathcal{T}\prod_{i=1}^{n}E_{i}(t_{i})\}_{E_{i}(t_{i})\in\xi\cup I}\)_. Errors in_ \(\epsilon\) _satisfy the Knill-Laflamme condition with respect to_ \(P_{c}\)_._ Proof.: We follow the same proof as that for Prop. 5. Now, each Kraus operator \(G_{ri}^{k}(\{t_{h},j_{h}\})\) of \(\langle\langle r|\mathcal{G}_{k}(T)|i\rangle\rangle\) (see Eq. (10) and Eq. (11)) reads \[G_{ri}^{k}(\{t_{h},j_{h}\})=U_{ri}\sum_{m_{h},n_{h}}c_{m_{h},m_{h}}\left[ \mathcal{T}\prod_{h=1}^{k}R_{m_{h}n_{h}}^{j_{h}}(t_{h})\right], \tag{13}\] for some \(c_{m_{h},n_{h}}\in\mathbb{C}\). Here, we have replaced the form of \(J_{j}\) in the first condition of Prop. 6 into Eq. (11). The operators \(R_{m_{h}n_{h}}^{j_{h}}\) cumulate to the front as if they were transparent to the unitaries due to the second condition in Prop. 6. Then, the Kraus operators of \(\langle\langle r|\mathcal{G}_{k}(T)|i\rangle\rangle\), given by the union of all \(G_{ri}^{k}(\{t_{h},j_{h}\})\), are all linear combinations of errors in \(\{U_{ri}\mathcal{T}\prod_{i=1}^{k}E_{i}(t_{i})\}_{E_{i}(t_{i})\in\xi}\), where \(\xi\) is defined in the second condition of Prop. 6. Finally, the Kraus operators of \(\langle\langle r|\mathcal{G}^{[n]}(T)|i\rangle\rangle\) are all linear combinations of errors in \(\epsilon\), up to a same unitary \(U_{ri}\). Therefore, the errors are correctable if \(\epsilon\) satisfies the KL condition Prop. 6. As an example, we consider the GPI SNAP gate in Sec. IV.1.1 using a \(\chi\)-mismatched three-level transmon. In the presence of the ancilla relaxations, one can show that this gate is 1-GPI by checking the conditions in Proposition 6. In the interaction picture, the first-order error set \(\mathbf{E}^{[1]}=\{\ket{e}\bra{f}\otimes e^{-i\Delta\chi ta^{\dagger}a}\}\). We can find a PI algebra such that the first condition of Prop. 6 is satisfied: \[\mathbf{P}=\langle\{|f\rangle\,\langle g|\otimes S,|g\rangle\,\langle f|\otimes S ^{\dagger},|e\rangle\,\langle f|\otimes I\}\rangle. \tag{14}\] Here, there is only a single \(J_{j}\) with \(R_{ef}(t)=e^{-i\Delta\chi ta^{\dagger}a}\). Therefore, \(\xi=\{R_{ef}(t)\}_{t\in[0,T]}\) and \(\epsilon=\xi\cup I\). By choosing \(S\) as a logical operator for the four-legged cat and noticing that \([R_{ef}(t),S]=[R_{ef}(t),S^{\dagger}]=0\), the second condition of Proposition 6 is also satisfied. Finally, \(\epsilon=\{R_{ef}(t)\}_{t\in[0,T]}\cup I\) satisfies the KL condition w.r.t. \(P_{c}\) of the four-legged cat as long as \(\Delta\chi T<\pi/2\) (see Sec. II.1.1). ## Appendix B Error transparent/closure control for bosonic errors In this section, we review the error-transparent [47] and error-closure [32] quantum control techniques that enable fault tolerance against central system (bosonic) errors. 
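As an aside before treating bosonic errors: the finite-order sets of Definition 10 depend only on which ancilla transitions appear in \(H_{\mathrm{eff}}\) and in the jump operators, so they can be computed by simple bookkeeping. The sketch below (ours; operators are abstracted into (target, source) transition pairs, and the three-level SNAP setting of the example above is used as input) illustrates the construction.

```python
# Ancilla-level bookkeeping for Definition 10 (a sketch; operators are reduced
# to their ancilla transitions (target, source), as in the reduction on B_A).

def closure(pairs, levels):
    """Close a set of ancilla transitions under adjoints, identities, and products."""
    c = set(pairs) | {(n, m) for (m, n) in pairs} | {(l, l) for l in levels}
    grew = True
    while grew:
        grew = False
        for (m, a) in list(c):
            for (b, n) in list(c):
                if a == b and (m, n) not in c:
                    c.add((m, n))
                    grew = True
    return c

def reachable_and_error_sets(h_pairs, jump_pairs, levels, init, order):
    paths = closure(h_pairs, levels)                    # reduction of p^[0]
    S = {m for m in levels if (m, init) in paths}       # S_A^[0]
    E = set()                                           # E^[0]
    for _ in range(order):
        E |= {j for j, trans in jump_pairs.items()
              if any(src in S for (_, src) in trans)}   # E^[n]
        extra = set().union(*(jump_pairs[j] for j in E)) if E else set()
        paths = closure(paths | extra, levels)          # reduction of p^[n]
        S = {m for m in levels if (m, init) in paths}   # S_A^[n]
    return S, E

# Three-level SNAP gate: H couples g <-> f; the only jump is the f -> e relaxation.
levels = {"g", "e", "f"}
h_pairs = {("f", "g")}
jump_pairs = {"f->e": {("e", "f")}}
print(reachable_and_error_sets(h_pairs, jump_pairs, levels, "g", order=1))
# S_A^[1] contains g, f, e and E^[1] = {'f->e'}, matching the example above.
```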
For bosonic errors, we neglect their contribution to the no-jump evolution (the no-jump propagator is purely generated by the Hamiltonian) in the jump expansion of a quantum channel (see Eq. (7)). Such an approximation can be justified when considering the photon loss, whose back-action associated with \(a^{\dagger}a\) is correctable in the large-\(\alpha\) regime for cat codes (see Sec. II.1.1). We consider a unitary \(U(t)\) generated by a Hamiltonian \(H(t)\) that acts only on the bosonic mode. Consider a bosonic code with a code projection \(P_{c}\) and a bosonic error \(E\). The error-transparent control aims to engineer \(H(t)\) such that the dynammics is transparent to \(E\), i.e. errors that occur during the gate are equivalent to those that occur after the unitary: \[U(T,t_{p})E\cdots EU(t_{2},t_{1})EU(t_{1},0)P_{c}\propto E^{p}U(T,0)P_{c}, \tag{20}\] for any \(T>t_{p}>\cdots>t_{1}>0\) and \(p\geq 1\). We say that the unitary is \(k\)-th order error-transparent to \(E\) if Eq. (20) is satisfied for \(p\leq k\). When using \(U(T,0)\) as a logical gate for the bosonic code, we typically require \(H(t)\) (and thereby, \(U(t)\)) to be block-diagonal, i.e. \((I-P_{c})H(t)P_{c}=0\). In this case, Eq. (20) is equivalent to \[U(T,t)EU^{\dagger}(T,t)P_{j}\propto EP_{j}, \tag{21}\] for any \(T>t>0\) and \(j\leq k-1\), where \(P_{j}:=E^{j}P_{c}\). Obviously, Eq. (21) is satisfied if \(E\) commutes with \(H(t)\) when acting on the error spaces up to \((k-1)\)-th order, i.e. \[[E,H(t)]P_{j}=0, \tag{22}\] for \(j\leq k-1\). We note that Eq. (22) is only a sufficient condition for the error-transparency definition in Eq. (21). For instance, Eq. (21) is also satisfied if \([E,H(t)]P_{j}\propto EP_{j}\). In this work, we are interested in ancilla-assisted gates. Similar to the PI/GPI, in the case where we initialize the ancilla in \(\ket{i}\) and projectively measure it in a basis \(\mathcal{B}_{A}=\{\ket{m}_{A}\}\), we only care about the conditional bosonic channels given a measurement outcome \(m\). As such, we consider the following conditional error transparency, which is easier to achieve than unconditional error-transparency presented above: **Definition 11** (Conditional error transparency).: _Given a bosonic code with a code projection \(P_{c}\), an initial ancilla state \(\ket{i}\) and an ancilla orthonormal basis \(\mathcal{B}_{A}=\{\ket{m}_{A}\}\), we say that an ancilla assisted unitary \(U(t)\) is \((P_{c},\ket{i},\mathcal{B}_{A})\)-error-transparent to a bosonic error \(E\) up to \(k\)-th order if for any \(\ket{r}\in\mathcal{B}_{A}\) and \(p\leq k\):_ \[\bra{r}U(T,t_{p})E\cdots EU(t_{1},0)P_{c}\propto E^{p}\bra{r}U(T,0)\ket{i}P_ {c}, \tag{23}\] Ref. [32] generalizes the error transparency condition to the so-called error closure condition. In the case of a single error \(E\) and a static Hamiltonian \(H_{0}\), they first construct a vector space \(\epsilon\) with a basis \(\{E,[H_{0},E]\}\) over \(\mathbb{C}\), and the error-closure condition is satisfied if for any \(e\in\epsilon\): 1. \([H_{0},e]\in\epsilon\). 2. Errors in \(\epsilon\) are correctable (satisfying the KL condition) with respect to \(P_{c}\). Such a condition guarantees that each first-order error trajectory gives: \[e^{iH_{0}(T-t)}Ee^{iH_{0}t}=e^{iH_{0}(T-t)}Ee^{-iH_{0}(T-t)}e^{iH_{0}t}=E^{ \prime}e^{iH_{0}t}, \tag{24}\] where \(E^{\prime}:=e^{iH_{0}(T-t)}Ee^{-iH_{0}(T-t)}\in\epsilon\) using the first condition. 
Then the desired unitary is implemented up to a correctable error \(E^{\prime}\) according to the second condition. The error closure condition generalizes the error transparency condition as it allows errors to propagate to correctable errors, rather than rigorously commuting through the unitary, in a similar spirit as our generalization from PI to GPI. ### \(Z\)-axis rotation Recall that a \(Z\)-axis rotation is implemented by a GPI SNAP gate (see Sec. IV.1.1). In the interaction picture associated with the base Hamiltonian \(H_{0}=-(\chi_{f}\ket{f}\bra{f}+\chi_{e}\ket{e}\bra{e}\otimes a^{\dagger}a\), the Hamiltonian is static \(\tilde{H}=\Omega\left[\ket{f}\bra{g}\otimes S(\vec{\phi})+h.c.\right]\), and the photon loss error reads \(\tilde{a}(t)=e^{i(\chi_{f}\ket{f}\bra{f}+\chi_{e}\ket{e})t}\otimes a\). Note that \([\tilde{a}(t),\tilde{H}]I\otimes P_{c}\neq 0\) and the unconditional error transparency does not hold. Fortunately, we now show that the conditional error transparency in Eq. (23) holds up to a single-photon loss if we choose \(S(\vec{\phi})\) appropriately. For \(p=1\), the L.H.S. of Eq. (23) reads \[\bra{r}\tilde{U}(T,t)\tilde{a}(t)U(t,0)\ket{g}P_{c} \tag{25}\] \[=\sum_{m\in\{f,g\}}e^{i\chi_{mt}t}\tilde{U}_{rm}(T,t)a\tilde{U}_{ mg}(t,0)P_{c}.\] Recall that \(\tilde{U}(t_{2},t_{1})\) is in a PI algebra (see Appendix A) \(\mathbf{P}=\bra{\ket{f}\bra{g}\otimes S,\ket{g}\bra{f}\otimes\mathbb{S}^{ \dagger},\ket{g}\bra{g}\otimes I,\ket{e}\bra{e}\otimes I,\ket{f}\bra{f} \otimes I}\). Therefore, \(\tilde{U}_{fg}\propto S(\vec{\phi})\), \(\tilde{U}_{gf}\propto S^{\dagger}(\vec{\phi})\), and \(\tilde{U}_{gg},\tilde{U}_{ff}\propto I\). Choosing \(S(\vec{\phi})\) as a logical gate, we have \(\tilde{U}_{mg}(t,0)P_{c}=P_{c}\tilde{U}_{mg}(t,0)P_{c}\) for \(m\in\{f,g\}\). If \([\tilde{U}_{rm}(T,t)a]P_{c}=0\) for any \(r,m\in\{g,f\}\), we can then swap \(\tilde{U}_{mi}(T,t)\) and \(a\) in Eq. (25) and obtain \(\bra{r}\tilde{U}(T,t)\tilde{a}(t)U(t,0)\ket{g}P_{c}\propto a\tilde{U}_{rg}(T,0)P _{c}\). Such a condition is equivalent to \[[a,S(\vec{\phi})]P_{c}=[a,S^{\dagger}(\vec{\phi})]P_{c}=0, \tag{26}\] which is simply that the applied unitary \(S(\vec{\phi})/S^{\dagger}(\vec{\phi})\) is error transparent to \(a\). This can be satisfied by setting \(S(\phi)=P_{0}+P_{3}+e^{i\theta}(P_{2}+P_{1})\). ### \(X\)-axis rotation Here, we show that the \(X\)-axis rotation in error-transparent to a single photon loss by showing the two involved SNAP gates (see Eq. (25)) satisfy the conditional error transparency in Def. 11. Taking the first SNAP gate as an example, the proof is the same as that for the \(Z\)-axis rotation in the previous section, except that we now need to change \(P_{c}\) to \(D(\alpha)P_{c}\) when verifying Eq. (26). Recall that \(S=e^{i\theta}P_{[s]}+I-P_{[s]}\) for the \(X\)-axis rotation, where \(P_{[s]}=\sum_{i=0}^{s}\left|i\right\rangle\left\langle i\right|\) is a projection into a neighborhood of vacuum. We take the large-\(\alpha\) approximation \(\left|+_{L}\right\rangle\approx\left|C_{\alpha}^{+}\right\rangle\) and \(\left|-_{L}\right\rangle\approx\left|C_{i\alpha}^{+}\right\rangle\). 
Then, \[\begin{split} aSD(\alpha)\left|+_{L}\right\rangle& \approx 2\alpha\left|2\alpha\right\rangle\approx SaD(\alpha)\left|+_{L} \right\rangle,\\ aSD(\alpha)\left|-_{L}\right\rangle&\approx aD( \alpha)\left|-_{L}\right\rangle\approx SaD(\alpha)\left|-_{L}\right\rangle, \end{split} \tag{103}\] where we have used \(S\left|\beta\right\rangle\approx\left|\beta\right\rangle\) for \(\left|\beta\right|\gg 1\). Eq. (103) thus verifies that \(S\) commutes with \(a\) when acting on \(D(\alpha)P_{c}\). ### \(Xx\) rotation Here, we show that the \(XX\) rotation gate in Sec. VI.3 is robust against a small input phase rotation or the ancilla relaxation/dephasing or cavity loss that occurs in the middle. We first consider the input phase rotation error \(E=e^{i(\delta\theta_{a}a^{\dagger}a+\delta\theta_{b}b^{\dagger}b)}\). Our aim is to show that \[U_{XX}EU_{XX}^{\dagger}(P_{c}\otimes P_{c})\approx E(P_{c}\otimes P_{c}). \tag{104}\] If we consider the SNAP gate \(S\) to be replaced to the robust version \(S=e^{-i\theta}P_{[s]}+(I-P_{[s]})\), we can write \(U_{XX}\) in the following form, \[U_{XX}=e^{i\theta}P_{[s],I}+(I-P_{[s],I)}, \tag{105}\] where \[P_{([s],I)}=BS(\frac{\pi}{2})^{\dagger}\left(P_{[s]}\otimes I+I\otimes P_{[s] }\right)BS(\frac{\pi}{2}), \tag{106}\] is a projector onto the space where the input bosonic modes \(A\) and \(B\) have almost clean interference results (i.e., one output mode is close to vacuum) under the balanced beam-splitter \(BS(\frac{\pi}{2})\). We can get Eq. (34) in Sec. VI.3 from Eq. (105). To prove Eq. (104), we note that \[\begin{split}& U_{XX}EU_{XX}^{\dagger}(P_{c}\otimes P_{c})=EP_{ \pm,\pm}+EP_{\pm,\mp}+\\ &(e^{-i\theta}-1)(I-P_{([s],I)})EP_{\pm,\pm}+(e^{i\theta}-1)P_{( [s],I)}EP_{\pm,\mp},\end{split} \tag{107}\] where \[\begin{split} P_{\pm,\pm}&=\left|+_{L},+_{L} \right\rangle\left\langle+_{L},+_{L}\right|+\left|-_{L},-_{L}\right\rangle \left\langle-_{L},-_{L}\right|,\\ &\approx\left|C_{\alpha}^{+},C_{\alpha}^{+}\right\rangle\left\langle C _{\alpha}^{+},C_{\alpha}^{+}\right|+\left|C_{i\alpha}^{+},C_{i\alpha}^{+} \right\rangle\left\langle C_{i\alpha}^{+},C_{i\alpha}^{+}\right|\\ & P_{\pm,\mp}&=\left|+_{L},-_{L}\right\rangle \left\langle+_{L},-_{L}\right|+\left|-_{L},+_{L}\right\rangle\left\langle-_{L },+_{L}\right|,\\ &\approx\left|C_{\alpha}^{+},C_{\alpha}^{+}\right\rangle\left\langle C _{\alpha}^{+},C_{i\alpha}^{+}\right|+\left|C_{i\alpha}^{+},C_{\alpha}^{+} \right\rangle\left\langle C_{i\alpha}^{+},C_{\alpha}^{+}\right|.\end{split} \tag{108}\] To simplify Eq. 
(107), we notice that, \[\begin{split}&\langle C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i \theta_{b}}}^{+}|P_{([s],I)}|C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i \theta_{b}}}^{+}\rangle\\ =&\langle C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e ^{i\theta_{b}}}^{+}|BS(\frac{\pi}{2})^{\dagger}(P_{[s]}\otimes I+I\otimes P_ {[s]})\\ &\quad BS(\frac{\pi}{2})|C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e ^{i\theta_{b}}}^{+}\rangle.\end{split} \tag{109}\] Since \[\begin{split}& BS(\frac{\pi}{2})|C_{\alpha e^{i\theta_{a}}}^{+},C_{ \alpha e^{i\theta_{b}}}^{+}\rangle\\ =&\mu_{\alpha}^{2}BS(\frac{\pi}{2})\left(|\alpha e^{i \delta\theta_{a}}\rangle+|-\alpha e^{i\delta\theta_{a}}\rangle\right)_{A} \left(|\alpha e^{i\delta\theta_{b}}\rangle+|-\alpha e^{i\delta\theta_{b}} \rangle\right)_{B}\\ =&\mu_{\alpha}^{2}\bigg{(}\big{(}\left|\frac{\alpha }{\sqrt{2}}(e^{i\delta\theta_{a}}+e^{i\delta\theta_{b}})\right\rangle\big{)}_{A }\big{(}\left|\frac{\alpha}{\sqrt{2}}(e^{i\delta\theta_{a}}-e^{i\delta \theta_{b}})\right\rangle\big{)}_{B}\\ &\quad+|\frac{\alpha}{\sqrt{2}}(e^{i\delta\theta_{a}}-e^{i\delta \theta_{b}})\big{)}\,\big{)}_{A}\big{(}\left|\frac{\alpha}{\sqrt{2}}(e^{i \delta\theta_{a}}+e^{i\delta\theta_{b}})\right\rangle\big{)}_{B}\\ &\quad+|\frac{\alpha}{\sqrt{2}}(-e^{i\delta\theta_{a}}+e^{i \delta\theta_{b}})\big{)}\,\big{)}_{A}\big{(}\left|\frac{\alpha}{\sqrt{2}}(-e^{i \delta\theta_{a}}-e^{i\delta\theta_{b}})\right\rangle\big{)}_{B}\\ &\quad+|\frac{\alpha}{\sqrt{2}}(-e^{i\delta\theta_{a}}-e^{i \delta\theta_{b}})\big{)}\,\big{)}_{A}\big{(}\left|\frac{\alpha}{\sqrt{2}}(-e^{i \delta\theta_{a}}+e^{i\delta\theta_{b}})\right\rangle\big{)}_{B}\bigg{)}.\end{split} \tag{110}\] Here \(\mu_{\alpha}=1/\sqrt{2(1+\exp(-2|\alpha|^{2}))}\) is the normalization factor of the cat state. when \(|\delta\theta_{a}|,|\delta\theta_{b}|<\pi/8\), we have \(|e^{i\delta\theta_{a}}-e^{i\delta\theta_{b}}|<2\sin(\pi/8)\). As a result, either the components on mode \(A\) or the ones on mode \(B\) will be almost covered in the region of \(P_{[s]}\), which implies that when \(\alpha\gg 1\), we can choose a value of \(s=O(|\alpha|^{2})\) such that \[\langle C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i\theta_{b}}}^{+}|P_{([s],I)} |C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i\theta_{b}}}^{+}\rangle\to 1. \tag{111}\] Similarly, we will have \[\begin{split}&\langle C_{i\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i \theta_{b}}}^{+}|P_{([s],I)}|C_{i\alpha e^{i\theta_{a}}}^{+},C_{i\alpha e^{i \theta_{b}}}^{+}\rangle\to 1,\\ &\langle C_{\alpha e^{i\theta_{a}}}^{+},C_{i\alpha e^{i\theta_{b}}}^{+}|P_{([s ],I)}|C_{\alpha e^{i\theta_{a}}}^{+},C_{i\alpha e^{i\theta_{b}}}^{+}\rangle \to 0,\\ &\langle C_{i\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i\theta_{b}}}^{+}|P_{( [s],I)}|C_{i\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i\theta_{b}}}^{+}\rangle \to 0.\end{split} \tag{112}\] When Eq. (111) and (111) hold, we can simplify Eq. (107) to \[U_{XX}EU_{XX}^{\dagger}(P_{c}\otimes P_{c})\approx E(P_{\pm,\pm}+P_{\pm,\mp}) =E(P_{c}\otimes P_{c}). \tag{113}\] i.e., \(U_{XX}\) is robust against small phase rotation. Now, we consider the error occurs during the process of \(XX\) rotation. First, we show that a single photon loss during the \(XX\) rotation can only propagate to at most a single loss per mode by combing the idea of error transparency and error closure. Recall that a \(XX\) rotation is implemented by two SNAP gates sandwiched by two BSs: \(U_{XX}=\text{BS}(S\otimes S)\text{BS}\). 
We first show that a single photon loss during the BS can only propagate to an error of the form \(c_{1}a+c_{2}b\), where \(c_{1},c_{2}\in\mathbb{C}\), since the BS satisfies the errorclosure condition. To prove Eq. (14), we take the approximation \(\ket{+_{L}}\approx|C_{\alpha}^{+}\rangle\) and \(\ket{-_{L}}\approx|C_{i\alpha}^{+}\rangle\) and show that \([S\otimes S,c_{1}a+c_{2}b]\text{BS}\ket{\pm_{L},\pm_{L}}=0\). For \(\ket{+_{L},-_{L}}\), \[(S\otimes S)(c_{1}a+c_{2}b)\text{BS}\ket{+_{L},-_{L}}=(c_{1}a+c_{ 2}b)\text{BS}\ket{+_{L},-_{L}} \tag{15}\] \[= (c_{1}a+c_{2}b)(S\otimes S)\text{BS}\ket{+_{L},-_{L}},\] since \(S\otimes S\) acts trivially on both \(\text{BS}\ket{+_{L},-_{L}}\) and \((c_{1}a+c_{2}b)\text{BS}\ket{+_{L},-_{L}}\). The same argument also applies to \(\ket{-_{L},+_{L}}\). For \(\ket{+_{L},+_{L}}\), we have \(\text{BS}\ket{+_{L},+_{L}}\propto(\ket{2\alpha,0}+\ket{0,2\alpha}+\ket{-2 \alpha,0}+\ket{0,-2\alpha})\). Then \[(S\otimes S)(c_{1}a+c_{2}b)\text{BS}\ket{+_{L},+_{L}} \tag{16}\] \[= e^{i\theta}2\alpha[c_{1}(\ket{2\alpha,0}+\ket{-2\alpha,0})+c_{2} (\ket{0,2\alpha}+\ket{0,-2\alpha})]\] \[= (c_{1}a+c_{2}b)(S\otimes S)\text{BS}\ket{+_{L},+_{L}}.\] Similarly, we can show \([S\otimes S,c_{1}a+c_{2}b]\text{BS}\ket{-_{L},-_{L}}=0\). Combining the error-closure property of the BSs and the error-transparency property of the SNAP gates, we conclude that a single photon loss during the execution of the \(XX\) rotation can propagate to an error of the form \(c_{1}^{\prime}a+c_{2}^{\prime}b\), which is correctable by the four-legged cats. ## Appendix C More GPI examples Here, we provide more examples of ancilla-assisted bosonic operations that are GPI. Recall that the SNAP gate using a three-level transmon that we present in the main text (Sec. IV.1.1) is GPI only if the \(\chi\) mismatch \(\Delta\chi\) is small than \(\pi/2T\). In the scenario where \(\Delta\chi\geq\pi/2T\), we can add another flag qubit [54, 55, 56] to make the gate GPI. Notice that the major reason why the SNAP gate with a single ancilla is not GPI when \(\Delta\chi\) is large is that the random dephasing range on the bosonic mode is too large due to the uncertainty of when an ancilla relaxation from \(\ket{f}\) to \(\ket{e}\) happens. Therefore, we can add an extra flag qubit to narrow down the ancilla-relaxation time window, and thus reducing the dephasing range. As shown in Fig. 10, we apply two \(X\) gates to the flag qubit controlled by the ancilla \(\ket{e}\) state at time \(T/2\) and \(T\), respectively. As before, we consider adjacent-level relaxation errors for both the ancilla and the flag qubits, as well as arbitrary forms of dephasing errors. The flag qubit starts from \(\ket{g}\) and only gets excited to \(\ket{e}\) if the ancilla relaxes from \(\ket{f}\) to \(\ket{e}\) at a time \(t\in[T/2,T]\). As such, a single ancilla relaxation incurs a random phase rotation of angle \(\theta=\Delta\chi t\) on the bosonic mode, where \(t\in[0,T/2]\) if the flag qubit is measured in \(\ket{g}\) while \(t\in(T/2,T]\) if the flag qubit is measured in \(\ket{e}\). 
Formally, we can calculate the bosonic channels conditioned on the measurement outcomes of both the ancilla and flag qubits: \[\langle\langle g,g|\mathcal{G}^{[1]}|g,g\rangle\rangle \propto\mathcal{I}, \tag{17}\] \[\langle\langle f,g|\mathcal{G}^{[1]}|g,g\rangle\rangle \propto S\bullet S^{\dagger},\] \[\langle\langle e,g|\mathcal{G}^{[1]}|g,g\rangle\rangle \propto\int_{\theta=0}^{\Delta\chi T/2}Se^{-i\theta a^{\dagger}a} \bullet e^{i\theta a^{\dagger}a}S^{\dagger},\] \[\langle\langle e,e|\mathcal{G}^{[1]}|g,g\rangle\rangle \propto\int_{\theta=\Delta\chi T/2}^{\Delta\chi T}Se^{-i\theta a^ {\dagger}a}\bullet e^{i\theta a^{\dagger}a}S^{\dagger},\] where the first and second bits represent the ancilla and the flag qubit state for \(\ket{\phi,\psi}\rangle\), respectively. According to Eq. (17), the gate is 1-GPI if \(\Delta\chi T/2<\pi/2\), or \(\Delta\chi<\pi/T\). Therefore, we can allow twice as large \(\chi\) mismatch by introducing another flag qubit. Note that we do not necessarily require the CNOT gates to be infinitely fast and noiseless, and Eq. (17) holds as long as the CNOT Hamiltonian is diagonal in the ancilla basis, e.g. \(H_{\text{CNOT}}\propto\ket{e}_{A}\bra{e}\otimes\left(\ket{e}_{f}\bra{g}+\ket{ g}_{f}\bra{e}\right)\). We remark that one can similarly construct a 1-GPI parity measurement that can tolerate larger \(\chi\) mismatch by introducing another flag qubit. ## Appendix D Details of numerical simulations Here, we provide the details of numerical simulations of the teleportation-based QEC and the parity-measurement shown in Fig. 7. In Fig. 6(c) we have shown that the teleportation-based QEC circuit can be decomposed to a logical \(\ket{+_{L}}\) state preparation gadget, a \(X(\pi/2)\) gate on the input mode, two \(Z(\pi/2)\) gates acting on two bosonic modes respectively, a \(XX(\pi/2)\) gate on two bosonic modes, a logical \(Z\) measurement on the input mode, and a potential \(Z\) gate on the output mode. Note that the final \(Z\) gate can be done in the software by updating the Pauli frame. The \(X(\pi/2)\) and \(XX(\pi/2)\) gates can further be decomposed to displacement operators, SNAP gates and/or beam-splitting operations following Fig. 6(a) and (b). The 1-FT \(\ket{+_{L}}\)-state preparation and the 1-FT logical \(Z\) measurement are done by the procedures in Algorithm 3 and 4, respectively, based on repeated single-shot parity/\(Z\)-basis measurement by dispersive-coupling to a 3-level ancilla and majority vote. In the numerical simulation, we assume the displacement operations can be performed quickly and ignore the Figure 10: GPI SNAP gate with a flag qubit. The flag qubit is excited to \(\ket{e}\) only if the ancilla decays from \(\ket{f}\) to \(\ket{e}\) at \(t\in[T/2,T]\). faults that occur during them. We also assume a perfect preparation of the coherent state \(\ket{\alpha}\) and the measurement on the 3-level ancilla. On the other hand, we consider a noisy simulation of all other three basic gadgets including the dispersive coupling between a bosonic mode and a 3-level ancilla, the SNAP gate, and the beam-splitting interaction. Below, we discuss the simulation details of these three noisy gadgets. The dispersive coupling Hamiltonian \(H_{0}\) is given by Eq. (15). We set the dispersive coupling coefficient \(\chi_{f}=2\pi\times 1\)MHz. 
In the simulation, we mainly consider 4 types of Markovian noise: ancilla relaxation \(J_{f\to e}=\sqrt{\gamma_{f\to e}}\ket{e}\bra{f}\), \(J_{e\to g}=\sqrt{\gamma_{e\to g}}\ket{g}\bra{e}\), ancilla dephasing \(J_{ph}=\sqrt{\gamma_{\phi}}(\ket{e}\bra{e}+2\ket{f}\bra{f})\) and cavity loss \(F_{a}=\sqrt{\kappa_{1}}a\). The SNAP gate Hamiltonian \(\tilde{H}\) in the interaction picture of \(H_{0}\) is given by Eq. (17). For the convenience of numerical simulation, we move to another interaction picture associated with a \(\chi\)-matched Hamiltonian, \[H_{0}^{\prime}=-\chi_{f}(\ket{f}\bra{f}+\ket{e}\bra{e})\otimes a^{\dagger}a. \tag{18}\] In the interaction picture of \(H_{0}^{\prime}\), the SNAP gate Hamiltonian becomes \[\tilde{H}^{\prime}=\Delta\chi\ket{e}\bra{e}\otimes a^{\dagger}a+\Omega\left[\ket{f}\bra{g}\otimes S(\vec{\phi})+h.c.\right], \tag{19}\] where \(\Delta\chi=\chi_{f}-\chi_{e}\). We set the Rabi drive strength \(\Omega=0.3\chi_{f}\). The jump operators are then converted to \(\tilde{J}_{f\to e}=\sqrt{\gamma_{f\to e}}\ket{e}\bra{f}\), \(\tilde{J}_{e\to g}=\sqrt{\gamma_{e\to g}}\ket{g}\bra{e}\otimes e^{i\chi_{f}ta^{\dagger}a}\), \(\tilde{J}_{ph}=\sqrt{\gamma_{\phi}}(\ket{e}\bra{e}+2\ket{f}\bra{f})\) and \(\tilde{F}_{a}=\sqrt{\kappa_{1}}(P_{g}+e^{i\chi_{f}t}(P_{e}+P_{f}))\otimes a\), where \(P_{k}:=\ket{k}\bra{k}\) for \(k=g,e,f\). Note that \(\tilde{J}_{e\to g}\) and \(\tilde{F}_{a}\) are time-dependent and rotate quickly. To ease the simulation, we make a conservative estimate and approximate \(\tilde{J}_{e\to g}\) by \(\tilde{J}_{e\to g}^{\prime}=\sqrt{\gamma_{e\to g}}\ket{g}\bra{e}\otimes e^{i\frac{\pi}{2}a^{\dagger}a}\), i.e., as long as the \(e\to g\) relaxation happens, a large dephasing error is introduced on the cavity. To simplify \(\tilde{F}_{a}\), we first notice that \(\mathcal{D}[\tilde{F}_{a}]\approx\mathcal{D}[\sqrt{\kappa_{1}}P_{g}\otimes a]+\mathcal{D}[\sqrt{\kappa_{1}}(P_{e}+P_{f})\otimes a]\), where we ignore all the fast-rotating terms. This can be understood as a dephasing error between \(P_{g}\) and \(P_{e}+P_{f}\) introduced by the cavity loss. As a simple approximation, we merge the cavity-induced ancilla dephasing with the real ancilla dephasing, i.e., we set the cavity loss jump operator to be \(\tilde{F}_{a}^{\prime}=\sqrt{\kappa_{1}}a\), while the effective ancilla dephasing rate in \(\tilde{J}_{ph}\) becomes \(\gamma_{\phi}^{\prime}=\gamma_{\phi}+\kappa_{1}/4\). The factor \(1/4\) is introduced because we set \(\Delta_{f}=2\) for the \(f\)-level in \(J_{ph}\). The beam-splitting Hamiltonian is \(H_{\text{BS}}=ig_{\text{BS}}(ab^{\dagger}-a^{\dagger}b)\), where \(g_{\text{BS}}\) is the BS interaction strength. We set the beam-splitting interaction strength \(g_{\text{BS}}=2\chi_{f}\). The major noise sources we consider during this procedure are the cavity losses \(F_{a}=\sqrt{\kappa_{1}}a\) and \(F_{b}=\sqrt{\kappa_{1}}b\). To simulate the dissipative time evolution described above, we use the Monte-Carlo solver in the QuTiP package [57]. This can easily be done for a composite system with one bosonic mode and a 3-level ancilla. However, in the simulation of the \(XX(\pi/2)\) gate, we need to carry out the following 3-step simulation: 1. For a product input state \(\ket{\psi}_{A}\otimes\ket{\psi^{\prime}}_{B}\) on two bosonic modes \(A\) and \(B\), simulate the noisy beam-splitting interaction \(\text{BS}(\frac{\pi}{2})\). The output state is a two-mode entangled state \(\ket{\Psi}_{AB}\). 2. 
For the entangled state \(\ket{\Psi}_{AB}\) input, simulate the noisy tensor-ed SNAP gate \(S\otimes S\) with two 3-level ancilla \(A^{\prime}\) and \(B^{\prime}\). The output state is a two-mode entangled state \(\ket{\Psi^{\prime}}_{AB}\). 3. For the entangled state \(\ket{\Psi^{\prime}}_{AB}\) input, simulate the noisy beam-splitting interaction \(\text{BS}^{\dagger}(\frac{\pi}{2})\). The output state is a two-mode entangled state \(\ket{\Psi^{\prime\prime}}_{AB}\). The major bottleneck is the Step 2, where we need to consider a simulation of two bosonic modes \(A\) and \(B\) and two 3-level ancilla \(A^{\prime}\) and \(B^{\prime}\). To get rid of this costly simulation, we first perform a Schmidt decomposition on the entangled state \(\ket{\Psi^{\prime}}_{AB}=\sum_{k}\sqrt{p_{k}}\ket{u_{k}}_{A}\otimes\ket{v_{k} }_{B}\). Then, we simulate the SNAP gate on the bosonic mode \(A\) and \(B\) separately. Then, we simulate the SNAP gate on each components \(\ket{u_{k}}\) or \(\ket{v_{k}}\) separately, i.e. \((S\otimes S)\ket{\Psi^{\prime}}_{AB}=\sum_{k}\sqrt{p_{k}}(S\ket{u_{k}}_{A}) \otimes(S\ket{v_{k}}_{B})\). Then, taking the simulation of the SNAP gate on mode \(A\) as an example, we need to estimate when the quantum jumps occur and which jump operator occurs [57]. This is determined by the following unnormalized expectation values \[O_{j}(t)=\bra{\tilde{\Psi}}(t)J_{j}^{\dagger}J_{j}|\tilde{\Psi}(t)\rangle\,, \tag{20}\] where \[\begin{split}\ket{\tilde{\Psi}(t)}&=e^{-itH_{\text{ eff}}}\ket{\Psi}_{AB}\otimes\ket{gg}_{A^{\prime}B^{\prime}}\,,\\ H_{\text{eff}}&=(H_{0}^{\prime})_{AA^{\prime}} \otimes I_{BB^{\prime}}-\frac{i}{2}\sum_{j}J_{j}^{\dagger}J_{j}.\end{split} \tag{21}\] Here the summation of \(j\) is taken for the 4 different jump operators we consider. We can simplify the form of \(p_{j}\) to \[\begin{split}& O_{j}(t)=\sum_{k}p_{k}O_{j}^{(k)}(t)\\ &=\sum_{k}p_{k}\bra{u_{k}\otimes g}e^{itH_{\text{eff}}}J_{j}^{ \dagger}J_{j}e^{-itH_{\text{eff}}}|u_{k}\otimes g\rangle\,.\end{split} \tag{22}\] Here, \(\ket{u_{k}\otimes g}:=\ket{u_{k}}_{A}\otimes\ket{g}_{A^{\prime}}\). It is easy to verify that, for the ancillary relaxation and dephasing, the value of \(O_{j}^{(k)}(t)\) for all the Schmidt components are the same. We also check numerically that for the cavity loss, the value of \(O_{j}^{(k)}(t)\) for the Schmidt components with large weight \(p_{k}\) are almost the same. This means that we can approximate the overall simulation of \(\ket{\Psi}_{AB}\) by fixing the quantum jump location of all the components \(\{\ket{u_{k}}_{A}\otimes\ket{g}_{A^{\prime}}\}\) to be the same. This can be done in the Monte-Carlo solver in QuTiP by passing the same random seed for all the Schmidt components.
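For concreteness, a minimal single-mode version of the noisy SNAP-gate simulation described above can be set up in QuTiP as follows. This is a sketch only: the loss and dephasing rates, the \(\chi\) mismatch, the cat amplitude, the dephasing kick attached to the \(e\to g\) relaxation, and the nominal gate time are placeholder values that we assume here, not the settings used for Fig. 7.

```python
import numpy as np
from qutip import Qobj, basis, coherent, destroy, mcsolve, qeye, tensor

N = 30                                    # bosonic truncation
chi_f = 2 * np.pi * 1.0                   # MHz (time unit: microseconds), as in the text
Omega, dchi = 0.3 * chi_f, 0.02 * chi_f   # Rabi drive (text) and an assumed chi mismatch
kappa1, g_fe, g_eg, g_phi = 0.002, 0.01, 0.01, 0.005   # assumed rates

a = destroy(N)
n = a.dag() * a
g, e, f = (basis(3, k) for k in range(3))

# SNAP unitary S(phi): phase theta on Fock sectors with n mod 4 in {1, 2}
theta = np.pi / 2
S = Qobj(np.diag([np.exp(1j * theta) if k % 4 in (1, 2) else 1.0 for k in range(N)]))

# Hamiltonian in the chi-matched interaction picture, Eq. (19)
H = dchi * tensor(e * e.dag(), n) \
    + Omega * (tensor(f * g.dag(), S) + tensor(g * f.dag(), S.dag()))

# Approximated (time-independent) jump operators described above
c_ops = [
    np.sqrt(g_fe) * tensor(e * f.dag(), qeye(N)),
    np.sqrt(g_eg) * tensor(g * e.dag(), (1j * (np.pi / 2) * n).expm()),  # assumed kick
    np.sqrt(g_phi + kappa1 / 4) * tensor(e * e.dag() + 2 * f * f.dag(), qeye(N)),
    np.sqrt(kappa1) * tensor(qeye(3), a),
]

# Initial state: ancilla in |g>, mode in |C_alpha^+> (approximate logical |+_L>)
alpha = 2.0
cat = (coherent(N, alpha) + coherent(N, -alpha)).unit()
psi0 = tensor(g, cat)

T = np.pi / Omega                         # nominal gate duration (assumption)
tlist = np.linspace(0.0, T, 101)
res = mcsolve(H, psi0, tlist, c_ops=c_ops,
              e_ops=[tensor(qeye(3), (1j * np.pi * n).expm())], ntraj=100)
print("final mean photon-number parity:", res.expect[0][-1])
```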
2304.00047
PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels
Allowing organizations to share their data for training of machine learning (ML) models without unintended information leakage is an open problem in practice. A promising technique for this still-open problem is to train models on the encoded data. Our approach, called Privately Encoded Open Datasets with Public Labels (PEOPL), uses a certain class of randomly constructed transforms to encode sensitive data. Organizations publish their randomly encoded data and associated raw labels for ML training, where training is done without knowledge of the encoding realization. We investigate several important aspects of this problem: We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user (e.g., adversary) and a faithful user (e.g., model developer) that have access to the published encoded data. We then theoretically characterize primitives in building families of encoding schemes that motivate the use of random deep neural networks. Empirically, we compare the performance of our randomized encoding scheme and a linear scheme to a suite of computational attacks, and we also show that our scheme achieves competitive prediction accuracy to raw-sample baselines. Moreover, we demonstrate that multiple institutions, using independent random encoders, can collaborate to train improved ML models.
Homa Esfahanizadeh, Adam Yala, Rafael G. L. D'Oliveira, Andrea J. D. Jaba, Victor Quach, Ken R. Duffy, Tommi S. Jaakkola, Vinod Vaikuntanathan, Manya Ghobadi, Regina Barzilay, Muriel Médard
2023-03-31T18:03:53Z
http://arxiv.org/abs/2304.00047v1
# PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels ###### Abstract Allowing organizations to share their data for training of machine learning (ML) models without unintended information leakage is an open problem in practice. A promising technique for this still-open problem is to train models on the encoded data. Our approach, called Privately Encoded Open Datasets with Public Labels (PEOPL), uses a certain class of randomly constructed transforms to encode sensitive data. Organizations publish their randomly encoded data and associated raw labels for ML training, where training is done without knowledge of the encoding realization. We investigate several important aspects of this problem: We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user (e.g., adversary) and a faithful user (e.g., model developer) that have access to the published encoded data. We then theoretically characterize primitives in building families of encoding schemes that motivate the use of random deep neural networks. Empirically, we compare the performance of our randomized encoding scheme and a linear scheme to a suite of computational attacks, and we also show that our scheme achieves competitive prediction accuracy to raw-sample baselines. Moreover, we demonstrate that multiple institutions, using independent random encoders, can collaborate to train improved ML models. random encoding, outsourced training, sensitive data release, collaborative learning. ## I Introduction For many applications, training Machine Learning (ML) models requires advanced and expensive computational resources and a large set of labeled data with sufficient diversity. An attractive approach is to enable co-operation across organizations by outsourcing the training data from several centers to the cloud where the model is trained. However, privacy concerns have complicated the construction of multi-institution cohorts and limited the utilization of external ML resources. For example, to protect patient privacy, regulations such as HIPAA [1] and GDPR [2] prohibit sharing patients' identifiable information. Similarly, companies may be reluctant to share data that is part of their intellectual property and provides them a competitive edge. It is well understood now that merely removing metadata, e.g., a patient's name, is not enough for hiding sensitive information [3]. Characterizing and mitigating this challenge is the main focus of this paper. We are interested in developing a computationally effective mechanism to encode sensitive data in order to facilitate outsourcing of training for ML. Inherent in our model is the notion that labels will be perforce _public_ but that we wish to reduce the information leakage beyond the release of labels. There are a number of solutions that have been developed for different notions of privacy. For instance, federated learning [4] trains models in a distributed fashion, and in conjunction with adding noise during training [6], it can obtain the theoretical notion of differential privacy [7], see Fig. 1 (a). However, federated learning frameworks require a tight coordination across data-owners and model developers to jointly perform the training. As a result, such approaches are not suitable for enabling data-owners to deposit their datasets publicly. 
As another instance, cryptographic methods such as secure multi-party computation, fully homomorphic encryption, and functional encryption [8, 9, 10, 11] enable public sharing and offer extremely strong security guarantees by hiding everything about the data. However, these security guarantees come at the cost of extremely high computational and communication overheads for training today's advanced ML models [12, 13]. Moreover, the cryptographic methods do not accommodate the collaborative setting, i.e., training a single classifier using data of multiple data-owners, unless the data-owners trust each other and share the same key. We argue that common ML training tasks do not require a strong level of security to hide everything about the data. An example is the training task of an ML model for diagnosing medical complications from chest x-ray images. The training task already implies the information that most images in the dataset contain human 24 ribs without looking into individual ones. Consequently, the notion of security adopted by the cryptographic methods is an overkill for the outsourced ML training task, where labels are public. Instead, we seek an efficient encoding scheme to protect the information that is not already implied by the general information about the training task and the samples' labels. Random encoders were recently considered in the literature to train models directly on encoded data [5, 14, 15]. Instrahide [5] used random linear mixing of images in the private dataset and some public dataset to generate the encoded samples and encoded labels. It demonstrated the feasibility of training models on the randomly transformed data and its potentials, see Fig. 1 (b). However, as we show in this paper, the linearity of this scheme renders it vulnerable to adversarial distribution attacks. DauntLess [14] used random dense neural networks for encoding the samples and random mixing for encoding the labels. The authors proved perfect privacy is feasible when the encoding scheme is data-dependent, which is costly and not aligned with the philosophy of the outsourced training and collaborative learning. In this paper, we characterize randomized encoding schemes used by data-owners to publish their _encoded_ data, with associated _uncoded_ labels, from both privacy and utility perspectives. While developing practical schemes that guarantee that the encoded data cannot be used for any purpose other than the the designated training task (i.e., perfect schemes) remains an open challenge, our theoretical results offer primitives for improving scheme privacy. Building on these insights, we present PEOPL, an encoding scheme based on the random selection of an encoder from a rich family of neural networks. We empirically demonstrate that PEOPL obtains improved privacy over linear baselines while obtaining competitive accuracy to raw-sample baselines. When applied in the multi-institutional setting, each site uses independent, uncoordinated random encoders to encode their data. With the help of label information, models trained in this setting can map these independently constructed encodings into a shared feature space, see Fig. 1 (c). Our empirical results do not guarantee privacy against all possible attackers and PEOPL should not be used in real-world sensitive settings today. However, these results do reflect improved privacy-utility trade-offs compared to common baselines and our analysis demonstrates a promising new direction for scheme design. 
The organization of this paper are as follows: In Section II, we formulate the problem of hiding sensitive data via a random transform. We propose the notions of privacy score and utility score, using the Shannon entropy [16], that quantify the performance of the probability distribution, according to which the random transform is chosen. In Section III, we propose how to improve the private key distribution to obtain a better privacy score. In particular, we show that function composition maintains or increases the privacy score. Furthermore, we study two types of attacks on the encoder that is chosen by a data-owner, given the available information to the adversary: The optimal attack which is based on the actual probability distribution, and a sub-optimal attack which is based on a mismatched distribution. In Section IV, motivated by our theoretical results, we implement the family of possible encoding functions as random convolutional neural networks and random recurrent neural networks, for encoding the images and the texts, respectively. Our implemented schemes are designed for the cases where sensitive data does not have overlap with the relevant publicly-available datasets. We hypothesized that these schemes offer improved privacy over linear approaches. In Section V, we empirically compare our method to linear baselines on two chest x-ray image datasets MIMIC-CXR [17], CheXpert [18], Fig. 1: Three approaches for mitigating privacy concerns in outsourced training and collaborative learning. (a) Federated learning [4]: There exists a copy of global model at each data-owner. The local models are updated using data of their data-owner, and the model updates are exchanged with the server, rather than the sensitive data. (b) Instahide [5]: The sensitive samples are linearly mixed with each other and with some public samples. The mixed samples and mixed labels are transferred to the server for training. (c) PEOPL: Each sample is encoded by a nonlinear transform dedicated to the data-owner. The encoders of data-owners are sampled from a random distribution independently from each other, and they do not need to be shared with other data-owners or with the server. The encoded samples and raw labels are transferred to the server for training. and one text dataset SMS spam collection [19]. Section VI and VII are dedicated to discussion and conclusions, respectively. ## II Problem Setting and Performance Scores We first introduce necessary notations and definitions. We denote a set of elements with Calligraphic letters, e.g., \(\mathcal{X}\). A transformation (encoder) is denoted with \(T:\mathcal{X}\rightarrow\mathcal{Z}\). The cardinality (the number of elements) of a set and the factorial function are denoted by \(|.|\) and \((.)!\), respectively. For notation purposes, we occasionally impose a total order \(\preceq\) on \(\mathcal{X}\) to represent it by a vector \((x_{1},\ldots,x_{|\mathcal{X}|})\) such that \(x_{i}\preceq x_{j}\) if \(i\leq j\). We then can represent an encoder by a vector with size \(|\mathcal{X}|\) whose \(i\)-th element is \(T(x_{i})\). Without loss of generality, we assume all logarithms are in base \(2\) in this paper. The mathematical derivations used rely on both stochastic (random) and deterministic variables. All stochastic variables are denoted with bold font, e.g., \(\mathbf{x}\), while deterministic variables are denoted with non-bold italic font, e.g., \(x\). All proofs appear in the appendix. We start with description of the sensitive data release problem. 
We then define privacy and utility scores for the problem, which are two possibly competing targets. ### _Problem Description_ We denote the set of all samples by \(\mathcal{X}\) and assume it is a finite set. Each sample \(x\in\mathcal{X}\) is labeled by a labeling function \(L:\mathcal{X}\rightarrow\mathcal{Y}\), where the set of labels \(\mathcal{Y}\) is finite. We consider a model with three participants, Alice, Bob and Eve, to be consistent with the common terminology of privacy (see Fig. 2). We consider a setting where a data-owner (Alice) has some sensitive set of samples \(\mathcal{X}_{A}\subset\mathcal{X}\). Alice needs to train a classifier on her sensitive data to estimate the labeling function \(L(\cdot)\). To do so, she wishes to communicate her labeled data \(\{(x,L(x))\}_{x\in\mathcal{X}_{A}}\) with an ML developer (Bob) for learning. However, we assume an adversary (Eve) is also able to observe all communication that take place between Alice and Bob. In general, we consider the information revealed about Alice's data beyond their labels is privacy leakage and must be limited. **Assumption 1**.: _For the ease of theoretical derivations, we assume Alice's samples given their cardinality are chosen uniformly and independently from \(\mathcal{X}\). Thus, \(\Pr[\boldsymbol{\mathcal{X}_{A}}=\mathcal{X}_{A}]=\mathbb{C}\), where \(\mathbb{C}\) is independent of the realization of \(\boldsymbol{\mathcal{X}_{A}}\)._ Occasionally, we attempt to protect some sensitive features about Alice's data (other than the published labels). For this purpose, let \(S:\mathcal{X}\rightarrow\bar{\mathcal{Y}}\) be another labeling function that describes sensitive features of samples in \(\mathcal{X}\), where the set \(\bar{\mathcal{Y}}\) is finite. We assume that Eve has prior knowledge as a public dataset with identical distribution as Alice's data, denoted with \(\mathcal{P}\subset\mathcal{X}\). Each public sample is associated with some labels, including \(L(x)\) and \(S(x)\) for every \(x\in\mathcal{P}\). We consider the following type of schemes: Alice chooses a one-to-one encoding function \(T_{A}:\mathcal{X}\rightarrow\mathcal{Z}\) at random, from a family of functions \(\mathcal{F}\) according to distribution \(\Pr[\boldsymbol{T_{A}}=T_{A}]\). Thus, \[\mathcal{F}=\{T:\Pr[\boldsymbol{T_{A}}=T]\neq 0\}.\] She then transmits \(\mathcal{O}_{T_{A}}(\mathcal{X}_{A})=\{(T_{A}(x),L(x))\}_{x\in\mathcal{X}_{A}}\) to Bob. Bob then trains an ML classifier on the encoded data, thus seeking to learn \(L_{A}=L\circ T_{A}^{-1}\) on the encoded space \(T_{A}(\mathcal{X})\). In this setting, a scheme is the distribution according to which Alice chooses \(T_{A}\). Eve also receives Alice's labeled encoded data \(\mathcal{O}_{T_{A}}(\mathcal{X}_{A})=\{(T_{A}(x),L(x))\}_{x\in\mathcal{X}_{A}}\), which we call Eve's observation. We assume Eve also knows the scheme, so she has access to \(\Pr[\boldsymbol{T_{A}}=T_{A}]\), but not exactly the transform sampled by Alice. Thus, we define Eve's prior knowledge \(\mathcal{K}_{e}\) as follows: \[\mathcal{K}_{e}=\{\{(x,L(x),S(x))\}_{x\in\mathcal{P}},\Pr[\boldsymbol{T_{A}} =T_{A}]\}.\] The Shannon entropy \(\mathrm{H}[\boldsymbol{x}]\) of a random variable \(\boldsymbol{x}\) quantifies the average level of uncertainty inherent in its possible outcomes [16]. 
By definition, given the distribution of \(\boldsymbol{x}\) and the space set \(\mathcal{X}\) (set of possible values that the random variable \(\boldsymbol{x}\) can take), \[\mathrm{H}[\boldsymbol{x}]=-\sum_{x\in\mathcal{X}}\Pr[\boldsymbol{x}=x]\log\Pr[\boldsymbol{x}=x].\] The conditional entropy \(\mathrm{H}[\boldsymbol{x_{1}}|\boldsymbol{x_{2}}]\) quantifies the average level of _uncertainty_ inherent in the possible outcomes of a random variable \(\boldsymbol{x_{1}}\) when the outcome of another (possibly correlated) random variable \(\boldsymbol{x_{2}}\) is known, and \(\mathrm{H}[\boldsymbol{x_{1}}|\boldsymbol{x_{2}}]=\sum_{x}\mathrm{H}[\boldsymbol{x_{1}}|\boldsymbol{x_{2}}=x]\Pr[\boldsymbol{x_{2}}=x]\). ### _Privacy Definition and Eve's Attacks_ Given Eve's prior knowledge \(\mathcal{K}_{e}\) and observation \(\mathcal{O}_{T_{A}}(\mathcal{X}_{A})\), in this paper, we are interested in what she learns about Alice's private encoder \(\boldsymbol{T_{A}}\). Since each \(T_{A}\in\mathcal{F}\) is a one-to-one function from \(\mathcal{X}\) to \(\mathcal{Z}\), if Eve identifies \(T_{A}\), she can revert Alice's encoded data back into its sensitive original representation. Eve uses her probability distribution on \(T_{A}\) given her prior knowledge and her observations, i.e., \[P(T)=\Pr[\boldsymbol{T_{A}}=T\mid\mathcal{O}_{T_{A}}(\mathcal{X}_{A}),\;\mathcal{K}_{e}], \tag{1}\] to break Alice's encoding scheme. Given Eve's prior knowledge and observations, she chooses an encoder that has the highest likelihood of being Alice's encoder. Therefore, the _optimal attack_ of Eve is defined as follows: **Definition 1** (Optimal attack).: _The optimal attack outputs an encoder that maximizes \(P(T)\):_ \[T_{A}^{opt}=\arg\max_{T\in\mathcal{F}}P(T). \tag{2}\] Fig. 2: Alice (data-owner) transmits her labeled encoded data to Bob (ML developer). Eve (adversary) attempts to identify information about Alice's raw data beyond their labels. When the solution of (2) is not unique, Eve randomly chooses one of them. The optimal attack is possible for Eve when she has access to the actual probability distribution \(P(T)\) in (1). The _sub-optimal attack_ refers to the case when Eve uses a mismatched distribution \(Q(T)\) rather than \(P(T)\) to obtain the encoder that has the highest likelihood of being Alice's encoder: **Definition 2** (Sub-optimal attack).: _The sub-optimal attack outputs an encoder that maximizes the mismatched distribution:_ \[T_{A}^{\text{sub-opt}}=\arg\max_{T\in\mathcal{F}}Q(T).\] There are several ways to measure the privacy of an encoding scheme. In this paper, we use Shannon entropy to quantify Eve's average uncertainty about Alice's encoder: **Definition 3** (Privacy score against optimal attack).: _The privacy score of Alice's scheme is defined as:_ \[S_{\text{privacy}}(\mathbf{T_{A}})=\mathrm{H}[\mathbf{T_{A}}\mid\mathbf{\mathcal{O}_{T_{A}}}(\mathbf{\mathcal{X}_{A}}),\;\mathcal{K}_{e}].\] A higher privacy score is better as it is equivalent to a higher uncertainty of Eve about Alice's encoder, and consequently a more private encoding scheme. By definition, we have \[S_{\text{privacy}}(\mathbf{T_{A}})=\sum_{\mathcal{O}_{T_{A}}(\mathcal{X}_{A})}\mathrm{H}[\mathbf{T_{A}}\mid\mathbf{\mathcal{O}_{T_{A}}}(\mathbf{\mathcal{X}_{A}})=\mathcal{O}_{T_{A}}(\mathcal{X}_{A}),\;\mathcal{K}_{e}]\,\Pr[\mathbf{\mathcal{O}_{T_{A}}}(\mathbf{\mathcal{X}_{A}})=\mathcal{O}_{T_{A}}(\mathcal{X}_{A})].\] 
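As a toy illustration of Definitions 1 and 3 (our own sketch: a four-element universe and a uniform family of permutation encoders are assumed purely for tractability), the posterior \(P(T)\), an optimal attack, and the entropy term contributed by one particular observation can be computed by brute-force enumeration; Definition 3 averages such terms over observations exactly as in the expansion above.

```python
import itertools
import math

# Toy universe: four samples with labels; Alice publishes {(T_A(x), L(x))} for x in X_A
X = [0, 1, 2, 3]
L = {0: "+", 1: "+", 2: "-", 3: "-"}
X_A = [0, 2, 3]

# Scheme: F is the set of all permutations of X, drawn uniformly at random
F = list(itertools.permutations(X))

def observation(T, xs):
    # The unordered labeled encoded dataset that Bob and Eve receive
    return frozenset((T[x], L[x]) for x in xs)

T_alice = F[7]                                  # Alice's private draw
obs = observation(T_alice, X_A)

def consistent(T):
    # T could have produced the observation iff L(T^{-1}(z)) = y for every published (z, y)
    inv = {T[x]: x for x in X}                  # T is one-to-one
    return all(z in inv and L[inv[z]] == y for (z, y) in obs)

pos = [T for T in F if consistent(T)]
P = {T: 1.0 / len(pos) for T in pos}            # uniform prior gives a uniform posterior

score_this_obs = -sum(p * math.log2(p) for p in P.values())
T_opt = max(P, key=P.get)                       # an optimal attack (ties broken arbitrarily)
print(f"candidate encoders: {len(pos)}; entropy given this observation: {score_this_obs:.2f} bits")
print("optimal attack recovered Alice's encoder:", T_opt == T_alice)
```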
Let \(\mathrm{Pos}[T_{A}]\subseteq\mathcal{F}\) denote the set of encoders that are consistent with Eve's observation \(\mathcal{O}_{T_{A}}(\mathcal{X}_{A})\) and prior knowledge \(\mathcal{K}_{e}\), i.e., the encoders that could have produced the observation. The distribution of Eve on Alice's encoder given her observations and knowledge can be described in terms of \(\mathrm{Pos}[T_{A}]\): **Theorem 1**.: _The probability distribution \(P(T)\) can be stated as_ \[P(T)=\Pr[\mathbf{T_{A}}=T\mid\mathcal{O}_{T_{A}}(\mathcal{X}_{A}),\mathcal{K}_{e}]=\frac{\Pr[\mathbf{T_{A}}=T]}{\Pr[\mathbf{T_{A}}\in\mathrm{Pos}[T_{A}]]},\] _if \(T\in\mathrm{Pos}[T_{A}]\). \(P(T)=0\), otherwise._ **Corollary 1**.: _The optimal attack can also be characterized in terms of \(\mathrm{Pos}[T_{A}]\) as follows:_ \[T_{A}^{\text{opt}}=\arg\max_{T\in\mathrm{Pos}[T_{A}]}\Pr[\mathbf{T_{A}}=T].\] ### _Architectural Lines to Improve Privacy_ We start from a family of functions \(\mathcal{F}\) from which Alice chooses her encoder according to \(\Pr[\mathbf{T_{A}}=T]\). We wish to understand what kind of operations Alice can perform to improve the privacy of her encoding scheme. To this end, we explore several ways to grow \(\mathcal{F}\). In our next proposition, we show that adding arbitrary functions to \(\mathcal{F}\) might actually worsen the privacy score. **Proposition 1**.: _There exist cases where growing the family of functions that Alice randomly chooses her encoder from lowers the privacy score._ However, as we now show, composing families of functions can only preserve or increase the privacy score. **Theorem 2**.: _Let \(\mathcal{F}\) and \(\mathcal{F}^{\prime}\) be two families of encoders from which Alice samples her encoder according to \(\Pr[\mathbf{T_{A}}=T_{A}]\) and \(\Pr[\mathbf{T_{A}^{\prime}}=T_{A}^{\prime}]\), respectively. Consider a new family of encoders \(\mathcal{F}^{\prime\prime}=\mathcal{F}^{\prime}\circ\mathcal{F}=\{T^{\prime}\circ T:T^{\prime}\in\mathcal{F}^{\prime},T\in\mathcal{F}\}\), and the associated distribution_ \[\Pr[\mathbf{T_{A}^{\prime\prime}}=T_{A}^{\prime\prime}]=\sum_{T^{\prime}\circ T=T_{A}^{\prime\prime}}\Pr[\mathbf{T_{A}^{\prime}}=T^{\prime}]\Pr[\mathbf{T_{A}}=T].\] _Then, the privacy score of the new encoding scheme is at least equal to the privacy score of each initial one, i.e.,_ \[S_{\text{privacy}}(\mathbf{T_{A}})\leq S_{\text{privacy}}(\mathbf{T_{A}^{\prime\prime}}),\quad S_{\text{privacy}}(\mathbf{T_{A}^{\prime}})\leq S_{\text{privacy}}(\mathbf{T_{A}^{\prime\prime}}).\] Theorem 2 shows that composing families of encoders cannot reduce the privacy score. Indeed, as we show in the following example, it can potentially increase it.
**Example 1**.: _Let consider the universe of all samples is \(\{(x,L(x))\}_{x\in\mathcal{X}}=\{(1,+),(2,+),(3,-),(4,-)\}\), and consider these two families of encoders, from which Alice uniformly chooses her encoder:_ \[\mathcal{F}=\begin{cases}T_{1}:&(1,2,3,4)\\ T_{2}:&(2,1,3,4)\end{cases},\quad\text{and}\quad\mathcal{F}^{\prime}=\begin{cases} T_{1}^{\prime}:&(1,2,3,4)\\ T_{2}^{\prime}:&(1,2,4,3)\end{cases}.\] _Now, we consider the composition of these two families of encoders:_ \[\mathcal{F}^{\prime}\circ\mathcal{F}=\begin{cases}T_{1}^{\prime}\circ T_{1}:& (1,2,3,4)\\ T_{1}^{\prime}\circ T_{2}:&(2,1,3,4)\\ T_{2}^{\prime}\circ T_{1}:&(1,2,4,3)\\ T_{2}^{\prime}\circ T_{2}:&(2,1,4,3)\end{cases}\] _We can verify that_ \[\mathrm{H}[\mathbf{T_{A}}\mid\mathbf{\mathcal{O}_{T_{A}}}(\mathbf{\mathcal{X}_{A}}), \mathcal{K}_{e}]=\mathrm{H}[\mathbf{T_{A}^{\prime}}\mid\mathbf{\mathcal{O}_{e}}(\mathbf{T _{A}^{\prime}}),\mathcal{K}_{e}]=1.\] _However, \(\mathrm{H}[\mathbf{T_{A}^{\prime}}\circ\mathbf{T_{A}}|\mathbf{\mathcal{O}_{T_{A}^{\prime} }\circ\mathbf{T_{A}}}(\mathbf{\mathcal{X}_{A}}),\mathcal{K}_{e}]=2\), which is a higher privacy score than the one for the individual schemes._ We leverage our theoretical results on function composition to guide the design of our encoding scheme. Starting with a weak encoder, a random linear transform, we iteratively enrich the privacy of our scheme through function composition (e.g by adding with additional non-linear and linear layers), to build a random deep neural network. The non-linearity is a necessary component to happen between the composed layers, otherwise the composed linear layers reduce to just a one-layer linear encoder that is weak. More details on the encoding scheme are presented in Section IV. ### _Approximating the Optimal Attack_ The probability distribution used by the optimal attack is given in (1) and is revisited here, \[P(T)=\Pr[\mathbf{T_{A}}=T\mid\mathcal{O}_{T_{A}}(\mathcal{X}_{A}),\;\mathcal{K}_{e }].\] In practice, Eve does not have access to \(P(T)\) to perform the optimal attack. However, it has access to public dataset \(\mathcal{P}\subset\mathcal{X}\) with samples with the same distribution as Alice's samples. The public samples and Alice's samples have the same distribution even after the transformation via \(T_{A}\). Thus, a tractable solution to approximate the optimal attack, i.e., a sub-optimal attack, is to identify an encoder that results in a similar distribution to distribution of Alice's encoded data when applied to an available public dataset \(\mathcal{P}\subset\mathcal{X}\). There are a variety of ways to quantify the mismatch between two distributions, e.g., Kullback-Leibler (KL) divergence and Kolmogorov-Smirnov distance (K-S) [23, 24]. Here, we use the generic form \(\mathrm{dist}[\mathcal{Z},\mathcal{Z}^{\prime}]\), where samples of \(\mathcal{Z}\) and \(\mathcal{Z}^{\prime}\) are drawn from distributions \(p\) and \(q\), to represent the mismatch between two distributions \(p\) and \(q\). Thus, the sub-optimal attack is, \[T_{A}^{\text{sub-opt}}\triangleq\arg\min_{T\in\mathcal{F}}\mathrm{dist}[T_{A} (\mathcal{X}_{A}),T(\mathcal{P})]. \tag{4}\] In fact, the mismatch between distributions of Alice's encoded data \(T_{A}(\mathcal{X}_{A})\) and public encoded data via the current estimated encoder, can be used as a loss function for training an ML model that approximates \(T_{A}\). 
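A minimal sketch of this idea is given below (ours; the Gaussian-kernel MMD estimator is one standard choice of \(\mathrm{dist}[\cdot,\cdot]\), discussed next in the text, and the linear attack model and linear victim encoder are placeholder assumptions, chosen because they also illustrate why a purely linear scheme is exposed to distribution-matching attacks). The attack model is fit by gradient descent on the empirical MMD between Alice's published encodings and encoded public data.

```python
import torch

def mmd2(x, y, sigma=1.0):
    # Biased empirical MMD^2 with a Gaussian (RBF) kernel
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

torch.manual_seed(0)
dim, n_priv, n_pub = 16, 512, 512

# Alice's encoder (unknown to Eve) and her published encoded samples T_A(X_A)
W_alice = torch.randn(dim, dim)
X_A = torch.randn(n_priv, dim)
Z_A = X_A @ W_alice.T

# Eve's prior knowledge: a public dataset P with the same distribution as Alice's samples
P_pub = torch.randn(n_pub, dim)

# Sub-optimal attack (4): fit T_theta so that T_theta(P) matches the distribution of Z_A
W_theta = torch.randn(dim, dim, requires_grad=True)
opt = torch.optim.Adam([W_theta], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = mmd2(Z_A, P_pub @ W_theta.T)
    loss.backward()
    opt.step()
print("final empirical MMD^2:", float(loss))
```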
Assume \(T_{\theta}\) represents all possible choices that can be obtained by Eve's attack model, where each realization of \(\theta\) corresponds to one possible choice for the weights of the model. Thus, \[T_{A}^{\text{sub-opt}}=T_{\theta^{*}},\qquad\theta^{*}=\arg\min_{\theta}\mathrm{dist}[T_{A}(\mathcal{X}_{A}),T_{\theta}(\mathcal{P})].\] One popular practical method to measure the distribution mismatch \(\mathrm{dist}[T_{A}(\mathcal{X}_{A}),T_{\theta}(\mathcal{P})]\) is the maximum mean discrepancy (MMD), which has recently received significant attention [25, 26]. By definition, \[\text{MMD}[p,q]=\sup_{h\in\mathcal{H}}(E_{x\sim p}[h(x)]-E_{x\sim q}[h(x)]).\] If \(\mathcal{H}\) is the space of bounded continuous functions on \(\mathcal{X}\), the MMD measure is zero if and only if \(p=q\) [25]. An estimate of the above measure is obtained by replacing the expectations with empirical averages. To have a rich family of functions \(\mathcal{H}\), we use the functions in the unit ball of a reproducing kernel Hilbert space (RKHS), as suggested in [27, 28].

The attack (4) is a tractable approximation of the optimal attack and can be viewed as Eve using another distribution \(Q\) rather than \(P\) for her attack. We remind the reader that the privacy score against the optimal attack, \(S_{\text{privacy}}(\mathbf{T_{A}})\), and the privacy score against the sub-optimal attack, \(\tilde{S}_{\text{privacy}}(\mathbf{T_{A}})\), measure the average uncertainty about Alice's encoder when Eve has access to the actual probability distribution \(P\) and the approximated distribution \(Q\), respectively; see Section II-B. In fact, \[\mathrm{CE}(P,Q)-\mathrm{H}[\mathbf{T_{A}}\mid\mathcal{O}_{T_{A}}(\mathcal{X}_{A}),\;\mathcal{K}_{e}]=\sum_{T\in\mathcal{F}}P(T)\log\frac{P(T)}{Q(T)}=D_{\text{KL}}(P\|Q).\] Here, \(D_{\text{KL}}(P\|Q)\geq 0\) is the KL divergence. Therefore, \[\tilde{S}_{\text{privacy}}(\mathbf{T_{A}})-S_{\text{privacy}}(\mathbf{T_{A}})=\sum_{\mathcal{O}_{T_{A}}(\mathcal{X}_{A})}\Pr[\mathcal{O}_{T_{A}}(\mathcal{X}_{A})]\,D_{\text{KL}}(P\|Q)\geq 0,\] so the privacy score against the sub-optimal attack is at least as large as the privacy score against the optimal attack.

## IV Encoding Schemes

For text data (see Fig. 4), each sample is mapped to a sequence of word-embedding vectors that are fed into an RNN which includes a hidden state and a Tanh non-linearity. The initial state of the RNN is randomly chosen and plays the role of the private key. The encoded output can be either the sequence of RNN outputs, or the final value of its hidden state. The first type of output results in a sequence of encoded words per sample, and along with the labels, can be used to train a downstream model with memory, e.g., another RNN or Long Short-Term Memory (LSTM) models.
The second type of output results in a single vector, which is basically the encoded context, and can be used for training memoryless models such as a dense neural network. We highlight that using a random RNN for encoding the text data is only a simple benchmark here; one can use more sophisticated NLP models with randomly initialized states for encoding. Moreover, although only the initial value of the hidden state is chosen randomly, the subsequent values of the hidden state depend on the previous states and the incoming words. This results in a sequence of pseudo-random values for the hidden state. This is similar to a self-synchronising stream cipher [33], with the difference that the ciphertext does not need to be decoded to be useful. The unfolded version of a randomly-initialized RNN can be seen as a deep neural network with pseudo-random values for its weights.

### _Private Collaborative Learning_

It is known that a richer training dataset results in better predictive utility of ML models. In fact, the current bottleneck in training competent predictive models in many sensitive applications, such as healthcare, is the lack of training data, e.g., [34, 35] among many others. Sufficient and diverse data for training often does not belong to a single data-owner, and multiple cohorts are encouraged to collaboratively provide data for the training job. Assume we have \(D\) data-owners. We wish to enable each data-owner with index \(d\), \(d\in\{0,\ldots,D-1\}\), to publish their labeled dataset \(\mathcal{X}_{d}\) while hiding its sensitive information. Given the dataset \(\{(x,L(x))\}_{x\in\mathcal{X}_{d}}\), the data-owner randomly samples a private encoder \(T_{d}\) according to \(\Pr[\mathbf{T_{A}}=T_{d}]\), and uses \(T_{d}\) to produce labeled encoded samples \(\{(T_{d}(x),L(x))\}_{x\in\mathcal{X}_{d}}\). The data-owner can then deposit the labeled encoded dataset publicly for untrusted third parties to use in collaborative learning. In this section, we show that multiple data-owners can seamlessly collaborate to develop joint models by publishing datasets on the same task while using independently-sampled encoders, see Fig. 1(c).

From an information-theoretic view, the average uncertainty of Bob about the labeling function goes down as the training data is enriched, i.e., for \(d\in\{0,\ldots,D-1\}\), \[\mathrm{H}[\mathbf{L}\mid\{(x,L(x))\}_{x\in\mathcal{X}_{0}},\ldots,\{(x,L(x))\}_{x\in\mathcal{X}_{D-1}}]\leq\mathrm{H}[\mathbf{L}\mid\{(x,L(x))\}_{x\in\mathcal{X}_{d}}],\] and in fact, if the combined data is equivalent to the set of all samples, i.e., \(\mathcal{X}_{0}\cup\cdots\cup\mathcal{X}_{D-1}=\mathcal{X}\), the entropy of the labeling function given the combined labeled data is zero. The same argument holds for training on the combined encoded data of multiple data-owners, \[\mathrm{H}[\mathbf{L}\circ\mathbf{T}_{d}^{-1}\mid\cup_{d^{\prime}}\{(T_{d^{\prime}}(x),L(x))\}_{x\in\mathcal{X}_{d^{\prime}}}]\leq\mathrm{H}[\mathbf{L}\circ\mathbf{T}_{d}^{-1}\mid\{(T_{d}(x),L(x))\}_{x\in\mathcal{X}_{d}}].\] Thus, the utility score obtained with the combined encoded data is at least equal to the utility score when the model is trained with the encoded data of one data-owner. This is because the ML developer can just ignore the data of other data-owners and treat them individually to train \(D\) disjoint classifiers.
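Returning to the RNN-based text encoding described above, the following is a minimal sketch of the encoder and a memoryless downstream classifier. The \(200\)-dimensional word embeddings, \(200\) hidden units, Tanh non-linearity, and use of the final hidden state mirror the SMS experiments reported in Section V; the classifier width, the seed handling, and all other details are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RandomTextEncoder(nn.Module):
    """Randomly initialized, never-trained RNN; the random h0 acts as the private key."""
    def __init__(self, emb_dim=200, hidden=200, seed=0):
        super().__init__()
        torch.manual_seed(seed)
        self.rnn = nn.RNN(emb_dim, hidden, nonlinearity="tanh", batch_first=True)
        self.h0 = torch.randn(1, 1, hidden)          # private, randomly chosen initial state
        for p in self.rnn.parameters():
            p.requires_grad_(False)                  # weights stay at their random values

    def forward(self, emb_seq):                      # emb_seq: (B, T, 200) word-embedding vectors
        h0 = self.h0.expand(-1, emb_seq.size(0), -1).contiguous()
        _, h_last = self.rnn(emb_seq, h0)
        return h_last.squeeze(0)                     # final hidden state = encoded sample

classifier = nn.Sequential(                          # memoryless downstream model
    nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 1))
```

Only the encoded final hidden states and the labels would be released; neither the random weights nor the initial hidden state ever leave the data-owner.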
Next, we show that, as long as knowing the optimal classifier for one data-owner provides useful information for classifying the data of other data-owners, the utility score can theoretically increase by using combined encoded data.

**Proposition 3**.: _The utility score of a scheme that uses combined encoded data of multiple data-owners for training is lower bounded by the utility score of the scheme when only encoded data of one data-owner is used for training. The bound is sharp if and only if_ \[\mathrm{H}[\cup_{d^{\prime}\neq d}\{(T_{d^{\prime}}(x),L(x))\}_{x\in\mathcal{X}_{d^{\prime}}}\mid\{T_{d}(x)\}_{x\in\mathcal{X}_{d}},\mathbf{L}\circ\mathbf{T}_{d}^{-1}]=\mathrm{H}[\cup_{d^{\prime}\neq d}\{(T_{d^{\prime}}(x),L(x))\}_{x\in\mathcal{X}_{d^{\prime}}}\mid\{(T_{d}(x),L(x))\}_{x\in\mathcal{X}_{d}}].\]

Our experiments in Section V also show improvements in the predictive utility of an ML model when trained with combined individually-encoded data of two data-owners (each one samples its encoder randomly and independently) compared to the case where the model is trained only with encoded data of one data-owner.

## V Experimental Results

To evaluate the impact of our randomized encoding scheme on downstream modeling performance, we compared models that are trained with our encoded data to standard architectures trained on raw data. For each classification task and training setting, we report the average AUC of such models. For the case of multiple data-owners, we wished to evaluate the impact of leveraging independently-sampled encoders on modeling accuracy. As a result, we evaluated both model performance when leveraging a single encoder across both data-owners (Combined-Clear) and model performance when leveraging two independent encoders (Combined-Randomized). We note that prediction accuracy in the Combined-Clear setting acts as an upper bound for the Combined-Randomized setting.

To compare the privacy of multiple encoding schemes, we considered a computational attack that aims to estimate the encoder realization \(T_{A}\). We assumed that the attacker has access to a labeled public dataset \(\{(x,L(x)),S(x)\}_{x\in\mathcal{P}}\) and Alice's labeled encoded samples \(\{(T_{A}(x),L(x))\}_{x\in\mathcal{X}_{A}}\). Given this information, the attacker tries to learn a \(T^{\text{sub-opt}}\) such that \(T_{A}\approx T^{\text{sub-opt}}\). To estimate \(T_{A}\), we sampled an initial encoder \(T_{\theta}\) with the same architecture as \(T_{A}\), and trained it to minimize the MMD between \(Z^{*}=T_{\theta}(\mathcal{P})\) (generated ciphertext) and \(Z=T_{A}(\mathcal{X}_{A})\) (real ciphertext). We also consider a scenario where an attacker may try to learn a sensitive-attribute classifier on the encoded domain using the estimated encoder and the public data. The attacker can use this classifier on Alice's encoded data \(T_{A}(\mathcal{X}_{A})\) to obtain sensitive information that is not released by Alice.

Fig. 4: Architecture of text encoder. The encoding task resembles an inference task using an untrained RNN.

### _Chest X-Ray Data_

For these experiments, we utilized two benchmark datasets of chest x-rays, MIMIC-CXR ([17]) and CheXpert ([18]), from Beth Israel Deaconess Medical Center and Stanford, respectively. The MIMIC-CXR and CheXpert datasets are available under the PhysioNet Credentialed Health Data License 1.5.0 and the Stanford University School of Medicine CheXpert Dataset Research Use Agreement, respectively.
The samples of each dataset are labeled with up to \(5\) abnormalities, i.e., Edema, Pneumothorax, Consolidation, Cardiomegaly and Atelectasis. For each medical condition treated as a classification task, we excluded exams with an uncertain disease label, i.e., where the clinical diagnosis did not explicitly rule out or confirm the disease, and randomly split the remaining data \(60{-}20{-}20\) for training, development and testing, respectively. The images were downsampled to \(256{\times}256\) pixels. All experiments were repeated \(3\) times across different seeds.

**Evaluating modeling utility.** For each diagnosis task and training setting, we report the average AUC across the MIMIC-CXR and CheXpert test sets. Our encoding scheme split each image into several patches of size \(16{\times}16\), and leveraged a model with a depth of \(7\) and a hidden dimension of \(2048\). This model had \(\sim 22.9\)M parameters and mapped \(256{\times}256\)-pixel images to \(256{\times}2048\) vectors. Because of the patch-shuffling component of our encoding scheme, the encoded patches are unordered. As a result, we trained Vision Transformers (ViT) [31], a self-attention-based architecture that is invariant to patch ordering. Across all experiments, we used a one-layer ViT with a hidden dimension of \(2048\) for the utility evaluation, and we compared the classification performance with a raw-sample baseline. We trained all models for \(25\) epochs using the Adam optimizer [36], an initial learning rate of \(1{\rm e}{-}04\), a weight decay of \(1{\rm e}{-}03\) and a batch size of \(128\). We report our results in predicting various medical diagnoses from the chest x-ray datasets in Table I. The models trained on encoded data obtained AUCs competitive with our raw-sample baseline across all training settings. In the multi-hospital setting, we found that the model trained on encoded data was effectively able to leverage the larger training set to learn an improved classifier, despite using separate encoders for each dataset. Our scheme obtained an average AUC increase of 2 and 3 points compared to training only on the MIMIC-CXR and CheXpert datasets, respectively. Moreover, our scheme achieved equivalent performance in the Combined-Clear and Combined-Randomized settings, showing that multiple institutions do not pay a significant performance cost to collaborate via publishing their randomly-encoded data.

**Evaluating modeling privacy.** We performed the attack on convolutional architectures with a depth of \(7\). As a baseline, we also performed the attack when using a simple linear encoder, implemented as a single convolutional layer. Across all experiments, we used a hidden dimension of \(2048\) and trained \(T^{\text{sub-opt}}\) for \(25\) epochs. We performed a grid search over different learning rates and weight decay values for each attack. We recorded the validation MMD, as the loss of the MMD attack, against the linear encoding scheme and our encoding scheme. Moreover, we evaluated the attack by measuring the normalized MSE between generated (\(Z^{*}\)) and real (\(Z\)) ciphertext for some held-out plaintext images. For normalization, we use the MSE of a naive estimation of Alice's encoder that maps every sample to the average of Alice's encoded samples. We report the performance of our adversarial attack in Table II.
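For concreteness, the frozen, randomly initialized image encoder described above could be sketched as follows. This is only an illustration under explicit assumptions: the experiments use convolutional layers, whereas this sketch stacks per-patch linear layers, and the single input channel, the seed acting as the private key, and the fixed random patch permutation are likewise assumptions.

```python
import torch
import torch.nn as nn

class RandomPatchEncoder(nn.Module):
    """Maps a 256x256 image to 256 shuffled patch vectors of width 2048."""
    def __init__(self, patch=16, depth=7, hidden=2048, seed=0):
        super().__init__()
        torch.manual_seed(seed)                    # the seed plays the role of the private key
        self.patch = patch
        layers, in_dim = [], patch * patch         # flattened single-channel patches
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        self.net = nn.Sequential(*layers)
        self.perm = torch.randperm((256 // patch) ** 2)   # fixed random patch order
        for p in self.parameters():
            p.requires_grad_(False)                # the encoder is never trained

    def forward(self, img):                        # img: (B, 1, 256, 256)
        p = img.unfold(2, self.patch, self.patch).unfold(3, self.patch, self.patch)
        p = p.reshape(img.size(0), -1, self.patch * self.patch)   # (B, 256, 256) patch pixels
        return self.net(p)[:, self.perm, :]                        # (B, 256, 2048), shuffled
```

A downstream Vision Transformer can consume the resulting unordered patch vectors directly, which is why the patch shuffle does not hurt the utility evaluation.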
To compare the performance of multiple encoding schemes against the second type of attack, we began with the best estimated \(T^{\text{sub-opt}}\) from our adversarial attack experiments, and built a new classifier to predict a sensitive feature of an encoded sample. To build the training data, we use public data and \(T^{\text{sub-opt}}\) to obtain \(Z^{*}\). We report the ROC AUC of this classifier on both \(Z\) and \(Z^{*}\). We performed this attack both for an encoding with a depth of \(7\) and a hidden dimension of \(2048\), as leveraged in the modeling utility experiments, and when using a linear encoding. For each experiment, we trained a ViT for \(25\) epochs using the Adam optimizer, an initial learning rate of \(1{\rm e}{-}04\) and a batch size of \(128\). We report the performance of our sensitive feature attack in Table III. As expected, the linear encoding is not robust to either attack.

**Verification of Equality (5).** Here, we empirically validate that for the described encoding scheme, we have \(\mathrm{H}[\mathbf{T_{A}}\mid\mathcal{X}_{A},\mathcal{O}_{T_{A}}(\mathcal{X}_{A}),\mathcal{K}_{e}]\approx 0,\) and thus Alice's data and Alice's encoder have the same average uncertainty for the attacker. For this purpose, we show that an attacker who has access to \(\mathcal{X}_{A}\), \(\mathcal{O}_{T_{A}}(\mathcal{X}_{A})\), and \(\mathcal{K}_{e}\) can recover \(T_{A}\). We realize this goal in two steps. Using the unordered datasets \(\mathcal{X}_{A}\) and \(\mathcal{O}_{T_{A}}(\mathcal{X}_{A})\), we first demonstrate that an attacker can use her knowledge \(\mathcal{K}_{e}\) to recover the pairs \(\{(x,T_{A}(x))\}_{x\in\mathcal{X}_{A}}\). Then, using a sufficient number of matching pairs \(\{(x,T_{A}(x))\}_{x\in\mathcal{X}_{A}}\), we recover \(T_{A}\). For the first step, we leverage a model-based attacker \(M\), trained using the following procedure:

1. Sample an encoder \(T\in\mathcal{F}\) according to \(\Pr[\boldsymbol{T_{A}}=T]\).
2. Create an encoded dataset \(T(\mathcal{X}_{A})\).
3. Update the weights of the model \(M\) such that for every \(x,y\in\mathcal{X}_{A}\), where \(x\neq y\), \(M(x,T(x))\) is high (high similarity), while \(M(y,T(x))\) is low (low similarity).
4. Repeat from 1) until convergence.

We emphasize that at each iteration of the algorithm, a new encoding function \(T\) is sampled. Hence, \(M\) learns to generalize across the family of encoding functions for the fixed dataset \(\mathcal{X}_{A}\). As a result, \(M\) also generalizes to Alice's specific encoder \(T_{A}\), and the attacker obtains pairs of plaintexts and their corresponding ciphertexts. For the second step, we utilize the corresponding pairs to carry out a plaintext attack using gradient descent on the weights of a candidate encoder \(T\), to minimize the loss between \(T(x)\) and \(T_{A}(x)\) for every \(x\in\mathcal{X}_{A}\). For an encoder architecture with depth \(7\), in the first step (the matching game), we observed a re-identification AUC of \(99.93\) and a re-identification accuracy of \(89.44\). As for the second step, we measured the success of recovering Alice's encoder by the mean squared error (MSE) metric. We observed a validation MSE of 0.0435 between pairs of samples in \(\{T_{A}(x),T(x)\}_{x\in\mathcal{X}_{A}}\). In comparison, the MSE between \(\{T_{A}(x),T_{R}(x)\}_{x\in\mathcal{X}_{A}}\), where \(T_{R}\) is a random encoder with the same architecture as \(T_{A}\), is 0.780.
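A compact sketch of the matching game (steps 1-3 above) is given below. The similarity model, the softmax cross-entropy objective over all candidate (plaintext, ciphertext) pairs, and the flat tensor shapes are assumptions made for illustration; the actual attacker model \(M\) used in this verification is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Matcher(nn.Module):
    """Scores how likely ciphertext z is the encoding of plaintext x."""
    def __init__(self, x_dim, z_dim, hidden=512):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))

    def forward(self, x, z):
        return self.score(torch.cat([x, z], dim=-1)).squeeze(-1)

def matching_game_step(M, optimizer, X, sample_encoder):
    T = sample_encoder()                     # step 1: draw a fresh encoder T ~ Pr[T_A]
    with torch.no_grad():
        Z = T(X)                             # step 2: encode the known plaintexts
    n = X.size(0)
    pair_x = X.repeat_interleave(n, dim=0)   # every plaintext ...
    pair_z = Z.repeat(n, 1)                  # ... against every ciphertext
    logits = M(pair_x, pair_z).view(n, n)    # logits[i, j] = similarity(x_i, T(x_j))
    loss = F.cross_entropy(logits, torch.arange(n))   # step 3: the diagonal pairs should win
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `sample_encoder` is a placeholder for drawing a fresh encoder from the key distribution, and `X` and `Z` are assumed to be flattened into 2-D tensors.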
### _SMS Text Data_

The dataset contains labeled English SMS messages for spam classification [19]. A word-level tokenizer was used to tokenize the samples (messages). Then, samples with fewer than \(5\) tokens were removed, in order to train only on texts with discernible meaning. The filtered dataset is composed of \(4916\) samples. Next, we used the GloVe \(200\) dictionary as the word embedding [37]. GloVe \(200\) is trained on Wikipedia and GigaWord \(5\) data to transform words into vector representations, such that some similarity metrics between the words are preserved. We simulate two data-owners by equally splitting this dataset into two parts, Spam Dataset 1 and Spam Dataset 2. We randomly split each dataset \(45-45-10\) for training, development and testing, respectively. Our encoder is a randomly initialized RNN model with \(200\) features in its hidden state. The final value of the hidden state represents the encoded sample we use for training a classifier, as described next.

**Evaluating modeling utility.** The resulting encoded representations, together with their labels, are sent to a downstream densely connected neural network for spam classification. We evaluate the utility of our encoder by computing the AUC of a spam classifier trained on encoded data, compared to a raw-sample baseline that is trained on the original, uncoded data. We train the models using early stopping with a patience of \(10\) to avoid overfitting, and we use the Adam optimizer and a learning rate of \(1\mathrm{e}{-03}\).

**Evaluating modeling privacy.** To test the privacy of the encoder, we conducted an adversarial MMD attack to obtain an estimated encoder, and recorded the validation MMD as well as the AUC of recovering the sensitive feature from Alice's encoded data using the estimated encoder. For the sensitive-feature attack, we used the same label (spam or not) as both the public label and the private label, since we did not have access to a text dataset with two features suitable for our experiments. We trained a classifier on public data encoded via the estimated encoder, and used the trained classifier to estimate the label of Alice's encoded data. The results are given in Table V and show that the linear encoding is notably weaker than our RNN-based encoding against both attacks.

## VI Discussion

Following the initial results on using a single (potentially randomized) transform to encode the samples of an ML training dataset, also known as instance encoding, a couple of works have supported or challenged its privacy promises. We conclude our paper by briefly going through a subset of these works and arguing how PEOPL stands in light of these theoretical and experimental arguments.

From an information-theoretic view, a scheme is perfectly private if the transformed and original datasets have zero mutual information. However, perfect privacy is not helpful for data sharing, since the encoded data cannot be used for training a classifier, which is the authorized use. Recently, in [38, 39], the notions of perfect sample privacy and perfect subset privacy were considered and shown to be attainable using instance encoding. These papers demonstrated the possibility of ensuring zero mutual information between the encoded samples and any subset of original samples with a constrained cardinality, while preserving the learnability of the encoded dataset.
In this paper, we took an alternative approach and shifted the focus of the privacy question from the original data to the random choice of the encoder made by the data-owner (i.e., the private key). Thus, the privacy was evaluated against an adversary who is interested in breaking the encoder given the sensitive encoded data and some relevant uncoded public data. The arguments that discourage using randomized instance-encoding schemes for privacy purposes either targeted a specific encoding scheme, such as linear schemes, or are based on assumptions that do not apply to the setting of this paper. After the introduction of Instahide [5], where the sensitive samples are randomly and linearly mixed with each other and with some public samples before training, it was shown experimentally in [40] that this random mixing can be undone. Further, it was argued in [41] that it is theoretically impossible to achieve a form of privacy (called distinguishing privacy) by means of mix-up type encodings. This impossibility argument does not apply to PEOPL, as it is neither a linear encoding nor a mix-up type scheme. Moreover, the results in [41] do not consider encoding schemes that use private keys, which are the focus of this paper. Later, in [42], an attack was proposed to re-identify correspondences between transformed data (encoded via the scheme presented in Section V-A) and raw data from shuffled datasets [43]. The attacker in this task has access to matching (unordered) original and encoded datasets, and intends to break the encoder. This is explicitly not the setting we considered for the encoding schemes we presented in Section IV, where there is no data overlap. In fact, we proved in Proposition 2 that if such an assumption holds, the key and the original data have the same average uncertainty in the eyes of an adversary. The theoretical impossibility result of [42], which argues that ideal privacy cannot be achieved for non-trivial encoding schemes, is not valid for our setting for two reasons: (i) It is assumed in [42] that the encoding scheme has some auxiliary information about the labeling function, which does not apply to our setting, where the encoder is drawn from a distribution that is independent of the data distribution. (ii) We do not target perfect or ideal privacy in this paper. Instead, we quantify the privacy obtained via a key distribution, using the average uncertainty of the adversary about the encoder chosen by the data-owner given the observed encoded data and some public un-encoded data. Lastly, we note that this paper does not claim to propose an encoding scheme that guarantees the encoded data cannot be used for an unauthorized task (i.e., anything beyond the designated ML training task). Rather, it sheds light on this important privacy-utility problem. In particular, we theoretically showed why random non-linear transforms, and especially random deep neural networks, are a viable candidate for this problem. We provided empirical experiments to test the robustness of instances of our encoding scheme against adversarial attacks, following standard practice in security [44, 45]. While our empirical attacks are not sufficient to guarantee any privacy, they are insightful for demonstrating trade-offs and measuring improvements in the scheme design. Our results validated our theoretical argument that composing random functions can offer improved privacy over linear approaches, while maintaining accuracy competitive with raw-sample baselines.
Moreover, we demonstrated that multiple independent random encoders can be combined to achieve improved overall utility. While PEOPL should not be directly applied for private data sharing today, its promising empirical properties show that randomized neural networks warrant further study, as they offer a new direction for private encoding-scheme design.

## VII Conclusions and Future Work

This paper combined the idea of key-based encoding with training on the latent representation of data. The encoded data does not need to be decoded to be used for the downstream training task. Thus, effectively, the key does not need to be shared among parties. With the introduced notions of privacy and utility scores of a random encoding scheme, we proposed how one can improve the randomized scheme. While the proposed approach does not guarantee zero information leakage beyond the labeling data, our analysis provided useful machinery for the design of competent encoding schemes. On two benchmark chest x-ray datasets, MIMIC-CXR and CheXpert, and a spam text dataset, we found that models trained on our encoded data obtained performance competitive with our raw-sample baselines. In the multi-institutional setting, where each site leverages an independently chosen encoder, we demonstrated that models trained on the combined data of multiple cohorts could effectively leverage the larger training data to learn improved classifiers. Developing randomized coding schemes which facilitate collaborative training among data-owners with different types of data is a promising future direction to pursue. Another future direction is to develop hybrid collaborative schemes, which allow data-owners to share either their raw data, randomly encoded data, or their model updates. Last but not least, investigating other information measures for quantifying the privacy and utility performance of an encoding scheme remains for future research.
2309.16801
Test-Case Quality -- Understanding Practitioners' Perspectives
Background: Test-case quality has always been one of the major concerns in software testing. To improve test-case quality, it is important to better understand how practitioners perceive the quality of test-cases. Objective: Motivated by that need, we investigated how practitioners define test-case quality and which aspects of test-cases are important for quality assessment. Method: We conducted semi-structured interviews with professional developers, testers and test architects from a multinational software company in Sweden. Before the interviews, we asked participants for actual test cases (written in natural language) that they perceive as good, normal, and bad respectively together with rationales for their assessment. We also compared their opinions on shared test cases and contrasted their views with the relevant literature. Results: We present a quality model which consists of 11 test-case quality attributes. We also identify a misalignment in defining test-case quality among practitioners and between academia and industry, along with suggestions for improving test-case quality in industry. Conclusion: The results show that practitioners' background, including roles and working experience, are critical dimensions of how test-case quality is defined and assessed.
Huynh Khanh Vi Tran, Nauman Bin Ali, Jürgen Börstler, Michael Unterkalmsteiner
2023-09-28T19:10:01Z
http://arxiv.org/abs/2309.16801v1
# Test-case quality - understanding practitioners' perspectives

###### Abstract

Background: Test-case quality has always been one of the major concerns in software testing. To improve test-case quality, it is important to better understand how practitioners perceive the quality of test-cases. Objective: Motivated by that need, we investigated how practitioners define test-case quality and which aspects of test-cases are important for quality assessment. Method: We conducted semi-structured interviews with professional developers, testers and test architects from a multinational software company in Sweden. Before the interviews, we asked participants for actual test cases (written in natural language) that they perceive as good, normal, and bad, respectively, together with rationales for their assessment. We also compared their opinions on shared test cases and contrasted their views with the relevant literature. Results: We present a quality model which consists of 11 test-case quality attributes. We also identify a misalignment in defining test-case quality among practitioners and between academia and industry, along with suggestions for improving test-case quality in industry. Conclusion: The results show that practitioners' background, including roles and working experience, are critical dimensions of how test-case quality is defined and assessed.

Keywords: software testing, natural-language test case, test-case quality.

## 1 Introduction

Testing plays an important role in software quality assurance, which has been one of the main concerns in the software development life cycle. The fundamental artefacts in testing are test cases. Grano et al. have shown in their study that good test cases, in terms of being simple and readable, make it easier for developers to maintain them and to keep up with a fast software development life cycle [11]. A study by Athanasiou et al. showed that high quality of test code could also increase the performance of development teams in fixing bugs and implementing new features [1]. Therefore, good test cases increase the confidence in testing, and thereby assist product release decisions. Hence, assuring the quality of test cases is an important task in the quality assurance of software-intensive products. There have been studies which focused on different test-case quality attributes such as performance, readability, and effectiveness [3, 7, 10, 11, 13, 17, 19, 20, 24]. Some studies adapted the ISO standard for software quality to define test-case quality [18, 23]. Those studies provided researchers' perceptions of test-case quality. Though the contributions from academia are important, it is necessary to verify how knowledge could be transferred between academia and industry. The first step would be to investigate how test-case quality is understood by practitioners. However, there is currently a lack of empirical studies on the topic. To reduce this gap, we conducted an exploratory study to investigate how test-case quality is defined and assessed in practice. Our focus was manual test cases written in natural language. This type of test case is still required for testing levels such as system testing and acceptance testing, and for testing approaches such as exploratory testing. Hence, studying how the quality of natural-language tests is perceived in practice is as important as studying that of code-based test cases. The contributions of the study are as follows:

* Descriptions of test-case quality attributes identified by practitioners.
* Reasons for the difference in defining and assessing test-case quality among practitioners with different roles, and between academia and practice. * Context factors to consider when defining test-case quality. * Suggestions to improve test-case quality by practitioners. * Sources of information for understanding and assessing test-case quality suggested by practitioners. The remainder of the paper is structured as follows: Section 2 describes related work. Section 3 describes the study design, followed by Section 4 which discusses threats to validity. Section 5 discusses our findings. Our conclusions and future work are summarised in Section 5. ## 2 Related Work We identified nine studies which involved practitioners in their work focusing on test-case quality [1, 2, 4, 6, 8, 9, 12, 15, 22]. We organised them into three groups. The first group includes two studies which integrated practitioners' knowledge into the studies' results regarding test-case quality [22, 2]. Adlemo et al. [22] introduced 15 criteria for good test cases. There was no specific focus on types of test cases. Of those criteria, ten were inspired by the literature while five came from practitioners' suggestions. The criteria were ranked by 13 Swedish practitioners with experience in software testing and software development. _Repeatability_, meaning that a test case should produce the same result whenever it receives the same input, had the highest votes from practitioners. Bowes et al. [2], focused on test code in unit testing. The authors identified 15 testing principles collected from three sources: a workshop with industrial partners, their software testing teaching materials, and practitioners' books. _Simplicity_ in terms of test-code size, number of assertions and conditional test logic, is considered as the most important principle, and is the foundation for the other ones. The second group contains four studies which had practitioners evaluate their hypotheses relating to some test-case quality attributes [1, 4, 6, 15]. Jovanovikj et al. [15] introduced an approach and a tool to evaluate and monitor test-case quality. They presented eight quality characteristics based on Zeiss et al.'s work [23], which relied on the ISO/IEC 25010 (ISO/IEC, 2011) software quality models. To verify their approach's applicability, they conducted a case study in the context of natural-language tests, and had interviews with two quality managers and some testers. Similarly, Athanasiou et al. [1] proposed a model to assess three test-code quality attributes, namely Completeness, Effectiveness, and Maintainability with associated metrics. To verify if the model was aligned with practitioners' opinions, they compared its results from two software systems with the evaluations of two experts via focused interviews. They concluded that there is a strong correlation between test code quality, throughput, and productivity of issue handling. In another study, Daka et al. [6] introduced a model of unit test readability which could help to generate test suites with high coverage and high readability. Their model involved human judgement, but there was no clear indication on their selection criteria. Focusing on only test-case effectiveness, Chernak [4] proposed an evaluation and improvement process for test cases in general. The process was used by one project team, including three testers and 10 developers who worked on a banking system. The third group includes three studies which discussed test smells [8, 9, 12]. Hauptman et al. 
[12] presented seven test smells in natural-language tests, which were collected based on their experiences with evaluating natural-language system tests. Their study was claimed as the first work on test smells in the context of natural-language tests. For smells in test code, Garousi et al. [9] conducted a systematic'multivocal' literature mapping and developed a classification for 196 test smells. The authors included their descriptions of top 11 most discussed test smells in a subsequent study [8]. The related works show that practitioners' perceptions of test-case quality have not been well studied. Particularly, we have not identified any study focusing on eliciting first-hand data from practitioners on their perceptions of test-case quality in the context of natural language tests. ## 3 Research Method The objective of the study is to gain a better understanding of practitioners' perceptions towards test-case quality. We conducted an exploratory study with a multinational telecommunication company in Sweden. This type of study was chosen since the research focus on eliciting practitioners' genuine perspective on test-case quality has not been well studied. The exploratory study helped us to get more familiar with the research topic, to determine what the study design(s) should be for our subsequent studies on the same topic. In this study, we used semi-structured interviews to explore the practitioners' perspectives on the topic. According to Robson and McCartan [21], this interview approach allows researchers to flexibly modify the interview questions depending on the interviewees' answers. Since the interviews were about discussing test-case quality, the same strict questionnaire would not be applicable to all interviewees. Also, the interviews were based on real test cases provided by the interviewees. Thanks to the explicit test cases, our approach makes it easier for interviewees to refer to instances of quality aspects instead of vague, generic or abstract ideas. ### Research Questions * _RQ1. How do practitioners describe test-case quality?_ The research question directly connects to our study's objective. Without defining quality criteria upfront, we first want to elicit information on how practitioners perceive test-case quality. * _RQ2. How well is the understanding of test-case quality aligned among practitioners in a company?_ Test-case quality might be assessed differently depending on how it is perceived by the assessors. That could affect testing-related activities such as test-case design, and test-case maintenance. Therefore, we want to understand whether practitioners perceive test-case quality differently; if so, then we want to identify the potential reasons. * _RQ3. What context factors do practitioners consider when assessing test-case quality?_ The context factors could be testing level, testing purpose, characteristics of the software system under test, etc. Test-case quality might be context-dependent. Hence, we want to identify the potential context factors or aspects which could influence how test-case quality is assessed. * _RQ4. What are potential improvements suggested by practitioners for improving test-case quality?_ Answers to this research question would help us to understand practitioners' needs regarding test-case quality, which could give us and researchers potential research directions. * _RQ5. 
What information sources guide practitioners' opinions of test-case quality?_ Identifying such information sources could helps us to understand why practitioners perceive test-case quality in certain ways. ### Data Collection The data was collected from the interviews which included test cases provided by the interviewees. Before conducting the study, we had a meeting with the company's representatives to present our study's design and to obtain basic understanding of the company's structure, and potential interviewees. #### 3.2.1 Interview Design Before conducting the interviews, we asked each participant to provide us three test cases with their quality classification (good, bad or normal). They could choose any test case from the company's test suites that they are familiar with. We also asked for a written rationale for the classification, since we not only wanted to see whether other interviewees would rate them similarly, but also whether they would provide similar reasons. We intentionally did not define quality criteria upfront in order to elicit the genuine perceptions of the interviewees. We swapped the test cases between two participants who work in the same team. Before the interviews, we informed the interviewees which test cases they had to review extra. Hence, in the interviews, the swapped test cases were also judged by the interviewees so that we could gauge their alignment. We used the pyramid model [21] for our interview session. Hence, each interview starts with specific questions followed by more open questions. More specifically, the interview session is divided into three phases. * Part 1: Background Information: we focused on obtaining information about the interviewee's testing experience. * Part 2: Test Case Classification: we asked the interviewee to clarify his reasons for his test-case quality classification and to discuss some test cases given by another participant. * Part 3: General Perception of Test-Case Quality: we had a more generic discussion with the interviewee about his or her perception of test-case quality. To mitigate flaws in our interview design, we conducted a pilot interview with a colleague whose research interest includes software testing and has been working with test cases for years. The interview questionnaire could be found at [https://tinyurl.com/y6qakcjc](https://tinyurl.com/y6qakcjc). #### 3.2.1 Participants Selection Our selection criteria were that (1) a participant should be a tester and/or a developer; (2) the participant has at least one year of working experience relating to software testing. Our selection is convenience sampling [16] as we involved those who meet our criteria and are willing to participate in the interviews. At the end, we had six participants from three different teams working in different projects. Their information is described in Table 1. Even though four of them are test architects, their responsibilities still involve working with test cases. Hence, having them participate in the study did not affect our study design. #### 3.2.2 Interview Execution The interviews were conducted by two researchers each. One researcher asked questions while the other took notes and added extra questions for clarification if needed. Each interview took around one hour, and was audio-recorded with the participant's consent. #### 3.2.3 Test Cases In total, we collected 17 manual natural-language test cases as not all practitioners followed the instruction of providing three test cases each. 
They were extracted from the company's test suites for functional testing. We focused on the following information of a test case in our analysis: ID, name, description, and steps. Even though there is no strict format for the test case's description, it often includes, but does not require, the following information: purpose, preconditions, additional information, references, and revision history. Additionally, we also received the quality classification (Good/Bad/Normal) and the written explanations before the interviews. Nonetheless, we could not report the actual test cases' content due to confidentiality reasons. ### Data Analysis #### 3.3.1 Interview Data Before analysing the data, the first author transcribed and anonymised all audio recordings of the interviews. The transcribed data were coded using a thematic coding approach [5]. More specifically, we applied an integrated approach, which allows codes to be developed both inductively from the transcribed data and deductively from the research questions and researchers' understanding of test-case quality in general. The main themes which were inspired by the research questions are as follows: * Practitioners' Background Information: contains information such as roles, testing experience; * Test-Case Quality Description: contains information about how practitioners described test-case quality and their selection of the top three quality indicators or attributes of a good test case and of a bad one; * Test-Case Quality Assessment: contains information about practitioners' classification of test-case quality and their reasoning; * Test-Case Quality Alignment: contains information about differences and similarities in practitioners' perceptions of test cases and their reasoning; * Test-Case Quality Improvement: contains information about practitioners' suggestions to improve test-case quality; * Source of Information: contains information about sources that practitioners refer to when they need to assess or get a better understanding of test-case quality. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline **ID Role** & \multicolumn{3}{c}{**Exp\({}^{1}\)Make Design Review Report Maintain Execute TC**} \\ & & **TP\({}^{2}\)** & **TCs\({}^{3}\)** & **TCs** & **TR\({}^{4}\)** & **TCs** & **TCs** & **ID** \\ \hline P1 test architect 6 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & & P1.1-3 \\ \hline P2 tester & 14 & & & & & ✓ & ✓ & ✓ & P2.1-4 \\ \hline P3 test architect 6 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & & P3.1-2 \\ \hline P4 tester, test architect, consultant & & & & & & & ✓ & P4.1-3 \\ \hline P5 developer & 5 & & ✓ & & & & & ✓ & P5.1-2 \\ \hline P6 test architect 15 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & & P6.1-3 \\ \hline \hline \end{tabular} * Exp: number of years of working experience in testing \({}^{3}\) TC: test case * TP: test plan * TP: test results \end{table} Table 1: Participants’ Experience, Roles, Tasks and Test Cases Provided For each interview, we followed the following steps: Step 1: Starting from the beginning of the interview, mark a segment of text which is relevant to the pre-defined themes with a code and assign it to a corresponding theme. For the Test-Case Quality Description theme, relevant codes could be test-case quality attributes such as _understandability_, _effectiveness_, _traceability_, etc. Some of those attributes were named and explained explicitly by the practitioners while the others were generated based on their discussions during the interviews. Step 2: Find the next relevant segment of text. 
Mark it with an existing code or with a new code and assign it to a relevant main theme. If the information is related to test-case quality but does not belong to any main theme then a new theme is created for that new information. It helps us to capture emerging concepts related to our study's focus. Step 3: Repeat Step 2 until no relevant information is found. Step 4: Create sub-themes under every main theme to cluster related codes together. During the process, codes, themes, and their descriptions were continuously refined to fit the data. We used a commercial tool to complete this coding process, which allows us to maintain traceability between the transcribed data and the related codes and themes. To mitigate bias and increase the reliability of the coding, the first set of codes and themes were discussed by two researchers, and the coding scheme was refined. Furthermore, the final set was reviewed by all researchers. All disagreements regarding the coding were resolved in a meeting by discussion. To obtain an overall ranking of the top quality indicators and attributes of a good test case and of a bad one, each of them gets three points if it was ranked first by a practitioner, two points if it was ranked second, and one point otherwise. We wanted to get a general picture of which quality attributes or indicators are normally considered more important than the other by practitioners. Hence, we did not consider the contextual factors identified by RQ3 in the ranking. #### 3.2.1 Test Case Data To analyse the collected test cases, we extracted the quality classifications and reasons from practitioners' written notes. The information was coded in the same manner as the interview data (see previous section). To compare practitioners' opinions with the literature, before the interviews, we searched for test smells in those test cases based on test smells' descriptions from two studies [8, 12]. This step did not only give us another assessment angle but also helped us to better understand the test cases' quality. We selected those studies for reference for two reasons. The first study [12] is the most recent work on test smells of natural-language tests. The second study [8] provides us descriptions of the top 11 most discussed smells of test code. There are common characteristics between natural language test cases and unit test cases such as testing logic, issues in test steps, dependencies between test cases, test behaviour when executing, etc. Hence, the study of Garousi et al. [8] is a relevant reference. Even though that study was based on a former work of Garousi et al. [9], the former one did not provide definitions of test smells, hence not chosen as a reference. ## 4 Threats to Validity #### 4.0.1 Construct Validity Construct validity is concerned with the reliability of operational measures to answer the research questions. Our interviews were semi-structured with follow-up questions which gave us opportunities to clarify practitioners' answers and reduce misunderstandings during the interview. Their written explanations for the test cases' quality assessment reduced the risk of misinterpreting their answers. The test cases were selected subjectively by the practitioners to demonstrate their perspective of good/bad/normal test cases in terms of their perceived quality. Since our study's type is exploratory and attempts to capture practitioners perspective, this selection method is not considered a threat to the validity of our results. 
Additional information about practitioners such as whether they were ISTQB1-certified might influence their perspective on test-case quality. Since we did not collect this information, it is a limitation of the study. Nevertheless, we collected important information (their testing experience, roles, and working tasks relating to test cases) which would be still sufficient to describe the participants' background information. Footnote 1: [https://www.istqb.org/](https://www.istqb.org/) #### 4.0.2 Internal Validity Internal validity is about causal relations examined in the study. Even though we identified possible aspects which should be considered when defining and assessing test-case quality, our focus was not to generate a complete list of such aspects. By not eliminating one aspect or another, this type of threat is not of concern. #### 4.0.3 External Validity External validity is concerned with the generalisability of the study's findings. In general, with the "convenience sampling" [16], the sample might not represent the population, which could potentially affect the findings' generalisability. However, as our study is exploratory, not confirmatory, this sampling method is not considered as a validity threat. Our study's context is characterised by the type of the company, which is a global company working on embedded software systems, the practitioners' documented working background and the nature of the natural-language tests. That is the context to which the findings can be potentially applied. #### 4.0.4 Reliability Reliability is about the reliability of the results. Our study's design was discussed among all authors of the paper. The interviews were conducted by two researchers and the findings were discussed by all researchers to mitigate the bias from one researcher. The data collection process and interview questions were clearly documented to increase the reliability. ## 5 Results and Discussion In this section, we present and discuss our findings in relation to each of the research questions stated in Section 3.1. ### Test-Case Quality Definition (RQ1) To answer the first research question, we asked practitioners to define test-case quality and explain how they would assess such quality (the interview question Q7-11). Table 2 contains a list of 11 test-case quality attributes that we collected. It also includes the practitioners' authentic terms and phrases used to describe the attributes. It is worth mentioning that the use of specific test cases, chosen by the participants from the organization's test suites, triggered more in-depth reflections. The insights from practitioners regarding these test cases identified as many unique test-case quality attributes as a discussion in abstract of what constitutes test-case quality. Overall, we could see that the quality attributes could be placed into two groups. The first group, including _understandability_, _step cohesion_, _completeness_, _efficiency_, and _flexibility_, is oriented around quality attributes of a test case which could be relevant for practitioners when executing it. The second group includes _understandability_, _simplicity_, _completeness_, _homogeneity_, _issue-identifying capability_, _repeatability_, _traceability_, _effectiveness_, and _reasonable size_. The latter group of attributes relates to general concerns, namely the design, the maintenance, and the objective of testing in general. _Understandability_ is the most common attribute, and discussed by all practitioners. 
A reason for this could be the nature of the discussed test cases, which were written in natural language. Hence, it makes sense that ambiguity in test cases is considered as a major concern. We could also see an alignment between practitioners' perceptions and the literature. _Understandability_ is directly connected to three test smells, namely _ambiguous tests_ in natural-language tests [12], _long/complex/verbose/obscure test_, and _bad naming_ in test code [8]. Even though the last two smells are for test code, according to their definitions, which are "It is difficult to understand the test at a glance. The test usually has excessive lines of code" and "The test method name is not meaningful and thus leads to low understandability" respectively, those smells could also occur in natural-language tests. The other connection is between the quality attribute _simplicity_ and the test smell _eager test_, which is described as "The test is verifying too much functionality in a single test method" [8]. Apart from identifying test-case quality attributes, practitioners also listed the top characteristics and indicators of a good test case and of a bad one. The outcome is a mixture of specific quality indicators: _clear objective_ (the purpose of a test case), _clear precondition_ (how to set up the testing environment), _clear steps with clear expected results_, and general quality attributes: _understandability_, _completeness_, _effectiveness_. According to our ranking scheme, _understandability_ is rated as the most important attribute. This is consistent with the most commonly discussed quality attributes in the general discussion. The second place goes to the quality indicator _clear objective_. One of the reasons given by one practitioner was that "the objective of each test case or of each component of the test scope is the most important thing because those are combined to make sure that all the requirements of each of the projects are met." \begin{table} \begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline **Quality Attribute** & **Description** & **Practitioners’ phrases** & **N1** \\ \hline Understandability & The information of a test case (name, objective, precondition, steps, terms) should be easy to understand by both testers, and developers & straightforward, understandable description, how and what to test, clear steps, clear objective, clear precondition & 6 what to test, clear steps, clear objective, clear precondition \\ Simplicity & A test case should not combine different test cases together nor contain so many steps & a big story for many test, not so many steps cases & 4 big story for many test, not so many steps cases \\ Step cohesion & Steps in a test case should be well connected. 
The test case should not contain redundant steps or miss necessary steps & unnecessary step, mandatory steps & 3 steps \\ Completeness & A test case should contain all relevant information for its execution & all information needed to perform the test, all kind of information that developers and testers need homogeneous, unity with the same rules, harmony & 2 form the test, all kind of information that developers and testers need homogeneous, unity with the same rules, harmony & 2 form the test, all kind of information that developers and testers need homogeneous, unity with the same rules, harmony & 2 form the test, all kind of information that developers & 2 form the test, all kind of information that developers \\ Homogeneity & Test-case design should follow the same rules & homogeneous, unity with the same rules, harmony & 2 form the test, all kind of information that developers and testers need homogeneous, unity with the same rules, harmony & 2 form the test, all kind of information that developers and testers need homogeneous, unity with the same rules, harmony & 2 form the test, all kind of information that developers and testers need homogeneous, unity with the same rules, harmony & 2 form the test, all kind of information that developers and testers need homogeneous, unity with the same rules, harmony & 2 form the test, all kind of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 2 form the test, all kinds of information that developers and testers need homogeneous & 1 form the test, all kinds of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 1 form the test, all kinds of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 2 form the test, all kinds of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 1 form the test, all kinds of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 2 form the test, all kinds of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 2 form the test, all kinds of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 2 form the test, all kinds of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 2 form the test, all kinds of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 2 form the test, all kinds of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 2 form the test, all kinds of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 2 form the test, all kinds of information that developers and testers need homogeneous, unity with the same rules, mentioned issue, function category, ISO attributes category & 2 form 
### Alignment in Understanding of Test-case Quality (RQ2)

We asked practitioners to classify test cases given by the others into _good_, _bad_ or _normal_ in terms of their quality (Section 2 of the interview questionnaire). Due to the interviews' time constraint, only seven out of 17 test cases were analysed by more than one practitioner, as shown in Table 3. Half of them, P1.3, P2.4, P3.1, and P3.2, had the same quality classification, while the other half, P1.2, P3.1, P4.1, and P5.2, received a mixed assessment. In general, we could see that test-case _understandability_ was always the first concern. For the test cases having the same quality assessment (P1.3, P2.4, P3.1, P3.2), a test case's quality is considered absolutely bad if the practitioners could not understand what they are supposed to do, especially when both the test objective and other details like steps, preconditions, and expected results of steps are unclear. If the test case's objective is clear enough that the practitioners could get some idea of its purpose, they would consider its quality acceptable or normal, even though other details like preconditions are missing.
By analysing test cases which had different quality classification results (P1.2, P3.1, P4.1, and P5.2), we could see that the difference is strongly associated with the practitioners' responsibilities relating to test cases. If one of their responsibilities is to execute test cases, then they are more concerned about whether they have all relevant information to run the test cases. If they are responsible for broader tasks, in this case mainly test-case maintenance and test-result analysis such as deciding which faults to fix, then they have other concerns such as the test cases' complexity or their traceability to issues and bugs. Our observation aligns with the perceptions of practitioners, as they explained that they might have different concerns regarding test cases depending on their responsibilities. Those responsible for executing test cases prioritise _understandability_ and _completeness_ of test cases, that is, whether they have all relevant and clear information for executing the test cases. Those responsible for broader tasks, like test architects, do not only care about how test cases execute but also about the outcome of the test cases and the test suites in general. Hence, they have extra expectations such as whether the test cases cover the requirements, or whether it is easy to maintain the test cases. They also explained that the difference in working styles might have an impact on the test-case quality assessment. If they have different approaches to designing test cases, they would have different requirements on how to assure test-case quality. To provide a different perspective on the test-case quality assessment, the lead author used the list of test smells from the literature (see Section 3.3) to identify test smells in those seven test cases. As a result, there is a considerable overlap between the practitioners' concerns and the identified test smells (_ambiguous test_ [12], _conditional tests_ [12], _long/complex/verbose/obscure test_ [8], and _eager test_ [8]), as shown in Table 3. The concerns about _understandability_, _ambiguity_, and _cohesion_ of test cases match the test smells _ambiguous test_ and _long/complex/verbose/obscure test_. Likewise, the concerns about the _complexity_ of test cases directly relate to the test smell _eager test_.
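To make the link between these quality concerns and the test smells more concrete, the sketch below encodes a test-case record together with a few heuristic checks. It is purely illustrative and not one of the study's instruments; the field names, the step-count threshold, and the example values are assumptions chosen for the sketch (the example is loosely modelled on the concerns reported for P4.1).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """Illustrative record of the information practitioners expect a test case to carry."""
    identifier: str
    objective: str = ""                                            # what the test is supposed to verify
    preconditions: List[str] = field(default_factory=list)
    steps: List[str] = field(default_factory=list)
    expected_results: List[str] = field(default_factory=list)     # ideally one per step
    linked_requirements: List[str] = field(default_factory=list)  # traceability to specs/requirements
    linked_issues: List[str] = field(default_factory=list)        # traceability to bugs/issues

def quality_flags(tc: TestCase, max_steps: int = 15) -> List[str]:
    """Heuristic flags loosely mirroring the concerns and smells discussed above.

    The step-count threshold is an arbitrary assumption for illustration.
    """
    flags = []
    if not tc.objective:
        flags.append("ambiguous test: missing objective")
    if not tc.preconditions:
        flags.append("ambiguous test: missing preconditions")
    if len(tc.expected_results) < len(tc.steps):
        flags.append("ambiguous test: steps without expected results")
    if len(tc.steps) > max_steps:
        flags.append("eager/long test: possibly several test cases combined into one")
    if not (tc.linked_requirements or tc.linked_issues):
        flags.append("traceability: no links to requirements or issues")
    return flags

# Hypothetical example: an objective is given, but preconditions, some expected
# results, and traceability links are missing.
tc = TestCase("P4.1", objective="Verify login with a valid account",
              steps=["Open the application", "Enter credentials", "Press login"],
              expected_results=["Application opens", "Fields accept input"])
print(quality_flags(tc))
```

Such checks only catch structural gaps; judging whether the wording itself is clear still requires a human reviewer, which is consistent with the practitioners' emphasis on understandability.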
\begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} **TC ID** & **Concerns from Assessor 1** & **Classification (G/B/N)** & **Concerns from Assessor 2** & **Literature [12, 8]** \\ \hline P1.2 & -Understandability: explained objective, links to specs/requirements, unclear precondition -Complexity: combination of multiple TCs -Traceability to bugs: not clear due to the complexity & Assessor 1 [P1]: N Assessor 2 [P4]: G & -Ambiguity: not well written preconditions -Complexity: combination of multiple TCs & -Ambiguous test [12] \\ P4.1 & -Ambiguity: unclear terms, missing expected results of steps, missing pre-conditions & Assessor 2 [P1]: N & -Ambiguity: unclear terms -Repeatability: can be run anytime & -Ambiguous test [12] \\ P5.2 & -Ambiguity: unclear terms -Complexity: combination of multiple TCs -Traceability to bugs: not clear due to the complexity & Assessor 1 [P4]: B Assessor 2 [P2]: N & -Ambiguity: unclear terms & -Ambiguous test [12] -Long/complex/verbose/obscure test [8] \\ \end{tabular} \end{table} Table 3: Quality concerns, classifications (G: good, B: bad, N: normal), and matching test smells for test cases assessed by more than one practitioner.
However, the concerns about two quality attributes, _traceability_ and _repeatability_, have no corresponding smells according to our list of test smells. One potential reason is that those quality concerns could be the consequences of some other test smells. _Traceability_ could be affected by the test smells _eager test_, _ambiguous test_ and _long/complex/verbose/obscure test_. As pointed out by practitioners, if a test case contains multiple test cases, it becomes complex. Hence, it is harder to understand which part of the test case leads to the found issue(s). Ambiguity in a test case's description could also make the test execution non-deterministic [12], which potentially affects the traceability to found issue(s). Likewise, _repeatability_ might not be possible if there are dependencies among the test cases. Indeed, there are test smells due to dependencies in testing [8]. However, they were not in our list as they were not among the top discussed smells [8].

### Quality-related factors (RQ3)

By answering our interview questions (Q4-9), the practitioners described factors which could influence how they assess test-case quality. In general, practitioners believe that test-case quality depends on the test case's context. For example, the assessment could depend on whether the practitioner knows how the code was written. He or she might have a different opinion on how to design test cases for testing that code compared with those who do not know the code. Another context factor is the maturity level of the software system under test (SUT). According to three practitioners, to save time, they could combine multiple test cases into one when the SUT is more or less working properly, as those test cases hardly fail at that state. Hence, in that case, a test case is not considered bad even though it combines different test cases. Two practitioners mentioned that the testing level also has an impact on how test-case quality is defined. For example, for exploratory tests, practitioners whose responsibility is to execute test cases prefer to have flexibility in executing them. They would rather not follow steps too closely, as that might not help them to identify new issues. Therefore, if an exploratory test case's execution instructions are restrictive, that test case could be perceived as bad. Hence, practitioners' pre-knowledge of the test-case context has a strong influence on their test-case quality perceptions.

### Improvement (RQ4)

With the interview question Q14, we identified several suggestions for improving test-case quality. In general, a homogeneous directive or procedure for test-case design could improve the quality as it could guarantee that test cases are designed systematically. A uniform quality policy could also help to ensure the quality is met and aligned among practitioners. More specifically, to enhance test-case understandability in the test-case design phase, it was suggested that each test case should contain all necessary information. Importantly, the information should be relevant to both testers and developers. That will help to avoid a situation in which testers or developers have to look for information in related test cases in order to understand their assigned test cases. For test-case maintenance, the most common suggestion was that test cases should be reviewed regularly as they could become obsolete due to the evolution of the SUT.
Updating test cases so that they contain all relevant information for execution and removing no-longer-needed test cases are important steps in this phase. Apart from improvements in test-case design and maintenance, practitioners also suggested that developers and testers should have active communication in order to mitigate misunderstandings in executing test cases and analysing test results.

### Source of Information (RQ5)

With the interview question Q15, we collected information sources that practitioners refer to for a better understanding of test-case quality. The most common source is colleagues such as testers and developers working on the same projects, especially seniors who have experience with similar tasks. This is consistent with previous research on information sources consulted by practitioners [14]. Regarding test-case design, product specifications are considered the most relevant internal source of information. Other types of internal sources include software architect documents, test cases in previous projects, guidelines and templates for writing test cases, rules and policies from test architects, and test plans. The practitioners also refer to external sources such as guidelines provided by the ISTQB and ISO standards. Apart from those common sources, one practitioner also mentioned that he or she learns about test-case quality by attending industrial seminars and workshops on related topics. Some practitioners also said that they rely on their own experience when assessing test-case quality.

## 6 Conclusions and Future Work

We conducted an interview-based exploratory study involving six practitioners working in three different teams in a company to understand practitioners' perceptions of test-case quality. We identified 11 quality attributes for test cases, of which _understandability_ was perceived as most important. That could be due to the nature of the studied test cases, which were written in natural language. Nevertheless, the study of Garousi et al. [8] also reported the related test smell _long/complex/verbose/obscure test_ as the main concern in test code, which means that _understandability_ is also important in test code. We also found that there is a misalignment in practitioners' perceptions of test-case quality. The explanation is that, depending on the practitioners' responsibilities, they have different quality requirements. For practitioners whose responsibility is to run test cases, the focus is more on acquiring relevant information for test-case execution. Hence, their priority is the _understandability_ of test cases. For those who need to design and maintain test cases, such as test architects and developers, the concerns are more about test-case maintenance and outcomes of test suites. Therefore, they require other quality attributes such as _traceability_ to other artefacts, _efficiency_, _effectiveness_, _repeatability_, etc. Context factors of test cases, such as code-related knowledge, the maturity level of the software under test, and testing types such as exploratory testing, also potentially impact how practitioners define test-case quality. We also identified suggestions for improving test-case quality. The most common suggestion is a homogeneous procedure for test-case design, with a focus on the completeness of test cases, meaning that a test case should contain all relevant information for execution by any involved party. Reviewing test cases and regular communication between developers and testers were also highly recommended by practitioners.
Practitioners also discussed different sources of information they refer to for a better understanding of test-case quality. In general, their information comes from external sources such as ISTQB and ISO standards. For specific test cases, they rely on internal sources, such as product specifications, and on discussions with colleagues. Even though our findings were based on a few data points, we had a sound, repeatable strategy to identify them. They are not generic, but specific to our context. For more general findings, we plan to interview more practitioners in different contexts. We will also compare our findings on the quality attributes and quality definition(s) with other existing studies. Another planned line of future work is a broader investigation of the differences and similarities between industry and the literature in defining and assessing test-case quality.

#### Acknowledgment

This work has been supported by ELLIIT, a Strategic Area within IT and Mobile Communications, funded by the Swedish Government, and by the VITS project from the Knowledge Foundation Sweden (20180127).
2306.17342
Developing and implementing an Einsteinian science curriculum from Years 3 to 10 - Part A: Concepts, rationale and learning outcomes
There has been a growing realisation that school science curricula do not adequately reflect the revolutionary changes in our scientific understanding of the 20th century. This discrepancy between current school education and our modern scientific understanding has led to calls for the modernisation of the science curriculum. Although there have been attempts to introduce topics of Einsteinian physics (i.e., quantum physics and relativity) to school education, often at the secondary level, we still lack a seamless curriculum in which modern science concepts are gradually introduced in primary and middle schools. Guided by the Model of Educational Reconstruction and following a mixed-methods research design, the Einstein-First project aims to address this gap. Einstein-First has developed and implemented an Einsteinian curriculum from Years 3 to 10 (students aged 7-16) that resolves the disconnect between science in schools and the modern world. This paper presents the concepts, rationale, and learning outcomes of the curriculum implementation in six Australian schools with 315 students across Years 3 to 10. Our findings lay the foundation for informed curriculum development towards a school education that can enhance students' understanding and appreciation of the fundamental concepts of modern science and its impact on our society.
Tejinder Kaur, Magdalena Kersting, David Blair, Kyla Adams, David Treagust, Jesse Santoso, Anastasia Popkova, Shon Boublil, Marjan Zadnik, Li Ju, David Wood, Elaine Horne, Darren McGoran
2023-06-30T00:06:18Z
http://arxiv.org/abs/2306.17342v2
Developing and implementing an Einsteinian science curriculum from Years 3 to 10 - Part A: Concepts, rationale and learning outcomes ###### Abstract There has been a growing realisation that school science curricula do not adequately reflect the revolutionary changes in our scientific understanding of the 20th century. This discrepancy between current school education and our modern scientific understanding has led to calls for the modernisation of the science curriculum. Although there have been attempts to introduce topics of Einsteinian physics (i.e., quantum physics and relativity) to school education, often at the secondary level, we still lack a seamless curriculum in which modern science concepts are gradually introduced in primary and middle schools. Guided by the Model of Educational Reconstruction and following a mixed-methods research design, the Einstein-First project aims to address this gap. Einstein-First has developed and implemented an Einsteinian curriculum from Years 3 to 10 (students aged 7-16) that resolves the disconnect between science in schools and the modern world. This paper presents the concepts, rationale, and learning outcomes of the curriculum implementation in six Australian schools with 315 students across Years 3 to 10. Our findings lay the foundation for informed curriculum development towards a school education that can enhance students' understanding and appreciation of the fundamental concepts of modern science and its impact on our society. **Keywords:** Einsteinian physics; curriculum development; model of educational reconstruction ## Introduction _The need for modernising the science curriculum_ There is broad agreement about the need to modernise the science curriculum and educate a population that is scientifically literate (Blandford & Thorne, 2020). One key reason is that current science curricula in primary and middle schools in Australia and many other nations predate the early 20th-century revolution in our scientific worldview and do not adequately reflect the current state of scientific knowledge (Kaur et al., 2018). Einsteinian physics (EP), which encompasses quantum physics and the theory of relativity, has significantly shaped modern society and advanced technological progress. With its strong connection between fundamental principles and applications, EP is central to contemporary scientific disciplines, including quantum computing, materials science, biotechnology, multi-messenger astronomy, and climate science (Blandford & Thorne, 2020). Nevertheless, Einsteinian concepts are currently not taught to students beyond a superficial level. This discrepancy between current school education and our modern understanding can create confusion and roadblocks to science learning (Kersting & Blair, 2021). Additionally, research has shown that students are motivated and excited to learn modern science concepts and are aware that the new concepts in the curriculum are meaningful to them (Choudhary et al., 2020; Kersting et al., 2018; Maxwell et al., 2021). Indeed, traditional science curricula often struggle to transmit a feeling of personal relevance and disciplinary authenticity (Kapon et al., 2018; Kaur et al., 2018; Kersting, Schrocker, et al., 2021), and student performance in science has decreased throughout the OECD nations in recent years (Kjaernsli & Lie, 2011; OECD, 2016).
Furthermore, significant achievement gaps between students with different backgrounds and a decreasing number of students who choose science at the upper-secondary level give rise to concern (Treagust et al., 2015). It is widely suggested that this decline can be abated by introducing science that reflects public discourse, news items and modern technology (Foppoli et al., 2019; Sheldrake et al., 2017). A modernised science curriculum that includes EP concepts can improve student attitudes and make science more engaging and relatable: not least because topics of EP can illustrate the nature of science and demonstrate the power and limitations of scientific reasoning and experimentation (Park et al., 2019; Woithe, Boselli, et al., 2022). The widely recognised need to promote science diversity is another reason the school science curriculum needs to be modernised. Historically, females have been underrepresented in the natural sciences, and traditional curricula often reflect societal prejudices that presume science is the purview of men (Baram et al., 2011; Kaur et al., 2020; Ross & Gordon, 2018). Research has shown that topics of modern physics, which are of interest to girls and emphasise the relevance of science to everyday life, can increase the potential for students to identify with physics and help improve girls' attitudes and motivation towards science (Kaur et al., 2020; Kersting, Schrocker, et al., 2021; Woithe, Muller, et al., 2022; Zoechling et al., 2022). In summary, modernising the science curriculum is vital because it can align instruction with our current-best scientific understanding, improve student attitudes towards science, and promote gender equity in science education.

### Previous research on EP education

Many studies have been conducted with upper-secondary and undergraduate students and have found that these students often struggle with the counterintuitive nature of EP (e.g., Ayene et al., 2019; Velentzas & Halkia, 2013). Students have difficulties connecting Einsteinian concepts to their knowledge of classical physics and everyday experiences of the world (e.g., Kamphorst et al., 2019; Krijtenburg-Lewerissa et al., 2017; Steier & Kersting, 2019). There is limited research on learning EP at an early age - when one would expect that students are not yet burdened with a prior scaffold of classical concepts (Kersting & Blair, 2021). In 2011, the Einstein-First team trialled the first program with Year 6 students, and the results were emboldening: the students were not bewildered, but rather took the ideas in stride, indicating that EP can be taught to primary school students (Pitts et al., 2014; Adams et al., 2021; Blair, 2021). Other trials conducted with primary and middle-school students suggest that the teaching of EP concepts is largely independent of student aptitude or prior knowledge and has a lasting impact on the students' memory retention of EP (Choudhary et al., 2020; Haddad & Pella, 2015; Kaur et al., 2017b; Kaur et al., 2018; Pitts et al., 2014; Ruggiero et al., 2021; Adams et al., 2021). According to Metz, 'what students of any age are able to learn depends heavily on what they've already learned; failure to support the scientific capabilities of elementary school children will seriously handicap science learning at higher grade levels' (Metz, 2011, p. 71).
Thus, the conceptual challenges that older students encounter may be due to having to switch ontological categories from a Newtonian to an Einsteinian conception of science because of their classical school education (Kaur et al., 2018). Overall, there is a growing body of literature on the teaching and learning of EP. However, there are still gaps, particularly in terms of understanding how students make connections between different EP concepts and how instruction can be designed to promote understanding of these concepts at an early age. Although there have been attempts to introduce specific topics of EP to school education, often at the secondary school level, we still lack a seamless curriculum in which modern science concepts build on each other meaningfully in primary and middle schools.

## Research questions

This paper takes a significant step towards reconstructing science education from the ground up to introduce young learners to the contemporary Einsteinian paradigm that best describes our world and underpins contemporary scientific discourse (Treagust, 2021). We present the development, implementation, and evaluation of a seamless Einsteinian curriculum from Years 3 to 10 (students aged 7-16 years) that resolves the disconnect between science in schools and the science of the modern world. This paper is structured in line with three research questions that unpack our educational reconstruction:

1. What instructional approaches and pedagogical principles are needed to effectively promote learning of EP at the primary and middle-school level?
2. What key science concepts and relationships should be emphasised in the EP curriculum for students in Years 3 to 10 to learn EP concepts?
3. To what extent are Year 3-10 students learning the main concepts of the Einstein-First curriculum?

### Educational background and theoretical framework

#### The Einstein-First project

This study is conducted within the Einstein-First project that brings together experts in physics, science education, curriculum development, and teacher training from three Australian universities in collaboration with several Australian teacher and school associations. Einstein-First aims to trial and evaluate a seamless progression of EP education through primary and middle school with research-informed teaching and learning materials and assessment instruments (Maxwell et al., 2021). By clarifying concepts, finding unifying principles, and using simple language to teach EP at an early age, the Einstein-First team is replacing obsolete classical concepts currently taught in school with Einsteinian concepts. In our project, we first introduce the concepts and language of Einsteinian physics in the early Years 3-6 (the last four years of Australian primary school) and then revisit them at an increasingly sophisticated level between Years 7 and 10 (the first four years of Australian secondary school). Year 10 was chosen as the endpoint because it is Australia's last year of compulsory science. At this point, students make career choices. We wanted to influence students' future choices of STEM subjects while recognising that, for many students, Year 10 is the last opportunity for the education system to offer students our best understanding of physical reality.

### The Model of Educational Reconstruction

Our research approach is informed by the Model of Educational Reconstruction (MER), a widely used framework for science education research and development (Duit et al., 2012).
Several studies in Einsteinian physics education have drawn on the model (e.g., Boublil et al., 2023; Kamphorst et al., 2019; Kersting et al., 2018; Woithe et al., 2021), which is a good fit for our study since 'it has been developed as a theoretical framework for studies as to whether it is worthwhile and possible to teach particular content areas of science' (Duit et al., 2012, p. 19). The MER connects research on the science content structure to its educational importance, thereby building on empirical studies of student understanding and initial testing of pilot learning resources in authentic classroom settings. This interplay between research and development is reflected in the three strands of the model: 1) research on teaching and learning, 2) clarification of content, 3) evaluation of learning resources (Figure 1). These strands correspond to our three research questions which structure the following sections.

Figure 1: Our research design has followed the three components of the MER: clarification of content, research on teaching and learning, and development and evaluation of learning resources.

Research Question 1: What instructional approaches and pedagogical principles are needed to effectively promote learning of EP at the primary and middle-school level?

In line with the MER, research on teaching and learning is key to identifying instructional approaches and pedagogical principles that can promote learning of EP. We have adopted spiral learning as our overarching approach and synthesised existing research and our experiences from ten years of empirical trials in the form of pedagogical principles. Spiral learning is based on the idea that learning is a continuous process and that students should be exposed to new information at a developmentally appropriate level while revisiting and integrating previously learned material (Bruner, 1960; Harden, 1999). The move from simple to complex, from concrete to abstract, and from fundamental ideas to sound comprehension is a crucial benefit of spiral learning (Harden, 1999; Yumusak, 2016), and one that aligns with the goal of the Einstein-First project. We wish to introduce students to the Einsteinian paradigm in a controlled way and at a level at which they can master the subject before continuing to build new knowledge on prior knowledge. We acknowledge that the early years are crucial for language development, particularly in the context of modern science. It is imperative to gain scientific vocabulary during this time in order to foster the growth of scientific thinking (Lemke, 1990; Vygotsky, 1962). Thus, a crucial part of introducing Einsteinian physics in line with the spiral-learning approach is introducing Einsteinian language and, in places, selecting appropriate child-friendly terminology to describe Einsteinian phenomena. The following six pedagogical principles further underpin the development of our Einsteinian curriculum:

1. _Introduce concepts through activity-based group learning_ We draw on an activity-led instructional approach that uses hands-on activities in small groups to introduce abstract concepts. Activity-based group learning is widely recognised to promote inclusivity, benefit female learners, and deepen conceptual understanding in Einsteinian physics (Alstein et al., 2021; Dare & Roehrig, 2016; Kaur et al., 2017; Kersting et al., 2018; Labudde et al., 2000). Presenting concepts through activities makes it easy for students to understand the core concepts of Einsteinian physics.
For example, our activities that introduce the concept of curved space consist of creating triangles on a curved surface (Image 1) and using toy car trajectories to map straight lines and see if parallel lines can cross (Image 2). Our students work in small groups, learn from each other, and generally report back in a teacher-led whole-class discussion.

2. _Use toys, models, and analogies_ All children understand toys and can distinguish easily between the toy and reality. For example, toy atoms, toy photons, and toy spacetime are appreciated by children and provide a powerful learning environment. The key to our approach is to foster intuitive insights by using children's innate ability to learn from toys and through play (Vygotsky, 2004). Besides, toys are in themselves models and analogies, which have been shown to help students better understand abstract concepts (Aubusson et al., 2006). By providing simplified representations or comparisons to familiar ideas, models and analogies can become a powerful tool in Einsteinian physics education (Garcia-Carmona, 2021; Kersting & Steier, 2018).

3. _Promote whole-body learning_ When activities involve whole-body learning, they provide students with tangible physical connections between scientific concepts and embodied experiences (Davis et al., 2019; Kersting et al., 2021; Rollinde, 2019). For example, at the Year 3 level, we introduce the concept of heat as jiggling molecules. We ask students to pretend to be molecules and to transfer their jiggling from person to person to enact thermal conduction. At the Year 8 level, we introduce the concept of energy through activities involving the lifting of substantial masses to maximise the embodied experience, where a thumping heart (or measured heart rate) indicates energy expenditure.

4. _Use appropriate language and keywords for unifying concepts_ We recognise that the historically derived language of Einsteinian physics is often opaque or inappropriate for young learners. By focusing on unifying principles and replacing traditional terminology with more descriptive terms that have clearer meaning, we simplify the language of Einsteinian physics (Kersting & Blair, 2020). One unifying principle is the concept of binding energy, which applies to interactions on all scales from nuclei to galaxies. We make binding energy tangible by using toy tennis-ball atoms containing powerful magnets that allow students to feel the binding, to experience the work required to break the magnetic binding, and to play scattering games in which a toy photon can break apart a diatomic tennis-ball molecule. Another example of simplified terminology is the wave-particle duality that we introduce with an aphorism: _'Everything has waviness and everything has bulletiness'_. We make this idea concrete by having students work in groups to create 'pet photons' with wavy and bullety components (Image 3). These examples show how our pedagogical principles work together to provide students with engaging learning experiences.

Image 3: Students representing the spectrum of photons from red to blue are learning about photons of different wavelengths. They are using pipe cleaners with plasticine masses to illustrate the varying amount of bulletiness.

5. _Draw on role plays for representing conceptual change and human endeavour_ This principle relates to science as a story of conceptual changes and human endeavour over many centuries, involving successive struggles that have provided better and better understanding of physical reality.
These stories are highly motivating for students (Kersting et al., 2018; Lin-Siegler et al., 2016) and can be emphasised using a dramatic device such as a time machine to bring the key historical figures together with questioning students. In a typical role play, students playing Euclid, Gauss, Riemann and Einstein debate with modern students, in a light-hearted context. Songs and chants make the plays memorable.

6. _Use only inexpensive consumer-level equipment_ From balls and toy cars to solar panels and laser pointers, all apparatus in our program is simple, and all aspects of the program are easy to source and within school budgets. Using everyday materials makes the activities more relevant to primary schools, where specialised equipment is often not available (Palmer, 2006). From an educational point of view, doing science with everyday objects rather than highly specialised or expensive equipment has the added advantage of connecting science to students' lives (Ashbrook, 2003). Since many students perceive common school science to be disconnected from their everyday experiences, our principles aim to present science as being personally relevant and significant to the students (Kapon et al., 2018).

In summary, our pedagogical principles are powerful and effective because they connect abstract concepts to real-world experiences, use toys, models, and analogies that students can relate to, promote both group learning and whole-body learning, simplify language where needed, and reduce the intellectual burden through the identification of unifying concepts. Inexpensive equipment is used for activity-led teacher education, making the program easily accessible, while role-plays engage students in the broad stories of paradigm change.

Research Question 2: What key science concepts and relationships should be emphasised in the EP curriculum for students in Years 3 to 10 to learn EP concepts?

To clarify the science content, the second strand of the MER, a panel of physicists and physics educators determined the initial content sequencing for teaching in line with the spiral curriculum approach. Our challenge was to find appropriate ways to identify the core Einsteinian concepts and modernise the school curriculum without drastic reformulation. Therefore, we identified the main thematic content and enhanced each theme in the context of modern science relevant to current scientific discourse. Crucial to this modernisation was the identification of the following: a) invalid and obsolete concepts, b) key concepts and subject areas essential for understanding modern science and technology, c) unifying concepts that can contribute to understanding a broad range of science, d) appropriate models and toys that can enable activity-based learning, e) progressions of learning that allow spiral learning to reinforce understanding over eight years of science education. We are presenting the Einstein-First curriculum in two formats. The first format uses a spiral learning approach, as shown in Figure 2, where the core concepts are introduced first, followed by more advanced derived concepts. The second format provides a more detailed presentation of the curriculum that also outlines links between concepts in science and mathematics (Figure 3). Both figures illustrate the progression from core concepts to derived concepts.
In parallel with clarifying the content, we developed lesson plans based on previous research on teaching and learning and with a central focus on the activities and the key concepts that are revealed by the activity. The activities, toys, and models included in the lessons were developed and tested through an iterative process based on the MER and in close collaboration with teachers: we invited primary and secondary school teachers to informal training sessions, in which the teachers participated in the lessons to identify those activities that were most or least effective in connecting to critical concepts (Boublil and Blair, 2023; Choudhary and Blair, 2021; Kaur et al., 2017). The modules were refined based on this initial informal teacher feedback and revised again after formal school trial implementation.
Figure 3: An outline of the 8-year Einstein-First program from Year 3 to Year 10. On the far left is a list of themes and concepts. The red boxes display core concepts, with yellow, green to blue indicating a progression to derived concepts. Connectors and repetitions show how the concepts introduced in early years are developed in later years. The pale purple boxes indicate where key mathematical concepts are first introduced and then developed in future years according to the pale purple arrows. ## Methods To evaluate our modules empirically, we trialled the lesson plans in primary and middle schools (Table 2). Trials for Years 3-5 were presented by teachers who had recently trained in 1-2-day workshops, while Einstein-First personnel were available to advise the teachers who presented the trials for the Year 7, 8 and 10 programs. The Year 9 program was delivered by one of the researchers, with classroom science teachers in attendance during the trials. The trials varied in length from eight to twelve lessons. Each primary module had eight lessons delivered over a term: one 60-minute lesson every week. Secondary school modules (except Year 9) had 10-12 lessons spread out over a term, with two 60-minute lessons per week on average. The Year 9 trial consisted of eight lessons with one 45-minute lesson per week. All trials used a standard set of interactive activities designed for each year group, similar lesson plans2 and a consistent pre- and post-test procedure for each year group. After each trial, the tests were revised based on teacher feedback and test results. This procedure is consistent with the MER framework: the revisions identified unintended ambiguities in the wording but did not change items in such a way as to invalidate the concept being measured. Footnote 2: Both primary and secondary school lessons rely on hands-on activities. The lesson plans suggest two or more activities, and the teacher may implement one or all of them depending on the available class time. The team created PowerPoint slides for the teachers to introduce the topic, and lessons included worksheets that students can complete during or after the activities. ## Data collection Each trial consisted of three phases of data collection. The first phase consisted of pre-test instruments. These instruments were used to obtain a baseline understanding of student knowledge of and attitudes towards EP. The second phase consisted of post-test instruments. These provided a measure of the student knowledge and attitudes at the end of the trial period. Both phase 1 and phase 2 data collection took place in the usual classroom context. The research team then marked the knowledge tests using a pre-determined and evaluated marking guide. The third phase consisted of teacher feedback.
This took the form of formal interviews of the teachers who presented or participated in the trials and their written feedback on sheets at the end of professional development sessions. Teacher interviews were conducted using questions designed to probe the interviewee's confidence in teaching EP and gain insight into their experience of the teaching and learning approaches used at the different year levels. These interviews helped shape alterations to the lesson content and guided the focus of the analysis. A detailed analysis of teachers' experiences is presented in an accompanying paper, which provides more in-depth information (Kaur et al, submitted).

| Year level | Modules reported in this paper | Number of lessons | Student sample size | Number of classes | Teacher interview |
|---|---|---|---|---|---|
| 3 | Hot stuff | 8 | 57 | 3 | Yes |
| 4 | May the forces be with you | 8 | 66 | 3 | Yes |
| 5 | Fantastic photons | 8 | 30 | 1 | Yes |
| 7 | Warp spacetime | 12 | 80 | 3 | Yes |
| 8 | From \(E=mc^{2}\) to renewable energy | 10 | 22 | 1 | Yes |
| 9 | Quantum world | 8 | 42 | 3 | Yes |
| 10 | Cosmology | 8 | 18 | 1 | No |

Table 2: Summary of the trial information for the year levels discussed in this paper.

Pre- and post-knowledge tests were designed to provide a measure of student understanding of relevant core and derived concepts. The tests consisted of open-ended and multiple-choice questions. The language of the questions was carefully chosen based on the year level and to ensure that understanding of the desired concept would be measured. The majority of the questions in the pre- and post-tests were the same; however, the post-test often had extra questions as a result of introducing concepts to year levels where there could reasonably be no baseline measure expected. Attitudinal tests were also administered but will not be reported here as they are outside the scope of this paper. #### Data analysis Pre- and post-knowledge tests were administered by the teachers but marked by members of the research team. The team recorded students' responses and scores on an Excel spreadsheet. The marking guide typically had the following structure: * No mark for a blank or incorrect response, or if the response is not in Einsteinian language. * Half marks for a partially correct and Einsteinian-language-consistent response. * Full marks for a correct and Einsteinian-language-consistent response. Once the tests were marked, the total score was converted to a percentage to account for the different question types and numbers between pre- and post-test, and to make relative comparisons across year levels manageable. Student data was anonymised. The student scores for each module were arranged by increasing pre-test score. The post-test score for the same student is then reported next to the relevant pre-test score. The data presented is a combination of trials for each module (i.e., not just a single class or teacher trial). The pre-tests provide a powerful means of assessing pre-knowledge (to ensure that most material is new to the student cohort) and the level of outside-school learning. Reporting the post-test score alongside the pre-test allows a comparison of the relative increase in student scores.
By studying individual question scores, it is possible to identify problems, sometimes due to insufficient teacher in-service education, and sometimes due to lack of emphasis in lesson plans. These observations enable appropriate modification of the lesson plans and teacher in-service education. It is beyond the scope of this paper to discuss results at the individual question level, but in the discussion below we will give examples of individual questions that were used to identify areas of difficulty. In each case, the results were used to improve lesson plans and teacher instruction, and in a few instances to improve the wording of tests. Our main focus is on the raw pre-test/post-test scores, because they allow understanding of both the school demographic and the learning outcomes, as discussed further in response to research question 3. **Research Question 3: To what extent are Year 3-10 students learning the main concepts of the Einstein-First curriculum?** In this work, we measure learning as a relative change in test scores between the pre- and post-tests. Data from each trial set identified in Table 2 is presented in Figures 4a-4g. An analysis of the test outcomes for questions relating to the identified main concepts presented in Figures 2 and 3 allows us to identify successful teaching approaches and to detect issues that cause confusion or limit learning of EP concepts. Identifying these issues then aids in the revision and improvement of instructional approaches and concept relations when using the MER framework described above. Figures 4a-g show pre- and post-test scores of Years 3-5 and Years 7-10 students in ascending order of their pre-test score. Each year level result is broken down below into a few key concepts from that module. Selected relevant questions and the student test score outcomes are discussed to highlight the changes in student scores after the instruction of EP. Detailed analyses of student learning are left for future work. _Year 3 (age 8):_ Figure 4a shows results for three classes of Year 3 students who were taught the "Hot stuff" module. In response to the question "Describe what you think an atom is?" students gave answers such as "very tiny objects" and "invisible things that make everything". More than 70% of students were able to describe atoms. In response to the question "Describe what you think a molecule is?", approximately 50% of the participants could describe a molecule. Some key phrases were: "group of atoms", "molecules are bigger than atoms". During our testing, we identified phonological difficulties with the words photon, phonon, and proton, which are discussed under Identification of Problems below. In response to the difficulties, we created new songs to help students disentangle the similar words with very different meanings. _Year 4 (age 9)_: Figure 4b shows results for three classes of Year 4 students who were taught the "May the forces be with you" module. One of the test questions was designed to assess students' understanding of distance, speed, time, acceleration, inertia, and friction. Students were given multiple answer options and instructed to select the correct response for each statement. After the program, nearly 98% of students demonstrated comprehension of distance, speed, and time concepts, and 78% of students understood acceleration.
For the remaining two statements, students were aware that the answers would be either friction or inertia but appeared confused as to which option to select. Only 55% of students were able to comprehend the meanings of inertia and friction. Results from Year 4 are rather anomalous compared with other years, implying a need for improved assessment items. Students with the highest pre-knowledge showed small improvement, while students with low pre-knowledge showed substantial improvement. The results were used to identify overly complex test questions and misconceptions caused by confusion between traditional learning about push-pull forces and our Einsteinian activities about gravity and electrical forces. This led to substantial revisions of the teacher training and the course content. _Year 5 (age 10)_: Figure 4c shows results for one class of Year 5 students who were taught the "Fantastic photons" module. After participating in the program, every student in the class was able to identify photons as the correct response to the question "A little over 100 years ago Einstein told us that light comes as little particles. What are they called?" In response to the question, "What do you think is the fastest thing in this universe?", all the students cited light. A core concept for understanding the photoelectric effect is the fact that photons have momentum and, by colliding with electrons, they transfer energy to them. Several class activities focussed on this. A key to teaching this concept is an association of the word momentum and its effect: the transfer of energy from one thing to another. One question, "Can light push things? Please explain your answer", was asked to assess students' understanding of momentum. We were surprised that only 23% of students were able to explain that light is made up of photons and that they have momentum, allowing light to push things, because in previous studies with interventions led by members of the project team, much higher results were obtained (Choudhary et al, 2020). In this case, the low score on momentum was used to improve the lesson plans to make a stronger connection between the word momentum and an activity involving momentum transfer, as insufficient emphasis on language was identified as the cause of the low score. _Year 7 (age 12)_: Figure 4d shows results for 80 students of Year 7 (3 classes) who were taught the "Warp spacetime - Gravity from the Earth to black holes" module. This program mostly explored the concepts of curved geometry, free-fall, motion, and curved spacetime. Teachers did not have enough time to explore all the activities using the spacetime simulator and the activities on gravitational waves. Nevertheless, we observed a large improvement in students' individual scores after the program, indicating that the program positively affected all students irrespective of their pre-knowledge or ability. In this program we discovered two areas of confusion that had not been apparent in previous teaching of the same material. We learnt from the comparison of previous programs (Kaur et al, 2018) that it is particularly important to emphasise the break between the Newtonian/Euclidean paradigm and the Einsteinian paradigm. In this program, the teacher in-service material failed to emphasise the measured and demonstrated invalidity of Newtonian gravity, leaving students clearly confused, as demonstrated by their answers to questions about gravity and geometry.
Their concurrent mathematics program on Euclidean geometry conflicted with the curved space geometry taught in this program, accentuating the confusion. Further elaboration on these confusions can be found in the following section. _Year 8 (age 13)_: Figure 4e shows results for one class of Year 8 students who were taught the "Einsteinian Energy" module. The Year 8 program begins by confronting students with the two equations of Einsteinian energy, \(E=mc^{2}\) and \(E=hf\). This is students' first exposure to equations in physics, and although these are only equations of proportionality, they involve huge and tiny constants. The equations and their units are connected to the historical discovery of energy concepts by James Watt. The program goes on to cover renewable energy, photon physics and binding energy. The results demonstrate that students have a moderate understanding of Einsteinian energy topics prior to the program and make substantial progress after the series of lessons. There were a few maths questions asked of the students. One of them was based on the equation \(E=mc^{2}\), and the other on \(E=hf\). Students were given the sun's power output and asked to calculate the mass lost by the sun using the equation \(E=mc^{2}\). Only 43% of students solved the problem after the program. In another question, students were asked to calculate the energy in joules of an ultraviolet photon of frequency \(10^{16}\) Hz using \(E=hf\), where Planck's constant \(h=7\times 10^{-34}\) J Hz\(^{-1}\). After the program, 77% were able to solve it correctly. _Year 9 (age 14)_: Figure 4f presents the pre- and post-test results for three Year 9 classes from two different schools who were taught the "Quantum world" module. The focus of the module was the quantum description of light. The results of the study indicate that students' ability to use appropriate EP language when describing photons as quantum objects with both wave and particle characteristics improved. In response to the question "What are photons?" 50% of students demonstrated their comprehension by providing answers consistent with EP, whereas 33% of responses were partially consistent. 74% of students were able to define waves, and 76% of students were able to explain the relationship between photons and waves using EP language. To assess students' EP learning relating to solar panels, the question "How do solar panels generate electricity?" was asked. Nearly 50% of students were able to describe how solar panels generate electricity. _Year 10 (age 15)_: Figure 4g shows results for one class of Year 10 students who were taught cosmological concepts to give them the foundations to understand gravitational waves and how they can be used to determine the Hubble constant. They examined how the Hubble constant can be used to give a clearer understanding of the beginning and future of our universe. The data shows that the students had very little knowledge of cosmology before beginning the program. They were able to make significant gains in their understanding of the concepts covered in the lessons. ## Discussion In the previous section, we have mentioned how we have learnt from the trials of our programs. Here we highlight key issues that we are addressing in subsequent revisions of the module lessons: the terminology used, and the essential differences between 3-D and 2-D space. First, more attention needs to be given to the terminology used.
Many Year 3 students struggled because of the level of their writing and reading competencies. Students at this age typically have a limited vocabulary and little phonological awareness (Townend 2000). This was reflected in their answers about "photons", "phonons", and "protons". Although the program improved students' comprehension of the concepts of inertia and friction, they still struggled to distinguish between the two terms. This could be due to a lack of attention to the terminology or shortcomings in lesson design. In response, we created new songs to help students disentangle the similar words with very different meanings. Year 3 and Year 4 teachers also reported that the student tests were too long for this age group. Future work includes a qualitative analysis of students' written responses to fully describe the level of student conceptual understanding. Another area of confusion with terminology identified in the Year 7 results pertains to the use of the term "laws" in the context of Kepler's laws. Previous trials had only briefly touched upon Kepler's laws, focusing mainly on the period-radius relationship. However, in a recent Year 7 trial, Kepler's laws were introduced in a somewhat more formal manner. The results of this trial revealed that students continued to exhibit a high level of confusion between Newtonian and Einsteinian concepts. This confusion arises from the fact that the term "law" is typically used in physics to denote a fundamental and widely applicable principle. However, Kepler's laws refer to relationships only, not a wider principle. When we elevate relations to the status of laws, we implicitly endorse their general applicability. Given this conventional usage, it becomes even more crucial to highlight cases where the law is violated, especially when such violations can be readily observed using a spacetime simulator. Second, more attention needs to be given to the essential differences between 3-D and 2-D spaces. Two areas of confusion became clear during the analysis of Year 7 students' results. We discovered that students experienced confusion between what they were taught in our science lessons and what their mathematics teacher was currently teaching. In our lessons, they were taught that parallel lines can meet on a curved surface, and a few activities like releasing two pull-back cars on a spacetime simulator, images of gravitational lensing, and digital learning resources were used to reinforce this concept (Kaur et al. 2017, Kersting et al. 2018). On a post-test, many students wrote that parallel lines can never intersect, which was in line with the Euclidean geometry they were concurrently studying in mathematics. It is evident that there must be a stronger emphasis on the differences between the geometry of 3-D real space taught on woks and Euclidean geometry taught on a 2-D flat surface. We also discovered that the concept of gravity could be easily confused if there was not very careful and consistent messaging. This concept is first introduced to students in Year 4 and then revisited in Year 7. At both year levels, students used a spacetime simulator analogy to understand the concept. Students in the Year 4 trials were able to comprehend how space becomes curved when masses are added, but they were unable to explain what this activity tells us about gravity. Their most common response was "gravity is a pushing force". In addition, we discovered that Year 7 students also struggled to explain gravity as curvature of spacetime.
Our findings are consistent with previous work in Norway with high school physics students, in which educators reported that spacetime simulator activities are problematic due to students' reliance on classical gravity to comprehend a new interpretation of gravity (Kersting et al. 2018). #### Limitations The research design employed in teaching Einsteinian physics to primary and high school students using hands-on activities has several limitations. Firstly, relying solely on pre- and post-questionnaires or tests to collect data may not provide a comprehensive understanding of the students' learning outcomes. While questionnaires or tests can assess changes in students' knowledge, they may not capture the depth of conceptual understanding or the ability to apply the learned concepts. Alternative methods such as interviews, observations, or performance-based assessments could be incorporated to gain a more holistic view of the students' learning progress. Secondly, the reliance on teacher-delivered programs may introduce potential biases in the data collected. Despite providing in-service education for the teachers, variations in their instructional skills, teaching styles, and implementation fidelity may influence the outcomes. This variation can impact the consistency and validity of the results obtained. Furthermore, the research focused on primary and high school students, which may limit the generalizability of the findings. The effectiveness of teaching Einsteinian physics using hands-on activities might differ across different age groups or educational settings. Therefore, caution should be exercised when applying the findings to other contexts. Another limitation relates to the timeframe of data collection. The use of pre- and post-questionnaires or tests provides insights into short-term changes in students' understanding. However, the long-term impact and sustainability of the intervention on students' learning outcomes may not be adequately captured. Follow-up assessments conducted over an extended period could provide more insights into the durability of the effects. Addressing these limitations and considering alternative data collection methods, diverse student populations, long-term assessments, and individual differences can strengthen the validity and generalizability of future research in this area. ## Conclusion We have presented a response to the urgent need for fostering science-literate students based on the evidence that introducing young learners to the contemporary Einsteinian paradigm provides an excellent way of boosting student enthusiasm for science. To develop the Einsteinian curriculum, we followed Treagust's advice that for introducing "a new school curriculum such as Einsteinian physics [...] it is important to take a broad perspective and examine new theories and ideas about teaching and learning that have been introduced in education and the ways to measure the success or failure of this introduction using educational research" (Treagust, 2021, p. 17). Consequently, we have shown how the MER framework for modernising the science curriculum was used with trials and teacher training, followed by using classroom results to identify weaknesses, enabling us to undertake another cycle of trials. In-service education of teachers and teacher responses are described in the accompanying paper (Kaur et al., in preparation). This second paper gives further insights into student learning through the observations of the classroom teachers.
Also, we have shown how the Einstein-First program was developed based on the principles of spiral learning, in which key concepts and instructional activities are gradually introduced and built upon over eight years. Six pedagogical principles that underlie the program development have been presented, along with results of trials for all years. The trial results across a range of classes and school years provide strong evidence of the effectiveness of our Einsteinian curriculum. They support the idea that modern physics concepts can be meaningfully included in science education from an early age. However, our results do not test the full spiral learning approach because there has been insufficient time to test the proposed 8-year learning progression. Nevertheless, we have provided a strong theoretical case that spiral learning over 8 years will be an effective way to introduce and build upon these concepts. The early years of the spiral learning curriculum have given strong positive results. Students demonstrate rapid acceptance of the Einsteinian vocabulary, except for an area in which we observed phonological confusion. This confusion was addressed by developing songs specifically designed to help students disentangle similar words. Our study is timely and contributes substantially to the science education research literature for two reasons. First, we offer a holistic approach to curriculum development that emphasises the importance of including current scientific knowledge in a coherent and integrated fashion. While previous research has suggested educational reconstructions of individual Einsteinian physics topics for secondary-school students (e.g., Boublil et al., 2023; Kamphorst et al., 2021; Kersting et al., 2018; Laherto, 2010; Woithe & Kersting, 2021), our curriculum is designed to display a seamless and logical progression of ideas that build upon one another from Years 3 to 10. Second, we offer empirical evidence about the learning outcomes and performance of primary and middle school students in the learning domain of Einsteinian physics. While previous research has predominantly focused on upper secondary and undergraduate education (e.g., Alstein et al., 2021; Krijtenburg-Lewerissa et al., 2017; Stadermann & Goedhart, 2017), there is only limited research on the teaching and learning of Einsteinian physics at an early age (Kaur et al., 2018). Future work in this area will focus on expanding the implementation of the Einsteinian curriculum in more schools and gathering more data on the long-term impacts of the curriculum on students' learning outcomes. The implications of this study for science education research and practice are clear: it is vital to modernise science curricula to include the most recent scientific concepts and theories. Our results show that it is possible. By doing so, we can better prepare students to understand and engage with the world around them and, ultimately, to pursue careers in science. A curriculum that teaches the fundamental principles of our current scientific worldview can become the educational engine for future innovation and discovery. Further, longer-term research will be needed to understand how this curriculum affects students' attitudes towards physics, and particularly their future career choices.
#### Acknowledgments We would like to thank the Einstein-First team, the ARC (2019-2024) for funding this work, our ARC Centre of Excellence for Gravitational-Wave Discovery (OzGrav) colleagues, as well as our EPER collaborator for their enthusiastic support. We are grateful to our participating schools, principals, teachers, and students for allowing us to conduct the program and for granting us permission to use their photographs and data for research purposes. Our program is supported by numerous companies listed on our website www.einsteinianphysics.com. #### Ethics statement All of the participating schools' principals, teachers, students, and parents provided their informed consent. In addition, students were informed that participation in the Einstein-First project was voluntary and would not affect their school grades. The research design followed ethical standards compliant with the Research Ethics Committee at the University of Western Australia (2019/RA/4/20/5875), the Department of Education Western Australia, the Association of Independent Schools of Western Australia and Catholic Education Western Australia. All the data are stored securely on the Institutional Research Data Store (IRDS) system at the University of Western Australia.
2309.09921
A Heterogeneous Graph-Based Multi-Task Learning for Fault Event Diagnosis in Smart Grid
Precise and timely fault diagnosis is a prerequisite for a distribution system to ensure minimum downtime and maintain reliable operation. This necessitates access to a comprehensive procedure that can provide the grid operators with insightful information in the case of a fault event. In this paper, we propose a heterogeneous multi-task learning graph neural network (MTL-GNN) capable of detecting, locating and classifying faults in addition to providing an estimate of the fault resistance and current. Using a graph neural network (GNN) allows for learning the topological representation of the distribution system as well as feature learning through a message-passing scheme. We investigate the robustness of our proposed model using the IEEE-123 test feeder system. This work also proposes a novel GNN-based explainability method to identify key nodes in the distribution system which then facilitates informed sparse measurements. Numerical tests validate the performance of the model across all tasks.
Dibaloke Chanda, Nasim Yahya Soltani
2023-09-18T16:35:30Z
http://arxiv.org/abs/2309.09921v2
# A Heterogeneous Graph-Based Multi-Task Learning for Fault Event Diagnosis in Smart Grid ###### Abstract Precise and timely fault diagnosis is a prerequisite for a distribution system to ensure minimum downtime and maintain reliable operation. This necessitates access to a comprehensive procedure that can provide the grid operators with insightful information in the case of a fault event. In this paper, we propose a heterogeneous multi-task learning graph neural network (MTL-GNN) capable of detecting, locating and classifying faults in addition to providing an estimate of the fault resistance and current. Using a graph neural network (GNN) allows for learning the topological representation of the distribution system as well as feature learning through a message-passing scheme. We investigate the robustness of our proposed model using the IEEE-123 test feeder system. This work also proposes a novel GNN-based explainability method to identify key nodes in the distribution system, which facilitates informed sparse measurements. Numerical tests validate the performance of the model across all tasks. Distribution system, explainability, fault event diagnosis, heterogeneous multi-task learning, smart grid. ## I Introduction Fault diagnosis is a crucial task for the operation and maintenance of power systems, particularly in distribution networks, due to the complex interconnectivity and the scale of the network. Failure to take proper action during a fault event can result in a cascading outage of the distribution system [1, 2]. For uninterrupted operation in the case of a fault occurrence, grid operators need to identify the precise location of the fault in addition to the type of the fault. Furthermore, knowing the fault resistance and fault current before taking action for fault isolation and fault clearance guarantees the implementation of appropriate safety measures. This additional information allows grid operators to make more informed decisions and also to plan for necessary repairs or equipment replacements. Not only that, but it also provides insight into post-fault analysis to identify whether all the protection systems performed as intended. Due to the advancement in deep learning in recent years, most fault diagnosis systems are utilizing a data-driven approach. However, there is a lack of unified methods that take into account the challenges associated with real-world deployment, and much research provides analysis based on theoretical assumptions only. In this work, we propose a unified heterogeneous MTL-GNN architecture that is capable of performing fault detection, fault localization, fault type classification, fault resistance estimation and fault current estimation. All the tasks are performed in a simultaneous manner as opposed to a sequential manner, which ensures the decoupling of tasks. We call it a heterogeneous MTL, in contrast to a homogeneous MTL, because the proposed model performs both classification and regression tasks. We take into account all \(5\) types of short circuit faults that can occur in a distribution system [3]. This includes asymmetrical faults consisting of line-to-ground faults (LG), line-to-line faults (LL), line-to-line-to-ground faults (LLG), and symmetrical faults consisting of line-to-line-to-line-to-ground faults (LLLG) and line-to-line-to-line faults (LLL). To address the challenges associated with real-world deployment, our analysis takes into account measurement error, variable fault resistance, a small dataset and sparse measurements.
To make our contribution clear, in the following section the related literature in this domain and the drawbacks and scope for development are reviewed. ## II Literature Review The literature on fault event diagnosis is very extensive [4, 5, 6]. This includes different kinds of faults such as over-voltage, insulator, voltage sag, arc, and short-circuit faults. They can be broadly structured into two categories. One category comprises data-driven approaches which utilize deep learning models [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], and the other category includes traditional methods [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37] using statistical measures, signal processing and the physics of fault events. The latter category can be further broken down into separate categories like traveling wave-based [22, 23, 24, 25], impedance-based [26, 27, 28, 29], morphology-based [30, 31, 32, 33, 34], and voltage sag [35, 36, 37] methods. As regards the data-driven approaches, there is a multitude of methods and architectures, but we broadly divide them into two categories: GNN-based approaches [15, 16, 17, 18, 19, 20, 21], and multi-layer perceptron (MLP) and convolutional neural network (CNN) based approaches [7, 8, 9, 10, 11, 12, 13, 14]. ### _Traditional Methods_ The traveling wave-based methods [22, 23, 24, 25] analyze the characteristics of traveling waves generated when a fault event occurs. During a fault event, the sudden change in voltage and current around the fault location initiates a transient disturbance that propagates along the distribution line. This transient wave and its reflected counterpart are picked up by fault recorders installed at a substation or two substations, depending on whether it is a single-ended method or a double-ended method. Based on the time of arrival of the transient wave and the reflected wave, it is possible to deduce the location of the fault. In impedance-based methods [26, 27, 28, 29], current and voltage signals are measured at different places along the distribution line. In the case of a fault event, these measured signals are used to isolate the fundamental frequency to estimate the apparent impedance. This apparent impedance is then used to locate the fault event. Morphology-based methods [30, 31, 32, 33, 34] make use of mathematical morphological operations like dilation, erosion, closing and opening on the waveform generated during a fault event to extract features that correspond to the fault. After feature extraction is complete, the extracted features are passed to different classifier algorithms like decision trees [31], recursive least square stage [32], and random forest [34]. Voltage sag methods [35, 36, 37] use the characteristic reduction of voltage magnitude in the case of a fault event to isolate the fault's exact location. When a fault event occurs there is a sudden dip in the voltage magnitude. This sudden dip occurs only at the location of the fault event, which can be isolated based on the characteristics of the voltage sag. ### _Deep Learning Based Methods_ #### II-B1 MLP and CNN Based Methods These methods use historical or software-simulated data relating to fault events in distribution systems and use them to train MLP and CNN architectures to do prediction tasks like fault detection, fault localization and fault classification. The work in [7] is one of the early papers that use an MLP.
The input to their proposed model consists of the current measurements of distributed generation units (DGs) and the substation, and the model performs fault localization as output. Another similar work that uses an MLP is [8], where the authors use the IEEE-13 bus system to perform both fault classification and localization. In [12] the authors use an MLP but with an additional fuzzy layer, and their analysis is on the IEEE-37 bus system. In [9] the CNN architecture is adopted for fault localization. First, the authors use a continuous wavelet transform (CWT) algorithm to convert current phasors to images, which are fed to a CNN model to localize the fault. The work in [10] also uses a CNN to localize faults but considers partial observability of the grid on IEEE-39 and IEEE-68 bus systems. In [11] the authors take a slightly different approach by using 1-D convolutions with a double-stage architecture. The first stage extracts the features and the second performs fault identification. Similar to [9], the work in [13] also uses CWT to convert time-domain current signals to the image domain and uses the transformed data for fault classification and localization on the IEEE-34 bus system. A different approach using a capsule-based CNN is proposed in [14] to do fault detection, localization and classification. #### II-B2 GNN Based Methods The first prominent work to use GNN for fault localization is in [15]. In this work, IEEE-123 and IEEE-37 are used as the test feeder systems, and the authors consider a range of factors like metering error, changes in topology, etc. for their analysis. The architecture employed is CayleyNets [38], which is a graph convolutional neural network (GCN) based on spectral theory. The feature vector considered as the input to their model consists of both voltage and current phasors measured from the buses in the distribution system. The subsequent notable work is [16], which utilizes not only node features but also link features that include branch impedance, admittance and regulation parameters of the distribution lines. The authors validate their approach on a self-designed \(6.6\) kV system with \(12\) buses and \(8\) loads. Two other related research works are in [17] and [18]. For the first one, the authors consider a gated GNN [39] architecture. However, they only consider single-line-to-ground faults, as opposed to [18], which considers all three types of asymmetrical short circuit faults. Not only that, but the authors also consider limited observability and limited labels in their implementation. Their proposed method has a two-stage architecture with only voltage phasors as input. A more recent work in [19] uses a different variant of GNN, a graph attention neural network (GAT) [40], to do both fault localization and classification in the IEEE-37 feeder system. In their analysis, the authors consider a constant fault resistance, and the fault localization prediction is dependent on the fault classification task. These above-mentioned works consider instantaneous current and/or voltage phasors, meaning that they require measurements only at the fault time; no pre-fault or post-fault measurements are required. In contrast, the method proposed in [20] requires a fault waveform sampled at \(1\) kHz. Similar to [19], their analysis considers constant fault resistance. One important contribution of this research work is that they use MTL to do both fault classification and fault localization at once, in contrast to [19].
Another similar recent work employs a spatial-temporal recurrent GNN to do three tasks simultaneously, which are fault detection, classification and localization [21]. The authors report numerical results tested on a microgrid and the IEEE-123 bus system. Similar to the previous approach, their analysis considers constant resistance, and due to the temporal nature of their proposed method, it requires a high time resolution of the fault waveshape. The summary of major technical differences between our proposed model and the existing GNN-based models is outlined in Table I. We make the argument that the models that are only trained for fault localization and/or classification [15, 16, 17, 18, 19, 20] will require a separate method to first distinguish between fault events and other (non-fault) events in the distribution system, such as load changes. Also, the models that consider constant resistance [19, 20, 21] will only perform well as long as the actual fault resistance is similar to the fault resistance considered during the generation of the training dataset. Even though these research works report their model performance with different resistance values, the reported results are only applicable to those specific resistance values. In contrast, [15, 16, 17, 18] take a more practical approach and train their models considering a range of possible fault resistance values. As long as the fault resistance is within that range, the model is expected to hold its performance. The drawback of [20, 21] is that their proposed model is dependent on temporal characteristics. To perform well, the dataset needs to have a high temporal resolution. As the scale of the distribution system grows, data acquisition, storage and training overhead for this approach become progressively more demanding. In addition, there is a requirement for perfect time synchronization, which further complicates the system design. Considering these drawbacks, the key contributions of our work are outlined as: 1. We propose a unified heterogeneous MTL-GNN architecture to perform \(5\) different tasks simultaneously for a fault event, which are fault detection, fault localization, fault type classification, fault resistance estimation and fault current estimation. 2. Our proposed model works in the presence of measurement error, variable resistance, a small dataset and sparse measurements, which are factors that should be considered in real-world deployment. 3. We utilize an explainability algorithm specific to GNN to identify key nodes in the grid, which provides the opportunity for informed sparse measurement. 4. As all multi-task prediction/estimation shares a common backbone GNN for feature extraction, it significantly reduces the parameter overhead and computational complexity in addition to providing modularity for task-specific layers. The remaining parts of the paper are organized as follows. In section III, a brief overview of the test feeder system and the process involving the dataset generation is provided. The mathematical framework for the overall methodology and architecture used is given in section IV. In the following section V, we outline the details of the architecture, training procedure and hyperparameters assumed during training. Numerical results and discussions on them are presented in section VI. Finally, section VII concludes the paper.
## III Dataset Generation This section briefly covers the details of the IEEE-123 node feeder system and the dataset generation process, as well as the underlying assumptions considered in the generation process. ### _IEEE-123 Node Feeder System_ The IEEE-123 node feeder system [41] shown in Fig. 1 operates at a nominal voltage of \(4.16\) kV and consists of both overhead and underground lines. It has three-phase, two-phase and single-phase lines and a couple of open and closed switches, voltage regulators and a transformer. There are a total of \(85\) nodes that have loads connected to them; most of them are connected to single-phase buses and the rest are connected to three-phase buses. For simulating all \(5\) types of short circuit faults, including asymmetrical and symmetrical faults, all three phases need to be considered. Therefore, in our analysis, similar to [15, 18], we only consider three-phase nodes, which tally up to \(68\) nodes including the \(2\) three-phase regulators \(150\)r and \(160\)r. Also, similar to [15, 18], we make the assumption that some specific pairs of nodes are connected, which are (\(149\), \(150\)r), (\(18\), \(135\)), (\(13\), \(152\)), (\(60\), \(160\)r), (\(61\), \(61\)s), (\(97\), \(197\)), (\(9\), \(9\)r), (\(25\), \(25\)r). The reason behind this assumption is that the pairs (\(18\), \(135\)), (\(13\), \(152\)), (\(61\), \(61\)s), (\(97\), \(197\)), (\(60\), \(160\)r) are connected via closed switches and the pairs (\(149\), \(150\)r), (\(9\), \(9\)r), (\(25\), \(25\)r) consist of buses and their corresponding regulators. This is shown in Fig. 1 via highlighted blue sections. Fig. 1: Diagram of IEEE-123 node feeder system. The highlighted blue blocks represent the node pairs that are considered connected. The number of voltage regulators, transformers, and switches is mentioned in (\(\cdot\)) and the number of buses, their phases and load connectivity are mentioned in the table. The active substation (source bus) is connected to the node \(150\)r. One important thing to note here: considering all \(4\) regulators as separate nodes, the total number of nodes in the feeder system is \(128\). ### _Dataset Description and Generation Procedure_ For dataset generation, we use OpenDSS [42], open-sourced by the Electric Power Research Institute (EPRI), as a power flow equation solver engine and use the _py_dss_interface_ module in Python to interface with it. We opt to utilize only voltage phasors as measurements based on [15], where the authors showed that the performance of their model is almost identical with or without current phasors. Therefore, there is no incentive to use current phasors as features because that would just double the amount of computation needed at the expense of no performance increase. For the three-phase buses shown in Fig. 1, the voltage amplitude and angle (in radians) can be measured for all three phases (Phase A, Phase B, Phase C). For single-phase and two-phase buses, the values for the missing phases are padded with zero. As our proposed model does \(5\) tasks simultaneously, for each data point we generate \(5\) labels: fault detection, fault location, fault type, fault resistance, and fault current labels. The first three labels are discrete values whereas the last two labels are continuous values. As mentioned before, for practical consideration we assume a range of fault resistance instead of a single fault resistance value.
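To make the per-node feature layout concrete, the following minimal sketch (our own illustration, not the authors' released code; the helper name and input format are assumptions) assembles the 6-dimensional feature vector of voltage amplitudes and angles, zero-padding the phases that a single- or two-phase bus does not have.

```python
import numpy as np

def node_feature_vector(phasors):
    """Build [V_A, th_A, V_B, th_B, V_C, th_C] for one bus.

    `phasors` maps a phase label ('A', 'B', 'C') to a (magnitude, angle_rad) pair;
    phases that are absent at single- or two-phase buses are zero-padded.
    """
    feat = []
    for phase in ("A", "B", "C"):
        mag, ang = phasors.get(phase, (0.0, 0.0))  # zero-pad missing phases
        feat.extend([mag, ang])
    return np.asarray(feat, dtype=np.float32)

# Example: a two-phase bus carrying only phases A and C.
print(node_feature_vector({"A": (2.40, -0.02), "C": (2.38, 2.07)}))
```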
Fault resistance values are sampled from a uniform distribution (rounded to \(2\) decimal points), which is \(f_{res}\sim\mathcal{U}(\min_{res},\max_{res})\), where \(f_{res}\) is the sampled fault resistance value and \(\min_{res}\) and \(\max_{res}\) are the lower and upper bounds of the uniform distribution. For our initial analysis, we assume a lower bound of \(0.05\)\(\Omega\) and an upper bound of \(20\)\(\Omega\). The practical-consideration claim we make is justified by Fig. 2, which clearly shows that for constant resistance analysis [19, 20, 21] the fault diagnosis becomes trivial. For each fault type, we generate \(20400\) data samples. This results in a total of \(20400\times 5=102000\) data points for the fault events, with \(300\) samples per bus. For non-fault event data generation, we vary all \(91\) loads connected to the \(85\) buses according to another uniform distribution given by \(f_{load}\sim\mathcal{U}(\min_{load},\max_{load})\). We assume \(\min_{load}=20\) kW and \(\max_{load}=80\) kW based on the typical load range associated with the IEEE-123 node feeder system. For non-fault events, the number of data points is also \(20400\). All the values in the feature vector are standardized by subtracting the mean value and dividing by the standard deviation. ## IV Mathematical Framework A GNN uses message passing between nodes in the graph for the propagation of features, which allows feature representation learning in addition to topological representation learning. GCN is a specific variant of GNN, first introduced in [43]. The proposed model consists of GCN layers as a common backbone followed by dense layers as prediction heads. A GCN performs message passing between a given target node \(u\) in a particular layer \(l\) and the neighbors of the target node \(v\in\mathcal{N}(u)\) according to the following equation: \[\mathbf{h}_{u}^{(l)}=\sigma\left(W^{(l)}\sum_{v\in\mathcal{N}(u)\cup\{u\}} \frac{\mathbf{h}_{v}}{\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}}\right) \tag{1}\] where \(\frac{1}{\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}}\) is the normalization factor, \(\mathbf{h}_{v}\) is the hidden representation of the neighboring nodes, \(W^{(l)}\) is the weight matrix consisting of trainable parameters, \(\sigma\) is the non-linear activation function and \(\mathbf{h}_{u}^{(l)}\) is the hidden representation of the target node. This equation gives an intuitive understanding of how the message passing algorithm works, but for implementation (2) is more common in the literature. \[\mathcal{H}^{(l+1)}=\sigma\left(\tilde{D}^{-\frac{1}{2}}\tilde{\mathcal{A}} \tilde{D}^{-\frac{1}{2}}\mathcal{H}^{(l)}W^{(l)}\right) \tag{2}\] where \(\tilde{\mathcal{A}}=\mathcal{A}+I\) is the adjacency matrix with self-loops and \(\mathcal{A}\) is the original adjacency matrix without self-loops, \(\tilde{D}\) is the degree matrix considering the modified adjacency matrix \(\tilde{\mathcal{A}}\), and \(\mathcal{H}^{(l)}\) is the \(l^{th}\) GCN layer containing the feature representations of all the nodes for that particular layer.
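As an illustration of the propagation rule in (2), the sketch below (a minimal NumPy implementation added for clarity, not the authors' code; the activation choice is an assumption) applies one symmetric-normalized GCN step to a node feature matrix.

```python
import numpy as np

def gcn_layer(H, A, W, activation=np.tanh):
    """One GCN step: activation(D^{-1/2} (A + I) D^{-1/2} H W), cf. (2)."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                    # add self-loops
    d = A_tilde.sum(axis=1)                    # degrees of the self-looped graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # normalized adjacency
    return activation(A_hat @ H @ W)

# Toy 3-node line graph with 6-dimensional node features and a 6x32 weight matrix.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H0, W0 = np.random.randn(3, 6), np.random.randn(6, 32)
print(gcn_layer(H0, A, W0).shape)  # (3, 32)
```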
The node feeder system is represented by a graph \(\mathcal{G}:=(\mathcal{V},\mathcal{E},\mathcal{A})\) where \(\mathcal{V}\) represents the set of nodes in the feeder system, which can be represented as the union of three disjoint sets as shown in the following equation: \[\mathcal{V}:=\mathcal{V}_{1p}\cup\mathcal{V}_{2p}\cup\mathcal{V}_{3p} \tag{3}\] where \(\mathcal{V}_{1p}\), \(\mathcal{V}_{2p}\) and \(\mathcal{V}_{3p}\) respectively represent the nodes associated with single-phase, two-phase and three-phase buses and regulators. With the inclusion of the voltage regulators as nodes, \(|\mathcal{V}_{3p}|=68\), \(|\mathcal{V}_{2p}|=4\), \(|\mathcal{V}_{1p}|=56\), which equate to \(|\mathcal{V}|=128\). The set of edges is denoted by \(\mathcal{E}\) where \(|\mathcal{E}|=127\) and \(\mathcal{A}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\) is the symmetric adjacency matrix. As \(\mathcal{G}\) is a sparse graph, the adjacency matrix is replaced by a coordinate list format (COO) representation denoted by \(E_{c}\in\mathbb{R}^{2\times 2|\mathcal{E}|}\). \(E_{c}\) holds the node pairs that are connected by an edge. Owing to the fact that the graph under consideration is undirected in nature, if there is an edge between node pair \((p,q)\) then both \((p,q)\) and \((q,p)\) are included in \(E_{c}\). Fig. 2: t-Distributed Stochastic Neighbor Embedding (t-SNE) visualization of all the data points. (**Left**) shows the dataset generated with a variable range of fault resistance sampled from a uniform distribution \(\mathcal{U}(0.05\,\Omega,20\,\Omega)\). (**Right**) shows the dataset generated with a constant fault resistance of \(20\,\Omega\). From this point, we use superscript \(k\) to denote the index associated with a data point, subscript \(i\) to denote the index of a specific node and subscript \(t\) to denote the task index. The generated dataset contains a feature vector associated with each node in the graph. That is represented by \(\mathbf{z}_{i}^{k}\in\mathbb{R}^{6}\), which is the feature vector for the \(i^{th}\) node of the \(k^{th}\) data point. Each feature vector holds the values of the voltage phasor, i.e., the voltage amplitude (\(V_{i}\)) and angle (\(\theta_{i}\)) for the three phases. This can be mathematically represented by (4). \[\mathbf{z}_{i}^{k}:=[V_{i}^{A},\theta_{i}^{A},V_{i}^{B},\theta_{i}^{B},V_{i}^{C},\theta_{i}^{C}]^{k} \tag{4}\] where the superscripts \(A,B,C\) respectively represent the values associated with Phase-A, Phase-B and Phase-C. It is worth noting that since, for the nodes in \(\mathcal{V}_{2p}\) and \(\mathcal{V}_{1p}\), values for some specific phases do not exist, they are replaced with zeros. Stacking all the feature vectors for all \(|\mathcal{V}|\) nodes in a graph \(\mathcal{G}^{k}\) results in a feature matrix \(\mathcal{X}^{k}\in\mathbb{R}^{|\mathcal{V}|\times 6}\) associated with that graph. A particular row \(i\) in the feature matrix \(\mathcal{X}^{k}\) corresponds to the feature vector for the \(i^{th}\) node. It should be noted that the COO representation of the adjacency matrix is the same for all the data points, _i.e._, \(E_{c}^{k}=E_{c},\forall k\). Therefore, the \(k^{th}\) input data point of our dataset can be represented by \(X^{k}:=(\mathcal{X}^{k},E_{c})\). For each input data point, there are \(5\) labels, which are represented respectively by \(y_{\text{detect}}\), \(y_{\text{loc}}\), \(y_{\text{type}}\), \(y_{\text{res}}\), \(y_{\text{current}}\).
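The COO representation \(E_c\) can be built with a few lines of code; the sketch below (our own illustration with hypothetical integer node indices, not the feeder's actual topology) stores every undirected edge in both directions, yielding a \(2\times 2|\mathcal{E}|\) index array.

```python
import numpy as np

def to_coo(edge_list):
    """Build the 2 x 2|E| COO edge index, inserting both (p, q) and (q, p) per edge."""
    src, dst = [], []
    for p, q in edge_list:
        src += [p, q]
        dst += [q, p]
    return np.array([src, dst], dtype=np.int64)

# Three illustrative edges between integer node indices.
E_c = to_coo([(0, 1), (1, 2), (2, 3)])
print(E_c.shape)  # (2, 6)
```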
These signify the fault detection label, fault localization label, fault type label, fault resistance label and fault current label. For ease of representation, the subscripts are replaced by the corresponding task index. The first three labels are for classification-type prediction and the last two labels are for regression-type prediction. Now we define \(Y\) as an ordered list of all the labels, which can be represented as follows: \[Y^{k}:=(y_{1}^{k},y_{2}^{k},y_{3}^{k},y_{4}^{k},y_{5}^{k}) \tag{5}\] Therefore, our dataset can be succinctly written as \[\mathcal{D}:=\{(X,Y)^{k}\}_{k=1}^{N} \tag{6}\] where a single data point and the corresponding labels are indexed by \(k\). The size of the dataset is denoted by \(|\mathcal{D}|=N\) and the training and testing parts of the dataset are represented by \(\mathcal{D}_{tr}\) and \(\mathcal{D}_{test}\) respectively, which have the sizes \(|\mathcal{D}_{tr}|=N_{tr}\) and \(|\mathcal{D}_{test}|=N_{test}\). Similarly, the inputs associated with the training and testing datasets are expressed as \(X_{tr}\) and \(X_{test}\), and the associated labels are expressed by \(Y_{tr}\) and \(Y_{test}\). Now in implementation, we modify the labels of fault current because they can have a varying range of magnitude, up to the order of \(10^{3}\) or more. This can make the entire optimization process unstable and result in exploding gradients during the training phase. Hence we first normalize the labels to get \(\tilde{y}_{5}\), followed by taking the negative log, which can be expressed as \[y_{5}^{k}:=-\ln\left(\frac{y_{5}^{k}}{\sum_{j=1}^{N}y_{5}^{j}}\right) \tag{7}\] This makes the fault current labels lie in the same range as the fault resistance labels and hence results in much more effective training. We define our heterogeneous MTL model as \(f_{\theta}\), parameterized by \(\theta\). The model can be sectioned into two parts. The first part is the common backbone GNN, represented by \(g_{\theta^{sh}}\), where \(\theta^{sh}\) are the parameters of the common backbone \(g\). Fig. 3: Architecture of the proposed heterogeneous MTL-GNN. The input features go through the common backbone GNN to generate graph embeddings. For visual clarity, message passing across the layers for a couple of nodes (highlighted in green) is shown, where the blue highlighted sections signify nodes included in the message passing process (**Left**). The embeddings generated by \(128\) nodes are flattened and concatenated together to convert to a one-dimensional vector (**Middle**). The concatenated feature vector is passed to \(5\) heads, three classification heads and two regression heads. The corresponding loss is computed based on the predicted output (\(\hat{y}_{t}^{k}\)) and ground truth label (\(y_{t}^{k}\)). The computed loss for each task is weighted and summed together (**Right**). Now for each task \(t\), there is a sequence of separate dense layers, which can be represented by \(h_{\theta^{t}}\) where \(\theta^{t}\) represents the parameters associated with a specific task \(t\). As there are a total of \(T=5\) tasks, we can represent \(t\) as \(t\in\{1,2,\cdots,T\}\), which means the network parameters can be expressed as \(\theta:=\{\theta^{sh},\theta^{1},\theta^{2},\cdots,\theta^{T}\}\).
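The decomposition into a shared backbone \(g_{\theta^{sh}}\) and task-specific heads \(h_{\theta^{t}}\) can be sketched as below. This is our own illustrative PyTorch skeleton, not the authors' released code: the layer widths, the use of plain linear layers in place of the actual GCN layers operating on \(E_c\), and the head output sizes (e.g., the number of fault-type classes) are all assumptions.

```python
import torch.nn as nn

class MTLGNN(nn.Module):
    """Skeleton of f_theta: shared backbone g_{theta^sh} plus five task heads h_{theta^t}."""
    def __init__(self, num_nodes=128, in_dim=6, emb_dim=32,
                 num_locations=68, num_fault_types=5):
        super().__init__()
        # Stand-in for the 3-layer GCN backbone (the real layers also use the edge index E_c).
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, emb_dim), nn.ReLU(),
        )
        flat = num_nodes * emb_dim  # node embeddings flattened and concatenated
        # Three classification heads (log-probabilities) and two regression heads.
        self.detect = nn.Sequential(nn.Linear(flat, 2), nn.LogSoftmax(dim=-1))
        self.locate = nn.Sequential(nn.Linear(flat, num_locations), nn.LogSoftmax(dim=-1))
        self.fault_type = nn.Sequential(nn.Linear(flat, num_fault_types), nn.LogSoftmax(dim=-1))
        self.resistance = nn.Linear(flat, 1)
        self.current = nn.Linear(flat, 1)

    def forward(self, x):                  # x: (batch, num_nodes, in_dim)
        z = self.backbone(x).flatten(1)    # (batch, num_nodes * emb_dim)
        return (self.detect(z), self.locate(z), self.fault_type(z),
                self.resistance(z), self.current(z))
```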
The predicted output \(\hat{y}_{t}^{k}\) of a task \(t\) for the \(k^{th}\) data point can then be expressed as (8)

\[\hat{y}_{t}^{k}:=h_{\theta^{t}}\circ g_{\theta^{sh}}(X_{tr}^{k}) \tag{8}\]

For each task, we define a loss function \(\mathcal{L}_{t}\) which takes in \(X_{tr}^{k}\) and the parameters of the proposed model and computes the loss for that particular task. Each loss is weighted by a weighting factor \(w_{t}\). The overall objective function, including L2 regularization, can be expressed as (9), where \(\lambda\in\mathbb{R}\) is the regularization hyperparameter.

\[\min_{\theta}\sum_{k=1}^{N_{tr}}\sum_{t=1}^{T}w_{t}\mathcal{L}_{t}(\theta,X_{tr}^{k})+\lambda\|\theta\|_{2} \tag{9}\]

For the classification layers, jointly expressed as \(h_{\theta^{cls}}=\{h_{\theta^{1}},h_{\theta^{2}},h_{\theta^{3}}\}\), the negative log-likelihood (NLL) loss function is used, and for the regression layers, jointly expressed as \(h_{\theta^{reg}}=\{h_{\theta^{4}},h_{\theta^{5}}\}\), the mean squared error (MSE) loss is used. For generality, we express the classification losses as \(\mathcal{L}_{cls}\) and the regression losses as \(\mathcal{L}_{reg}\); they are defined as follows over the entire training data:

\[\mathcal{L}_{cls}=-\sum_{k=1}^{N_{tr}}\log\frac{\exp\big([h_{\theta^{t}}\circ g_{\theta^{sh}}(X_{tr}^{k})]_{y_{t}^{k}}\big)}{\sum_{j=1}^{m}\exp\big([h_{\theta^{t}}\circ g_{\theta^{sh}}(X_{tr}^{k})]_{j}\big)} \tag{10}\]

where \(t\in\{1,2,3\}\) are the task indices of the classification tasks, \(m\) is the number of classes for task \(t\), and \([\cdot]_{j}\) denotes the \(j^{th}\) entry of the head output.

\[\mathcal{L}_{reg}=\frac{1}{N_{tr}}\sum_{k=1}^{N_{tr}}(\hat{y}_{t}^{k}-y_{t}^{k})^{2} \tag{11}\]

where in this case \(t\in\{4,5\}\) are the indices of the regression tasks.

## V Architecture Detail and Model Training

The GNN backbone of the proposed model consists of \(3\) GCN layers, \(\mathcal{H}^{1},\mathcal{H}^{2},\mathcal{H}^{3}\), with \(\mathcal{H}^{0}\) denoting the input layer.

```
Input: \(\mathcal{D}_{tr},\alpha,\lambda,c,w_{t}\)  \(\triangleright\) Training set and hyperparameters
Output: \(\theta\)  \(\triangleright\) Learned parameters of the model
 1: \(X_{tr},Y_{tr}\leftarrow\mathcal{D}_{tr}\)
 2: while \(s<\) number of epochs do
 3:   for \(k=1\) to \(N_{tr}\) do
 4:     for \(t=1\) to \(T\) do
 5:       \(\hat{y}_{t}^{k}\leftarrow h_{\theta^{t}}\circ g_{\theta^{sh}}(X_{tr}^{k})\)
 6:     if \(t=1\) and \(\hat{y}_{t}^{k}=1\) then  \(\triangleright\) Non-fault event
 7:       \(\mathcal{L}_{s}\leftarrow\sum_{k}\sum_{t\in\{1,3\}}w_{t}\mathcal{L}_{t}\)
 8:     else
 9:       \(\mathcal{L}_{s}\leftarrow\sum_{k}\sum_{t}w_{t}\mathcal{L}_{t}\)
10:   \(\nabla_{\theta}\mathcal{L}_{s}\leftarrow\frac{c}{\|\nabla_{\theta}\mathcal{L}_{s}\|}\nabla_{\theta}\mathcal{L}_{s}\) if \(\|\nabla_{\theta}\mathcal{L}_{s}\|>c\), otherwise \(\nabla_{\theta}\mathcal{L}_{s}\)
11:   \(\theta_{s}\leftarrow\) AdamW(\(\theta_{s},\alpha,\lambda,\nabla_{\theta}\mathcal{L}_{s}\))
12:   \(\theta\leftarrow\theta_{s}\)
```
**Algorithm 1** Training Algorithm of MTL-GNN
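To make Algorithm 1 concrete, here is a simplified PyTorch-style sketch of the update loop, using cross-entropy for the classification heads (equivalent to NLL on softmax outputs) and MSE for the regression heads; the data container, label encoding and weight values are hypothetical, and updates are done per sample for brevity.

```python
import torch
import torch.nn.functional as F

def train_epoch(model, samples, optimizer, w, clip=1.0):
    """Simplified rendition of Algorithm 1. `samples` yields (x, edge_index, y)
    for one data point; y = (y1, ..., y5), with y1 = 1 marking a non-fault event;
    y1-y3 are integer class labels, y4-y5 are floats."""
    for x, edge_index, y in samples:
        out = model(x, edge_index)            # five head outputs for this sample

        # Always-on terms: detection (t=1) and fault type (t=3).
        loss = w[0] * F.cross_entropy(out[0].unsqueeze(0), y[0].view(1)) \
             + w[2] * F.cross_entropy(out[2].unsqueeze(0), y[2].view(1))

        # Fault-only terms (t=2, 4, 5), masked for non-fault samples so that
        # theta_2, theta_4, theta_5 receive no gradient from them.
        if y[0].item() != 1:
            loss = loss + w[1] * F.cross_entropy(out[1].unsqueeze(0), y[1].view(1)) \
                        + w[3] * F.mse_loss(out[3].squeeze(), y[3]) \
                        + w[4] * F.mse_loss(out[4].squeeze(), y[4])

        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)   # threshold c
        optimizer.step()
```

The optimizer in this sketch would typically be `torch.optim.AdamW(model.parameters(), lr=alpha, weight_decay=lam)`, mirroring the \(\alpha\) and \(\lambda\) hyperparameters; in practice the weighted loss would be accumulated over a mini-batch before stepping, as in Algorithm 1.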
The number of layers is restricted to \(3\) to avoid the over-smoothing issue [44], which is a common problem for deep GNNs. For normalization of the features flowing through the layers, layer normalization is used [45]. In Fig. 3 the entire forward propagation through the network is shown. The forward propagation through the GNN allows feature representation learning through message passing. In addition, topological information is captured in the learned representations. After that, the learned embeddings are extracted from all \(128\) nodes, flattened and concatenated together to generate a feature vector of dimension \(128\times 32=4096\). This feature vector is passed through the different heads for the different prediction tasks.

It is crucial to note that for non-fault events we suppress all the losses except \(\mathcal{L}_{1}\) and \(\mathcal{L}_{3}\), as all the other losses correspond to a fault event. This allows gradient flow (during backpropagation) through the network only for the \(\mathcal{L}_{1}\) and \(\mathcal{L}_{3}\) losses for non-fault data samples. Without this mechanism, the parameters associated with fault events (\(\theta_{2},\theta_{4},\theta_{5}\)) would also get updated.

The total number of parameters per layer and the output shapes are given in Table II. The model was trained for \(500\) epochs with a batch size of 32 using the AdamW [46] optimizer on an NVIDIA A100 80GB GPU. For stability of training, gradient clipping and L2 regularization are applied, and dropout (\(D_{r}=0.2\)) is used to prevent overfitting. The hyperparameters are outlined in Table III.

The training procedure is shown in Algorithm 1. For training the model, the training dataset \(\mathcal{D}_{tr}\) and the hyperparameters, namely the initial learning rate (\(\alpha\)), the regularization hyperparameter/weight decay (\(\lambda\)), the gradient clipping threshold (\(c\)), and the loss weighting factors (\(w_{t}\)), need to be specified. To determine the \(w_{t}\) hyperparameters, a trial run with equal weights was conducted to gauge which tasks are easier for the model to learn; the final values were then set taking into account the importance of each task. We denote the epoch index by \(s\) and the corresponding overall weighted loss associated with that epoch by \(\mathcal{L}_{s}\). After a forward pass through the network, \(\mathcal{L}_{s}\) is calculated, followed by the gradient of the loss with respect to the model parameters, \(\nabla_{\theta}\mathcal{L}_{s}\). Gradient clipping is applied if necessary and, finally, the AdamW optimizer updates the model parameters based on the defined hyperparameters.

## VI Numerical Tests

In this section, we first introduce the evaluation metrics used to assess the model performance, followed by the regression and classification performance of the model. Then we evaluate the model performance on sparse measurements, where the sparse node-set is strategically chosen based on an explainability algorithm.

### _Metrics Used for Evaluation of the Model Performance_

For fault detection, which is a binary classification task, we report balanced accuracy and f1-score. The reason for reporting balanced accuracy as opposed to accuracy is the class imbalance for fault detection. For fault localization, we report the location accuracy rate (LAR) and f1-score. The LAR\({}^{h}\) (\(h\) indicates the number of hops) is a common metric to evaluate the performance of fault localization. It quantifies the percentage of correctly identified fault locations within a certain \(h\)-hop distance from the actual fault location. LAR\({}^{0}\) captures the accuracy for the exact fault location. In contrast, LAR\({}^{1}\) measures the performance of the identified fault locations within a \(1\)-hop distance from the actual fault location, and LAR\({}^{2}\) does the same for a \(2\)-hop distance.
The reason for computing this metric is to evaluate the model's ability to provide an estimate of the fault's approximate position, even if the model cannot precisely identify the exact fault location. For fault type classification we report accuracy and f1-score. In addition, we also use the confusion matrix, which provides insight into the model's performance for each specific fault type. For the regression tasks, fault resistance and fault current estimation, we report the MSE and the mean absolute percentage error (MAPE) on the test data set. These two metrics are given by the following equations:

\[\mathrm{MSE}_{test}=\frac{1}{N_{test}}\sum_{k=1}^{N_{test}}(\hat{y}_{t}^{k}-y_{t}^{k})^{2} \tag{12}\]

\[\mathrm{MAPE}_{test}=\frac{1}{N_{test}}\sum_{k=1}^{N_{test}}\left|\frac{\hat{y}_{t}^{k}-y_{t}^{k}}{y_{t}^{k}}\right|\times 100 \tag{13}\]

where \(t=4\) indicates the metrics for fault resistance and \(t=5\) the metrics for fault current. The reason for reporting MAPE in addition to MSE is the sensitivity of MSE to scale. As we report performance for different ranges of fault current and resistance, this varying range needs to be taken into account. All results are summarized in Table IV.

To simulate possible measurement errors, we introduce noise \(n\) sampled from a zero-mean Gaussian distribution \(\mathcal{N}(0,\sigma_{noise})\) with variance \(\sigma_{noise}\). The variance \(\sigma_{noise}\) is set to \(0.0001\), \(0.001\) and \(0.01\) for the different levels of noise. We also consider the model performance under varying ranges of fault resistance values, namely \(0.05\Omega-20\Omega\), \(20\Omega-100\Omega\) and \(100\Omega-500\Omega\). Furthermore, we consider the fact that there might be a lack of historical data for fault events. To imitate this, we decrease the size of the dataset and evaluate the model performance on the reduced dataset. The smallest size we consider is \(15\%\) of the original, which results in only \(45\) samples per node.

### _Performance of the Model for Classification Tasks_

From Table IV it is apparent that the model can easily distinguish between a load change event and a fault event, which is intuitive given that there are only two classes. In addition, the voltage fluctuation in the case of a fault event differs significantly from that of a load change event. In the case of fault localization, despite the different variations, the model is robust enough to maintain relatively high LAR\({}^{1}\) and LAR\({}^{2}\). When we consider measurement error, we first evaluate the performance with an out-of-distribution (OOD) test set, meaning the training dataset did not contain noisy samples. For low and moderate noise, even with OOD samples the fault localization performance holds. For high levels of noise, the performance goes down, but if the noisy samples are included in the training set the location accuracy improves significantly. Also, the model manages to sustain relatively high accuracy despite a broad range of resistance values. Similar conclusions can be made for varying dataset sizes. One important thing to note here is that even when the model is not able to localize the exact fault point, it can approximate the location with high accuracy.

For fault-type classification, it is important to outline per-class performance, as the probability of the fault types is not the same and varies according to fault type: LG (70% - 80%), LLG (17% - 10%), LL (10% - 8%), LLL, LLLG (3% - 2%) [3]. This means it is more important that the model is able to classify the asymmetrical faults correctly than the symmetrical faults.
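The classification metrics used in this section can be computed with a few lines of Python; the graph, labels and predictions below are random placeholders, and the actual feeder topology would replace the toy path graph.

```python
import networkx as nx
import numpy as np
from sklearn.metrics import confusion_matrix

def lar_h(y_true, y_pred, graph, h):
    """Location accuracy rate LAR^h: fraction of predicted fault locations that
    lie within h hops of the true faulted node on the feeder graph."""
    hits = 0
    for t, p in zip(y_true, y_pred):
        try:
            d = nx.shortest_path_length(graph, source=int(t), target=int(p))
        except nx.NetworkXNoPath:
            continue
        hits += int(d <= h)
    return hits / len(y_true)

# Toy usage: a path graph stands in for the feeder, predictions are dummies.
G = nx.path_graph(128)
y_true = np.random.randint(0, 128, size=500)
y_pred = y_true.copy()
y_pred[:50] = np.clip(y_true[:50] + 1, 0, 127)      # inject off-by-one-hop errors

print("LAR^0 =", lar_h(y_true, y_pred, G, 0))        # exact-location accuracy
print("LAR^1 =", lar_h(y_true, y_pred, G, 1))        # within one hop

# Per-class view for the fault-type head (6 classes, random placeholder labels).
cm = confusion_matrix(np.random.randint(0, 6, 200),
                      np.random.randint(0, 6, 200), labels=list(range(6)))
```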
The confusion matrix shown in Fig. 4 summarizes the performance for each fault type. The misclassified fault types belong to the LLL and LLLG fault classes, which is further corroborated by the t-SNE visualization of the last-layer features of \(h_{\theta}^{3}\) shown in Fig. 5. Samples from the other fault types are clearly separable even when they are projected into a 2-dimensional feature space.

### _Performance of the Model for Regression Tasks_

For both regression tasks, the performance of the model remains consistently high across all the variations considered. Even though for the resistance level \(100\Omega-500\Omega\) the MSE is high because of the scale sensitivity, the MAPE remains low, similar to the other resistance levels. Fig. 6 shows the distribution plot of the ground truth and predicted values for the fault resistance on the test set. For the most part, the predicted distribution closely resembles the ground truth distribution. Similarly, Fig. 7 shows the ground truth and predicted distributions for the fault current. One important thing to mention here is that this plot shows the transformed fault current label defined in (7) rather than the actual fault current. To get the actual fault current predictions and ground truth labels, the inverse operation of (7) can be performed, which is given by the following equation:

\[y_{5}^{k}=\left(\sum_{j=1}^{N}y_{5}^{j}\right)\exp\!\left(-\tilde{y}_{5}^{k}\right) \tag{14}\]

Fig. 4: Confusion matrix for the fault type classification task. The first three classes are asymmetric faults, the next two classes are symmetric faults and the final class corresponds to non-fault events.

Fig. 5: t-SNE visualization of the last-layer features of the fault type classification head \(h_{\theta}^{3}\) for the test set \(\mathcal{D}_{test}\). Except for a few samples in the LLL and LLLG fault types, most features are clearly separable based on fault type.

Fig. 6: Ground truth (\(y_{4}\)) and predicted (\(\hat{y}_{4}\)) fault resistance distributions on the test set (\(\mathcal{D}_{test}\)). The grey section is the correctly predicted part of the distribution.

### _Using Explainability to Identify Key Nodes_

Out of all the tasks, fault localization is the most important, given that the model already performs very well for fault detection. Considering practical implications, for the remaining three tasks it is acceptable if the model can approximate the prediction. So far in the analysis, we assumed complete observability, which implies that all node voltage phasors can be measured. As the scale of the distribution system grows, it becomes increasingly difficult to maintain complete observability considering cost and system complexity. Therefore, we propose a novel approach to locate key nodes using the explainability algorithm GNNExplainer proposed in [47]. GNNExplainer uses an optimization that maximizes the mutual information between a GNN's prediction and the distribution of possible subgraph structures to identify important subgraphs. The most notable characteristic of this algorithm is that it does not require ground-truth labels. Using this algorithm we generate two sparse node sets, \(\mathcal{V}_{50\%}\) and \(\mathcal{V}_{75\%}\), where the subscript denotes the percentage of the \(128\) nodes for which data is available. The feature vectors of the remaining nodes are set to \(0\).
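The thresholding-and-union step formalized in Algorithm 2 below can be sketched as follows, assuming the per-sample edge-importance vectors \(E_{\mathcal{I}}^{k}\) have already been produced by an explainer such as GNNExplainer; all array contents are placeholders.

```python
import numpy as np

def sparse_node_set(edge_importance, E_c, w_th):
    """Threshold per-sample edge-importance scores and return the union of the
    nodes touched by the retained edges (the second half of Algorithm 2 below).

    edge_importance : array of shape (N_test, |E|), e.g. from an explainer
    E_c             : COO edge list of shape (2, |E|) (one direction suffices here)
    w_th            : importance threshold
    """
    keep_nodes = set()
    for scores in edge_importance:
        kept = scores > w_th                         # E_{w_th}^k as a boolean mask
        keep_nodes.update(E_c[0, kept].tolist())     # endpoints of the retained edges
        keep_nodes.update(E_c[1, kept].tolist())
    return sorted(keep_nodes)                        # V_{x%}: union over all test samples

# Hypothetical usage: 10 test samples, 127 edges, threshold near the values used below.
rng = np.random.default_rng(0)
E_c = np.vstack([np.arange(127), np.arange(1, 128)])
scores = rng.uniform(size=(10, 127))
nodes = sparse_node_set(scores, E_c, w_th=0.487)
print(len(nodes), "nodes retained")
```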
```
Input: \(X_{test}\), \(epoch_{GE}\), \(h_{\theta^{2}}\circ g_{\theta^{sh}}\) (trained), \(w_{th}\)
Output: \(\mathcal{V}_{x\%}\) (e.g., \(\mathcal{V}_{75\%}\) or \(\mathcal{V}_{50\%}\))
1: for \(k=1\) to \(N_{test}\) do
2:   \(E_{\mathcal{I}}^{k}\leftarrow\) GNNExplainer\((h_{\theta^{2}}\circ g_{\theta^{sh}},X_{test}^{k},epoch_{GE})\)
3:   for \(p=1\) to \(|\mathcal{E}|\) do
4:     if \(E_{\mathcal{I}}^{k}(p)>w_{th}\) then  \(\triangleright\) Thresholding with \(w_{th}\)
5:       \(E_{w_{th}}^{k}(p)\leftarrow 1\)
6:     else
7:       \(E_{w_{th}}^{k}(p)\leftarrow 0\)
8:   \(\{\mathcal{V}_{sparse}\}^{k}\leftarrow\) GetConnectedNodePairs\((E_{w_{th}}^{k})\)
9: \(\mathcal{V}_{x\%}\leftarrow\bigcup_{k}\{\mathcal{V}_{sparse}\}^{k}\)
```
**Algorithm 2** Sparse Node-Set Generation Algorithm

To generate these sets, we pass each test data sample \(X_{test}^{k}\) and the trained model with the localization head, \(h_{\theta^{2}}\circ g_{\theta^{sh}}\), to the GNNExplainer algorithm, which generates an edge importance vector \(E_{\mathcal{I}}\in\mathbb{R}^{|\mathcal{E}|}\), indexed by \(p\). For each edge in \(E_{c}\) a corresponding weight is generated which signifies the importance of that edge. A weight value closer to \(1\) means a more important edge and a value closer to \(0\) means a less important edge. We threshold the values in \(E_{\mathcal{I}}\) with a threshold \(w_{th}\). After thresholding, the transformed edge importance vector is denoted by \(E_{w_{th}}\in\mathbb{R}^{|\mathcal{E}|}\). Edges with an importance score larger than \(w_{th}\) are kept and the rest are disregarded. The nodes connected to the edges retained in \(E_{w_{th}}\) after thresholding are regarded as the important nodes for the \(k^{th}\) data point. In this way, for each data point we get a sparse node set \(\{\mathcal{V}_{sparse}\}^{k}\). The union of these sparse important node sets then generates the final sparse node set, which is given by the following equation:

\[\mathcal{V}_{x\%}:=\bigcup_{k=1}^{N_{test}}\{\mathcal{V}_{sparse}\}^{k} \tag{15}\]

where the value of \(x\%\) depends on the threshold value. We set the threshold \(w_{th}\) to \(0.44\) and \(0.487\) to generate \(\mathcal{V}_{75\%}\) and \(\mathcal{V}_{50\%}\), respectively. The entire process is described in Algorithm 2. In addition to the other parameters mentioned above, an epoch number (\(epoch_{GE}\)) needs to be specified for the internal optimization of the GNNExplainer.

To validate this approach, we also randomly sample \(50\%\) of the nodes several times and train the model to generate fault localization metrics on the test set. \(\mathcal{V}_{50\%}^{random_{avg}}\) represents the average of these metrics and \(\mathcal{V}_{50\%}^{random_{min}}\) the minimum of these metrics across the generated samples. The results of this analysis are summarized in Table V. It is apparent that the model is robust enough to maintain the LAR\({}^{1}\) and LAR\({}^{2}\) irrespective of which sparse node-set is used. However, for LAR\({}^{0}\) and F1-score, the sparse set generated with GNNExplainer is comparatively better. \(\mathcal{V}_{50\%}\) has a \(4.78\%\) increase in LAR\({}^{0}\) compared to \(\mathcal{V}_{50\%}^{random_{avg}}\) and a \(7.9\%\) increase compared to \(\mathcal{V}_{50\%}^{random_{min}}\), which means the sparse node-set generation algorithm was able to identify the important node set for fault localization.

## VII Conclusion

In this paper, we proposed an MTL-GNN capable of performing \(5\) different tasks simultaneously, even with a sparse node-set. This sparse node-set is generated with a novel algorithm and, to the best of our knowledge, this is the first work that uses a GNN explainability algorithm for informed node selection.
2309.11808
Unlocking massively parallel spectral proper orthogonal decompositions in the PySPOD package
We propose a parallel (distributed) version of the spectral proper orthogonal decomposition (SPOD) technique. The parallel SPOD algorithm distributes the spatial dimension of the dataset preserving time. This approach is adopted to preserve the non-distributed fast Fourier transform of the data in time, thereby avoiding the associated bottlenecks. The parallel SPOD algorithm is implemented in the PySPOD (https://github.com/MathEXLab/PySPOD) library and makes use of the standard message passing interface (MPI) library, implemented in Python via mpi4py (https://mpi4py.readthedocs.io/en/stable/). An extensive performance evaluation of the parallel package is provided, including strong and weak scalability analyses. The open-source library allows the analysis of large datasets of interest across the scientific community. Here, we present applications in fluid dynamics and geophysics, that are extremely difficult (if not impossible) to achieve without a parallel algorithm. This work opens the path toward modal analyses of big quasi-stationary data, helping to uncover new unexplored spatio-temporal patterns.
Marcin Rogowski, Brandon C. Y. Yeung, Oliver T. Schmidt, Romit Maulik, Lisandro Dalcin, Matteo Parsani, Gianmarco Mengaldo
2023-09-21T06:28:07Z
http://arxiv.org/abs/2309.11808v2
# Unlocking massively parallel spectral proper orthogonal decompositions in the PySPOD package ###### Abstract We propose a parallel (distributed) version of the spectral proper orthogonal decomposition (SPOD) technique. The parallel SPOD algorithm distributes the spatial dimension of the dataset preserving time. This approach is adopted to preserve the non-distributed fast Fourier transform of the data in time, thereby avoiding the associated bottlenecks. The parallel SPOD algorithm is implemented in the PySPOD library and makes use of the standard message passing interface (MPI) library, implemented in Python via mpi4py. An extensive performance evaluation of the parallel package is provided, including strong and weak scalability analyses. The open-source library allows the analysis of large datasets of interest across the scientific community. Here, we present applications in fluid dynamics and geophysics, that are extremely difficult (if not impossible) to achieve without a parallel algorithm. This work opens the path toward modal analyses of big quasi-stationary data, helping to uncover new unexplored spatio-temporal patterns. keywords: Spectral proper orthogonal decomposition, SPOD, Parallel, Distributed, MPI, Modal decomposition, Dynamical systems + Footnote †: journal: Computer Physics Communications **PROGRAM SUMMARY/NEW VERSION PROGRAM SUMMARY** _Program Title:_ PySPOD _CPC Library link to program files:_ (to be added by Technical Editor) _Developer's repository link:_ [https://github.com/MathEXLab/PySPOD](https://github.com/MathEXLab/PySPOD) _Code Ocean capsule:_ (to be added by Technical Editor) _Licensing provisions(please choose one):_ MIT License _Programming language:_ Python _Nature of problem:_ Large spatio-temporal datasets may contain coherent patterns that can be leveraged to better understand, model, and possibly predict the behaviour of complex dynamical systems. To this end, modal decomposition methods, such as the proper orthogonal decomposition (POD) and its spectral counterpart (SPOD), constitute powerful tools. The SPOD algorithm allows the systematic identification of space-time coherent patterns. This can be used to understand better the physics of the process of interest, and provide a path for mathematical modeling, including reduced order modeling. The SPOD algorithm has been successfully applied to fluid dynamics, geophysics and other domains. However, the existing open-source implementations are serial, and they prevent running on the increasingly large datasets that are becoming available, especially in computational physics. The inability to analyse via SPOD large dataset in turn prevents unlocking novel mechanisms and dynamical behaviours in complex systems. _Solution method:_ We provide an open-source parallel (MPI distributed) code, namely PySPOD, that is able to run on large datasets (the ones considered in the present paper reach about 200 Terabytes). The code is built on the previous serial open-source code PySPOD that was published in [https://joss.theoj.org/papers/10.21105/joss.02862.pdf](https://joss.theoj.org/papers/10.21105/joss.02862.pdf). The new parallel implementation is able to scale on several nodes (we show both weak and strong scalability) and solve some of the bottlenecks that are commonly found at the I/O stage. The current parallel code allows running on datasets that was not easy or possible to analyse with serial SPOD algorithms, hence providing a path towards unlocking novel findings in computational physics. 
_Additional comments including Restrictions and Unusual features (approx. 50-250 words):_ The code comes with a set of built-in postprocessing tools, for visualizing the results. It also comes with extensive continuous integration, documentation, and tutorials, as well as a dedicated website in addition to the associated GiHub repository. Within the package we also provide a parallel implementation of the proper orthogonal decomposition (POD), that leverages the I/O parallel capabilities of the SPOD algorithm. ## 1 Introduction Data that depends on both space and time, also referred to as spatio-temporal data, is ubiquitous. It can represent the Earth's atmosphere, the flow past an aircraft, and the ocean's dynamics, among many other phenomena and processes. Usually, spatio-temporal data is high-dimensional and its interpretation non obvious. Yet, spatio-temporal data, especially the one related to physical processes, contain coherent patterns. These, if uncovered, may provide a better understanding of critical aspects of the underlying physical processes of interest. Hence, tools to analyze and make sense of this type of data are of paramount importance. Over the last several years, many tools have been proposed to mining information from spatio-temporal data, such as the proper orthogonal decomposition (POD) [1; 2], the dynamic mode decomposition (DMD) [3], and the spectral proper orthogonal decomposition (SPOD) [4; 5; 6; 7; 8]. Yet, the majority of these tools comes with limited parallel capabilities for handling large datasets. In this paper, we propose a first parallel (distributed) SPOD algorithm, that allows large-scale modal decomposition analyses. The algorithm has been tested on both geophysical and fluid mechanics data, reaching 199 terabytes (TB) of size. Weak and strong scalability have been thoroughly assessed for the entire algorithm as well as for its main constituting steps, including input/output handling, discrete Fourier transform, and eigenvalue computations. The efficiency of the novel parallel algorithm proposed in this paper allows analysis of data at unprecedented scales, opening the path towards uncovering new physics in large datasets, that existing SPOD packages were unable to tackle [9; 10]. This paper is organized as follows. In section 2, we introduce the SPOD method. In section 3, we detail our parallelization strategy. In section 4, we present the results obtained on two large datasets, one related to fluid mechanics and the other to geophysics (in particular atmospheric physics). In section 5, we present the strong and weak scalability analysis of the parallel SPOD algorithm. In section 6, we provide some concluding remarks. ## 2 The spectral proper orthogonal decomposition ### A note on suitable data The spectral proper orthogonal decomposition, also referred to as spectral POD or simply SPOD, extracts coherent structures from statistically stationary data. The data considered have spatial and temporal dependence, and describe a stochastic (also referred to as random) process denoted by \(\mathbf{q}(\mathbf{x},t)\), where \(\mathbf{x}\) represents the spatial coordinate, and \(t\) the time. Usually, \(\mathbf{x}=(x,y,z)\in\mathbb{R}^{3}\) for the processes of interest, where \(x,y,\) and \(z\) are Cartesian coordinates. In practice, both two- and three-dimensional data may be used. We now define in more detail the concept of stationarity and the assumptions that allow us to use ensemble and time averages interchangeably. 
The _stationarity_ assumption under which the SPOD operates is typically intended in the _weak sense_, also referred to as _wide-sense_ or _covariance stationarity_. This implies that \(\mathbf{q}(\mathbf{x},t)\), where we dropped the probability parametrization \(\xi\), has first- and second-order moments (_i.e._, average and autocovariance) that do not vary with time. The mathematical formalization of wide-sense stationarity is as follows. Let \(\mathbf{q}(\mathbf{x},t)\) be a continuous-time stochastic process, \(E[\cdot]\) be the expectation operator, and \(\mathbf{\mathcal{C}}\) be the covariance. If * the expectation operator is independent of \(t\): \[E[\mathbf{q}(\mathbf{x},t)]=\mu(\mathbf{x}),\] (1) * the covariance depends only on the difference between two times, \(t-t^{\prime}\): \[\mathbf{\mathcal{C}}(\mathbf{x},\mathbf{x}^{\prime},t,t^{\prime})=\mathbf{\mathcal{C}}(\mathbf{x},\mathbf{x}^{\prime},\tau),\quad\text{where}\;\;\tau=t-t^{\prime},\] (2) * the average 'power' is bounded, and does not go to infinity: \[E[|\mathbf{q}(\mathbf{x},t)^{2}|]<\infty,\] (3) then \(\mathbf{q}(\mathbf{x},t)\) is said to be _wide-sense stationary_. Many fields in computational physics, including fluid dynamics and geophysics, give rise to wide-sense stationary problems. In section 4 we will apply SPOD to examples of wide-sense stationary problems in these two disciplines. As a final yet important note in this section, we add that the stochastic processes considered are also ergodic in addition to being wide-sense stationary. This means that the expectation, \(E[\mathbf{q}]\), coincides with the ensemble average of different realizations of \(\mathbf{q}(\mathbf{x},t)\), that in turn is equal to the long-time average, \(\overline{\mathbf{q}(\mathbf{x},t)}\). Therefore, when we say zero-average stochastic process, we refer to a process where \(\bar{\mathbf{q}}=E[\mathbf{q}]\) has been removed. Removal of the mean facilitates the interpretation of the SPOD eigenvalues as perturbation energy or variance. In the following, without loss of generality we will always assume that \(\bar{\mathbf{q}}=0\). In sections 2.2 and 2.3, we will use some of the notions introduced here to derive the continuous and discrete SPOD approaches. ### Theory For the sake of reader's convenience, we report here the theory behind SPOD, closely following [5]. The task of the SPOD is to identify a deterministic function \(\mathbf{\phi}(\mathbf{x},t)\) (or a set of functions) that best approximates the weak-sense stationary and zero-average process \(\mathbf{q}(\mathbf{x},t)\). In mathematical terms, this translates into finding the function \(\mathbf{\phi}(\mathbf{x},t)\) that maximizes the expected value of the normalized projection of the stochastic function \(\mathbf{q}(\mathbf{x},t)\), that is, \[\lambda=\frac{E\big{[}|\langle\mathbf{q}(\mathbf{x},t),\mathbf{\phi}(\mathbf{x},t)\rangle_{\bm {x},t}|^{2}\big{]}}{\langle\mathbf{\phi}(\mathbf{x},t),\mathbf{\phi}(\mathbf{x},t)\rangle_{\bm {x},t}}. \tag{4}\] In equation (4), we assume that any realization of \(\mathbf{q}(\mathbf{x},t)\) belongs to a Hilbert space, \(H\), with a space-time inner product, \(\langle\cdot,\cdot\rangle_{\mathbf{x},t}\), and expectation operator, \(E[\cdot]\), here taken to be the ensemble average. 
The inner product in equation (4), \(\langle\cdot,\cdot\rangle_{\mathbf{x},t}\), between two generic variables, \(\mathbf{u}\) and \(\mathbf{v}\), is defined as \[\langle\mathbf{u},\mathbf{v}\rangle_{\mathbf{x},t}=\int_{-\infty}^{\infty}\int_{\Omega} \mathbf{u}^{*}(\mathbf{x},t)\;\mathbf{W}(\mathbf{x})\;\mathbf{v}(\mathbf{x},t)\,\mathrm{d}\mathbf{x}\, \mathrm{d}t, \tag{5}\] where \(\Omega\) denotes the spatial domain, \(\mathbf{W}(\mathbf{x})\) the spatial weighting, and the asterisk superscript represents the conjugate transpose. By invoking the Karhunen-Loeve (KL) theorem [11; 12], we know that there exists a set of mutually orthogonal deterministic functions that form a complete basis in \(H\). This can be defined as \(\hat{\mathbf{q}}(\mathbf{x},f)=\sum_{k=1}^{\infty}a_{k}(f)\mathbf{\phi}_{k}(\mathbf{x},f)\), where \(\hat{(\cdot)}\) denotes the Fourier transform in time. The eigenfunctions, \(\mathbf{\phi}_{k}\), and their associated eigenvalues, \(\lambda_{k}\), are solutions to the eigenvalue problem in the frequency domain: \[\int_{\Omega}\hat{\mathbf{\mathcal{C}}}(\mathbf{x},\mathbf{x}^{\prime},f)\;\mathbf{W}(\mathbf{x} ^{\prime})\;\mathbf{\phi}(\mathbf{x}^{\prime},f)\,\mathrm{d}\mathbf{x}^{\prime}=\lambda(f )\mathbf{\phi}(\mathbf{x},f), \tag{6}\] where \(\hat{\mathbf{\mathcal{C}}}(\mathbf{x},\mathbf{x}^{\prime},f)\) is the cross-spectral density tensor, i.e., the Fourier transform of \(\mathbf{\mathcal{C}}\): \[\hat{\mathbf{\mathcal{C}}}(\mathbf{x},\mathbf{x}^{\prime},f)=\int_{-\infty}^{\infty}\mathbf{ \mathcal{C}}(\mathbf{x},\mathbf{x}^{\prime},\tau)e^{-\mathrm{i}2\pi f\tau}\mathrm{d}\tau. \tag{7}\] In equation (7), the two-point space-time correlation tensor \(\mathbf{\mathcal{C}}(\mathbf{x},\mathbf{x}^{\prime},\tau)\) is defined as \[\mathbf{\mathcal{C}}(\mathbf{x},\mathbf{x}^{\prime},\tau)=E[\mathbf{q}(\mathbf{x},t)\,\mathbf{q}^{*}( \mathbf{x}^{\prime},t^{\prime})], \tag{8}\] where we used equation (2), under the assumption of wide-sense stationarity of the stochastic process \(\mathbf{q}(\mathbf{x},t)\) and \(\tau=t-t^{\prime}\) is the difference between the two times \(t\) and \(t^{\prime}\). This implies that the covariance depends only on the difference between two times, \(t\) and \(t^{\prime}\) and, therefore, we can write, \(\mathbf{\mathcal{C}}(\mathbf{x},\mathbf{x}^{\prime},t,t^{\prime})\to\mathbf{\mathcal{C}}(\mathbf{x},\mathbf{x}^{\prime},\tau)\). The last step allows reformulating the problem in the spectral (also referred to as frequency) domain, as per equation (6). This significant result was first presented in [4; 13] and revisited in [5]. It provides eigenmodes at each frequency that inherit the same properties of the more traditional (non-spectral) POD. The SPOD formulation just summarized leads to monochromatic SPOD modes that optimally characterize the second-order space-time moments of the continuous-time stochastic process considered [5]. ### Practical implementation In practice, the continuous-time stochastic process \(\mathbf{q}(\mathbf{x},t)\) introduced in section 2.1 and used in section 2.2 is provided as discrete data. The discrete data consist of snapshots of the wide-sense stationary time series, \(\mathbf{q}(\mathbf{x},t_{i})\), \(t_{i}=1,\ldots,N_{t}\), from which we subtract the temporal mean, \(\bar{\mathbf{q}}\). 
Each snapshot \(\mathbf{q}(\mathbf{x},t_{i})\) is sampled at \(M_{\text{space}}\) spatial points with coordinates \(\mathbf{x}\in\mathbb{R}^{M_{\text{space}}\times d}\) (usually two- or three-dimensional, that is, \(d=2\) or \(3\), respectively), and records \(M_{\text{vars}}\) variables. To derive the SPOD algorithm, we recast each snapshot of the discrete multidimensional data into a vector of dimension \(\mathbf{q}(\mathbf{x},t_{i})=\mathbf{q}_{i}\in\mathbb{R}^{M}\), where \(M=M_{\text{space}}M_{\text{vars}}\). We can then assemble the data matrix (also referred to as snapshot matrix), \[\mathbf{Q}=[\mathbf{q}_{1},\mathbf{q}_{2},\ldots,\mathbf{q}_{N_{t}}]\in \mathbb{R}^{M\times N_{t}}. \tag{9}\] The data described by equation (9) can arise from simulations and observations of a wide-sense stationary stochastic system. We assumed the data to be composed exclusively of real numbers. However, this assumption is not strictly necessary, as the data can, in principle, be complex. The first step to obtaining the discrete analog of the frequency-domain eigenvalue problem in equation (6) consists of segmenting the data along the time direction into \(L\) (possibly overlapping) blocks \[\mathbf{Q}^{(\ell)}=[\mathbf{q}_{1}^{(\ell)},\ldots,\mathbf{q}_{N_{f}}^{(\ell )}]\in\mathbb{R}^{M\times N_{f}},\quad\ell=1,\ldots,L\implies\begin{cases} \mathbf{Q}^{(1)}=[\mathbf{q}_{1}^{(1)},\ldots,\mathbf{q}_{N_{f}}^{(1)}]\in \mathbb{R}^{M\times N_{f}},\\ \mathbf{Q}^{(2)}=[\mathbf{q}_{1}^{(2)},\ldots,\mathbf{q}_{N_{f}}^{(2)}]\in \mathbb{R}^{M\times N_{f}},\\ \ldots\\ \mathbf{Q}^{(L)}=[\mathbf{q}_{1}^{(L)},\ldots,\mathbf{q}_{N_{f}}^{(L)}]\in \mathbb{R}^{M\times N_{f}}.\end{cases} \tag{10}\] Each data block \(\ell\) in equation (10) contains \(N_{f}\) time snapshots, overlaps by \(N_{\text{overlap}}\) time snapshots with the adjacent block, and is regarded as equally representative of the whole data by the ergodic assumption. Indeed, it is a realization of the stochastic process described by the discrete data \(\mathbf{Q}\) in equation (9). This approach of partitioning the time series into overlapping data blocks is the well-known Welch periodogram method [14; 7]. In the following, we will use the terms _realization_ and _data block_ interchangeably. The second step consists of applying the discrete Fourier transform (DFT) in time to each data block or realization in equation (10): \[\mathbf{Q}^{(\ell)}\underbrace{\longrightarrow}_{\text{DFT}}\hat{\mathbf{Q} }^{(\ell)}=[\hat{\mathbf{q}}_{1}^{(\ell)},\hat{\mathbf{q}}_{2}^{(\ell)}, \ldots,\hat{\mathbf{q}}_{N_{f}}^{(\ell)}]\in\mathbb{C}^{M\times N_{f}},\quad \ell=1,\ldots,L. \tag{11}\] We note that each Fourier-transformed data block \(\hat{\mathbf{Q}}^{(\ell)}\) contains \(N_{f}\) frequencies. Wide-sense stationarity and ergodicity allow us to reorganize the Fourier-transformed data into \(N_{f}\) data matrices, one per frequency, \(f_{k}\). In particular, we collect all realizations of the DFT at the \(k\)-th frequency into \[\hat{\mathbf{Q}}_{k}=[\hat{\mathbf{q}}_{k}^{(1)},\hat{\mathbf{q}}_{k}^{(2)}, \ldots,\hat{\mathbf{q}}_{k}^{(L)}]\in\mathbb{C}^{M\times L},\quad\text{for all frequencies $f_{k}$, $k=1,\ldots,N_{f}$.} \tag{12}\] For fluid mechanical and geophysical applications, we usually have \(L\ll M\). For the parallelization of the SPOD algorithm we will make use of this notion. The third step is to construct the cross-spectral density matrix for each frequency, \(f_{k}\). 
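Before detailing this third step, the first two steps, i.e., segmentation into overlapping blocks (equation (10)) and the DFT in time of each block (equations (11) and (12)), can be sketched in a few lines of NumPy; the array sizes and the Hann window below are illustrative choices, not the PySPOD defaults.

```python
import numpy as np

def blocks_fft(Q, n_f, n_overlap):
    """Segment the snapshot matrix Q (M x N_t) into overlapping blocks and
    DFT each block in time, returning Q_hat of shape (M, n_freq, L)."""
    M, N_t = Q.shape
    step = n_f - n_overlap
    L = (N_t - n_overlap) // step                 # number of blocks (realizations)
    window = np.hanning(n_f)                      # common choice in Welch's method
    Q_hat = []
    for l in range(L):
        block = Q[:, l * step: l * step + n_f] * window
        Q_hat.append(np.fft.rfft(block, axis=1))  # one-sided DFT for real data
    return np.stack(Q_hat, axis=-1)               # (M, n_f//2 + 1, L)

# Hypothetical usage: small random dataset, 64-snapshot blocks, 50% overlap.
Q = np.random.randn(1000, 512)
Q_hat = blocks_fft(Q, n_f=64, n_overlap=32)
print(Q_hat.shape)   # (1000, 33, 15)
```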
This step corresponds to the discrete counterpart of equation (7), and can be readily achieved by calculating \[\hat{\mathbf{C}}_{k}=\frac{1}{L-1}\hat{\mathbf{Q}}_{k}\hat{\mathbf{Q}}_{k}^{*} \in\mathbb{C}^{M\times M},\ \ \ \text{for all frequencies $f_{k}$, $k=1,\ldots,N_{f}$}, \tag{13}\] where \(L-1\) is a normalization factor, that is only appropriate if the data is centered about the sample mean rather than the long-time (i.e., true) mean. The construction of the cross-spectral density matrix in equation (13) finally allows us to write the discrete analog of the frequency-domain eigenvalue problem defined in equation (6): \[\hat{\mathbf{C}}_{k}\mathbf{W}\boldsymbol{\Phi}_{k}=\boldsymbol{ \Phi}_{k}\boldsymbol{\Lambda}_{k},\ \ \ \text{with}\] \[\boldsymbol{\Phi}_{k}=[\boldsymbol{\Phi}_{k}^{(1)},\boldsymbol{ \Phi}_{k}^{(2)},\ldots,\boldsymbol{\Phi}_{k}^{(L)}]\in\mathbb{C}^{M\times L} \ \ \ \ \ \ \ \ \ \text{(SPOD modes)}, \tag{14}\] \[\boldsymbol{\Lambda}_{k}=\text{diag}(\lambda_{k}^{(1)},\lambda_{ k}^{(2)}\cdots,\lambda_{k}^{(L)})\in\mathbb{R}^{L\times L}\ \ \ \text{(modal energies)}.\] The SPOD modes, \(\boldsymbol{\Phi}_{k}\), and associated modal energies (or eigenvalues), \(\boldsymbol{\Lambda}_{k}\), can be computed by solving equation (14) for each frequency, \(f_{k}\). In practice, to alleviate the computational burden of this step, one usually turns to the method of snapshots [15]: \[\hat{\mathbf{Q}}_{k}^{*}\mathbf{W}\hat{\mathbf{Q}}_{k}\boldsymbol{\Psi}_{k}= \boldsymbol{\Psi}_{k}\boldsymbol{\Lambda}_{k},\ \ \ \boldsymbol{\Phi}=\hat{\mathbf{Q}}\boldsymbol{\Psi} \boldsymbol{\Lambda}_{k}^{-1/2}. \tag{15}\] By construction, for a given frequency \(f_{k}\), the modes \(\boldsymbol{\Phi}_{k}\) are orthonormal, _i.e._, \(\boldsymbol{\Phi}_{k}^{*}\mathbf{W}\boldsymbol{\Phi}_{k}=\mathbf{I}\), where \(\mathbf{I}\) is the identity matrix. Modes at different frequencies are instead not orthonormal, that is, \(\boldsymbol{\Phi}_{k_{1}}^{*}\mathbf{W}\boldsymbol{\Phi}_{k_{2}}\neq\mathbf{I}\), where \(k_{1}\neq k_{2}\), but orthonormal under the full space-time inner product. Finally, the SPOD modes can be grouped per frequency, as follows: \[\boldsymbol{\Phi}=[\boldsymbol{\Phi}_{1},\ldots,\boldsymbol{\Phi}_{N_{f2}}]= \underbrace{[\boldsymbol{\Phi}_{1}^{(1)},\ldots,\boldsymbol{\Phi}_{1}^{(L)}]} _{\boldsymbol{\Phi}_{1}},\underbrace{\boldsymbol{\Phi}_{2}^{(1)},\ldots, \boldsymbol{\Phi}_{2}^{(L)}}_{\boldsymbol{\Phi}_{2}},\ldots,\underbrace{ \boldsymbol{\Phi}_{N_{f2}}^{(1)},\ldots,\boldsymbol{\Phi}_{N_{f2}}^{(L)}]}_{ \boldsymbol{\Phi}_{N_{f2}}}, \tag{16}\] where, for real data, \(\mathbf{Q}\in\mathbb{R}^{M\times N_{t}}\), the total number of frequencies is \(N_{f/2}=\lceil\frac{N_{f}}{2}\rceil+1\). This is because the transformed data at negative frequencies correspond to the conjugates of the positive frequencies, and it is therefore redundant. For additional details on the SPOD method, the interested reader can refer to [5; 7]. ### SPOD for data compression As shown in [16], and further explored in [8], it is possible to compute a matrix of expansion coefficients \(\mathbf{A}\). 
This can be constructed using a weighted oblique projection of the data onto the modal basis \[\mathbf{A}=(\boldsymbol{\Phi}^{*}\mathbf{W}\boldsymbol{\Phi})^{-1}\boldsymbol{ \Phi}^{*}\mathbf{W}\mathbf{Q}=[\underbrace{a_{1}^{(1)},a_{1}^{(2)},\ldots,a_ {1}^{(L)}}_{\mathbf{a}_{1}},\underbrace{a_{2}^{(1)},a_{2}^{(2)},\ldots,a_{2}^ {(L)}}_{\mathbf{a}_{2}},\ldots,\underbrace{a_{N_{f}}^{(1)},a_{N_{f}}^{(2)}, \ldots,a_{N_{f}}^{(L)}]}_{\mathbf{a}_{N_{f}}}. \tag{17}\] where \(\mathbf{A}\in\mathbb{C}^{(L\times N_{f})\times N_{t}}\) is the matrix containing the expansion coefficients and \(\boldsymbol{\Phi}\in\mathbb{C}^{M\times(L\times N_{f})}\) is a matrix which gathers all the SPOD modes arranged by frequency as in equation (16). The full matrix of expansion coefficients, constructed using all modes and frequencies, has dimensions \(L\times N_{f}\times N_{t}\). In practice, it is common to use only a portion \(L_{r}\) of the total number of modes, and (eventually) a portion \(N_{f_{r}}\) of the total number of frequencies, where we denote with \(\mathbf{\Phi}_{r}\) and \(\mathbf{A}_{r}\), the reduced number of SPOD modes and expansion coefficients, respectively. This reduction recasts the original high-dimensional data into a smaller SPOD latent space of dimension \(L_{r}\times N_{f_{r}}\). Once both SPOD modes and expansion coefficients are available, it is possible to reconstruct the original high-dimensional data as follows \[\tilde{\mathbf{Q}}=\mathbf{\Phi}_{r}\mathbf{A}_{r}, \tag{18}\] where \(\tilde{\mathbf{Q}}\) is an approximation of the original data \(\mathbf{Q}\), given the truncation imposed on the number of SPOD modes and frequencies. The storage of the expansion coefficients and SPOD modes required to reconstruct the high-dimensional data in equation (18), can potentially lead to significant savings in terms of memory storage. In addition, the ability to truncate number of SPOD modes and number of frequencies can be beneficial to just store the frequencies of interest (e.g., removing high-frequency noise from the data), and capture low-rank behaviour of the process under study [5; 6]. For instance, if we consider a problem constituted by 30,000 time snapshots, 37,000,000 spatial points, and 1 variables, we have: \(N_{t}\times M_{\text{space}}\times M_{\text{vars}}=30,000\times 37,000,000 \times 1\). To store this dataset in memory, we require approximately 8.88 TB in double floating-point precision, and 4.44 TB in single floating-point precision. However, if we, for instance, store the first 3 SPOD modes and 100 frequencies of interest, the amount of storage memory required is significantly lowered. To store the SPOD modes, we would need \(3\times 100\times 37,000,000\), that leads to 0.088 TB of memory in double floating-point precision, and 0.044 TB in single floating-point precision. To store the time coefficients, we would need \(3\times 100\times 30,000\), that leads to 0.000072 TB in double floating-point precision and 0.000036 TB in single floating-point precision. Hence, if we store 3 SPOD modes and 100 frequencies, the storage memory required is only 1% of the memory required to store the original dataset, and the data compression s achieved by having full control on what modes and frequencies are left out (if any). ## 3 Parallelization strategy The parallelization efforts have been mainly focused on the matrix operations required by the SPOD algorithm, and on the data Input/Output (I/O). 
We outline the parallelization of the algorithm first (section 3.1), and reserve section 3.2 for I/O, as it is a crucial aspect to achieve competitive scalability results.

### SPOD algorithm

The parallelization strategy for the SPOD algorithm uses a single-program multiple-data (SPMD) approach. It allows maintaining the structure of the code similar to that of the serial one, only introducing parallel communication and synchronization in a limited number of places. For the SPOD algorithm, this consists of decomposing the spatial dimension \(M_{\text{space}}\) (conveniently flattened) of the data matrix in equation (9). This is a practical and advantageous choice as it allows preserving all operations in time - in particular, the DFT - without needing expensive all-to-all communication patterns. Additionally, the number of spatial points \(M_{\text{space}}\) is usually much larger than the number of variables \(M_{\text{vars}}\), thus allowing for considerably higher parallelism. The decomposition of the space dimension allows a straightforward parallel implementation that only requires one single MPI collective reduction operation. Most of the MPI operations are implemented in an auxiliary utility module. This allows using MPI routines off-the-shelf, requiring minor modifications to the original (serial) code. Entering into more detail, and assuming that the data distribution is achieved right after I/O, the parallelization of the algorithm becomes trivial. Indeed, it consists of a simple parallel reduction operation for the inner product derived from the method of snapshots in equation (15). The MPI-based implementation uses the mpi4py package [17, 18]. In figure 1, we depict the parallel SPOD algorithm. In red, we represent the parallel operations required, while in blue are operations that remain unchanged compared to the serial code. The six steps reported in figure 1 are described in the following.

* Step 1) Distribute data: it consists of distributing the spatial dimension of each of the data blocks \(\mathbf{Q}^{(\ell)},\;\ell=1,\ldots,L\) across the MPI ranks available.
* Step 2) DFT (see equation (11)): it consists of performing the DFT along the time dimension. This operation remains unchanged as time has not been distributed.
* Steps 3, 4) Inner product and reduction (see equation (13)): the inner product operation involves contracting the spatial dimension, hence it requires a parallel reduction operation.
* Step 5) Eigen-decomposition (see equation (15)): this operation does not require any parallel handling as there is no manipulation of the distributed spatial dimension. Since the size of the blocks is typically small, eigen-decompositions are cheap to compute. Therefore, we perform this step redundantly in all MPI ranks. If this task ever becomes a computational bottleneck, it would be straightforward to replace it with an optimized implementation exploiting the available parallelism.
* Step 6) Parallel I/O: this operation involves writing the SPOD modes to disk. The I/O handling is described in the next section (3.2), and is key to the scalability and overall performance of our implementation.

### I/O handling

We dedicate a separate section to the I/O aspect of PySPOD as, in most practical scenarios, it plays a critical role in the performance of PySPOD overall. This is especially prominent in a strong scaling scenario such as that presented in section 5.2.
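Before turning to the I/O details, a minimal mpi4py sketch of steps 3 to 5 for a single frequency \(f_{k}\) is shown below; it assumes each rank already holds its spatial slice of \(\hat{\mathbf{Q}}_{k}\) and of the weights, and it is an illustration rather than the PySPOD internals.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Each rank holds its spatial slice of the Fourier realizations at frequency f_k:
# Q_hat_local has shape (M_local, L); W_local holds the local spatial weights.
M_local, L = 1000, 32
rng = np.random.default_rng(comm.Get_rank())
Q_hat_local = rng.standard_normal((M_local, L)) + 1j * rng.standard_normal((M_local, L))
W_local = np.ones(M_local)

# Step 3: local contribution to the L x L weighted inner product of equation (15).
S_local = Q_hat_local.conj().T @ (W_local[:, None] * Q_hat_local)

# Step 4: single collective reduction over the distributed spatial dimension.
S = np.empty_like(S_local)
comm.Allreduce(S_local, S, op=MPI.SUM)

# Step 5: small L x L eigenvalue problem, solved redundantly on every rank.
lam, Psi = np.linalg.eigh(S)
Phi_local = Q_hat_local @ Psi @ np.diag(1.0 / np.sqrt(np.abs(lam) + 1e-30))
```

Such a script would be launched with, e.g., `mpiexec -n 4 python spod_step.py`.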
We divide this section into background and related work (section 3.2.1), and the PySPOD I/O implementation (section 3.2.2). #### 3.2.1 Background and related work Since the early 1990s, I/O has been acknowledged as a significant contributor to parallel application performance [19]. However, relatively little attention was given to I/O compared to the compute and communication subsystems during that time [20]. Since then, the performance gap between the I/O throughput and the compute capability of supercomputers has only increased, hence exacerbating data movement challenges [21]. A 2012 report titled "Storage challenges at Los Alamos National Lab" noted that parallel file systems lack large parallel I/O, which would be both high bandwidth and resistant to I/O patterns without additional tuning [22]. Consequently, I/O systems remain relatively slow and complex, making it difficult for non-expert users to utilize them fully. Middleware libraries were introduced to improve usability and simplify parallel I/O. After all, as [21] noted, users need not be aware of their data's low-level layout and organization, which opens possibilities for specialized I/O middleware. MPI I/O enables simple N-1 I/O (\(N\) processes, 1 file) [23], and it has been an ongoing effort since 1999. It contains optimizations reducing the number of distinct I/O requests using techniques such as data sieving and collective I/O for noncontiguous accesses [24]. MPI I/O, however, is still low-level, as the developer needs to calculate the offsets within the file, while MPI aims to access the data efficiently. Higher-level libraries such as HDF5 [25], Parallel netCDF [26], ADIOS [21], and ADIOS 2 [27] have also been introduced to isolate developers from low-level file system details and alleviate the burden of platform-specific performance optimization. These libraries often build on top of MPI I/O and introduce custom file formats. Figure 1: Schematic of the parallel SPOD algorithm. The key aspect is to obtain an appropriate data decomposition layout that allows preserving all time operations as done in serial (i.e., the DFT), and decompose only the spatial dimensions of the data. Once the data is in the required parallel layout, the parallelization of the SPOD algorithm becomes trivial and consists only of a parallel reduction (step 4) of the inner product (step 3). Although robust solutions for parallel I/O are available, achieving good performance often requires application-specific and system-specific tuning. Researchers have dedicated significant effort to optimizing applications that utilize the popular Lustre file system. Tools such as Lustre IO Profiler [28] were introduced to monitor and understand the I/O activities on the file system. The authors of this tool highlight the importance of incorporating system-specific details, such as the number of Object Storage Targets (OSTs), into the MPI distribution used in order to improve the achieved I/O bandwidth. The authors of Y-Lib [29] also point out the low performance of MPI-IO on Lustre file systems and propose a solution that minimizes contention for file system resources by controlling the number of OSTs with which each aggregator process communicates. This solution is shown to outperform MPI-IO in many cases. Similarly, the authors of [30] demonstrate that Parallel netCDF with MPI-IO does improve the performance over netCDF; however, even with supercomputer-specific optimizations, the performance still disappoints once a large number of MPI ranks is used. 
The authors utilize asynchronous I/O (quilt) servers alongside Parallel netCDF to further improve the effective bandwidth. Other solutions explore tuning the middleware or file system parameters, such as those presented in [31; 32; 33; 34]. These approaches often use sophisticated techniques, such as genetic algorithms, to identify and tune parameters of the I/O stack (HDF5, MPI-IO, and Lustre/GPFS parameters) [33; 34]. The wealth of work on the topic highlights the difficulty of achieving good performance on current parallel file systems. Despite the availability of powerful solutions and tools to tune the performance, many developers still choose not to use them, even in leadership-scale systems [28]. An analysis conducted in 2015 using Darshan [35] found that, depending on the supercomputing facility, between 50 and 95% of monitored jobs used POSIX I/O exclusively. However, as [35] notes, relying on POSIX I/O does not have to result in poor application performance. In fact, in some cases, using a naive N-N (\(N\) processes, \(N\) files) approach, where each process reads from or writes to a unique file, can achieve better performance and scalability than N-1, as it does not incur overhead to maintain data consistency. This has been observed in PanFS, GPFS, and Lustre file systems. However, the N-N approach presents challenges related to usability, such as when restarting with a different number of processes, and high metadata costs at scale [22; 36].

#### 3.2.2 PySPOD I/O

In PySPOD, we take advantage of the N-N model's good performance but, like the authors of [36], slightly modify it to an N-M (generally M \(<\) N) mapping, where M is not to be confused with the italic \(M\) adopted in section 2.3. Such a setting, where N is the number of processes and M is the number of files, is better suited to our application characteristics. We use a simple two-phase I/O reminiscent of that proposed in [20] back in 1993. In particular, we first read the data from disk in a contiguous manner; afterwards, we re-distribute the data according to the parallel decomposition our application needs, as explained in section 3.1. Parallel data re-distribution can be performed either with collective communications or with point-to-point communications. In our case, we use non-blocking point-to-point MPI communications, as they result in higher reliability on the HPC systems that have been used in this work. In the following, we enumerate the reasons for our decision to adopt this two-phase I/O, and categorize these reasons into _application-specific_ and _performance-specific_.

_Application-specific_

* PySPOD is designed to analyze large datasets, which are often split over multiple files. One such example is the Climate Data Store (CDS), which provides rich climate datasets [37], and which has been used in the scalability analyses presented in sections 5.2 and 5.3. CDS, however, limits the amount of data that can be downloaded in a single request. The resulting datasets' size can be in the order of tens of terabytes split over several hundred files. Another example arises from computational fluid dynamics (CFD) simulators such as SSDC, Nektar++ and Charles [38, 39, 40]. Such software often writes output using one file per requested timestep. Considering the data size and PySPOD's competitive scalability performance, we decided against converting the data to a common format (such as HDF5), as the conversion could be more expensive than the analysis, and we implemented several native readers for different data formats.
* Using two-phase I/O allows reusing sequential readers. The design of keeping the I/O (first phase) separate from the data distribution (second phase) effectively makes the I/O phase into many concurrent but sequential I/O streams. This design makes it easy for a user or a developer to implement support for additional file formats in PySPOD. The programming burden is greatly reduced, as there are plenty of sequential reader modules in the Python Package Index which can be used with minimal effort. This design choice removes the burden of "thinking parallel" from a user/developer perspective, as the data distribution is performed in the second phase, and it does not depend on the file format. _Performance-specific_ * Using a two-phase I/O results in fewer and more contiguous requests to the storage system, which is preferred on a parallel file system. For illustration, most often, the data is split over multiple files, where each file represents a different range of timesteps, and the spatial coordinates follow the first dimension. In our design, each process reads contiguous data from at least one input file, i.e., all spatial dimensions for a subset of timesteps (or, more generally, the first dimension, i.e., time), which is good for performance. An alternative design, such as using MPI-IO to immediately read only a subset of spatial variables for all timesteps, may result in every process accessing every file, depending on the MPI implementation used. In fact, such a reader is also implemented in PySPOD; however, based on poor performance results obtained in early experimentation, we decided to extend and optimize the two-phase reader. * Separating I/O from MPI removes the risk of performance degradation due to an underoptimized MPI distribution. As discussed in section 3.2.1, MPI I/O's performance may depend heavily on the MPI distribution used and its knowledge of the underlying I/O system. By separating I/O and data distribution, we read the data from the disk efficiently and then explicitly distribute it with MPI. Since we use point-to-point communication, which is at the very core of MPI, it is much less likely that those functions will perform poorly. _Application-specific_ and _performance-specific_ * SPOD output is expected in the form of one output file per frequency and mode. However, because the data for each frequency is spread across all processes, this would lead to N-1 storage access for each file. This approach would be highly inefficient due to relatively small spatial dimensions. To address this issue, we use a two-phase process similar to the one used for reading files, allowing each file to be written by only one process. To address the memory capacity limitations of PySPOD, we implemented several precautions. One such measure was to read data in chunks, with each process reading approximately 256 MB of data (value determined empirically). This helps reducing memory overhead since processing each chunk requires twice the memory, as time-split data being sent can only be deallocated once it is received by the target process, and space-split data is being received simultaneously. Additionally, we used a dictionary of NumPy arrays instead of one large array. Such design allows us to deallocate processed data as its FFT transforms replace it in memory, therefore reducing peak memory usage. Specifically, we store the data for each chunk in three dictionary keys, with each NumPy array occupying approximately 85 MB of memory. 
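A highly simplified sketch of this two-phase pattern (contiguous reads over time, followed by an explicit redistribution over space) is given below; for brevity it uses the pickle-based collective `alltoall` instead of the non-blocking point-to-point exchanges adopted in the actual implementation, and random data stands in for a real sequential reader.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

M, N_t = 10_000, 128          # flattened spatial points and total number of snapshots

# Phase 1: contiguous read. Each rank "reads" a contiguous range of timesteps.
t0, t1 = rank * N_t // size, (rank + 1) * N_t // size
time_chunk = np.random.randn(M, t1 - t0)

# Phase 2: redistribute so that each rank ends up owning a contiguous slice of
# space for *all* timesteps, which is the layout the SPOD algorithm needs.
bounds = [r * M // size for r in range(size + 1)]
pieces = [time_chunk[bounds[r]:bounds[r + 1], :] for r in range(size)]
received = comm.alltoall(pieces)              # one piece exchanged with every peer
Q_local = np.concatenate(received, axis=1)    # (M_local, N_t), time ordered by sender rank

assert Q_local.shape == (bounds[rank + 1] - bounds[rank], N_t)
```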
Additionally, we implemented reading in such a way that one process first seeks through the metadata to identify which range of timesteps is available in which file. This information, broadcast with MPI, significantly reduces the overhead associated with N processes opening all M files. ## 4 Datasets The datasets adopted to test the parallel SPOD algorithm consists of fluid mechanics, and geophysical data. The former uses jet data produced by high-fidelity large-eddy simulation (LES), and is described in section 4.1. The latter uses fifth-generation reanalysis data (ERA5) produced by ECMWF [41], and is described in section 4.2, along with associated SPOD results. ### SPOD analysis of fluid mechanics datasets In our first example, we analyze the data generated by Yeung et al. [42] using the solver Charles [40], from LES of a supersonic twin-rectangular jet at a Reynolds number of \(\mathit{Re}\approx 10^{6}\) based on the jet exit conditions and the equivalent nozzle diameter. The simulation was previously validated by Bres et al. [43] against the companion experiments of Samimy et al. [44]. The time-resolved data consist of 20,000 snapshots of the 3D flow field, interpolated onto a Cartesian grid and saved at a time interval of \(\Delta t=0.2\). Each snapshot records five primitive variables: density (\(\rho\)), velocities (\(u,v,w\)), and temperature (\(T\)), in single precision, for a total of \(N_{x}\times N_{y}\times N_{z}\times N_{\text{var}}=625\times 270\times 344\times 5\) data points per snapshot. Storage of the database requires 18,392 GB on disk in HDF5 format, or 43,251 GB once loaded into memory. Figure 2(a) visualizes an instantaneous Q-criterion isosurface, showing the highly turbulent flow field of the high-Reynolds number jet. The colors represent pressure, \(p-p_{\infty}\). Alternating bands of red and blue in the region \(x\lesssim 5\) correspond to near-field coherent pressure fluctuations, which radiate sound outward and contribute to far-field noise. Figure 2(b) shows planar slices of the instantaneous density gradient magnitude, \(|\nabla\rho|\), i.e., an artificial schlieren. Shock cells can be observed inside the potential cores. Superimposed on the schlieren are contours of the mean streamwise velocity, \(\bar{u}\). Despite the chaotic instantaneous flow field, \(\bar{u}\) displays reflectional symmetries about the major and minor axes, \(y=0\) and \(z=0\), respectively. The mean flow thus recovers the geometrical symmetries of the twin-rectangular nozzles. The modal decomposition of axisymmetric jets is typically preceded by an azimuthal Fourier transform, which exploits the rotational invariance of the turbulent statistics. Without loss of generality, the transform reduces the analysis from a single 3D SPOD to one 2D SPOD per azimuthal wavenumber. The absence of azimuthal homogeneity in the twin-rectangular jet precludes such a simplification, thus necessitating a costly 3D analysis, to which PySPOD is ideally suited. To perform SPOD, we assemble the primitive variables into the state vector \(\mathbf{q}=\left[\rho,u,v,w,T\right]^{\mathrm{T}}\). 
Since the flow is compressible, we choose the weight matrix \[\mathbf{W}=\int_{z}\!\!\int_{y}\!\!\int_{x}\mathrm{diag}\!\left(\frac{\overline {T}}{\gamma\overline{\rho}M_{j}^{2}},\overline{\rho},\overline{\rho},\overline {\rho},\frac{\overline{\rho}}{\gamma(\gamma-1)\overline{T}M_{j}^{2}}\right) \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z\,, \tag{19}\] such that the inner product \(\langle\mathbf{q}_{1},\mathbf{q}_{2}\rangle=\mathbf{q}_{1}^{\mathrm{H}} \mathbf{W}\mathbf{q}_{2}\) induces the compressible energy norm [45]. Here, \(\gamma=1.4\) is the adiabatic index; \(M_{j}=1.5\) is the jet Mach number. We select a block size of \(N_{f}=256\), with \(50\%\) overlap, giving \(L=155\) blocks. The premultiplied SPOD eigenvalue spectra are reported in figure 3(a). The leading eigenvalues show a prominent peak at \(f\approx 0.2\), where \(f\) is nondimensionalized by the nozzle height and the ambient speed of sound. This peak corresponds to the fundamental screech tone. Screeching of the supersonic twin-rectangular jet stems from acoustic resonance between the nozzle and the shock cells, and has been observed both experimentally [46; 44] and numerically [43; 47; 42] to occur at a similar frequency. The peak at \(f\approx 0.2\) persists at least to the second eigenvalue. Large separations between the first, second, and third eigenvalues--termed low-rank behavior [48]--in the frequency range \(0.2\lesssim f\lesssim 0.3\) signal the presence of coherent structures arising from underlying physical instabilities, in this case the screech mechanism. Figure 2: Instantaneous flow field of the twin-rectangular jet: (a) Q-criterion isosurface, colored by pressure; (b) numerical schlieren on the \(y=0\) and \(z=-1.8\) planes, with contours of mean streamwise velocity on the \(x\in\{8,16\}\) planes. In figure 4, we study these structures by visualizing the pressure component, \(\mathbf{\phi}_{p}\), of the SPOD modes corresponding to the first four eigenvalues at \(f=0.21\). We recover \(\mathbf{\phi}_{p}\) from the density and temperature components of each mode, \(\mathbf{\phi}_{\rho}\) and \(\mathbf{\phi}_{T}\), respectively, using the linearized ideal gas equation, \[\mathbf{\phi}_{p}=\frac{1}{\gamma}\big{(}\mathbf{\phi}_{\rho}\overline{T}+\overline{\rho }\mathbf{\phi}_{T}\big{)}\,. \tag{20}\] Isovalues of \(\mathrm{Re}\{\mathbf{\phi}_{p}\}=\pm 0.0005\) are chosen to highlight the 3D structure of the far-field acoustic waves. Each row in figure 4 shows one mode. The left and right columns, 4(a,c,e,g) and 4(b,d,f,h), also display the cross-sectional views of the planes \(z=1.3\) and \(y=-0.25\), respectively. These provide insights into the wavepackets within the jet plume, which are well-known to be efficient sources of noise [49]. In figure 4(a,b), mode 1 recovers near-perfect antisymmetry about the major-axis plane, \(y=0\), and symmetry about the minor-axis plane, \(z=0\). In contrast, mode 2 in 4(c,d) is clearly antisymmetric about both planes. Mode 4 in 4(g,h), on the other hand, is symmetric about both planes. The symmetry of mode 3 is difficult to ascertain visually from 4(e,f). In particular, the strong in-phase (4(b,h)) and out-of-phase (4(d)) coupling between the twin jets observed in modes 1, 2, and 4 appear to be lost in mode 3 (4(f)). This is due to insufficient statistical convergence. The twin-rectangular nozzle configuration is geometrically invariant with respect to reflection about the major and minor axes. It thus belongs in the dihedral group \(D_{2}\). 
Such geometrical symmetries are broken by turbulence; however, as figures. 2(b) and 4 demonstrate, they can reappear in the statistics. To investigate the extent to which the leading SPOD modes at all frequencies exhibit symmetry or antisymmetry, we perform \(D_{2}\)-symmetry decomposition [42] of each mode into four components, \[\mathbf{\phi}=\mathbf{\phi}_{\mathrm{SS}}+\mathbf{\phi}_{\mathrm{SA}}+\mathbf{\phi}_{\mathrm{ AS}}+\mathbf{\phi}_{\mathrm{AA}}\,, \tag{21}\] where the subscripts denote symmetry (S) or antisymmetry (A) about the major (first letter) and minor (second letter) axes [50]. The importance of each symmetry can be quantified by projecting the mode onto its symmetry components. We define the normalized projection coefficient, \[\alpha=\left|\frac{\left\langle\mathbf{\phi},\mathbf{\phi}_{\mathrm{S/A}}\right\rangle }{\left\langle\mathbf{\phi},\mathbf{\phi}\right\rangle}\right|. \tag{22}\] Intuitively, \(\alpha\) measures the relative energy of each component as defined by the compressible energy norm in equation (19). In figure 3(b), we color the first four eigenvalues at each frequency by the symmetry component that contains the maximum relative energy. The percent opacity is set to \(\alpha\); in other words, 100% opacity indicates the SPOD mode perfectly recovers a particular symmetry. At \(f=0.21\), the colors indicate that modes one through four are dominated by the AS, AA, SA, and SS symmetries, respectively. These symmetries confirm our visual observations of the modes in figure 4. The dominant symmetry can vary between mode numbers, but also across frequencies. For instance, the leading mode switches from the AS symmetry at \(f\approx 0.2\) to the SA symmetry in the range \(0.03\lesssim f\lesssim 0.2\). The AS symmetry then re-emerges below \(f\approx 0.03\). Variations in dominant symmetries reflect the property that, in general, SPOD modes at distinct frequencies represent distinct physical structures [48]. The degree of symmetry recovery, \(\alpha\), falls off with increasing mode number as a consequence of incomplete convergence. On the other hand, the decay of \(\alpha\) with frequency in the range \(f\gtrsim 0.2\) suggests that higher-frequency modes, which have more limited spatial support, are less likely to establish shear-layer coupling. Lack of coupling in turn prevents modes from achieving symmetry. A comprehensive discussion of twin-rectangular jet dynamics exceeds the scope of this work. However, our findings illustrate some of the physical insights that can only be gleaned from a 3D modal analysis, which PySPOD vastly accelerates. Figure 3: Premultiplied SPOD eigenvalue spectra of the twin-rectangular jet. The spectra in (a) fade from black to white with increasing mode number. Only the first four eigenvalues at each frequency are displayed in (b), where they are colored by the dominant \(D_{2}\)-symmetry component of the respective SPOD modes, with higher opacity denoting greater symmetry recovery, \(\alpha\). Modes corresponding to the highlighted (\(\bullet\)) eigenvalues at \(f=0.21\) are reported in figure 4. Figure 4: SPOD modes of the twin-rectangular jet at frequency \(f=0.21\): (a,b) mode 1; (c,d) mode 2; (e,f) mode 3; (g,h) mode 4. Isosurfaces of \(\text{Re}\{\boldsymbol{\phi}_{p}\}=\pm 0.0005\) are shown, along with cross-sections at \(z=1.3\) (left column) and \(y=-0.25\) (right column). The corresponding SPOD eigenvalues are highlighted in figure 3. 
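The reflection-based decomposition of equation (21) and the projection coefficient of equation (22) can be sketched in a few lines of NumPy. This is a hedged illustration, not the analysis code used here: it assumes the mode is sampled on a grid symmetric about the \(y=0\) and \(z=0\) planes (so reflections reduce to array flips) and uses a unit weight in place of the full compressible-energy norm of equation (19).

```python
import numpy as np

def d2_components(phi):
    """Split phi(x, y, z) into SS, SA, AS, AA parts (first letter: major axis y=0)."""
    ry = np.flip(phi, axis=1)          # reflection about the major-axis plane (y -> -y)
    rz = np.flip(phi, axis=2)          # reflection about the minor-axis plane (z -> -z)
    ryz = np.flip(ry, axis=2)
    ss = 0.25 * (phi + ry + rz + ryz)
    sa = 0.25 * (phi + ry - rz - ryz)
    as_ = 0.25 * (phi - ry + rz - ryz)
    aa = 0.25 * (phi - ry - rz + ryz)
    return {"SS": ss, "SA": sa, "AS": as_, "AA": aa}

def alpha(phi, comp, w=None):
    """Normalized projection coefficient of equation (22); w is an optional weight field."""
    w = np.ones_like(phi.real) if w is None else w
    inner = lambda a, b: np.sum(np.conj(a) * w * b)
    return abs(inner(phi, comp) / inner(phi, phi))

rng = np.random.default_rng(0)
phi = rng.standard_normal((8, 16, 16)) + 1j * rng.standard_normal((8, 16, 16))
parts = d2_components(phi)
print({k: float(alpha(phi, v)) for k, v in parts.items()})   # the four alphas sum to ~1
```

Because the four components are orthogonal projections of the mode, their projection coefficients sum to one, which makes the dominant-symmetry coloring of figure 3(b) a well-defined partition of the mode's energy.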
### SPOD analysis of geophysical datasets In our second example, we use data from ERA5, a fifth-generation reanalysis dataset produced by the ECMWF that combines model data with global observations using the laws of physics to create a complete and consistent global climate and weather dataset. We obtained this data from the Climate Data Store (CDS) [37]. In particular, we consider the horizontal speed of air moving towards the east on 37 pressure levels (i.e., vertical levels) spanning from January 1940 to December 2022 [37]. This quantity is also referred to as the U component of the horizontal wind velocity, and its unit is meters per second. The dataset contains 727,584 time snapshots on a 3D grid of dimension \(1440\times 721\times 37\). The total size of this dataset is 51,745 GB, corresponding to 27.9 trillion data points and 103,490 GB in memory when using single-precision floating-point, as for this example, and 199,452 GB in double-precision. In figure 5, we depict the horizontal wind velocity for pressure levels 1 (top left), 12 (top right), 24 (bottom left), and 37 (bottom right). The SPOD algorithm uses 10-year data blocks, which corresponds to a block size of \(N_{f}=87,600\) time snapshots, resulting in a total of \(L=8\) blocks, with 0% overlap. The analysis of this dataset aims to capture the quasi-biennial oscillation (QBO), which has an approximate period of 2 to 2.5 years, as reported in [6]. This atmospheric oscillation is characterized by quasi-periodic reversals of the zonal-mean zonal winds in the equatorial stratosphere - see also [51]. The QBO has important implications for teleconnections, influencing weather patterns in the Northern Hemisphere and the tropics (including tropical precipitation) [52]. Its most distinctive feature is a latitudinal band of the U component of the wind velocity in the tropical region (\(\pm 20^{\circ}\) latitude). Figure 5: U component of the wind velocity for pressure (i.e., vertical) levels 1 (top left), 12 (top right), 24 (bottom left), and 37 (bottom right), at midnight (00:00) on the 1st of January 2010. Level 1 corresponds to a pressure of 1 millibar, level 12 to 125 millibars, level 24 to 600 millibars, and level 37 to 1000 millibars. Indeed, figure 6 shows the eigenvalue spectra for the U component of the wind velocity, where the leading eigenvalue shows a prominent peak at period \(T=912.5\) days. This peak corresponds to the QBO, and shows that this phenomenon exhibits low-rank behaviour, since the energy of the leading eigenvalue is remarkably separated from the other eigenvalues. Figure 7 shows the highly coherent three-dimensional structure of the leading mode. This manifests as a latitudinal band of the U component of the wind velocity in the tropical region, which is the signature of the QBO. The results are consistent with those in [6], although the data adopted there was ERA-20C [53], which has significantly coarser spatial and temporal resolution, but longer coverage (from 1900 to 2010). In our results, we also observe several high-frequency peaks in the eigenvalues (periods shorter than 1 day) thanks to the improved temporal resolution of our dataset. Figure 6: Eigenvalue spectra vs. period (in days). The pink vertical line denotes the peak associated with the QBO, whose associated mode is depicted in figure 7. We can also notice other high-frequency peaks, related to yearly, sub-yearly, daily and sub-daily patterns. 
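A hedged back-of-the-envelope check of where the QBO peak falls in the discrete spectrum: with hourly snapshots and 10-year blocks (\(N_{f}=87{,}600\)), frequency bin \(k\) corresponds to a period of \(N_{f}\,\Delta t/k\), and the reported peak at 912.5 days is exactly the \(k=4\) bin. The values below are the dataset parameters quoted above, not additional results.

```python
# Map discrete SPOD frequency bins to periods for hourly data and 10-year blocks.
N_f = 87_600          # snapshots per block (10 years of hourly data)
dt_hours = 1.0        # ERA5 hourly sampling
periods_days = [N_f * dt_hours / k / 24.0 for k in range(1, 6)]
print(periods_days)   # [3650.0, 1825.0, 1216.67, 912.5, 730.0] -> k = 4 gives 912.5 days
```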
As for the example presented in section 4.1, a comprehensive discussion of QBO dynamics is outside the scope of this work. Yet, we remark that the analysis outlined in this section would have been extremely challenging without the parallel (distributed) implementation presented in this work. Figure 7: Real part of the leading three-dimensional SPOD mode for the U component of the wind velocity; period of 912.5 days. ## 5 Scalability The scalability results were obtained on the geophysical dataset introduced and analyzed in section 4.2. Here, we discuss the scalability setup (section 5.1), strong (section 5.2) and weak (section 5.3) scalability studies conducted using that dataset. ### Scalability setup To test the scalability of the PySPOD library, we used the Shaheen II supercomputer, a Cray XC40 system hosted by King Abdullah University of Science and Technology (KAUST) [54]. Each of Shaheen's 6,174 nodes features two Intel Haswell (Xeon E5-2698v3) CPUs with 16 cores each and 128 GB of memory. The nodes are connected via a Cray Aries interconnect with Dragonfly topology. While the nodes were allocated in exclusive mode by the scheduler, the network was shared with other users, as is typical in a production environment. To alleviate the memory capacity bottleneck and maximize the I/O bandwidth, we run PySPOD with 4 processes per node. For storage, we utilized the Cray Sonexion® 2000 Storage System, which offers over 16 PB of usable capacity using 72 Scalable Storage Units (SSUs), 144 Object Storage Services (OSSs) and 144 Object Storage Targets (OSTs). The system features 5,988 disks of 4 TB each. The theoretical performance of this storage exceeds 500 GB/s [55], and published application benchmarks have shown that applications such as WRF can achieve a bandwidth of 25-35 GB/s, including communication, or 65 GB/s of raw I/O performance measured on aggregators when using 144 OSTs and 4 MPI ranks per node. Similar results have been obtained with NPB benchmarks [56]. We stored both datasets on the Lustre storage system described in section 5.1. Since many processes may need to access each file and the datasets are large, we used a stripe count of 144. However, we did not use striping for output to limit each process to only communicate with one Object Storage Target (OST) and reduce contention, given that each process writes a single file. In both cases, we used the default stripe size of 1 MB. We report the following timings for PySPOD: time for I/O (combined reading and writing of 5 leading modes to disk), computation of the DFT (step 2 in figure 1), computation of the inner product and associated matrix \(\hat{\mathbf{M}}\) (step 3 in figure 1), eigenvalue decomposition (step 5 in figure 1), and computation of the SPOD modes (just prior to step 6 in figure 1). Given the variability of performance on Dragonfly networks [57] and shared file systems [58], we report arithmetic averages based on 5 repetitions. The main development branch of PySPOD (commit TODO:6) was utilized, along with the following packages: mpi4py 3.1.4, netCDF4 1.6.3, NumPy 1.24.2, SciPy 1.10.1, and Xarray 2023.2.0. Cray-provided modules for Python 3.10.1 and Cray MPICH 7.7.18 were used. ### Strong scalability To perform the strong scalability analysis, we used a 2D version of the dataset presented in section 4.2, corresponding to the 10 hPa pressure level [59]. This dataset contains 727,584 time snapshots on a 1440\(\times\)721 grid and is 1,407 GB in size when stored on the disk in netCDF format. 
The data set contains a total of 755.4 billion data points in single-precision floating point, which amounts to 5,628 GB in memory once stored in double-precision floating-point for this study. As we scale from 256 up to 8,192 processes (64 to 2,048 nodes), this corresponds to between 21.99 GB and 0.68 GB, or 4,055 and 126 spatial points over time per process. This provides a broad range of scenarios. In all cases, all the temporal data is split into 1-year (8,760 snapshots) blocks. As shown in figure 8, the outcomes differ depending on the task that is being performed (different curve colors in figure 8). In particular, we achieved a satisfactory speedup for I/O as the number of processes increased (teal curve), with the peak average read bandwidth of 73.76 GB/s (4,096 processes) and the maximum individual measured bandwidth of 88.04 GB/s. Even though the read speed peaks at 4,096 processes, the combined I/O time (reading and writing) is minimally lower when using 2,048 processes. The reported timings include communication (i.e., the second phase of the two-phase I/O) and should be compared to the 25-35 GB/s figure quoted in section 5.1. We consider the results extremely competitive on this hardware. The FFT calculation (violet curve), the second most expensive part of the SPOD algorithm, exhibited surprising behavior. The speedup is measured at 50\(\times\) when the number of processes is increased by 32\(\times\) (from 256 to 8,192 processes). This phenomenon has been previously observed on the same supercomputer [60] and can be explained by cache effects, frequency throttling, and possibly problem size-dependent NumPy optimizations. The computation of matrix \(\hat{\mathbf{M}}\) (brown curve), a collective communication-heavy routine, stops scaling past 2,048 cores; however, at this point, it only takes around 7 seconds. Similarly, the eigenvalue computation (orange curve), which is problem size-dependent and not parallelized (see rationale in section 3.1), is represented by a horizontal line, i.e., its cost depends on the dimensions of the data and not the number of processes. On the other hand, the calculation of SPOD modes (green curve) scaled well despite its cost being measured in single seconds. In terms of overall efficiency, we calculate it to be 74% when using 2,048 cores and 38% or lower once increasing to 4,096 cores and beyond. Even though the efficiency is only 15% when using 8,192 processes, we note that in this scenario, the total runtime is only around 127 seconds. The shortest runtime was achieved when using 4,096 processes and was around 98 seconds, 4 seconds faster than when using 2,048 processes. Based on those results and to use the computational resources responsibly, we suggest using at least 1 GB of data per process in practical applications. Figure 8: Strong scalability of PySPOD using horizontal speed of air moving towards the east data from January 1940 to December 2022, using from 256 to 8,192 processes. Dashed lines represent ideal scalability for each component. ### Weak scalability For the weak scalability analysis, we used the ERA5 dataset containing the horizontal U component of wind speed on 37 pressure levels described in section 4.2. As mentioned there, the total size of this data set is 51,745 GB, and we used between 10 and 80 years of data, corresponding to between 3.4 and 26.8 trillion data points stored using 6,273 GB to 49,863 GB of disk space. 
The corresponding memory required in double-precision floating-point, as adopted in this study, ranged from 25,092 GB to 199,452 GB. To maintain a constant load per process, we use 10 years of data per 2,048 processes (512 nodes) and scale up to 16,384 processes (4,096 nodes), where we use 80 years of the horizontal U component of wind speed data. Similarly to the strong scalability analysis, we split the data temporally into 1-year blocks. As shown in figure 9, we observe satisfactory I/O behavior (teal curve). The bandwidth peaks at 80.41 GB/s when using 12,288 cores. DFT (violet curve) and modes (green curve) computation also remain efficient as more data and processes are used. However, the computation of matrix \(\hat{\mathbf{M}}\) shows worse scaling and its timing displays significant variation in repeated executions (brown curve). In an environment with shared interconnect, this result points to the increased collective communication cost at scale. Additionally, the eigenvalue computation (orange curve), a serial component, becomes more expensive as the problem size grows. Overall, the efficiency is 64% when using 6,144 cores. Despite the decrease in efficiency to 38% when using 16,384 cores, the overall runtime for this scenario is under 20 minutes, which we find acceptable given the sheer size of the dataset (49,863 GB on disk and 199,452 GB in memory). Figure 9: Weak scalability of PySPOD using the hourly horizontal speed of air data on 37 pressure levels. 10 years of data per 2,000 processes, between January 1940 and December 2020 (when using 16,384 processes). ## 6 Discussion and conclusions The new parallel SPOD algorithm allows modal decompositions that were extremely challenging if not impossible with the serial algorithms available. We were able to compute SPOD decompositions of up to 199 TB on HPC platforms, exploiting the scalability and performance of the parallel algorithm. In particular, the key novel aspect is the I/O handling dictated by the smart data layout that was devised and implemented. This allowed preserving all operations in time (more specifically the DFT) and trivially distributing the spatial component of the data across MPI ranks. The results reported in section 4 show the power of the package in providing results on big data, enabled by the scalability performance shown in section 5. The latter was possible thanks to an efficient implementation of I/O, which also allowed for a reduction in memory consumption. The new package may allow unlocking new physics from big data that could not be analyzed before. ## Acknowledgements Marcin Rogowski, Lisandro Dalcin, and Matteo Parsani were supported by King Abdullah University of Science and Technology (KAUST). Brandon C. Y. Yeung and Oliver T. Schmidt gratefully acknowledge support from Office of Naval Research award N00014-23-1-2457, under the supervision of Dr. Steve Martens. Romit Maulik was supported by U.S. DOE ASCR Award Data-intensive Scientific Machine Learning: DE-FOA-2493. Gianmarco Mengaldo was supported by MOE Tier 2 grant 22-5191-A0001-0. The authors are thankful to the KAUST Supercomputing Laboratory for their computing resources. LES calculations were carried out on the "Onyx" Cray XC40/50 system in ERDC DSRC, using allocations provided by DoD HPCMP.
2306.17424
Audio Embeddings as Teachers for Music Classification
Music classification has been one of the most popular tasks in the field of music information retrieval. With the development of deep learning models, the last decade has seen impressive improvements in a wide range of classification tasks. However, the increasing model complexity makes both training and inference computationally expensive. In this paper, we integrate the ideas of transfer learning and feature-based knowledge distillation and systematically investigate using pre-trained audio embeddings as teachers to guide the training of low-complexity student networks. By regularizing the feature space of the student networks with the pre-trained embeddings, the knowledge in the teacher embeddings can be transferred to the students. We use various pre-trained audio embeddings and test the effectiveness of the method on the tasks of musical instrument classification and music auto-tagging. Results show that our method significantly improves the results in comparison to the identical model trained without the teacher's knowledge. This technique can also be combined with classical knowledge distillation approaches to further improve the model's performance.
Yiwei Ding, Alexander Lerch
2023-06-30T06:38:33Z
http://arxiv.org/abs/2306.17424v1
# Audio Embeddings as Teachers for Music Classification ###### Abstract Music classification has been one of the most popular tasks in the field of music information retrieval. With the development of deep learning models, the last decade has seen impressive improvements in a wide range of classification tasks. However, the increasing model complexity makes both training and inference computationally expensive. In this paper, we integrate the ideas of transfer learning and feature-based knowledge distillation and systematically investigate using pre-trained audio embeddings as teachers to guide the training of low-complexity student networks. By regularizing the feature space of the student networks with the pre-trained embeddings, the knowledge in the teacher embeddings can be transferred to the students. We use various pre-trained audio embeddings and test the effectiveness of the method on the tasks of musical instrument classification and music auto-tagging. Results show that our method significantly improves the results in comparison to the identical model trained without the teacher's knowledge. This technique can also be combined with classical knowledge distillation approaches to further improve the model's performance. M Music Informatics Group Georgia Institute of Technology [email protected] &Alexander Lerch Music Informatics Group Georgia Institute of Technology [email protected] ## 1 Introduction The classification of music has always been a widely popular task in the field of Music Information Retrieval (MIR). Music classification serves as an umbrella term for a variety of tasks, including music genre classification [1], musical instrument classification [2], and music auto-tagging [3]. The last decade has seen dramatic improvements in a wide range of such music classification tasks due to the increasing use of artificial neural networks [4, 5, 6, 7]. One major contributing factor to these impressive accomplishments is the increased algorithmic complexity of the machine learning models which also means that the training process requires an increased amount of data. As not all tasks have this abundance of annotated data, transfer learning has been widely and successfully applied to various music classification tasks [8]. In transfer learning, a model is first pre-trained on a large-scale dataset for a (source) task that is somewhat related to the (target) task and then fine-tuned with a comparably smaller dataset of the target task [9]. This enables knowledge to be transferred across datasets and tasks. Transfer learning has been repeatedly shown to result in state-of-the-art performance for a multitude of MIR tasks [10, 11, 12]. Another side effect of the increasing model complexity is the slow inference speed. One way to address this issue is model compression by means of knowledge distillation. Here, a low-complexity (student) model is trained while leveraging the knowledge in the high-complexity (teacher) model [13, 14]. The teacher-student paradigm has met with considerable success in reducing the model complexity while minimizing performance decay [15, 16]. In this study, we integrate ideas and approaches from both transfer learning and knowledge distillation and apply them to the training of low-complexity networks to show the effectiveness of knowledge transfer for music classification tasks. More specifically, we utilize pre-trained audio embeddings as teachers to regularize the feature space of low-complexity student networks during the training process. 
Thus, the main contributions of this paper are a systematic study of * the effectiveness of various audio embeddings as teachers for knowledge transfer, * different ways to apply the knowledge transfer from teachers to students, and * the impact of data availability on the performance of the investigated systems. The models and experiments are publicly available as open-source code. 1 Footnote 1: [https://github.com/suncerock/EAST-music-classification](https://github.com/suncerock/EAST-music-classification). Last accessed on June 21, 2023 ## 2 Related Work This section first briefly introduces transfer learning and knowledge distillation, which are both often used to transfer knowledge between tasks and models, respectively, and then surveys the application of feature space regularization in the training of neural networks. ### Transfer Learning In transfer learning approaches, a model is pre-trained on a source task with a large dataset and subsequently fine-tuned on a (different but related) target task with a (typically smaller) dataset [9]. By utilizing the knowledge learned from the source task, models trained following the transfer learning paradigm can often achieve significantly better results than the same models trained directly on the target task [17]; this is especially the case if these models have a large number of parameters and the training data for the target task is limited. In the case where fine-tuning the whole model might be too computationally expensive, another way to do transfer learning is to use the pre-trained embeddings and train only the classification head. This allows for a separation of the tasks of computing the embeddings and the classification itself. Transfer learning has been successfully applied to a wide variety of areas ranging from computer vision [18, 19] to natural language processing [20]. In MIR, transfer learning has been used for a multitude of target tasks [8, 10, 21, 11]. Besides fine-tuning the whole model, pre-trained embeddings such as VGGish [22] and Jukebox [23] have also shown good performance on many tasks including auto-tagging [12, 24], instrument classification [4, 12], and music emotion recognition [24, 25, 26, 12]. One disadvantage of transfer learning is the slow inference speed. In most cases, the model has a large number of parameters, which means that both fine-tuning (if done on the whole model) and inference potentially lead to a high computational workload. ### Knowledge Distillation Approaches for knowledge distillation aim at model compression, i.e., reducing the complexity of the network. The knowledge of a (usually high-complexity) pre-trained network (the teacher) is transferred to a different (low-complexity) network (the student) during the training phase, in which the student not only learns from the ground truth labels but also from the teacher predictions. This is achieved by adding a "distillation loss" term to the student's loss function to learn from the teacher's prediction [13, 14]. The most popular distillation loss is the Kullback-Leibler divergence between the logits of the student and the teacher, with a hyperparameter called temperature to soften the probability distribution of the teacher's prediction over classes [13]. The soft target provides more "dark" knowledge than the ground truth hard label [27, 28]. The Pearson correlation coefficient has also been proposed as a distance measure between the logits as an alternative to the Kullback-Leibler divergence [29]. 
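A hedged sketch of the classical logit-based distillation loss [13] described above: the student is trained to match the teacher's temperature-softened class distribution. It is shown for a single-label (softmax) head for simplicity; a multi-label tagging head would instead use sigmoids and binary cross-entropy on the soft targets. The temperature value and shapes are illustrative only.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

student_logits = torch.randn(8, 20)   # batch of 8, 20 classes (toy values)
teacher_logits = torch.randn(8, 20)
print(distillation_loss(student_logits, teacher_logits))
```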
Besides learning from logits, the student network can also try to learn from the feature map from the intermediate layers of the teacher network [30, 31, 32]. As the feature maps of the student and teacher do not necessarily share the same dimension and the same size, a variety of ways to match the feature space of the student and the teacher have been proposed [33, 34, 31]. Therefore, feature-based knowledge distillation has more flexibility than the logits-based traditional approach, which, at the same time, also makes it more challenging to find the best way of matching the feature space [35, 36]. ### Feature Space Regularization Feature-based knowledge distillation is a technique of regularizing the feature space of the network during training. Besides knowledge distillation, there exists a wide variety of other ways to implement regularization. One example is contrastive learning, which aims at contrasting the features of instances with positive labels against negative labels [37, 38]. Contrastive learning has been shown to improve the performance of neural networks on music auto-tagging [39, 40] and music performance assessment [41]. Regularizing the feature space using pre-trained audio embeddings has also been reported to be effective in music classification [42] and music source separation [43], where Hung and Lerch proposed to use pre-trained embeddings to help structure the latent space during training. This technique is similar to but different from both transfer learning and knowledge distillation. In transfer learning, the same model is used on two different datasets, and a typical setting is that knowledge from the large dataset will be transferred to the small dataset. In knowledge distillation, only one dataset is used and the typical setting is that the knowledge will be transferred from a large model to a small model. In comparison, regularizing the feature space using embeddings requires neither the dataset nor the model to be the same, yet still allows to transfer knowledge learned by the teacher model from a large dataset to the low-complexity student network for a different (small) dataset. ## 3 Methods Inspired by the promising preliminary results of prior work [42], we integrate the idea of transfer learning and knowledge distillation by using pre-trained audio embeddings as teachers to regularize the feature space of the student network during training. The overall pipeline is illustrated in Figure 1. ### Loss Function Similar to knowledge distillation [13], we rewrite our loss function as \[\mathcal{L}=(1-\lambda)\mathcal{L}_{\mathrm{pred}}+\lambda\mathcal{L}_{\mathrm{ reg}} \tag{1}\] where \(\mathcal{L}_{\mathrm{pred}}\) is the loss function for conventional neural network training, \(\mathcal{L}_{\mathrm{reg}}\) is the loss function that measures the distance between the student network's feature map and the pre-trained embeddings, and \(\lambda\in[0,1]\) is a weighting hyper-parameter. ### Regularization Location Different stages in a neural network output different feature maps, and the optimal location to apply regularization continues to be controversially discussed in feature-based knowledge distillation [36]. In this study, we investigate either regularizing only the final feature map before the classification head as shown in Figure 1 or regularizing the feature maps at all stages of the student network. 
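The training objective of equation (1) can be written compactly as a weighted sum of the prediction loss and the feature-space regularization term. The sketch below is a hedged illustration, not the repository's code: `feature_distance` stands for any of the measures introduced in section 3.4, and the value of `lam` is a placeholder for the tuned hyper-parameter \(\lambda\).

```python
import torch
import torch.nn.functional as F

def total_loss(logits, labels, student_feat, teacher_emb, feature_distance, lam=0.5):
    """Weighted sum of prediction loss and feature-space regularization (equation (1))."""
    pred_loss = F.binary_cross_entropy_with_logits(logits, labels)  # multi-label head
    reg_loss = feature_distance(student_feat, teacher_emb)          # any measure of sec. 3.4
    return (1.0 - lam) * pred_loss + lam * reg_loss
```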
### Feature Alignment To measure the distance between the student feature map \(l\in\mathbb{R}^{T_{\pi}\times C_{\pi}}\) and the pre-trained teacher embeddings \(v\in\mathbb{R}^{T_{\pi}\times C_{\pi}}\) which might have different numbers of time frames (i.e., \(T_{\pi}\neq T_{\mathrm{t}}\)), we first align the intermediate feature map with the pre-trained embeddings in time by repeating the one with fewer time frames, then compute the distance for each frame and finally average them along the time axis. ### Distance Measure Considering that pre-trained embeddings and feature maps have often different dimensionalities, the use of distance measures that are independent of dimensionality allows for easier application. #### 3.4.1 Cosine Distance Difference Cosine distance difference 2 as proposed in previous work [42, 43] measures the difference in the cosine distance between pairs of samples. Given \(n\) pairs of samples of single-time-frame features \(l_{1},l_{2},...,l_{n}\) and pre-trained embeddings \(v_{1},v_{2},...,v_{n}\), the cosine distance difference for one pair is Footnote 2: has been referred to in previous work as Distance-based Regularization (Dis-Reg) [42, 43]. \[D_{ij}=|d_{\mathrm{cos}}(l_{i},l_{j})-d_{\mathrm{cos}}(v_{i},v_{j})|, \tag{2}\] and the distance for this time frame is averaged among all pairs. #### 3.4.2 Distance Correlation Distance correlation was proposed as a generalization of classical correlation to measure the independence between two random vectors in arbitrary dimensions [44]. It is capable of handling features of different dimensionality; furthermore, correlation-based distance measures have been shown to be effective in knowledge distillation [29, 32]. Using the same notation as above, we define \[a_{ij} =\|l_{i}-l_{j}\|, \tag{3}\] \[\bar{a}_{l.} =\frac{1}{n}\sum\limits_{j=1}^{n}a_{ij},\quad\bar{a}_{.j}=\frac{1 }{n}\sum\limits_{i=1}^{n}a_{ij},\quad\bar{a}_{..}=\frac{1}{n^{2}}\sum\limits_{ i,j=1}a_{ij}\] (4) \[A_{ij} =a_{ij}-\bar{a}_{i.}-\bar{a}_{.j}+\bar{a}_{..} \tag{5}\] where \(i,j\in\{1,2,...,n\}\), and similarly, \(b_{ij}=\|v_{i}-v_{j}\|\) and \(B_{ij}=b_{ij}-\bar{b}_{i.}-\bar{b}_{.j}+\bar{b}_{..}\)3 The distance for the time frame is then Footnote 3: Eq. (3) uses 2-norm following the implementation in [https://github.com/zhenxingjian/Partial_Distance_Correlation](https://github.com/zhenxingjian/Partial_Distance_Correlation). \[\mathcal{L}_{\mathrm{reg}}=1-\mathcal{R}_{n}^{2}(l,v)=1-\frac{\mathcal{V}_{n} ^{2}(l,v)}{\sqrt{\mathcal{V}_{n}^{2}(l,l)\mathcal{V}_{n}^{2}(v,v)}} \tag{6}\] where \[\mathcal{V}_{n}^{2}(l,l) =\frac{1}{n^{2}}\sum\limits_{i,j=1}^{n}A_{ij}^{2},\quad\mathcal{ V}_{n}^{2}(v,v)=\frac{1}{n^{2}}\sum\limits_{i,j=1}^{n}B_{ij}^{2},\] \[\mathcal{V}_{n}^{2}(l,v)=\frac{1}{n^{2}}\sum\limits_{i,j=1}^{n}A_ {ij}B_{ij}.\] Note that \(\mathcal{V}_{n}^{2}(l,l)\) and \(\mathcal{V}_{n}^{2}(v,v)\) will be \(0\) if and only if all the \(n\) samples of features (or embeddings) within one batch are identical [44], which we assume not to occur here. To optimize both distance measures during training, block stochastic gradient iteration is used, which means that the distance is computed over mini-batches instead of the whole dataset [45, 46]. With stochastic approximation, the computational complexity of the distance measure for \(n\) samples is reduced from \(\mathcal{O}(n^{2})\) to \(\mathcal{O}(mn)\) where \(m\) is the batch size. 
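The distance-correlation regularizer of equations (3)-(6) can be implemented over a mini-batch in a few lines. The sketch below is a hedged illustration under the stated mini-batch approximation: it takes a batch of single-time-frame student features and teacher embeddings whose channel dimensions may differ, and returns \(1-\mathcal{R}_{n}^{2}\); the small `eps` term is an assumption added purely for numerical safety.

```python
import torch

def distance_correlation_loss(l, v, eps=1e-12):
    """1 - squared distance correlation between student features l and embeddings v."""
    def centered_dist(x):
        d = torch.cdist(x, x)                               # pairwise 2-norm distances
        return d - d.mean(0, keepdim=True) - d.mean(1, keepdim=True) + d.mean()
    A, B = centered_dist(l), centered_dist(v)
    dcov2_lv = (A * B).mean()
    dcov2_ll = (A * A).mean()
    dcov2_vv = (B * B).mean()
    R2 = dcov2_lv / torch.sqrt(dcov2_ll * dcov2_vv + eps)
    return 1.0 - R2

l = torch.randn(16, 128)    # student features for one time frame
v = torch.randn(16, 768)    # pre-trained embeddings (e.g., PaSST-sized)
print(distance_correlation_loss(l, v))
```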
It is worth mentioning that both distance measures ensure that if the distance is zero, the feature maps would differ from the pre-trained embeddings by only an orthogonal linear transformation, which can be modeled in a single linear layer. Therefore, if the regularization loss is zero, the student would have the same performance as the teacher in classification. Figure 1: Overall pipeline of training a model by using pre-trained embeddings as teachers. The training loss is a weighted sum (weighting factor omitted in the figure) of prediction loss and regularization loss. The regularization loss measures the distance between pre-trained embedding and the output feature map after the feature alignment. During inference, only the bottom part with the blue background is used. ## 4 Experimental Setup We test the effectiveness of using pre-trained embeddings as teachers on two different tasks, datasets, and models with four different pre-trained embeddings as follows. ### Tasks, Datasets, Models, and Metrics #### 4.1.1 Musical Instrument Classification with OpenMIC Musical instrument classification is a multi-label classification problem. We use the OpenMIC dataset [2], which provides weakly labeled audio snippets of length 10 s. Following prior work [4, 49], we use the suggested test set and randomly split 15% of the training data as the validation set, resulting in 12,692 training observations, 2,223 validation observations, and 5,085 test observations. To ensure a consistent sample rate, the audio is resampled to 32 kHz [5, 49]. As the dataset is not completely labeled, i.e., parts of the labels are missing and not labeled as positive or negative, the missing labels are masked out when computing the loss function as suggested in previous work [5, 10, 49]. We use receptive field regularized ResNet (CP-ResNet) [5] for this task, as it reaches state-of-the-art performance when trained only on the OpenMIC dataset (i.e., neither trained with transfer learning nor trained with any knowledge distillation). CP-ResNet has a ResNet-like structure [19] with an added hyper-parameter \(\rho\) to control the maximum receptive field of the ResNet. We set \(\rho=7\) to match the setting which provides the best results in the original work [5]. The results are reported with the metrics mean Average Precision (mAP) and F1-score. The F1-score is calculated in a macro fashion, which means that for each instrument, the F1-score is computed for both the positive labels and the negative labels and then averaged, and the final F1-score is the mean of the F1-scores of all instruments. The detection threshold for the prediction is set to 0.4 following previous work [5]. #### 4.1.2 Music Auto-Tagging with MagnaTagATune Similar to musical instrument classification, music auto-tagging is also a multi-label classification problem. We use the MagnaTagATune dataset [3] for this task, which comes with audio clips of approximately 29.1 s. Following previous work, we use only the top 50 labels and exclude all the songs without any positive label from the dataset [7, 50]. For comparability, the data split is adopted from previous work, with audio files in the directories '0' to 'b' being the training set, 'c' being the validation set, and 'e' and 'f' being the test set [48, 51], resulting in 15,247 training clips, 1,529 validation clips, and 4,332 test clips. We apply a modified fully convolutional neural network (FCN) [6] to this task. 
It is the simplest model among the benchmark models for the MagnaTagATune dataset [48] and consists of several convolution and max-pooling layers. To further reduce the complexity of the model, we apply the MobileNet-like modification [52] to the network by breaking the \(3\times 3\) convolutions into depth-wise separable convolutions and \(1\times 1\) convolutions. The results are evaluated with mAP and ROC-AUC. ### Pre-trained Embeddings #### 4.2.1 VGGish VGGish [22] is a widely used embedding in MIR, with a VGG network [53] being trained on a large number of Youtube videos. The open-source PyTorch implementation is used to extract VGGish features 4 which by default extracts 128 principle components and then quantizes them to 8 bit. The time resolution is 960 ms. Footnote 4: [https://github.com/harritaylor/torchvggish](https://github.com/harritaylor/torchvggish). Last accessed on April 4, 2023. #### 4.2.2 OpenL3 The OpenL3 embedding [54, 55] is trained on a music subset of AudioSet [56] in a self-supervised paradigm. The audio embeddings are extracted using the open-source Python package OpenL3 5 with the dimensionality being 512. To keep consistent with VGGish, the time resolution is set to 960 ms. Footnote 5: [https://github.com/marrl/open13/tree/main](https://github.com/marrl/open13/tree/main). Last accessed on April 4, 2023. Footnote 6: [https://github.com/kkoutini/PaSST/tree/main](https://github.com/kkoutini/PaSST/tree/main). Last accessed on April 4, 2023. #### 4.2.3 PaSST PaSST [10] is a 7-layer transformer trained on AudioSet for acoustic event detection. It applies the structure of a vision transformer [16, 57] and proposes the technique of Patchout to make the training efficient. We use the open-source code 6 released by the authors to extract the 768-dimensional embeddings. The time resolution is also set to 960 ms. Footnote 7: [https://github.com/qiugiangkong/audioset_tagging_cnn](https://github.com/qiugiangkong/audioset_tagging_cnn). Last accessed on April 4, 2023. #### 4.2.4 PANNs PANNs [11] include several convolutional neural networks and are also trained on AudioSet for acoustic event detection. We use the default CNN14 model from the official repository 8. The embedding dimensionality is 2048. Different from other embeddings, PANNs provide only one global embedding for each clip of audio. Pilot experiments have shown that extracting the embeddings for short segments and concatenating them does not improve performance. Footnote 8: [https://github.com/harritaylor/torchvggish](https://github.com/harritaylor/torchvggish). Last accessed on April 4, 2023. ### Systems Overview The following systems are evaluated for comparison: * Baseline: CP ResNet (on OpenMIC) and Mobile FCN (on MagnaTagATune) trained without any extra regularization loss. * Teacher\({}_{\text{LR}}\): logistic regression on the pre-trained embeddings (averaged along the time axis), which can be seen as one way to do transfer learning by freezing the whole model except for the classification head. * KD: classical knowledge distillation where the soft targets are generated by the logistic regression. * EAsT\({}_{\text{Cos-Diff}}\) (for Embeddings-As-Teachers): feature space regularization as proposed by Hung and Lerch that uses cosine distance difference and regularizes only the final feature map [42]. * EAsT\({}_{\text{Final}}\) and EAsT\({}_{\text{All}}\): proposed systems based on distance correlation as the distance measure, either regularizing only at the final stage or at all stages, respectively. 
* EAsT\({}_{\text{KD}}\): a combination of classical knowledge distillation and our method of using embeddings to regularize the feature space. The feature space regularization is done only at the final stage. We perform a search of \(\lambda\) for each of the EasT systems and choose the best-performing value on the validation set. 8 Footnote 8: For all the hyperparameters, please refer to the config files in our GitHub. ## 5 Results This section presents the results of different systems and their performance in the case of limited training data. ### Results on OpenMIC and MagnaTagATune Table 1 shows the results on the OpenMIC and the MagnaTagATune datasets. We can observe that the models trained with the extra regularization loss consistently outperform the non-regularized ones on both datasets, with all features, and all regularization methods. This means that the knowledge in the embeddings is successfully transferred to the student networks and consistently enhances the performance. Although EAsT\({}_{\text{Final}}\) appears to give better results on the OpenMIC dataset while EAsT\({}_{\text{All}}\) seems to have slightly better performance on the MagnaTagATune dataset, the difference between them is very small, meaning that the model does not benefit significantly from being regularized by pre-trained embeddings at earlier stages where the feature maps are still relatively low-level. The results for the teacher systems show that the older VGGish and OpenL3 embeddings are clearly outperformed by the more recently proposed embeddings PaSST and PANNs. In fact, the teacher systems for the newer embeddings perform so strongly that the students can rarely outperform them, while the student systems trained with VGGish and OpenL3 provide better results than the corresponding teachers. We can see that whether the teachers themselves have an excellent performance or not, students benefit from learning the additional knowledge from these embeddings, and the students' upper limit is not bounded by the performance of teachers. Comparing KD and the EAsT\({}_{\text{Final}}\) or EAsT\({}_{\text{All}}\) systems, \begin{table} \begin{tabular}{l|c c c c c c c c c} \hline \hline \multirow{2}{*}{**OpenMIC**} & \multicolumn{3}{c}{None} & \multicolumn{3}{c}{VGGish} & \multicolumn{3}{c}{OpenL3} & \multicolumn{3}{c}{PaSST} & \multicolumn{3}{c}{PANNs} \\ \cline{2-11} & mAP & F1 & mAP & F1 & mAP & F1 & mAP & F1 & mAP & F1 \\ \hline CP ResNet* [5] &.819 &.809 & - & - & - & - & - & - & - & - \\ SS CP ResNet* [5] &.831 &.822 & - & - & - & - & - & - & - & - \\ \hline Teacher\({}_{\text{LR}}\) & - & - &.803 &.799 &.803 &.798 & **.858** & **.837** &.853 & **.834** \\ KD (w/ mask) ** & - & - &.829 &.820 &.823 &.813 &.851 &.834 &.848 &.823 \\ \hline EAsT\({}_{\text{Cos-Diff}}\) & - & - &.838 &.824 & **.838** &.820 &.837 &.822 &.836 &.814 \\ EAsT\({}_{\text{Final}}\) & - & - & **.842** & **.828** &.835 & **.822** &.847 &.830 &.849 &.828 \\ EAsT\({}_{\text{AB}}\) & - & - &.836 &.823 &.835 & **.822** &.845 &.827 &.845 &.827 \\ EAsT\({}_{\text{KD}}\) & - & - &.836 &.825 &.836 &.821 & **.852** & **.834** & **.857** & **.831** \\ \hline \hline \end{tabular} \end{table} Table 1: Results on OpenMIC (above) and MagnaTagATune (below) dataset for different models regularized with different pre-trained embeddings. Best performances are in bold, and best results excluding the teachers are underlined. *Reported results [5], SS means being trained with shake-shake regularization [47]. 
**When using KD, the missing labels in OpenMIC were masked to avoid potentially adding more training data. \({}^{\dagger}\)Results from the open-source re-implementation [48]. we can see that with VGGish and OpenL3 embeddings, regularizing the feature space provides better results than simply using the teachers' soft targets. On the other hand, for the PaSST and PANNs embeddings, classical knowledge distillation provides competitive results. The possible reason is that the soft targets given by "weak" teachers might have provided too much incorrect information to the students while the high-quality soft targets generated by the "strong" teachers provide good guidance for the students' training. The combination system EAsTKD gives us better results with PaSST and PANNs embeddings (with the exception of no noteworthy improvement with the PaSST embedding on the OpenMIC dataset) while for VGGish and OpenL3 embeddings, the performance is not as good as EAsTFinal or EAsTAll in most cases. This observation is in accordance with our speculation that traditional knowledge distillation performs best with a "strong" teacher. While learning from audio embeddings benefits a student network even more in the presence of a "strong" teacher, learning from "weak" embeddings can still improve the model's performance. ### Comparison of Model Complexity Table 2 lists the number of parameters as well as rough inference speed measurements 9 of the models. Footnote 9: reference GPU: NVIDIA 2070 Super The numbers of parameters only take the backbone structure (i.e., excluding the final classification head) into account so that it does not vary across datasets with different numbers of classes. Iterations per second are tested with 128\(\times\)1000 input spectrograms. We can see that Mobile FCN and CP ResNet are much faster in inference than pre-trained models. ### Limited Training Data To investigate the impact of limited training data on our methods, we present the system performances for reduced training data, i.e., for 25%, 50%, and 75% of the original training data. The results are shown in Figure 2. We use VGGish and PaSST as the pre-trained embeddings. We can observe that limiting the training data has the greatest impact on the baseline systems, which show the biggest performance drop. On the OpenMIC dataset, EAsTCo-Diff and EAsTFinal have similar decreases in mAP, and the KD system is less affected. An interesting finding is that when the VGGish embedding is used, KD shows better performance for limited data amounts while it is outperformed by EAsTCo-Diff and EAsTFinal when the whole OpenMIC dataset is available. This means using embeddings as teachers might still require a sufficient amount of data to have good guidance on the student models. On the MagnaTagATune dataset, however, the EAsTCo-Diff and EAsTFinal systems show less performance decay than either KD or the baseline when the training data is limited. This suggests that in our training settings, there is no certain answer to which method is least affected by the lack of training data, and the answer might be dependent on specific tasks, models, and data. ## 6 Conclusion and Future Work In this paper, we explored the use of audio embeddings as teachers to regularize the feature space of low-complexity student networks during training. We investigated several different ways of implementing the regularization and tested its effectiveness on the OpenMIC and MagnaTagATune datasets. 
Results show that using embeddings as teachers enhances the performance of the low-complexity student models, and the results can be further improved by combining our method with a traditional knowledge distillation approach. Future work will investigate the performance of our method on a wider variety of downstream tasks and embeddings. Moreover, as there have been a wide variety of models to extract audio and music embeddings, we speculate that using an ensemble of different pre-trained embeddings also has considerable potential. Finally, the flexibility of feature-based knowledge distillation offers a wide range of possible algorithmic modifications. Our focus will be on evaluating different distance measures and regularizing the network using features from different stages of the teacher network instead of using only the output embeddings. \begin{table} \begin{tabular}{l|c c} \hline \hline Model & Parameters (M) & Iteration / s \\ \hline VGGish & 72.1 & 172.2 \\ OpenL3 & 4.69 & 117.9 \\ PaSST & 86.1 & 18.7 \\ PANNs & 79.7 & 70.6 \\ \hline Mobile FCN & 0.34 & 319.3 \\ CP ResNet & 5.52 & 205.3 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the model complexity. Figure 2: Results with limited training data on two datasets.
2307.16413
Rotating detectors in dS/AdS spacetimes
We analyse several aspects of detectors with uniform acceleration $a$ and uniform rotation $\Omega$ in de Sitter ($\Lambda>0$) and anti-de Sitter ($\Lambda<0$) spacetimes, focusing particularly on the periodicity, in (Euclidean) proper time $\tau_{\rm traj}$, of geodesic interval $\tau_{\rm geod}$ between two events on the trajectory. For $\Lambda<0$, $\tau_{\rm geod}$ is periodic in ${\rm i} \tau_{\rm traj}$ for specific values of $a$ and $\Omega$. These results are used to obtain numerical plots for the response rate $\dot{\mathcal{F}}$ of Unruh-de Witt detectors, which display non-trivial combined effects of rotation and curvature through the dimensionless parameter $\Lambda c^2/\Omega^2$. In particular, periodicity does not imply thermality due to additional poles in the Wightman function away from the imaginary axis. We then present some results for stationary rotational motion in arbitrary curved spacetime, as a perturbative expansion in curvature.
Hari. K, Dawood Kothawala
2023-07-31T05:39:20Z
http://arxiv.org/abs/2307.16413v3
# Rotating detectors in dS/AdS ###### Abstract We analyse several aspects of detectors with uniform acceleration \(a\) and uniform rotation \(\Omega\) in de Sitter (\(\Lambda>0\)) and anti-de Sitter (\(\Lambda<0\)) spacetimes, focusing particularly on the periodicity, in (Euclidean) proper time \(\tau_{\rm traj}\), of the geodesic interval \(\tau_{\rm geod}\) between two events on the trajectory. For \(\Lambda<0\), \(\tau_{\rm geod}\) is periodic in \(i\tau_{\rm traj}\) for specific values of \(a\) and \(\Omega\). These results are used to obtain numerical plots for the response rate \(\dot{\cal F}\) of Unruh-de Witt detectors, which display non-trivial combined effects of rotation and curvature through the dimensionless parameter \(\Lambda c^{2}/\Omega^{2}\). In particular, periodicity does not imply thermality due to additional poles in the Wightman function away from the imaginary axis. We then present some results for stationary rotational motion in arbitrary curved spacetime, as a perturbative expansion in curvature. ## I Introduction Studying physical processes in a uniformly accelerated frame of reference in Minkowski spacetime[1; 2; 3] often forms a first step towards understanding these processes in curved spacetime. The mapping between these is provided by the so-called _principle of equivalence_. However, by its very nature, the equivalence principle gives no insights into the quantitative role of curvature in these processes. One can obtain curvature corrections perturbatively, of course, but such results are of limited interest from a conceptual point of view since the limit of zero acceleration cannot then be taken. A well known result[4; 5; 6] that illustrates this is the response of a uniformly accelerated Unruh-de Witt detector in de Sitter spacetime with cosmological constant \(\Lambda\), which is thermal with a temperature proportional to \(\sqrt{a^{2}+\Lambda}\). A perturbative analysis would yield the leading curvature correction at \(O(\Lambda/a^{2})\), and miss the \(a\to 0\) limit. In a recent work [7], it was shown that a similar contribution due to the (electric part of the) Riemann tensor arises in general curved spacetimes, of which the (anti-) de Sitter results are but special cases. A remarkable partial summation of an infinite series once again yields new insights that one could not have gained from a naive application of the equivalence principle. In this paper, we broaden the scope of such results by considering uniformly accelerating and rotating detectors in maximally symmetric spacetimes and obtaining certain exact results that should be relevant for the response of such detectors. We also give the result for arbitrary curved spacetimes, but, unfortunately, we are forced to leave it as a perturbation expansion since we have not been able to re-sum even a subset of terms in the presence of rotation. Nevertheless, the exact results we present for rotating detectors in dS/AdS spacetimes by themselves yield important insights into the non-trivial effect of \(\Lambda\) (including its sign) on the response of the detectors. Three key results derived in this work are: 1. **Relation between proper time and geodesic interval:** Remarkably, the relation between the geodesic distance, \(\Delta\tau_{\rm geod}\), and proper time distance, \(\Delta\tau_{\rm traj}\), for a stationary trajectory with constant acceleration and constant torsion in a maximally symmetric spacetime can be expressed in a form closely resembling the structure in Minkowski spacetime (see Eq. (14)). 2. 
**Periodicity in \(\Delta\tau_{\rm geod}^{2}\):** The above result yields the condition for periodicity of \(\Delta\tau_{\rm geod}^{2}\) in Euclidean proper time (\(\Delta\tau_{\rm traj}\to{\rm i}\Delta\tau_{\rm traj}\)) for \(\Lambda<0\), for specific values of acceleration and torsion. 3. **General spacetime:** We also obtain a perturbative expression for \(\Delta\tau_{\rm geod}^{2}\) for stationary trajectories in arbitrary curved spacetime, assuming derivatives of Riemann to be small. The (square of the) geodesic interval, \(\sigma(x,x^{\prime})^{2}=-\Delta\tau_{\rm geod}^{2}(x,x^{\prime})\), between two points on a trajectory is an important quantity that appears prominently in the analysis of classical as well as quantum measurement processes in curved spacetimes. The latter, for instance, involve it through its appearance in the leading short-distance (Hadamard) form of the two-point function \[G_{\rm H}(x,y)=\frac{\Delta^{1/2}(x,y)}{\sigma(x,y)^{2}}+\ldots \tag{1}\] where \(\Delta(x,y)\) is the van Vleck determinant, derived again from \(\sigma^{2}\). Amongst many things, the above form serves as the key mathematical tool to evaluate the response of an Unruh-deWitt detector coupled to a quantum field in an arbitrary curved spacetime. ## II Stationary motion The trajectory of a timelike curve is specified using functions \(x^{a}(\tau)\) defined at each point of the curve, with \(\tau\) as the proper time along the curve. A more elegant way to represent the trajectory is through its curvature invariants, which, in four dimensions, are acceleration, torsion, and hypertorsion. At every point on the trajectory, an orthonormal tetrad \(e^{i}_{a}\) can be constructed from the derivatives of \(x^{a}(\tau)\), satisfying the condition, \[e^{i}_{a}e_{ib}=\eta_{ab} \tag{2}\] Here, \(\eta_{ab}=\text{diag}(-1,1,1,1)\). These tetrads serve as the basis vectors for the vector space at each point on the worldline. They obey the Serret-Frenet equations given by, \[\frac{De^{i}_{a}}{\text{d}\tau}=K_{a}^{\ b}e^{i}_{b} \tag{3}\] where the structure of the matrix \(K_{ab}\) is given by, \[K_{ab}=\begin{bmatrix}0&-a(\tau)&0&0\\ a(\tau)&0&\Omega(\tau)&0\\ 0&-\Omega(\tau)&0&\lambda(\tau)\\ 0&0&-\lambda(\tau)&0\end{bmatrix} \tag{4}\] where \(a(\tau)\) is the magnitude of acceleration, \(\Omega(\tau)\) is the torsion, and \(\lambda(\tau)\) is the hypertorsion. These are the curvature invariants of the trajectory. Note that the matrix \(K_{ab}\) is antisymmetric, \(K_{ab}=-K_{ba}\). The worldline is stationary when these invariants are constant and do not depend on the parameter \(\tau\). These stationary worldlines are classified into six categories (see Refs. [8; 9; 10]) for Minkowski spacetime according to the values of the curvature invariants. For a stationary trajectory, let \(u^{i}(\tau)\) be the 4-velocity and \(n^{i}(\tau)\) be the unit vector along the direction of 4-acceleration. We will be using the Serret-Frenet equations to construct the tetrads and will express the worldline in terms of the tetrads at \(\tau=0\). One starts with the tangent vector to the curve, and a unit vector in the direction of acceleration, from which the remaining orthogonal vectors are obtained using Gram-Schmidt orthogonalization. 
\[\nabla_{\mathbf{u}}u^{k} = a\,n^{k}\ ;\quad\nabla_{\mathbf{u}}n^{k}=\Omega\,b^{k}+a\,u^{k}\ ;\] \[\nabla_{\mathbf{u}}b^{k} = \lambda\,d^{k}-\Omega\,n^{k}\ ;\quad\nabla_{\mathbf{u}}d^{k}=- \lambda\,b^{k} \tag{5}\] Here, \(\nabla_{\mathbf{u}}\) is the covariant derivative, \(n^{k}=a^{k}/a\) is the normal vector in the direction of acceleration, \(b^{k}\) is the binormal orthogonal to \(u^{k}\) and \(a^{k}\), and \(d^{k}\) is another unit vector orthogonal to all other unit vectors. These equations are more succinctly expressed in terms of the Fermi derivative defined by, \[\frac{D_{F}Y^{i}}{\text{d}\tau}=\nabla_{\mathbf{u}}Y^{i}+\Omega^{i}_{\ k}Y^{k}=0\] where, \(\Omega^{ik}:=a^{i}u^{k}-u^{i}a^{k}+\omega^{ik}\) and \(\omega^{ik}:=\varepsilon^{ikjl}u_{j}\omega_{l}\). For stationary motion, \(K_{ab}=\Omega_{ba}\). The key geometrical quantity, which will also be our main focus, is the geodesic distance between two points on a stationary trajectory characterized by Serret-Frenet equations. To do this, we will follow the method sketched in [7]. This method essentially uses Riemann normal coordinates (RNC) to solve for the trajectory as a power series \[x^{i}(\tau)=\sum_{n=0}^{\infty}\frac{\tau^{n}}{n!}\left[\frac{\text{d}^{n}x^{ i}}{\text{d}\tau^{n}}\right]_{\tau=0} \tag{6}\] in which the coefficients on the RHS are determined by taking higher derivatives of the defining equation \[\frac{\text{d}^{2}x^{i}}{\text{d}\tau^{2}}+\Gamma^{i}_{bc}\frac{\text{d}x^{b }}{\text{d}\tau}\frac{\text{d}x^{c}}{\text{d}\tau}=a^{i} \tag{7}\] and using the Serret-Frenet equations. This method was explained and used in [7] for rectilinear, uniformly accelerated motion in arbitrary curved spacetime, and was shown to yield a remarkable result involving an analytically resummable piece of an otherwise infinite (perturbative) expansion. In this work, we will explore the effects of rotation to see if a similar result holds in this case. ## III Maximally symmetric spacetimes As a first step towards studying stationary trajectories in arbitrary curved spacetime, we will consider stationary trajectories in maximally symmetric spacetimes. As we will show, some very interesting analytic results can be obtained in this case, with several remarkable similarities and mappings with the corresponding results in Minkowski spacetime, which we have discussed in Appendix A; we will refer to the results in this appendix after deriving the corresponding results in maximally symmetry. We will mostly use the metric of maximally symmetric spacetimes in embedding coordinates \(X^{i}\), given by[11], \[g_{ab}=\eta_{ab}+\frac{\Lambda}{1-\Lambda\eta_{ij}X^{i}X^{j}}\eta_{ac}\eta_{ bd}X^{c}X^{d} \tag{8}\] Then the Christoffel connections can be expressed as \(\Gamma^{a}_{bc}=\Lambda X^{a}g_{bc}\). It can be easily shown that the stationary motion with acceleration and torsion will be always in the hyperplane \(u^{i}(0)\)-\(n^{i}(0)\)-\(b^{i}(0)\) using Serret-Frenet equations. ### Trajectory and geodesic distance Further, using Eq. (5) and Eq. 
(7) the Taylor series expansion for the trajectory similar to Minkowski spacetime can be obtained as, \[\frac{\mathrm{d}X^{i}}{\mathrm{d}\tau} = U^{i}\] \[\frac{\mathrm{d}^{2}X^{i}}{\mathrm{d}\tau^{2}} = aN^{i}+\Lambda X^{i}\] \[\frac{\mathrm{d}^{3}X^{i}}{\mathrm{d}\tau^{3}} = a\,\Omega\,B^{i}+a^{2}\frac{\mathrm{d}X^{i}}{\mathrm{d}\tau}\] \[\frac{\mathrm{d}^{\mathsf{p}}X^{i}}{\mathrm{d}\tau^{\mathsf{p}}} = (a^{2}-\Omega^{2}+\Lambda)\frac{\mathrm{d}^{\mathsf{p}-2}X^{i}}{ \mathrm{d}\tau^{\mathsf{p}-2}}+\Omega^{2}\Lambda\frac{\mathrm{d}^{\mathsf{p}- 4}X^{i}}{\mathrm{d}\tau^{\mathsf{p}-2}}\] Here, \(\mathsf{p}\geq 4\), \(U^{i}\), \(N^{i}\), \(B^{i}\) are similar to unit vectors \(u^{i}\), \(n^{i}\), \(b^{i}\) in Minkowski spacetime, but now defined using the embedding coordinates. The above equations are obtained by expanding the covariant derivatives and using the Christoffel connections previously described. Solving the recursion equation will determine all the higher-order terms. The terms in each direction can be summed into hyperbolic functions, sinh and cosh. The final form of the summed series expressing the trajectory is, \[Z^{i}(\tau) = a\left[\cosh{(q_{+}\tau)}-\cosh{(q_{-}\tau)}\right]\,N^{i}(0)- \frac{a\,\Omega}{q_{+}q_{-}q_{0}}\left[q_{+}\sinh{(q_{-}\tau)}-q_{-}\sinh{(q_ {+}\tau)}\right]\,B^{i}(0) \tag{9}\] \[+\frac{1}{q_{+}q_{-}q_{0}^{2}}\left[(q_{-}^{2}+\Omega^{2})q_{+} \sinh{(q_{-}\tau)}-(q_{+}^{2}+\Omega^{2})q_{-}\sinh{(q_{+}\tau)}\right]\,U^{i }(0)\] Here, \(q_{0}:=\left[(a^{2}-\Omega^{2}+\Lambda)^{2}+4\Lambda\Omega^{2}\right]^{1/4}\) and \(q_{\pm}:=(1/\sqrt{2})\sqrt{(a^{2}-\Omega^{2}+\Lambda)\pm q_{0}^{2}}\). These constants are similar to the ones obtained for stationary motion with all curvature invariants. In fact, there seems to be mapping between these trajectories, which will be discussed soon. The mapping between the embedding coordinates and the RNC is given by[12], \[X^{i}=\Delta^{-1/(D-1)}\hat{x}^{i} \tag{10}\] where, \(\Delta=\Delta(0,\hat{x})\) is the van Vleck determinant with \(D\) as the dimension of spacetime. The geodesic distance can be evaluated using the expression for flat spacetime, \(-\Delta\tau_{\mathrm{good}}=\eta_{ab}\hat{x}^{a}\hat{x}^{b}=\eta_{ab}Z^{a}Z^{b }\Delta^{2/(D-1)}\). For maximally symmetric spacetimes, we can use the formula, \(\Delta^{2/(D-1)}=\Lambda\Delta\tau_{\mathrm{good}}^{2}/\sin^{2}(-\Delta\tau_{ \mathrm{good}}\sqrt{\Lambda})\) for van Vleck determinant and simplify the equation for geodesic distance to obtain, \[\Delta\tau_{\mathrm{good}}^{2}=-\frac{1}{\Lambda}\left[\sin^{-1}\left(\sqrt{- \Lambda\Delta\tau_{\mathrm{ms}}^{2}}\right)\right]^{2} \tag{11}\] Here, \(\Delta\tau_{\mathrm{ms}}^{2}:=-(\eta_{ab}Z^{a}Z^{b})\) given by, \[\Delta\tau_{\mathrm{ms}}^{2} = \Delta\widetilde{\tau}^{2}\left(1+\frac{1}{4}\Lambda\Delta \widetilde{\tau}^{2}\right) \tag{12}\] \[\Delta\widetilde{\tau}^{2} = \frac{4}{q_{0}^{2}}\left[\left(1+\frac{\Omega^{2}}{q_{+}^{2}} \right)\sinh^{2}\left(\frac{q_{+}\Delta\tau_{\mathrm{traj}}}{2}\right)-\left(1+ \frac{\Omega^{2}}{q_{-}^{2}}\right)\sinh^{2}\left(\frac{q_{-}\Delta\tau_{ \mathrm{traj}}}{2}\right)\right] \tag{13}\] Equation 11 can be rearranged using Eq. (12) to a similar form as obtained in Eq. (23) of [7], \[\Delta\tau_{\mathrm{good}}^{2}=\frac{4}{\Lambda}\left[\sinh^{-1}\left(\frac{1} {2}\sqrt{\Lambda\Delta\widetilde{\tau}^{2}}\right)\right]^{2} \tag{14}\] The equation for the hyperbolic trajectory in the ref. [7] will be a special case when \(\Omega=0\). 
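For concreteness, the short numerical sketch below (the function names and sample values are ours) evaluates Eqs. (12)–(14) for an accelerated, rotating trajectory in both de Sitter (\(\Lambda>0\)) and anti-de Sitter (\(\Lambda<0\)), and checks that Eq. (11) and Eq. (14) agree; complex arithmetic is used only as a bookkeeping device so the same expressions cover both signs of \(\Lambda\):

```python
import numpy as np

def q_consts(a, Om, Lam):
    # q0^2 and q_pm as defined below Eq. (9); complex dtype covers both signs of Lambda
    q0sq = np.sqrt((a**2 - Om**2 + Lam)**2 + 4 * Lam * Om**2 + 0j)
    qp = np.sqrt(((a**2 - Om**2 + Lam) + q0sq) / 2)
    qm = np.sqrt(((a**2 - Om**2 + Lam) - q0sq) / 2)
    return q0sq, qp, qm

def dtau_tilde_sq(dtau, a, Om, Lam):
    # Eq. (13)
    q0sq, qp, qm = q_consts(a, Om, Lam)
    term = lambda q: (1 + Om**2 / q**2) * np.sinh(q * dtau / 2)**2
    return 4 / q0sq * (term(qp) - term(qm))

def dtau_geod_sq(dtau, a, Om, Lam):
    # Eq. (14): squared geodesic interval between two points on the trajectory
    x = np.sqrt(Lam * dtau_tilde_sq(dtau, a, Om, Lam)) / 2
    return np.real(4 / Lam * np.arcsinh(x)**2)

a, Om, dtau = 2.0, 0.7, 1.5
for Lam in (+0.3, -0.3):                          # de Sitter and anti-de Sitter
    print(Lam, dtau_geod_sq(dtau, a, Om, Lam))

# Eq. (11) and Eq. (14) are the same relation written in two ways:
Lam = 0.3
tt2  = dtau_tilde_sq(dtau, a, Om, Lam)
tms2 = tt2 * (1 + Lam * tt2 / 4)                  # Eq. (12)
eq11 = np.real(-(1 / Lam) * np.arcsin(np.sqrt(-Lam * tms2))**2)
print(np.isclose(eq11, dtau_geod_sq(dtau, a, Om, Lam)))   # True
```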
### Periodicity in \(\Delta\tau_{\rm good}^{2}\) The periodicity in \(\Delta\tau_{\rm good}^{2}\) for Euclidean time(\(\Delta\tau_{\rm traj}\to i\Delta t_{\rm traj}\)) can be analysed using Eq. (13). Equation 13 will be periodic only when both conditions given below are satisfied. i) \(\{q_{+},q_{-}\}\in\mathbb{R}\) ii) \(q_{+}/q_{-}\in\mathbb{Q}\) The first condition restricts \(q_{-}\in\mathbb{R}\), which implies \[1-\sqrt{1+\frac{4\Lambda\Omega^{2}}{a^{2}-\Omega^{2}+\Lambda}}>0,\quad\sqrt{a ^{2}-\Omega^{2}+\Lambda}>0\] The above inequalities are only satisfied for \(\Lambda<0\) and \(a^{2}>\Omega^{2}+|\Lambda|\). Using the first condition, the second condition can be simplified as, \[\frac{q_{+}}{q_{-}}=\frac{\sqrt{1+\sqrt{1-\frac{4|\Lambda|\Omega^ {2}}{(a^{2}-\Omega^{2}-|\Lambda|)^{2}}}}}{\sqrt{1-\sqrt{1-\frac{4|\Lambda| \Omega^{2}}{(a^{2}-\Omega^{2}-|\Lambda|)^{2}}}}}=r\] \[\implies\frac{|\Lambda|\Omega^{2}}{(a^{2}-\Omega^{2}-|\Lambda|)^ {2}}=\frac{r^{2}}{(r^{2}+1)^{2}} \tag{15}\] Here, \(r\in\mathbb{Q}\) and \(r>1\). The above condition ensures that \(q_{+}\) and \(q_{-}\) are commensurable. Therefore, the conditions for periodicity in \(\Delta\tau_{\rm good}^{2}\) can be summarized as: \[\Lambda < 0\] \[a^{2} > \Omega^{2}+|\Lambda|\] \[\frac{|\Lambda|\Omega^{2}}{(a^{2}-\Omega^{2}-|\Lambda|)^{2}} = \frac{r^{2}}{(r^{2}+1)^{2}}\] ### Mappings between stationary motions in Minkowski and dS/AdS. 1. _Helical motion in Minkowski and rotation in dS/AdS_: The RHS of Eq. (13) and Eq. (23) have a striking similarity. The functional form of these equations are similar. If we try compare the constants, the mapping between these equations can be summarized as, \begin{tabular}{c c} \hline Minkowski & Maximally symmetric \\ \hline \hline \(a_{\eta}^{2}\) & \(a^{2}+\Lambda\) \\ \(\Omega_{\eta}^{2}\) & \(a^{2}(\Omega^{2}/(a^{2}+\Lambda))\) \\ \(\lambda^{2}\) & \(\Lambda(\Omega^{2}/(a^{2}+\Lambda))\) \\ \hline \end{tabular} Note that \(\Omega_{\eta}^{2}+\lambda^{2}=\Omega^{2}\). The curvature constant of maximally symmetric spacetime, \(\Lambda\), plays a similar role of hypertorsion, \(\lambda\) in flat spacetime. 2. _Rotation in Minkowski and rotation in dS/AdS with \(a=\Omega\):_ In the case of maximally symmetric spacetimes, when the acceleration and torsion are equal, Eq. (12) reduces to, \[\Delta\tau_{\rm ms}^{2}=\frac{4(\Lambda+\Omega^{2})}{\Lambda^{2}}\sinh^{2} \left(\frac{\sqrt{\Lambda}}{2}\Delta\tau_{\rm traj}\right)-\frac{\Omega^{2} \Delta\tau_{\rm traj}^{2}}{\Lambda} \tag{16}\] The mapping between the flat spacetime and maximally symmetric spacetime for this motion is, \(a_{\eta}^{2}\to a^{2}+\Lambda\) and \(\Omega_{\eta}\to a=\Omega\). ## IV General spacetime We now turn focus on arbitrary curved spacetimes, hoping again to obtain the relation between the geodesic interval and the proper time interval for points on stationary trajectories in these spacetimes, characterised again through Serret-Ferret conditions. The method we will employ is the one based on a judicious use of RNC, described in Sec. II of Ref. [7]. The RNC is setup at some initial point \(p_{0}\) and the trajectory of the probe at the point \(p\) is expressed using Eq. (6). From the definitions of RNC we have, \(\hat{z}^{a}(\Delta\tau)=\hat{x}^{a}(p)=(\Delta\tau_{\rm good})\,\hat{t}^{a}(0; \Delta\tau)\) and \((\Delta\tau_{\rm good})^{2}=\eta_{ab}\hat{z}^{a}(\Delta\tau)\hat{z}^{b}( \Delta\tau)\), where \(\hat{t}^{a}\) is the tangent vector along the geodesic curve connecting the points \(p_{0}\) and \(p\). Utilizing Eq. 
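The periodicity conditions above are straightforward to check, or to engineer, numerically. The sketch below (ours; the chosen values are only illustrative) constructs commensurable \(q_{\pm}\) for \(\Lambda<0\), confirms the three conditions, and verifies that Eq. (13) is periodic in Euclidean proper time:

```python
import numpy as np
from math import gcd

# Engineer a periodic example for Lambda < 0: choose commensurable q+ = p*g, q- = q*g,
# then invert the definitions of q0, q_pm for a^2 and Omega^2.
Lam = -0.5
p, q, g = 3, 2, 0.4                        # q+/q- = 3/2 is rational
qp, qm = p * g, q * g
Om2 = (qp * qm)**2 / abs(Lam)              # from q+^2 q-^2 = |Lambda| Omega^2
a2  = qp**2 + qm**2 + Om2 + abs(Lam)       # from q+^2 + q-^2 = a^2 - Omega^2 + Lambda

# The three conditions listed above
print(Lam < 0, a2 > Om2 + abs(Lam))
r = qp / qm
print(np.isclose(abs(Lam) * Om2 / (a2 - Om2 - abs(Lam))**2, r**2 / (r**2 + 1)**2))

# Periodicity of Eq. (13) in Euclidean proper time, tau_traj -> i t_traj
def dtau_tilde_sq_euclidean(t):
    q0sq = np.sqrt((a2 - Om2 + Lam)**2 + 4 * Lam * Om2)
    term = lambda Q: (1 + Om2 / Q**2) * np.sin(Q * t / 2)**2
    return -4 / q0sq * (term(qp) - term(qm))

T = 2 * np.pi / (g * gcd(p, q))            # common period of the two sin^2 terms
t = np.linspace(0.0, 5.0, 7)
print(np.allclose(dtau_tilde_sq_euclidean(t), dtau_tilde_sq_euclidean(t + T)))   # True
```

For generic values of \(a\), \(\Omega\) and \(\Lambda\) the ratio \(q_{+}/q_{-}\) is irrational and no such Euclidean period exists, which is why the periodicity requires the fine-tuned relation above.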
(7) and its higher derivatives in RNC along with Serret-Frenet equations, each terms in the series can be obtained as, \[\hat{\hat{z}}^{i}(0) = u^{i}|_{p_{0}}=u^{i}(0)\] \[\hat{\hat{z}}^{i}(0) = \left[a\,n^{i}-\Gamma^{i}_{mk}u^{m}u^{k}\right]_{p_{0}}=\left[a\,n ^{i}\right]_{p_{0}}\] \[\hat{\hat{z}}^{i}(0) = \left[a\,u^{k}\partial_{k}n^{i}-\Gamma^{i}_{mj,k}u^{k}u^{m}u^{j}-2 \Gamma^{i}_{mj}(u^{k}\partial_{k}u^{j})\right]_{p_{0}}\] \[= \left[a\,\nabla_{\bf u}n^{i}-3\Gamma^{i}_{km}u^{k}a^{m}-\Gamma^{i} _{mj,k}u^{k}u^{m}u^{j}\right.\] \[\left.+\Gamma^{i}_{mj}\Gamma^{j}_{kl}u^{m}u^{k}u^{l}\right]_{p_{0}}\] \[= \left[a\,\nabla_{\bf u}n^{i}-\Gamma^{i}_{mj,k}u^{k}u^{m}u^{j} \right]_{p_{0}}\] \[= \left[a\,\Omega\,b^{i}+a^{2}\,u^{i}-\Gamma^{i}_{mj,k}u^{k}u^{m}u^{ j}\right]_{p_{0}}\] The Christoffel connections in the above terms are evaluated using the series expansions in RNC. In the case of non-rotating motion (\(\Omega=0\)), this procedure gave the following series in Ref. [7], \[\Delta\tau_{\rm good}^{2} = \tau_{\rm acc}^{2}+\frac{1}{12}a^{2}\tau_{\rm acc}^{4}+\frac{1}{360} \left(a^{4}+3a^{2}\mathscr{E}_{n}\right)\tau_{\rm acc}^{6} \tag{17}\] \[+\frac{1}{20160}\left(a^{6}+17a^{2}\mathscr{E}_{n}^{2}+18a^{4} \mathscr{E}_{n}\right)\tau_{\rm acc}^{8}\] \[+\frac{1}{1814400}\left(a^{8}+81a^{6}\mathscr{E}_{n}+339a^{4} \mathscr{E}_{n}^{2}\right.\] \[\left.+155a^{2}\mathscr{E}_{n}^{3}\right)\tau_{\rm acc}^{10}+O( \tau_{\rm acc}^{12})+\mathscr{R}_{A}\] Here, \(\mathscr{E}_{n}:=R_{0n0n}=R_{abcd}u^{a}n^{b}u^{c}n^{d}\) and \(\mathscr{R}_{A}\) collectively represents all terms that have at least one Riemann tensor with at least one index which is neither \(0\) nor \(n\). A remarkable feature of this series, as was pointed out in [7], is that it admits a partial re-summation involving terms containing acceleration and the tidal part of the Riemann tensor into a nice analytic function, \[\Delta\tau_{\rm good} = \frac{2}{\sqrt{-\mathscr{E}_{n}}}\sinh^{-1}\Biggl{[}\sqrt{\frac{ -\mathscr{E}_{n}}{a^{2}-\mathscr{E}_{n}}}\sinh\left(\frac{\sqrt{a^{2}-\mathscr{ E}_{n}}}{2}\,\tau_{\rm acc}\right)\Biggr{]} \tag{18}\] \[+\mathscr{R}_{A}\] For rotational motion \(\Omega\neq 0\), we can obtain a similar series expansion in curvature using CADABRA. To \(O(\Delta\tau_{\rm traj}^{9})\), this is given by \[\Delta\tau_{\rm good}^{2} = \Delta\tau_{\rm traj}^{2}+\frac{1}{12}a^{2}\Delta\tau_{\rm traj}^ {4}+\frac{1}{360}a^{2}(a^{2}-\Omega^{2}+3R_{0n0n})\Delta\tau_{\rm traj}^{6}- \frac{1}{120}a^{2}\Omega\,R_{n0bb}\,\Delta\tau_{\rm traj}^{7}+\frac{1}{20160} \Biggl{(}a^{6}-2a^{4}\Omega^{2}\] \[+a^{2}\Omega^{4}+18a^{4}R_{0n0n}+\frac{128}{3}a^{2}\Omega^{2}R_{0 bbb}-17a^{2}R_{0n0a}\mathbf{R}_{0n0}^{\bullet}+50a^{2}\Omega^{2}R_{0n0n}+32a^{3} \Omega R_{nbn0}\Biggr{)}\Delta\tau_{\rm traj}^{8}+O(\Delta\tau_{\rm traj}^{9})\] Here, \(\bullet\) index represent the direction along acceleration and torsion. However, unlike the case with \(\Omega=0\), we have not been able to obtain even a partial re-summation of this series to a nice analytic function of acceleration, torsion, and different Riemann tensor components. The exact result in dS/AdS, which was helpful in partially identifying such a re-summation in Ref. [7], is not of much use for \(\Omega\neq 0\) since the presence of an extra space-like direction introduces additional components of Riemann, which are all equal or zero in maximal symmetry. ## V Comment on rotating observers in BTZ black hole In Ref. 
[13], the authors discuss about the detector response of an observer who is corotating with the horizon of Banados-Teitelboim-Zanelli (BTZ) blackhole[14; 15; 16]. The BTZ blackhole metric can be constructed by a suitable coordinate transformation from 2+1 Anti-de Sitter(AdS\({}_{3}\)) spacetime. The AdS\({}_{3}\) metric is given by, \[d\,s^{2}=-\left(\frac{\hat{r}^{2}}{\ell^{2}}-1\right)d\,\hat{t}^{2}+\left( \frac{\hat{r}^{2}}{\ell^{2}}-1\right)^{-1}d\,\hat{r}^{2}+r^{2}d\,\hat{\phi}^{2} \tag{20}\] with \(f=\hat{r}^{2}/\ell^{2}-1\) and \(\ell\) is the curvature length scale. By the following identification, \[\hat{t} = \frac{1}{\ell}(r_{+}t-r_{-}\ell\phi)\] \[\hat{\phi} = \frac{1}{\ell}\left(r_{+}\phi-\frac{r_{-}t}{\ell}\right)\] \[\hat{r} = \ell\sqrt{\frac{r^{2}-r_{-}^{2}}{r_{+}^{2}-r_{-}^{2}}}\] the metric of exterior region of black hole is obtained as, \[d\,s^{2}=-(N^{\perp})^{2}d\,t^{2}+(f)^{-2}d\,r^{2}+r^{2}\left(d \phi+N^{\phi}d\,t\right)^{2} \tag{21}\] where, \[N^{\perp}=\left(-M+\frac{r^{2}}{\ell^{2}}+\frac{J^{2}}{4r^{2}} \right)^{1/2};\quad N^{\phi}=-\frac{J}{2r^{2}} \tag{22}\] with \(M=(r_{+}^{2}+r_{-}^{2})/\ell^{2}\) and \(J=2r_{+}r_{-}/\ell\). The corotating observer in ref. [13], rigidly rotates with the horizon having an angular velocity, \(\omega_{H}=r_{-}/(r_{+}\ell)\). The temperature obtained from the response of the detector for this observer is \(T=(1/2\pi)(\alpha-1)^{-1/2}\), where \(\alpha=(r^{2}-r_{-}^{2})/(r_{+}^{2}-r_{-}^{2})\). The Seret-Frenet equation for this observer shows zero torsion and the acceleration is constant for \(r=\text{constant}\) with a magnitude, \(a=(1/\ell)\sqrt{(r^{2}-r_{-}^{2})/(r^{2}-r_{+}^{2})}\). If we use the inverse coordinate transformation from BTZ to AdS\({}_{3}\), the temperature will read as, \(T=(1/2\pi)(\hat{r}^{2}-\ell^{2})^{-1/2}\) which is exactly equal to the expression for uniformly accelerating observer in anti-de Sitter spacetime[4; 5; 6; 7], \(T=(1/2\pi)\sqrt{a^{2}+\Lambda}\) where \(a\) is the magnitude of the acceleration and \(|\Lambda|^{-1/2}\) is the curvature length scale. This temperature can be read off from the coefficient of \(\tau_{\rm traj}\) in the geodesic distance if it is invariant for \(\tau_{\rm traj}\to\tau_{\rm traj}+2\pi i\). When one considers an observer rotating with \(\omega\neq\omega_{H}\) at some constant radius, \(r=r_{0}\), the observer is in stationary motion with constant acceleration and torsion. The trajectory in BTZ coordinates will be, \((A\,\tau_{\rm traj},r_{0},A\,\omega\,\tau_{\rm traj})\), where, \(A=\left[r^{2}(1/\ell^{2}-\omega^{2})+J\,\omega-M\right]^{-1/2}\). Using the formula for the geodesic distance in the flat embedding spacetime, \(\mathbb{R}^{2,2}\) given in ref. [13], the geodesic distance for the trajectory in flat embedding spacetime for the motion described here can be obtained as, \[-\Delta\widehat{\tau}^{2}=2\,\alpha(r_{0})\,\sinh^{2}\left[\frac{A\,r_{+}}{2 \ell}\left(\omega-\omega_{H}\right)\Delta\tau_{\rm traj}-\frac{r_{+}}{\ell} \pi\,n\right]+2\,(1-\alpha(r_{0}))\sinh^{2}\left[\frac{A\,r_{+}}{2}\left( \frac{1}{\ell^{2}}-\omega\,\omega_{H}\right)\Delta\tau_{\rm traj}-\frac{r_{-} }{\ell}\pi\,n\right] \tag{23}\] where, \(n\in\mathbb{Z}\). 
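Before following this \(\omega\neq\omega_{H}\) trajectory back to AdS\({}_{3}\), the statement quoted above for the corotating observer — that \(T=(1/2\pi)(\hat{r}^{2}-\ell^{2})^{-1/2}\) coincides with \((1/2\pi)\sqrt{a^{2}+\Lambda}\) for the corresponding uniformly accelerated AdS observer — can be confirmed with a few lines of arithmetic (the numerical values below are illustrative and ours):

```python
import numpy as np

ell = 1.0                                   # AdS3 curvature length scale
r_plus, r_minus, r = 1.2, 0.5, 2.0          # outer/inner horizon radii and detector radius
Lam = -1.0 / ell**2

# Acceleration of the corotating (omega = omega_H) observer at fixed r
a = np.sqrt((r**2 - r_minus**2) / (r**2 - r_plus**2)) / ell

# The same trajectory sits at fixed radius r_hat in the AdS3 chart
alpha = (r**2 - r_minus**2) / (r_plus**2 - r_minus**2)
r_hat = ell * np.sqrt(alpha)

T_AdS = np.sqrt(a**2 + Lam) / (2 * np.pi)            # uniformly accelerated AdS detector
T_BTZ = 1.0 / (2 * np.pi * np.sqrt(r_hat**2 - ell**2))
print(np.isclose(T_AdS, T_BTZ))                      # True
```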
We tried to map this trajectory from BTZ spacetime to AdS\({}_{3}\) spacetime using the inverse coordinate transformation, which gives the trajectory as, \[\hat{t} = A\,r_{+}\left(\frac{1}{\ell^{2}}-\omega\,\omega_{H}\right)\tau_ {\rm traj}\] \[\hat{r} = \sqrt{\alpha}\,\ell\] \[\hat{\phi} = \frac{A\,r_{+}}{\ell}\left(\omega-\omega_{H}\right)\tau_{\rm traj}\] Estimating acceleration and torsion for this trajectory and plugging into Eq. (13) gives the same expression as obtained in Eq. (23) with \(n=0\). For the periodicity in geodesic distance when \(\tau_{\rm traj}\to i\,t_{\rm traj}\) the coefficients of \(\Delta\tau_{\rm traj}\) should be commensurate, \[\frac{\left(\omega-\omega_{H}\right)}{\ell\left(\frac{1}{\ell^{2}}-\omega\, \omega_{H}\right)}=m \tag{24}\] where, \(m\in\mathbb{Q}\). ## VI Detector response Consider a particle detector model[3; 17; 18] with two energy levels, \(|E_{0}\rangle_{d}\) and \(|E_{1}\rangle_{d}\) that moves on a stationary trajectory, \(x^{i}(\tau)\) as discussed in the previous sections. It interacts with the free real scalar field \(\phi\) through an interaction Hamiltonian, \[H_{\rm int}=\lambda\,\chi(\tau)\,m(\tau)\,\phi(x^{i}(\tau)) \tag{25}\] where, \(\lambda\) is the coupling constant, \(\chi(\tau)\) is the switching function, and \(m(\tau)\) is the monopole moment. We consider adiabatic switching for the detector. The detector is in its ground state when the interaction with the scalar field begins and the field is in a state, \(\Phi\) and assumes that it satisfies the Hadamard property. To the first order in perturbation theory, the probability of the detector to be in the excited state can be expressed as, \[P\left(\Delta E\right)=\lambda^{2}\,|_{d}\langle E_{0}|m(0)|E_{1}\rangle_{d}|^ {2}\,\mathcal{F}(\Delta E) \tag{26}\] where \(\mathcal{F}(\Delta E)\) is the response function that contains the information on the trajectory of the detector and the initial state of the field, and \(\Delta E\) is the energy gap of the detector, \(E_{1}-E_{0}\). The response function for adiabatic switching is given by, \[\mathcal{F}\left(\Delta E\right)=\lim_{\epsilon\to 0^{+}}\int_{- \infty}^{\infty}\,{\rm d}\tau^{\prime}\,\int_{-\infty}^{\infty}\,{\rm d}\tau\,e ^{-i\Delta E(\tau-\tau^{\prime})}\,G_{\epsilon}^{+}\left(\tau,\tau^{\prime}\right) \tag{27}\] Here \(G_{\epsilon}^{+}\left(\tau,\tau^{\prime}\right)\) is the pull-back of the Wightman function, \(G_{\epsilon}^{+}\left(x^{i}(\tau),x^{i}(\tau^{\prime})\right)\) with \(i\varepsilon\) prescription. A more useful quantity is the proper time derivative of the response function known as the instantaneous transition rate. For stationary motion, \(G^{+}(\tau,\tau^{\prime})\to G^{+}(\tau-\tau^{\prime})\) and using the transformation \(\tau-\tau^{\prime}=u\) and \(\tau^{\prime}=v\), the transition rate can be defined as, \[\dot{\mathcal{F}}(\Delta E)=\lim_{T\to\infty}\frac{\mathcal{F}( \Delta E)}{T}=\lim_{\epsilon\to 0^{+}}\int_{-\infty}^{\infty}\,{\rm d}u\,e^{-i \Delta E\,u}\,G_{\epsilon}^{+}\left(u\right). \tag{28}\] ### de Sitter spacetime In de Sitter spacetime we consider a conformally coupled scalar field for simplicity. 
The Wightman function for the conformally coupled scalar field[19; 20] is given by, \[G^{+}\left(x,x^{\prime}\right)=-\frac{\Lambda}{4\pi^{2}y_{\epsilon}(x,x^{\prime })} \tag{29}\] where, \(y_{\varepsilon}(x,x^{\prime})\) is the de Sitter invariant distance function with proper \(i\varepsilon\) prescription, \(y(x,x^{\prime})\to y(\Delta\tau_{\rm geod})=-4\sinh^{2}\left(\sqrt{\Lambda}\, \Delta\tau_{\rm geod}/2\right)\). The transition rate is obtained using numerical methods and is shown in Fig. 2 for different values of acceleration, torsion and curvature length scale. The transition rate approaches the thermal spectrum with temperature, \(k_{B}T=\left(\hbar/2\pi\right)\sqrt{a^{2}+\Lambda}\) as \(\Omega\to 0\). ### Anti-de Sitter spacetime For anti-de Sitter spacetime, we again consider a conformally coupled scalar field. The Wightman function is given by[21], \[G^{+}_{\rm AdS_{4}}\left(x,x^{\prime}\right)=-\frac{\Lambda}{4\pi^{2}}\left[ \frac{1}{y(x,x^{\prime})}-\frac{\zeta}{y(x,x^{\prime})-2}\right] \tag{30}\] where \(\zeta\in\{0,-1,1\}\) corresponds to whether the boundary condition specified at infinity is transparent, Dirichlet or Neumann respectively. We will restrict our analysis to transparent boundary conditions for simplicity. In Fig. 3, the response rate of an Unruh-de Witt detector in stationary motion with acceleration and torsion with the parameter values chosen for periodicity in geodesic distance when \(\tau\to it\) is illustrated. The transition rate do not correspond to the thermal spectrum due to the presence of complex poles with real parts. ## VII Conclusions and discussion We have derived some exact results for uniformly accelerated and rotating detectors in maximally symmetric spacetimes. The key result on which all others are based is the relation between geodesic and proper time intervals between two points on the detector trajectory. Since two-point functions, such as the Wightman function, depend on the former, whereas the detector response involves Fourier transform with respect to the latter, the analytic structure of this relation in the complex (proper time) plane determines the response. In particular, we uncover a periodicity in Euclidean time for certain values of rotation parameter \(\Omega\) for \(\Lambda<0\), while no such periodicity can be obtained for \(\Lambda>0\) or \(\Lambda=0\) (the well known case of uniformly rotating detector in Minkowski spacetime). While periodicity of Wightman function in Euclidean time is one of the conditions for thermality, it is not the only one. There are certain analyticity conditions which are also required, as discussed in Ref. [22]; in particular, \(|G^{+}(u)|\) should be bounded by a polynomial, \(P(|{\rm Re}\,u|)\) in the strip, \(S=\{u\in\mathbb{C}\,|-\beta<{\rm Im}\,u<0\}\), where \(\beta\) is the periodicity. This condition is not satisfied for generic rotational motion due to the presence of additional complex poles with non-zero real parts. Our results are important as they highlight the subtle role which curvature, howsoever small, can play in determining the _nature_ of response of accelerated, rotating detectors even in the point limit, where equivalence principle would generically imply only perturbative corrections. In the spirit of the uniformly accelerated (rectilinear) motion discussed in [7], we also obtained perturbative expansions in curvature for accelerated, rotating detectors in arbitrary curved spacetime (ignoring the derivatives of curvature). 
However, unlike in [7], we have not been able to uncover a subset of terms that can be summed to an analytic expression. One obvious reason for this is the complexity of the series in presence of rotation. Besides, our exact results in maximally symmetric spacetimes offer no insights in presence of rotation which brings in an additional spacelike direction that, in turn, brings in more combination of curvature components than can be resolved by looking at dS/AdS, which has just one. We hope, however, to address this in a future work. Any re-summation involving acceleration, curvature and rotation is bound to be of tremendous significance not only in the context of quantum detectors, but also for classical processes in curved spacetimes that involve rotation. ###### Acknowledgements. H.K. would like to thank Indian Institute of Technology, Madras, Chennai, India and the Ministry of Human Resources and Development (MHRD), India for financial support. We would like to thank Prof. Jorma Louko for useful discussions and suggestions, and for drawing our attention to Ref. [22]. ## Appendix A Minkowski spacetime In Minkowski spacetime, the six stationary motions for different values of curvature constants are: i) uniform linear acceleration, ii) circular, iii) cusped, iv) catenary, and v) helical worldlines. Once the trajectory is expressed in terms of the tetrads at the origin, the relation between the geodesic distance and the proper time can be established using, \(\Delta\tau_{\rm geod}^{2}=\eta_{ab}x^{a}(\Delta\tau_{\rm traj})x^{b}(\Delta \tau_{\rm traj})\), where the arc length(proper time), \(\tau\) is denoted by \(\Delta\tau_{\rm traj}\). Evaluating the derivatives in the Taylor expansion in terms of \(u^{i},n^{i},b^{i},d^{i}\) gives, \[\frac{{\rm d}x^{i}}{{\rm d}\tau} = u^{i}\] \[\frac{{\rm d}^{2}x^{i}}{{\rm d}\tau^{2}} = an^{i}\] \[\frac{{\rm d}^{3}x^{i}}{{\rm d}\tau^{3}} = a\Omega b^{i}+a^{2}\frac{{\rm d}x^{i}}{{\rm d}\tau}\] \[\frac{{\rm d}^{4}x^{i}}{{\rm d}\tau^{4}} = a\Omega\lambda d^{i}+(a^{2}-\Omega^{2})\frac{{\rm d}^{2}x^{i}}{{ \rm d}\tau^{2}}\] \[\frac{{\rm d}^{p}x^{i}}{{\rm d}\tau^{p}} = (a^{2}-\Omega^{2}-\lambda^{2})\frac{{\rm d}^{p-2}x^{i}}{{\rm d} \tau^{p-2}}+a^{2}\lambda^{2}\frac{{\rm d}^{p-4}x^{i}}{{\rm d}\tau^{p-4}}\] for \({\sf p}\geq 5\). Solving the above derivatives using the recursion relations, we solved the odd and even derivative separately. The series expansion for the trajectory will sum to hyperbolic functions. The trajectory obtained using the solution of the recurssion relation after the summation is given as, Figure 3: The detector transition rate for 3+1 Anti-de Sitter spacetime with transparent boundary condition plotted against the dimensionless energy gap, \(\Delta E\,\sqrt{|\Lambda|}={\cal E}\). Left: \(a=1.5\sqrt{17}\sqrt{|\Lambda|}\), Middle: \(a=\sqrt{10\,|\Lambda|}\), Right: \(a=\sqrt{33\,|\Lambda|}/2\). All the parameter values in this plot satisfy the periodicity condition described in III.2. The dashed lines corresponds to thermal spectrum with different temperature and it is clear that the detector transition rate do not correspond to thermal spectrum even though there is periodicity in geodesic distance for \(\tau\to i\,t\). Figure 2: The detector transition rate for a rotating detector in 3+1 de Sitter spacetime is plotted against the dimensionless energy gap, \(\Delta E\,\sqrt{\Lambda}={\cal E}\) for different values of acceleration and torsion. Left: \(a=2\sqrt{\Lambda}\), Middle: \(a=\sqrt{\Lambda}\), Right: \(a=\sqrt{\Lambda}/2\). 
The dashed line corresponds to the thermal spectrum for a linearly accelerating detector in de Sitter. \[x^{i}(\tau) = \frac{2}{q_{-}q_{+}q_{0}^{2}}\left[(a^{2}+\lambda^{2}+\Omega^{2}+q_{0 }^{2})q_{-}\sinh{(q_{+}\tau)}-(a^{2}+\lambda^{2}+\Omega^{2}-q_{0}^{2})q_{+} \sinh{(q_{-}\tau)}\right]\,u^{i}(0) \tag{10}\] \[+\frac{a\,\Omega}{q_{-}q_{+}q_{0}^{2}}\left[q_{-}\sinh{(q_{+}\tau )}-q_{+}\sinh{(q_{-}\tau)}\right]\,b^{i}(0)\] \[+\frac{\Omega}{a\,\lambda\,q_{0}^{2}}\left[q_{+}^{2}\left(\cosh{( q_{-}\tau)}-1\right)-q_{-}^{2}\left(\cosh{(q_{+}\tau)}-1\right)\right]\,d^{i}(0)\] \[+\frac{1}{2aq_{0}^{2}}\left[(a^{2}+\lambda^{2}+\Omega^{2}+q_{0}^ {2})\left\{\cosh{(q_{+}\tau)}-1\right\}-(a^{2}+\lambda^{2}+\Omega^{2}-q_{0}^{ 2})\left\{\cosh{(q_{-}\tau)}-1\right\}\right]\,n^{i}(0)\] Here, \(q_{0}:=\left[(a^{2}-\Omega^{2}-\lambda^{2})^{2}+4a^{2}\lambda^{2}\right]^{1/4}\), \(q_{\pm}:==(1/\sqrt{2})\sqrt{(a^{2}-\Omega^{2}-\lambda^{2})\pm q_{0}^{2}}\). To find the relation between proper time of the trajectory and the geodesic distance, use \(\Delta\tau_{\rm geod}^{2}=\eta_{ab}x^{a}(\Delta\tau_{\rm traj})x^{b}(\Delta \tau_{\rm traj})\), noting that \(u^{i}\), \(n^{i}\), \(b^{i}\), and \(d^{i}\) are unit vectors and orthogonal to each other. Then the geodesic distance is, \[\Delta\tau_{\rm geod}^{2}=\frac{4}{q_{0}^{2}}\left[\left(1+\frac{(\Omega^{2}+ \lambda^{2})}{q_{+}^{2}}\right)\sinh^{2}\,\left(\frac{q_{+}\Delta\tau_{\rm traj }}{2}\right)-\left(1+\frac{(\Omega^{2}+\lambda^{2})}{q_{-}^{2}}\right)\sinh^{2 }\,\left(\frac{q_{-}\Delta\tau_{\rm traj}}{2}\right)\right] \tag{11}\] The above equation is valid for any general stationary motion in Minkowski spacetime. Special cases, such as hyperbolic motion [23], is easily obtained by putting torsion and hypertorsion to zero. Other trajectories such as circular motion can be obtained in a similar manner. The different stationary trajectories are as given below: ### Uniform linear acceleration For uniform linear acceleration or hyperbolic motion, the torsion and hypertorsion will be zero, hence the motion will be in the \(u^{i}-n^{i}\) plane. The constants \(q_{0},\,q_{-},\,q_{+}\) will reduce to \(a\), \(0\), \(\sqrt{2}a\) respectively. The trajectory will be given by, \[x^{i}(\tau) = \frac{\sinh{(a\tau)}}{a}u^{i}(0)+\frac{[\cosh{(a\tau)}-1]}{a}n^{i} (0) \tag{12}\] The equation for the geodesic distance from the above trajectory implies to the well know result, \[\Delta\tau_{\rm geod}^{2}=\frac{4}{a^{2}}\sinh^{2}\left(\frac{a}{2}\Delta\tau_ {\rm traj}\right) \tag{13}\] ### Motion with acceleration and torsion When hypertorsion vanishes and only acceleration and torsion are present, the trajectory will differ according to the values of \(a\) and \(\Omega\). The general motion with only \(a\) and \(\Omega\) motion can be expressed using the unit vectors, \(u^{i}(0)\), \(n^{i}(0)\), and \(b^{i}(0)\). The constants \(q_{0}\), \(q_{-}\), and \(q_{+}\) to \(\sqrt{a^{2}-\Omega^{2}}\), \(0\), and \(\sqrt{a^{2}-\Omega^{2}}\) respectively. Applying this to Eq. (11) gives the relation between geodesic distance and proper time. In the case of stationary motion with no hypertorsion, the trajectories are classified into 3 types depending on the magnitudes of acceleration and torsion. a) _Circular motion_: When \(|\Omega|>|a|\) the motion is bounded(circular). The spatial projection of this motion is circular motion. b) _Cusped motion_: For equal magnitudes of acceleration and torsion, the spatial projection of the motion is a cusp. 
c) _Catenary motion_: The spatial projection is a catenary when the magnitude of the acceleration is greater than that of the torsion. The motion is unbounded, and the equations for the trajectory and geodesic distance can be obtained by putting \(\lambda=0\) in Eq. (10) and Eq. (11), respectively.

### Motion with acceleration, torsion and hypertorsion

The equation for the trajectory is given in Eq. (10) and the geodesic distance in Eq. (11). The expression for the geodesic distance of this general stationary motion is similar to that for stationary motion with acceleration and torsion in maximally symmetric spacetimes. The spatial projection of this motion is a helix.
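As a simple sanity check of these special cases, the sketch below (ours) evaluates the general flat-space relation Eq. (11) and confirms that it reproduces the hyperbolic-motion result Eq. (13) as \(\Omega,\lambda\to 0\); the limit is taken with small nonzero values because \(q_{-}\to 0\) turns the corresponding term of Eq. (11) into a 0/0 limit:

```python
import numpy as np

def q_consts_flat(a, Om, lam):
    # q0^2 and q_pm as defined below Eq. (10) of this appendix
    q0sq = np.sqrt((a**2 - Om**2 - lam**2)**2 + 4 * a**2 * lam**2)
    qp = np.sqrt(((a**2 - Om**2 - lam**2) + q0sq) / 2 + 0j)
    qm = np.sqrt(((a**2 - Om**2 - lam**2) - q0sq) / 2 + 0j)
    return q0sq, qp, qm

def dtau_geod_sq_flat(dtau, a, Om, lam):
    # General flat-space relation, Eq. (11)
    q0sq, qp, qm = q_consts_flat(a, Om, lam)
    term = lambda q: (1 + (Om**2 + lam**2) / q**2) * np.sinh(q * dtau / 2)**2
    return np.real(4 / q0sq * (term(qp) - term(qm)))

a, dtau = 1.3, 2.0
hyperbolic = 4 / a**2 * np.sinh(a * dtau / 2)**2            # Eq. (13)
general = dtau_geod_sq_flat(dtau, a, Om=1e-6, lam=1e-6)     # Omega, lambda -> 0
print(hyperbolic, general)                                   # agree to high precision
```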
2309.14970
Recurrent Hypernetworks are Surprisingly Strong in Meta-RL
Deep reinforcement learning (RL) is notoriously impractical to deploy due to sample inefficiency. Meta-RL directly addresses this sample inefficiency by learning to perform few-shot learning when a distribution of related tasks is available for meta-training. While many specialized meta-RL methods have been proposed, recent work suggests that end-to-end learning in conjunction with an off-the-shelf sequential model, such as a recurrent network, is a surprisingly strong baseline. However, such claims have been controversial due to limited supporting evidence, particularly in the face of prior work establishing precisely the opposite. In this paper, we conduct an empirical investigation. While we likewise find that a recurrent network can achieve strong performance, we demonstrate that the use of hypernetworks is crucial to maximizing their potential. Surprisingly, when combined with hypernetworks, the recurrent baselines that are far simpler than existing specialized methods actually achieve the strongest performance of all methods evaluated. We provide code at https://github.com/jacooba/hyper.
Jacob Beck, Risto Vuorio, Zheng Xiong, Shimon Whiteson
2023-09-26T14:42:28Z
http://arxiv.org/abs/2309.14970v4
# Recurrent Hypernetworks are Surprisingly Strong in Meta-RL ###### Abstract Deep reinforcement learning (RL) is notoriously impractical to deploy due to sample inefficiency. Meta-RL directly addresses this sample inefficiency by learning to perform few-shot learning when a distribution of related tasks is available for meta-training. While many specialized meta-RL methods have been proposed, recent work suggests that end-to-end learning in conjunction with an off-the-shelf sequential model, such as a recurrent network, is a surprisingly strong baseline. However, such claims have been controversial due to limited supporting evidence, particularly in the face of prior work establishing precisely the opposite. In this paper, we conduct an empirical investigation. While we likewise find that a recurrent network can achieve strong performance, we demonstrate that the use of hypernetworks is crucial to maximizing their potential. Surprisingly, when combined with hypernetworks, the recurrent baselines that are far simpler than existing specialized methods actually achieve the strongest performance of all methods evaluated. ## 1 Introduction Meta-reinforcement learning (Beck et al., 2023) uses sample-inefficient reinforcement learning (RL) to learn a sample-efficient reinforcement learning algorithm. The sample-efficient algorithm maps the data an agent has gathered so far to a policy based on that experience. To this end, any sequential model such as a recurrent neural network (RNN), can be deployed to learn this mapping end-to-end (Duan et al., 2016; Wang et al., 2016). Such methods are also called _black-box_(Beck et al., 2023). Alternatively, much prior work has focused on a category of _task-inference_ methods that are specialized for meta-RL. A meta-RL algorithm learns to reinforcement learn over a distribution of MDPs, or _tasks_. By explicitly learning to infer the task, many methods have shown improved performance relative to the recurrent baseline Humplik et al. (2019); Zintgraf et al. (2020); Kamienny et al. (2020); Liu et al. (2021); Beck et al. (2022). Recent work has shown the simpler recurrent methods to be a competitive baseline relative to task-inference methods (Ni et al., 2022). However, such claims are contentious, as the supporting experiments compare only to one task-inference method designed for meta-RL, the experiments provide additional compute to the recurrent baseline, and the results still show similar or inferior performance to more complicated methods on the majority of difficult domains. In particular, they consider two toy domains and four challenging domains, with RNNs significantly outperformed on two of the four challenging domains, and superior to the single task-inference baseline on only one. In this paper, we conduct a far more extensive empirical investigation with stronger and carefully designed baselines in meta-RL specifically. In addition, we afford equal computation in terms of number of samples for hyper-parameter tuning to all existing baselines. We present the key insight that the use of a hypernetwork architecture (Ha et al., 2017) is crucial to maximizing the potential of recurrent networks. For an illustration of the potential magnitude of improvement, see Figure 1. While the use of a hypernetwork with RNNs is not a novel idea, they have never been evaluated in meta-RL beyond a single environment, let alone shown to outperform contemporary task-inference methods (Beck et al., 2022). 
We additionally provide preliminary evidence that the robust performance hypernetworks achieve such is in part due to how they condition on the current state and history. Finally, our results establish recurrent hypernetworks as an exceedingly strong method on meta-RL benchmarks that is also far simpler than alternatives, providing significant ease of use for practitioners in meta-RL. ## 2 Related Work Recurrent Meta-RL.Many meta-RL methods structure the learned RL algorithm as a black box using a neural network as a general purpose sequence model (Duan et al., 2016; Wang et al., 2016; Mishra et al., 2018; Fortunato et al., 2019; Ritter et al., 2021; Wang et al., 2021; Ni et al., 2022). While any sequence model can be used, often the model is structured as an RNN (Duan et al., 2016; Wang et al., 2016; Ni et al., 2022). Such models (Duan et al., 2016; Wang et al., 2016) are commonly used as simple meta-RL baselines. One study has shown RNNs to be a competitive baseline in meta-RL (Ni et al., 2022); however, the scope of the study was broader than meta-RL and the evidence specific to meta-RL is inconclusive. First, the study evaluates only a single specialized meta-RL method (Zintgraf et al., 2020), which was, but is not currently, state-of-the-art (Beck et al., 2022). Second, the experiments use results or hyperparameters from the original papers, while affording extra computation to tune the RNNs on each benchmark, including dimensions that were not tuned for the other baselines. This computation includes tuning architecture choices, the context length, the RL algorithm used, and the inputs (Ni et al., 2022). And third, the study does not show particularly strong performance of recurrent methods relative to the chosen specialized baseline. On the MuJoCo domains, the recurrent baseline outperforms the specialized method on only one of these four domains, performs similarly on another, and is significantly outperformed on the other remaining two (Ni et al., 2022). In contrast, our work compares against four specialized baselines; affords equal computation to all methods, defaulting to hyper-parameters that favor existing task-inference methods for parameters that are not tuned; and still establishes recurrent hypernetworks as the strongest method evaluated. Task Inference Meta-RL.In addition to recurrent meta-RL methods, task-inference methods (Humplik et al., 2019; Zintgraf et al., 2020; Kamienny et al., 2020; Liu et al., 2021; Beck et al., 2022) and policy-gradient methods (Yoon et al., 2018; Finn et al., 2017; Vuorio et al., 2019; Zintgraf et al., 2019) constitute a significant bulk of existing work. We exclude the latter methods from Figure 1: In some environments, recurrent neural networks fail to learn meta-RL tasks, whereas recurrent hypernetworks achieve strong performance. comparison since the estimation of a policy gradient in policy-gradient approaches requires more data than in our benchmarks (Zintgraf et al., 2019; Beck et al., 2023). Task inference methods are a strong baseline for our benchmark, but are generally more complicated than recurrent meta-RL methods. For example, such methods typically add a task inference objective (Humplik et al., 2019), and may also add a variational inference component (Zintgraf et al., 2021), or pre-training of embeddings with privileged information (Liu et al., 2021). In this paper, we ablate each of these components to create the strongest task-inference baselines possible. 
In the end, we find the more complicated task inference methods are still inferior to the recurrent baseline with hypernetworks. Hypernetworks.A hypernetwork (Ha et al., 2017) is a neural networks that produces the parameters (weights and biases) for another neural network, called the base network. Hypernetworks have been used in supervised learning (SL) (Ha et al., 2017; Chang et al., 2020), Meta-SL (Rusu et al., 2019; Munkhdalai and Yu, 2017; Przewiezlikowski et al., 2022), and meta-RL (Beck et al., 2022; Xian et al., 2021; Peng et al., 2021; Sarafian et al., 2021). While these networks are complicated, and can fail to work out-of-the-box, simple initialization methods can be sufficient to enable stable learning (Beck et al., 2022; Chang et al., 2020). In meta-RL, only Beck et al. (2022) have investigated training a hypernetwork end-to-end to arbitrarily modify the weights of a policy. This study suggests that hypernetworks are particularly useful in preventing interference between different tasks and enable greater returns as the the number of parameters increases. However, their study shows a task-inference method to be superior, and the recurrent hypernetwork is evaluated only on a single task with results that are statistically insignificant. Recurrent hypernetworks have never been widely evaluated in meta-RL, let alone shown to outperform contemporary task-inference methods. ## 3 Methods ### Problem Setting A task in RL is formalized as a Markov Decision Processes (MDP), defined as a tuple of \((\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{P},\gamma)\). At each time-step \(t\), the agent inhabits a state, \(s_{t}\in\mathcal{S}\), which it can observe. The agent then performs an action \(a_{t}\in\mathcal{A}\). The MDP subsequently transitions to the next state \(s_{t+1}\sim\mathcal{P}(s_{t+1}|s_{t},a_{t})\colon\mathcal{S}\times\mathcal{A} \times\mathcal{S}\rightarrow\mathbb{R}_{+}\), and the agent receives reward \(r_{t}=\mathcal{R}(s_{t},a_{t})\colon\mathcal{S}\times\mathcal{A}\rightarrow \mathbb{R}\) upon entering \(s_{t+1}\). The agent acts to maximize the expected future discounted reward, \(R(\tau)=\sum_{r_{t}\in\tau}\gamma^{t}r_{t}\), where \(\tau\) denotes the agent's trajectory throughout an episode in the MDP, and \(\gamma\in[0,1)\) is a discount factor. The agent takes actions sampled from a learned policy, \(\pi(a|s):\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}_{+}\). Meta-RL algorithms learn an RL algorithm, \(f(\tau)\), that maps from the data, \(\tau\), sampled from a single MDP, \(\mathcal{M}\sim p(\mathcal{M})\), to policy parameters \(\phi\). As in a single RL task, \(\tau\) is a sequence up to time-step \(t\) forming a trajectory \(\tau_{t}\in(\mathcal{S}\times\mathcal{A}\times\mathbb{R})^{t}\). Here, however, \(\tau\) may span multiple episodes within a single MDP, since multiple episodes of interaction may be necessary to produce a reasonable policy. We use the same symbol, \(\tau\), but refer to it as a _meta-episode_. The policy, \(\pi_{\theta}(a|\phi=f_{\theta}(\tau))\), is parameterized by \(\phi\). \(f\) is itself parameterized by \(\theta\), which are referred to as the _meta-parameters_. Figure 2: The standard RNN policy (a) and the RNN policy with a hypernetwork (b). 
The objective in meta-RL is to find meta-parameters \(\theta\) that maximize the sum of the returns in the meta-episode across a distribution of tasks (MDPs): \[\operatorname*{arg\,max}_{\theta}\mathbb{E}_{\mathcal{M}\sim p(\mathcal{M})} \bigg{[}\mathbb{E}_{\tau}\bigg{[}R(\tau)\bigg{|}\pi_{\theta}(\cdot|f_{\theta}( \tau)),\mathcal{M}\bigg{]}\bigg{]}. \tag{1}\] \(f_{\theta}\) is referred to as the _inner-loop_, which produces \(\phi\), in contrast to the _outer-loop_, which produces \(\theta\). ### Recurrent Methods Recurrent methods are perhaps the simplest and most common meta-RL baseline. Recurrent methods use an RNN to encode history and train all meta-parameters end-to-end on Equation 1. These methods are depicted in Figure 2. While neither recurrent networks (RNN below), nor the the combination of a hypernetwork with recurrent networks (RNN+HN below) is a novel, the combination has never been widely evaluated in meta-RL (Beck et al., 2022), but we will show that the combination actually achieves the strongest results. Rnn.Our first recurrent baseline is the simplest and is equivalent to RL2 (Duan et al., 2016) and L2RL (Wang et al., 2016). In this case \(\pi_{\theta}(a|\phi=f_{\theta}(\tau))\), where \(f\) is a recurrent network, \(\pi\) is a feed-forward network, and \(f\) and \(\pi\) each use distinct subsets of the meta-parameters, \(\theta\). Rnn+HN.Our second recurrent model is the recurrent hypernet and achieves the strongest results. Here, the recurrent network produces the weights and biases for the policy directly: \(\pi_{\phi}(a|s)\). The state must be passed as input again to this policy for the feed-forward policy to condition on an input, and we follow the initialization method for hypernetworks, Bias-HyperInit, suggested by Beck et al. (2022). In this initialization, the hypernetwork's final linear layer is initialized with a zero weight matrix and a non-zero bias, so that the hypernetwork produces the same base-network parameters for any trajectory at the start of training. ### Task-Inference Methods Task-inference methods (Beck et al., 2023) constitute the main category of meta-RL methods capable of adaptation as quickly as recurrent (i.e., black-box) methods (Humplik et al., 2019; Zintgraf et al., 2020; Kamienny et al., 2020; Liu et al., 2021; Beck et al., 2022). These methods train the inner-loop not end-to-end but rather to identify the task, within the given task distribution. One perspective on these methods is that they attempt to shift the problem from the more difficult meta-RL setting to the easier multi-task setting by learning to explicitly infer the task (Beck et al., 2023). Here we define relevant task-inference methods used as baselines (Figures 3 and 4). For additional ablations, summary, and details on the method selection process, see the appendix. TI Naive.The inner-loop of a meta-RL method must take in the trajectory, \(\tau\), and produce the policy parameters, \(\phi\). In task inference methods, the inner-loop additionally produces an estimate Figure 3: Task-inference baselines. Naive task (a), additional multi-task pre-training (b). of the current task, \(\hat{c}_{\mathcal{M}}\), given a known task representation, \(c_{\mathcal{M}}\). While it is possible to represent \(\phi\) as \(\hat{c}_{\mathcal{M}}\) directly, i.e. 
pass \(\hat{c}_{\mathcal{M}}\) to the policy, it is common to compute \(\phi\) from an intermediate layer of the network that predicts \(\hat{c}_{\mathcal{M}}\), which contains more information about the belief state (Humplik et al., 2019; Zintgraf et al., 2020). It is also common to use an information bottleneck to remove information not useful for task inference from the trajectory (Humplik et al., 2019; Zintgraf et al., 2020). Following Zintgraf et al. (2020), we condition \(\phi\) on the mean and variance of the bottleneck layer in order to explicitly condition the policy on task uncertainty for more efficient exploration (Figure 3). Putting these together, we can write a task inference method as follows: \[\mu =P^{\mu}(RNN(\tau))\] \[\sigma =P^{\sigma}(RNN(\tau))\] \[IB =\mathcal{N}(z;\mu,\sigma)\] \[\hat{c}_{\mathcal{M}} =P^{c}(z\sim IB)\] \[\phi =ReLU(P^{\phi}\!\perp\!(\mu,\sigma))\] \[J_{infer}(\theta) =\mathbb{E}_{\mathcal{M}}[\mathbb{E}_{\tau|\pi}[-||c_{\mathcal{M }}-\hat{c}_{\mathcal{M}}||_{2}^{2}]]\] \[J_{prior}(\theta) =\mathbb{E}_{\mathcal{M}}[\mathbb{E}_{\tau|\pi}[D(IB||\mathcal{N} (z;\mu=0,\sigma=I))]],\] where \(P^{c}\), \(P^{\phi}\), \(P_{\mu}\), and \(P_{\sigma}\) are all linear projections, \(\perp\) represents a stop-gradient, \(P^{\phi}\!\perp\!(\mu,\sigma)\) is the matrix multiplication of \(P^{\phi}\) with stop-gradient of a concatenation of \((\mu,\sigma)\), and \(D\) is the KL-divergence. Here, \(J_{infer}+J_{prior}\) constitutes the evidence lower bound from Zintgraf et al. (2020) and is used to train \(IB\), whereas all other parameters (\(P^{\phi}\) and \(\pi(\cdot|\phi)\)) are trained via Equation 1. TI.As presented, the TI Naive baseline may suffer from a known issue where the given task representation contains too little or too much information (Beck et al., 2023). When too little information is present, the policy may miss information crucial to the task. When too much information is present, the policy may be presented with the difficult problem of separating the useful information from irrelevant task features. Towards this end, it is possible to pre-train the task representation end-to-end using an additional policy (Humplik et al., 2019; Kamienny et al., 2020; Liu et al., 2021). Our TI baseline (Figure 3) is the same as TI Naive, except that an additional multi-task policy, \(\pi^{\prime}\), is pre-trained to learn a representation of the task, \(g_{\theta}(c_{\mathcal{M}})\). Given a linear projection, \(P^{g}\): \[\hat{g} =P^{g}(z\sim IB)\] \[J_{multi}(\theta) =\mathbb{E}_{\mathcal{M}\sim p(\mathcal{M})}[\mathbb{E}_{\tau}[ R(\tau)|\pi^{\prime}_{\theta}(\cdot|g_{\theta}(c_{\mathcal{M}})),\mathcal{M}]\] \[J_{infer}(\theta) =\mathbb{E}_{\mathcal{M}}[\mathbb{E}_{\tau|\pi(\cdot|\phi)}[-|| g_{\theta}(c_{\mathcal{M}})-\hat{g}||_{2}^{2}]].\] For a fair comparison, training of the multi-task policy, \(\pi^{\prime}\), occurs at the expense of training the meta-learned policy \(\pi\), with the total number of samples remaining constant. Instead of fully training the multi-task policy, we experiment with different amounts of pre-training in the appendix, finding significant benefits already from less than 5% of total training allocation for the pre-training. TI++HN.The TI++HN baseline is the same as TI, with three additions that we found to strengthen task inference (Figure 4). 
The first two additions (++) are novel and are 1) initializing the parameters of the meta-policy, \(\pi\), to that of the pre-trained multi-task policy, \(\pi^{\prime}\), to encourage transfer and 2) Figure 4: Task-inference baselines. Task inference with additional parameter reuse and hypernetwork (a), an existing contemporary task-inference algorithm (Beck et al., 2022) (b). training of the task-inference (\(J_{infer}\)) over trajectories from the initial multi-task training phase in addition to the meta-learning phase, since the former tend to be more informative and simply provide extra data. The third addition (HN) uses a hypernetwork to condition the policy on \(\phi\)(Beck et al., 2022). We write this as \(\pi_{\phi}(\cdot|s)\) to show that \(\phi\) represents the weights and biases of \(\pi\), just as in the recurrent baseline. The output of the hypernetwork is \(\phi\), and the input to the hypernetwork is a the projection of \(\mu\) and \(\sigma\), \(\phi=h(ReLU(P^{\phi}\bot(\mu,\sigma)))\). When using a hypernetwork with the first two additions (++), the parameters of the hypernetwork for the meta-learned policy are initialized to the parameters of hypernetwork for the multi-task policy, instead of sharing policy parameters directly. VI+HN.While task-inference methods rely on known task representations, it is also possible to design methods that can infer the MDP more directly. This can be done by inferring transitions and rewards in full trajectories, since the transition function and reward function collectively define the MDP. In particular, such a method, called VariBAD, is proposed by Zintgraf et al. (2020), and extended with the use of hypernetworks by (Beck et al., 2022). Here, we call this method VI+HN, and it is a contemporary task-inference method (Figure 4). Precisely, this model reconstructs full trajectories including future transitions for meta-episodes, \(\tau_{0:T}\), instead of task embeddings, from \(\tau\), the current trajectory: \[J_{infer}(\theta)=\mathbb{E}_{\mathcal{M}}[\mathbb{E}_{\tau|\pi(\cdot|\phi)}[ -||\tau_{0:T}-\hat{\tau}_{0:T}||_{2}^{2}]].\] See the appendix for ablations with VI alone. ## 4 Experiments In this section we compare recurrent hypernetworks (RNN+HN) to task inference baselines. We evaluate over three simple navigation domains (Zintgraf et al., 2020; Humplik et al., 2019; Rakelly et al., 2019), designed to test learning of exploration and memory, in addition to four more difficult tasks using MuJoCo (Todorov et al., 2012), and one task testing long-term memory from visual observations in MineCraft (Beck et al., 2020). Results show meta-episode return, optimized over five learning rates and averaged over three seeds (four in MineCraft), with a 68% confidence interval using bootstrapping. Additional details and results on Meta-World (Yu et al., 2020) are available in the appendix. Our experiments demonstrate that while our task inference methods are strong baselines, RNN+HN is able to outperform them and achieve the highest returns. ### Grid-Worlds Navigation tasks are a common benchmark in meta-RL (Zintgraf et al., 2020; Humplik et al., 2019; Rakelly et al., 2019). Here we evaluate on the grid-world variant from Zintgraf et al. (2020), in addition to two of our own variants. The first environment, Grid-World, consists of a five by five grid with a goal location in one of the grid cells. The agent starts in the bottom left corner of the grid, and then must navigate to a goal location, which is held constant throughout the meta-episode. 
(Details are in the appendix.) This environment is useful for testing how well a meta-learning algorithm learns to efficiently explore in the gridworld as it searches for the goal. Additionally, our Grid-World Show environment was designed to be relatively harder for end-to-end methods, in order to provide a challenge for our proposed method. In this environment, the goal position is visible to the agent Figure 5: Evaluation on grid-world benchmarks. RNN+HN and TI++HN improve return. RNN+HN achieves the greatest asymptotic return and sample efficiency. at the first timestep of each episode. Task inference methods will directly encourage the storage of this information in memory, whereas the end-to-end recurrent methods must learn to store this information through its effect on the policy. In contrast, our Grid-World Dense environment provides dense rewards and may be easier for end-to-end methods. In this environment, the agent receives and observes a reward equal to the Manhattan distance to the goal location. Instead of inferring the task explicitly, the agent can simply move up or to the right until the reward stops increasing. Surprisingly, on all three grid-worlds, RNN+HN achieves both the greatest asymptotic return and greatest sample efficiency (Figure 5). The recurrent hypernetwork achieves the fastest learning on Gridworld Show, despite the environment being specifically designed to be harder for end-to-end methods. TI++HN dominates all other task-inference baselines on these grid-worlds, suggesting that it is a relatively strong task-inference method. Collectively, these grid-worlds demonstrate that end-to-end learning with hypernetworks can learn to store the task in memory and to explore optimally with this information directly from return, just as well as task-inference methods. ### MuJoCo Here we evaluate baselines on more challenging domains. We evaluate on all four MuJoCo variants proposed by Zintgraf et al. (2020), which is known to be a common and more challenging meta-RL benchmark involving distributions of tasks requiring legged locomotion (Zintgraf et al., 2020; Humplik et al., 2019; Rakelly et al., 2019; Beck et al., 2023). Ant-Dir and Cheetah-Dir both involve non-parametric task variation, whereas Cheetah-Vel and Walker include parametric variation of the target velocity and physics coefficients respectively. For environment details, see the appendix. We expect Walker in particular to be difficult for end-to-end methods since it has the largest space of tasks with the dynamics defined by 65 different parameters. Assuming the values of these parameters are important for the optimal policy, task inference methods may learn the optimal policy faster. On Cheetah-Dir and Ant-Dir, RNN+HN achieves greater returns than all other baselines by a wide margin (Figure 6). On Cheetah-Vel, all methods achieve fairly similar results, with RNN+HN Figure 6: Models evaluated on MuJoCo benchmark. RNN+HN matches returns on Walker and Cheetah-Vel, and exceeds returns on Cheetah-Dir and Ant-Dir. still achieving the greatest asymptotic return by a small margin. As expected, RNN+HN does not outperform task inference on Walker. For Walker, only TI outperforms RNN+HN in terms of efficiency, and only TI Naive outperforms RNN+HN in terms of asymptotic return; however, the effect size is small and both TI and TI Naive have among the worst performance on Cheetah-Dir and the grid-worlds. Still, RNN+HN achieves similar performance on Walker, which is notable in a high dimensional task space. 
And, RNN+HN achieves greater returns overall. ### MineCraft We additionally evaluate on the MC-LS environment from Beck et al. (2020), designed to test long-term memory from visual observations in MineCraft. Here, the agent navigates through a series of 16 rooms. In each room, the agent navigates left or right around a column, depending on whether the column is made of diamond or iron. Discrete actions allow for a finite set of observations. Correct behavior can be computed from the observation and receives a reward of 0.1. At the end, the agent moves right or left depending on a signal (red or green) that defines the task and is shown before the first room. Correct and incorrect behavior receives a reward of 4 and -3, respectively. We allow the agent to adapt over two consecutive episodes, forming a single meta-episode. On MC-LS we compare RNN+HN to VI+HN alone, given a limited compute budget and since VI+HN is an established contemporary task-inference baseline Beck et al. (2022). Additionally, we add an extra seed (four in total) and a linear learning rate decay due to high variance in the environment. In Figure 7, we see that RNN+HN significantly outperforms VI+HN. While VI+HN learns to navigate through all rooms, it does not reliably learn the correct long-term memory behavior. In contrast, RNN+HN is able to adapt reliably within two episodes, and one seed even learns to adapt reliably within a single episode. While further work is needed to learn the optimal policy, these experiments demonstrate that RNN+HN outperforms VI+HN, even on more challenging domains. ## 5 Discussion Here we investigate why the recurrent hypernetworks have such robust performance on meta-RL benchmarks. First, we observe that the state processing differs between the RNN and RNN+HN Figure 8: The RNN+S policy (a) and the RNN policy with a hypernetwork (b). Figure 7: RNN+HN outperforms VI+HN on MC-LS (MineCraft) environment. baselines. In particular, RNN conditions on the current state only through its dependence on \(\phi\), whereas hypernetworks pass in the state again to the policy. Thus, we investigate whether the difference in inputs alone could be the cause of the improvement in performance. To this end, we introduce a new ablation to test the effect of just passing in the state again. Details are below. Second, we inspect how sensitivity to latent variables encoding the trajectory affects performance. RNN+S.Hypernetworks condition on the current state both through \(\phi\), which contains information about trajectory, including the current state, and by directly conditioning on the state. Since the hypernetwork conditions on state twice, we test to see the effect of conditioning on the state twice without hypernetworks. We call this ablation RNN+S, which we write as \(\pi_{\theta}(a|s,\phi)\) (Figure 8). In an empirical evaluation, we see that while RNN+S does perform favorably relative to RNN alone, RNN+HN still outperforms RNN+S (Figures 9 and 10). In particular, RNN+HN achieves similar returns to RNN+S on Ant-Dir and Cheetah-Vel, and outperforms RNN+S on all other environments, in terms of asymptotic return and sample efficiency. Taken together, we see that the advantage of RNN+HN comes both from the ability to re-condition on state directly and from the hypernetwork architecture. These results confirm that to achieve the strongest performance, re-conditioning on state directly is not sufficient, and that the hypernetwork architecture itself is still critical. 
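To make the architectural distinction between RNN, RNN+S, and RNN+HN concrete, the sketch below gives one possible PyTorch implementation of the three heads. It is a schematic illustration, not the released code (available at https://github.com/jacooba/hyper): all module names and sizes, the way the (s, a, r) history is embedded, and the particular instantiation of Bias-HyperInit are our own simplifications.

```python
# Schematic sketch of the three recurrent variants; names and sizes are illustrative.
import torch
import torch.nn as nn


class RecurrentPolicy(nn.Module):
    """RNN, RNN+S, and RNN+HN heads on top of a shared GRU over the (s, a, r) history."""

    def __init__(self, state_dim, action_dim, variant="rnn+hn", hidden=128, base_hidden=64):
        super().__init__()
        self.variant = variant
        self.encoder = nn.GRU(state_dim + action_dim + 1, hidden, batch_first=True)

        if variant == "rnn":            # pi_theta(a | phi),       phi = GRU output
            self.head = nn.Sequential(nn.Linear(hidden, base_hidden), nn.ReLU(),
                                      nn.Linear(base_hidden, action_dim))
        elif variant == "rnn+s":        # pi_theta(a | s, phi)
            self.head = nn.Sequential(nn.Linear(hidden + state_dim, base_hidden), nn.ReLU(),
                                      nn.Linear(base_hidden, action_dim))
        elif variant == "rnn+hn":       # pi_phi(a | s),  phi = base-network weights and biases
            self.n_w1, self.n_b1 = state_dim * base_hidden, base_hidden
            self.n_w2, self.n_b2 = base_hidden * action_dim, action_dim
            self.hyper = nn.Linear(hidden, self.n_w1 + self.n_b1 + self.n_w2 + self.n_b2)
            # One simple instantiation of Bias-HyperInit: zero the final weight matrix of the
            # hypernetwork and keep a non-zero bias, so that every trajectory initially maps
            # to the same base-network parameters.
            nn.init.zeros_(self.hyper.weight)
            nn.init.normal_(self.hyper.bias, std=0.1)
        else:
            raise ValueError(variant)
        self.state_dim, self.action_dim, self.base_hidden = state_dim, action_dim, base_hidden

    def forward(self, states, actions, rewards):
        # states: [B, T, S], actions: [B, T, A], rewards: [B, T, 1]
        phi, _ = self.encoder(torch.cat([states, actions, rewards], dim=-1))   # [B, T, H]
        s = states
        if self.variant == "rnn":
            return self.head(phi)
        if self.variant == "rnn+s":
            return self.head(torch.cat([phi, s], dim=-1))
        # rnn+hn: slice the generated parameter vector into the base-network layers
        p = self.hyper(phi)                                                    # [B, T, n_params]
        i = 0
        w1 = p[..., i:i + self.n_w1].reshape(*p.shape[:-1], self.state_dim, self.base_hidden); i += self.n_w1
        b1 = p[..., i:i + self.n_b1]; i += self.n_b1
        w2 = p[..., i:i + self.n_w2].reshape(*p.shape[:-1], self.base_hidden, self.action_dim); i += self.n_w2
        b2 = p[..., i:i + self.n_b2]
        h = torch.relu(torch.einsum("bts,btsh->bth", s, w1) + b1)              # per-step base MLP
        return torch.einsum("bth,btha->bta", h, w2) + b2                       # action logits


# Example: a batch of 4 meta-episodes of length 10 in a toy task
B, T, S, A = 4, 10, 3, 2
pi = RecurrentPolicy(S, A, variant="rnn+hn")
logits = pi(torch.randn(B, T, S), torch.randn(B, T, A), torch.randn(B, T, 1))
print(logits.shape)   # torch.Size([4, 10, 2])
```

The only difference between the variants is how the policy conditions on the current state: through the GRU output alone (RNN), through concatenation of the state with the GRU output (RNN+S), or through base-network parameters generated by a hypernetwork together with a direct state input (RNN+HN).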
Figure 9: RNN+HN matches or exceeds RNN+S and RNN, but RNN+S is also strong on grid-worlds. Figure 10: RNN+HN matches or exceeds RNN+S on MuJoCo tasks. RNN alone is a weak baseline. Latent Gradients. We also investigate how the hypernetworks condition on the trajectory. In particular, we investigate the sensitivity of the output of the network to an intermediate latent representation of the trajectory. For this purpose, we chose to measure the gradient norm of the first hidden layer of the hypernetwork on the Walker environment. We perform this investigation both with the initialization method designed for hypernetworks that we used throughout our experiments, Bias-HyperInit [Beck et al., 2022], and with Kaiming initialization [He et al., 2015], which is not designed for hypernetworks. We add Kaiming initialization since Bias-HyperInit ignores trajectories at the start of training [Beck et al., 2022]. First, we confirm the finding of Beck et al. [2022] that Bias-HyperInit is crucial for performance (Figure 11). Second, we see that the two models that perform worst, RNN and RNN+HN Kaiming, also have the greatest norm. Moreover, we find that both RNN+HN and RNN+S start with a low gradient norm and then further decrease this norm throughout training, whereas the RNN model increases this norm. We hypothesize that a low norm, i.e., low sensitivity to the latent variable, is crucial for stable training and that the RNN model increases this norm to remain sensitive to the state, since the state is only encoded in the latent for this model. ## 6 Limitations As an empirical study of meta-RL, we cannot guarantee that recurrent hypernetworks will improve over every baseline nor on every environment. However, we mitigate this issue by comparing to many baselines and performing many ablations. In particular, we compare to a contemporary task-inference method (VI+HN), design our own baseline which we show to be stronger than others on all grid-worlds (TI++HN), and also include standard methods (TI and TI Naive), in addition to further ablations in the appendix. In as much as an empirical study can, we believe our study demonstrates a significant improvement of the RNN+HN method over existing baselines. ## 7 Conclusion In this paper, we establish recurrent hypernetworks as a surprisingly strong method in meta-RL. While much effort has gone into designing specialized task-inference methods for meta-RL, we present the surprising result that the simpler recurrent methods can be easily adapted to outperform the task-inference methods. By combining recurrent methods with the hypernetwork architecture, we achieve a new strong baseline in meta-RL that is both robust and easy to implement. In comparison to existing evidence, we provide much stronger empirical results, afford equivalent computation for tuning to all baselines, and establish recurrent hypernetworks as a strong method. We additionally show that passing the state variable to the policy is a crucial component of this method. Finally, we presented gradient analysis suggesting that lower latent gradient norms play an important role in the performance of meta-RL methods. Since the gradient analysis is preliminary and investigates state and latent variables in isolation, future work could investigate the interaction between these variables. Future work could also analyze the interaction between hypernetworks and other sequence models, such as transformers.
We hope these insights, along with a simple and robust method, open the way for the broader use of sample-efficient learning in meta-RL and beyond. Figure 11: Returns (a) and Gradient norms on Walker (b). The RNN method must increase this norm to condition on state, whereas others do not. Lower gradient norms seem important to performance. ## Acknowledgments and Disclosure of Funding We would like to thank Luisa Zintgraf for her help with the VariBAD code-base along with general advice and discussion. Jacob Beck is supported by the Oxford-Google DeepMind Doctoral Scholarship. Risto Vuorio is supported by EPSRC Doctoral Training Partnership Scholarship, Department of Computer Science Scholarship, and Scatcherd European Scholarship. Zheng Xiong is supported by UK EPSRC CDT in Autonomous Intelligent Machines and Systems (grant number EP/S024050/1) and AWS.
2309.14964
A dynamic systems approach to harness the potential of social tipping
Social tipping points are promising levers to achieve net-zero greenhouse gas emission targets. They describe how social, political, economic or technological systems can move rapidly into a new state if cascading positive feedback mechanisms are triggered. Analysing the potential of social tipping for rapid decarbonization requires considering the inherent complexity of social systems. Here, we identify that existing scientific literature is inclined to a narrative-based account of social tipping, lacks a broad empirical framework and a multi-systems view. We subsequently outline a dynamic systems approach that entails (i) a systems outlook involving interconnected feedback mechanisms alongside cross-system and cross-scale interactions, and including a socioeconomic and environmental injustice perspective (ii) directed data collection efforts to provide empirical evidence for and monitor social tipping dynamics, (iii) global, integrated, descriptive modelling to project future dynamics and provide ex-ante evidence for interventions. Research on social tipping must be accordingly solidified for climate policy relevance.
Sibel Eker, Charlie Wilson, Niklas Höhne, Mark S. McCaffrey, Irene Monasterolo, Leila Niamir, Caroline Zimm
2023-09-26T14:33:51Z
http://arxiv.org/abs/2309.14964v1
# A dynamic systems approach to harness the potential of social tipping ###### Abstract Social tipping points are promising levers to achieve net-zero greenhouse gas emission targets. They describe how social, political, economic or technological systems can move rapidly into a new state if cascading positive feedback mechanisms are triggered. Analysing the potential of social tipping for rapid decarbonization requires considering the inherent complexity of social systems. Here, we identify that existing scientific literature is inclined to a narrative-based account of social tipping, lacks a broad empirical framework and a multi-systems view. We subsequently outline a dynamic systems approach that entails (i) a systems outlook involving interconnected feedback mechanisms alongside cross-system and cross-scale interactions, and including a socioeconomic and environmental injustice perspective (ii) directed data collection efforts to provide empirical evidence for and monitor social tipping dynamics, (iii) global, integrated, descriptive modelling to project future dynamics and provide ex-ante evidence for interventions. Research on social tipping must be accordingly solidified for climate policy relevance. ## Introduction The urgency for rapid and sustained reductions in greenhouse gas (GHG) emissions has drawn the attention of scientific and policy debates to social tipping points, which can trigger accelerated climate action through cascading effects in societies, institutions, and economic systems once a critical threshold is crossed. Therefore, social (or positive) tipping points[1, 2] have gained wide attention as a high-leverage opportunity to counter-act upon high-risk climate tipping points[3] and to use limited policy resources most efficiently[4]. Social tipping points describe how social, political, economic or technological systems can move rapidly into a new system state or functioning[2]. The term often refers to nonlinear state change, without a clear distinction from similar phenomena, such as regime shift and critical transition[5]. A growing scientific literature, therefore, develops a definition and the theory of social tipping mechanisms, either harnessing an analogy to climate tipping mechanisms[1, 6], or from a sociotechnical transitions perspective[2]. For instance, Milkoreit et al.[7] seeks a common definition by comprehensively surveying literature trends with various keywords related to social tipping. They find a rising publication count from the mid-2000s, dominated by disciplines of socio-ecological systems, climate change, and economics. Their content analysis of tipping point definitions emphasizes positive feedback structures as the core driver of nonlinear transitions between multiple stable states with limited reversibility, as well as multi-scale processes and cascading effects between systems. In addition to alternative stable states, nonlinearity, positive feedbacks, limited reversibility as the four key attributes of tipping points,'social' tipping points are characterized by desirability and intentionality in support of decarbonization and sustainability[5]. Furthermore, social systems involve more complex sets of interacting drivers and mechanisms, and do not have a single control variable [6]. These complexities mean single points or critical thresholds are difficult to isolate in social systems, therefore referring to processes and dynamics instead of the term social tipping point is more applicable[8]. 
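The four key attributes listed above have a simple textbook illustration in the normal form of a fold (saddle-node) bifurcation; the sketch below is a generic mathematical example with arbitrary parameter values, not a model of any specific social system discussed in this paper.

```python
import numpy as np

def equilibrate(control, x0, dt=0.01, steps=5_000):
    """Integrate dx/dt = c + x - x**3, the fold-bifurcation normal form.
    The cubic saturation of a linear positive feedback creates two stable
    states for a range of the control parameter c."""
    x = x0
    for _ in range(steps):
        x += dt * (control + x - x**3)
    return x

# Slowly ramp the control up and then back down: the state tips to the upper
# branch near c ~ +0.38 but does not tip back at the same value on the way
# down (hysteresis, i.e. limited reversibility).
ramp = np.concatenate([np.linspace(-1.0, 1.0, 41), np.linspace(1.0, -1.0, 41)])
x = -1.0
for c in ramp:
    x = equilibrate(c, x0=x)
    print(f"c = {c:+.2f}  equilibrium x = {x:+.2f}")
```

Social systems, as emphasized above, lack a single control variable of this kind, which is one reason the remainder of this paper argues for a broader, multi-system treatment of tipping processes rather than this one-dimensional caricature.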
The existing literature on social tipping dynamics, however, is missing a practical framework that embeds conceptual and empirical aspects of social tipping processes in order to inform tailored decisions, hence often exhibits an overuse of the term tipping point[5]. Therefore, the large potential beneficial impact of social tipping might be jeopardized by a weak analytical understanding due to the current limited and biased methods. Here, we unpack the challenges that impede a strong analytical understanding of social tipping, and then propose a dynamic systems approach to tackle them. This dynamic systems approach aims to address the scientific purpose of a foundational understanding of system dynamics of social tipping, and the instrumental purpose of identifying effective interventions. It integrates (i) a systemic outlook on ST mechanisms that focuses not only on reinforcing but also impeding feedback mechanisms, as well as cascading effects across different subsystems, (ii) longitudinal data requirements and harmonization for empirical evidence and monitoring the effectiveness of interventions (iii) dynamic simulation modelling to explore the collective and cascading behaviour of feedback mechanisms and to create ex-ante evidence for effective ST interventions. In the following sections, we first summarize the current state of knowledge and debate on social tipping dynamics, then delineate the main challenges and outline the dynamic system approach we propose. ## What we know so far about social tipping ### Social systems in which tipping can occur Several social systems can exhibit tipping dynamics. For instance, based on expert elicitation and literature review, Otto _et al._[1] have identified six'social tipping elements', that is, social, political, economic or technological systems in which tipping processes towards rapid decarbonization can occur. These are, as depicted in Figure 1, (i) energy production and storage, where subsidy programs and decentralized production can trigger rapid decarbonization; (ii) financial markets, where divestment from fossil fuels can rapidly reinforce investors' belief in the risks of carbon-intensive assets; (iii) education, where climate change coverage in school curricula can trigger sustained widespread engagement in climate action; (iv) norms and values, where advocacy by a few thought leaders can lead to a large population recognising anti-fossil fuel values; (v) urban infrastructure, where choosing clean technologies can trigger both cost reductions and consumer interest in pro-environmental choices, and (vi) information feedbacks, where disclosure of emission information on consumer products can trigger rapid behavioural change. Sharpe and Lenton[9] discuss the adoption of new technologies such as EVs and solar photovoltaics as specific examples related to energy and urban infrastructure. Farmer et al.[10] add institutional structures, e.g. UK Climate Change Act, since they can shape long-term and consistent climate policies. Taylor and Rising[11] focus on agriculture and demonstrate the presence of an economic positive tipping point beyond which the agricultural land use intensity starts declining. One of the biggest promises of ST dynamics is the cascading effects through interactions between the systems. 
For instance, Otto _et al._[1] argue that more emphasis on climate change in the education system can lead to wider advocacy activities that trigger norm and value shifts while creating a higher sensitivity to carbon-emission disclosures on consumer products. Stadelmann-Steffen et al.[12] exemplify cross-system interactions with the historical phaseout of ozone-depleting chemicals. They consider the Montreal Protocol, non-CFC substitutes and public concerns over UV radiation and skin cancer as interacting political, technological and behavioural tipping elements, respectively. Another example is provided by Pascual et al.[13], who identify the opportunities for positive tipping that emerge from the interactions between biodiversity, climate and society. Simulation results of Moore et al.[14] show a tipping behaviour in projected global carbon emissions resulting from cascading positive feedbacks through individual action, social conformity, climate policy and technological learning. Besides cross-system interactions, cross-scale interactions can also trigger tipping dynamics as they result in contagion from individuals or organizations at the micro level to meso-level communities and macro-level countries and the world. For instance, renewable power and EV policies in a handful of frontrunner countries have been shown to accelerate the transition on a global scale across countries and sectors[15, 9, 16]. Similarly, a single schoolchild's protest has led to a global Fridays for Future movement, and through interconnections with other systems such as policy, it could create ST dynamics[12]. Interventions at the meso-level of communities (10000 - 100000 people) are identified to have maximum leveraging effect for rapid decarbonization[17], due to cross-scale interactions and pedagogy for agency[18]. Figure 1: **Stylized depiction of six social tipping elements identified by Otto et al. (2020).** Social tipping elements (STE) refer to social systems in which tipping dynamics towards rapid decarbonization can be observed due to the annotated key positive feedback loops. The interventions that can trigger tipping dynamics in each element are noted in grey. Besides the feedbacks within them, STEs have interconnections that can create cascading effects. ### Social tipping interventions ST interventions are active changes made to social systems in order to trigger or activate tipping processes, including those through cascading effects[1]. Such interventions can be 'kicks' that push the system onto a new trajectory without changing the underlying structure but by triggering the loops (e.g., financial disclosure), or 'shifts' that change the system rules (e.g., institutional structures such as the UK Climate Change Act)[10]. Not every climate change mitigation strategy, measure, action or policy can be considered a tipping intervention, unless it triggers or creates relevant feedback loops underlying tipping dynamics. National policies such as targeted investments, pricing policies, incentives, and regulations are considered ST interventions focused on feedbacks in specific systems[19]. Such interventions can also be triggered by civil society and create the constituency for government-led interventions[20, 6] through cross-system interactions. For instance, behavioural interventions like communicating changes in social norms can accelerate demand-side mitigation, and positive spillovers can lead to tipping dynamics within or across consumption domains[21].
Moreover, reliance on market-based incentives, such as tax credits, may perpetuate wealth inequalities, weakening community empowerment and engagement, and acceptability of interventions. Therefore, ST interventions should be distinguished as those that can activate positive feedback mechanisms to trigger cascading dynamics across scales and systems. ### Data availability and modelling Current scientific literature shows an inclination towards narrative-based presentation of potential social tipping dynamics, where empirical evidence is either in a limited context, or expert elicited and not observational. For instance, EV adoption is described as an example of cross-scale tipping dynamics in a narrative form[9], or possible tipping dynamics of coal phase-out in China is described based on an actor-objective-context framework[22]. Monitoring tipping processes is a data-intensive yet crucial activity to track if tipping threshold is approached or exceeded.. For instance, the transformation seismograph of New Climate Institute tracks indicators of tipping processes in power and transport systems[33]. Climate Action [19] monitors energy system indicators such as cost parity between renewable electricity generation and fossil-fuel assets. Systems Change Lab's data dashboard adds industry and finance indicators to these monitoring activities[24]. Similarly, Climate Watch monitors the policy system based on the records of countries that enhance their Nationally Determined Contributions (NDCs) or have net zero pledges in their law, policy documents or political pledges[25]. Quantitative simulations compile empirical evidence on individual systemic relationships from selected literature, market data or surveys, project the emerging long-term dynamics and demonstrate the conditions for tipping behaviour to occur. Existing evidence from simulation studies, though, remains limited to specific single systems, such as dietary change[26, 27], global spread of urban innovations[28], urban cycling[28], or ground-water management[30]. The stylized global model of Moore _et al._[14] notably combines multiple systems, from public opinion to individual technology adoption, climate policy and endogenous technological change, and show that individual action triggers a cascade of positive feedback processes through technological learning and social conformity for climate policy support. ## Challenges and knowledge gaps in the analysis of social tipping To identify how feedbacks, multiple systems, cascading effects between them and evidence for social tipping dynamics are characterized, we scan the recent social tipping literature and find that (Figure 2, Supplementary Table 1) there is a clear weighting towards: 1. single systems or scales where ST dynamics can occur, such as adoption of electric vehicle (EV) technology in the transport system, instead of multiple or connected systems (e.g. energy, finance, social norms, education) and scales (e.g. community, national, global); 2. a focus only on positive feedback mechanisms that can create the tipping dynamics sought by a particular perspective or agenda, but omit the tightly related negative feedback loops or undesired positive ones; 3. narrative or qualitative presentation of evidence for the account of social tipping (ST) dynamics, where the discussion remains mostly theoretical with empirical evidence obtained from selected literature. 
Additionally, we observe that many case studies and much of the empirical evidence are obtained from the Global North, where the different circumstances of the Global South are overlooked. There is an inherent degree of relativity in what is positive or negative, since a positive tipping outcome for one population may be viewed as disastrous for another. ### Focusing on single systems One of the biggest promises of ST dynamics is the cascading effects through interactions between the systems, yet these interconnections are sparsely examined, as the single-system view in the existing literature shows. A single-system focus without considering cross-system and cross-scale interactions, negative feedbacks, and socioeconomic and geographic differences limits the scope and relevance of intervention assessments. Figure 2: **Categorization of the emerging social tipping literature.** The publication data were retrieved from a search of the Scopus database in October 2022 with search terms _"social tipping" OR "positive tipping" OR "sensitive intervention points" OR "socio-ecological tipping" OR "socio-economic tipping"_ in article titles, abstracts and keywords. We added five more articles identified during an expert workshop[8] to the resulting 59, and after screening for relevance, we categorized the remaining 36 articles. The rows of the figure refer to this categorization in terms of whether the articles provide empirical evidence or remain at a theoretical level, what source of evidence they use for tipping dynamics and how they present this evidence, whether they consider single or multiple social tipping (ST) systems, whether they focus on positive or negative feedbacks, and whether their geographic emphasis is on the Global North or South. The numbers in parentheses refer to the number of publications in each category. 'NA' in the _Geographic Emphasis_ row refers to the publications in which geographic coverage is not specified. 'Other' in the _Source of Evidence_ row includes parametric evaluations or studies based on expert elicitation and selected literature reviews. The articles and full categorization can be seen in **Supplementary Table 1**. As sustainability transitions expert Frank Geels emphasizes[31], unlike relatively well-defined climate tipping points, analysing ST points requires taking into account the inherent complexity of social systems and all the efforts leading up to the tipping point. Therefore, the potential of ST interventions can be assessed with a more comprehensive approach aligned with social systems. ### Focusing only on positive feedback loops The core driving mechanisms of ST dynamics are positive feedback loops, hence most interventions proposed in the existing studies target those (Figure 2). Intervention outcomes are uncertain, though, due to the interactions between reinforcing (positive) and balancing (negative) feedback loops. Social movement interventions can trigger positive feedback loops of norm and value changes, for instance, yet they also cause value polarisation as a countervailing process. Efforts by local activists to stop specific lithium mines, wind farm projects and pipelines often run counter to industrial-scale climate policy ambitions. Protest movements against both fossil and low-carbon energy projects have stopped, suspended, or slowed new developments, but have also led to violence, with 10% of 649 cases analysed involving the assassination of activists[32].
Polarisation also leads to a loss of diversity in opinion, ideas and solutions[33, 34], undermining system resilience and jeopardizing the promise of interacting positive feedbacks for accelerated climate action. Such unintended consequences are also at the core of 'just transitions' research which addresses the coupled relationship between carbon emissions and ever-increasing wealth and energy inequalities, and highlights the need for a precautionary and holistic perspective on tipping and its justice implications[35]. Therefore, formulation of effective interventions can benefit from considering the role of negative feedbacks to avoid resistance and unintended consequences. ### Lack of observational data and model-based studies as empirical evidence The empirical evidence underlying the theoretical, narrative-based discussion on social tipping often comes from selected, domain-specific literature. A few studies statistically show historical tipping dynamics based on large-scale data, such as European Social Survey[36] or gridded land use data[11]. A few lab experiments confirm the presence of tipping dynamics created by social conformity[37, 39] where adoption of a new norm by the 25-40% of the population (critical mass) triggers further contagion. Even fewer field trials demonstrate the role of critical mass and information feedbacks in a real-life setting[40]. Such contextual and methodological limitations of empirical evidence cascades into the modelling studies that consolidate available data. Modelling studies are based mostly on a Global North perspective (Supplementary Table 1) and reflects neither the needs of future global consumers nor the complexity that contribute to socioeconomic and environmental inequality particularly in the Global South. Monitoring systems that aggregate national and global data are useful in tracking observed developments, and they can be expanded to social systems to include behaviour, norm and value changes with carefully selected metrics that indicate tipping dynamics and for which data can be collected. The uptake of proposed tipping interventions by policymakers and stakeholders requires clear empirical evidence on their effectiveness. Therefore, geographically and contextually more comprehensive statistical, experimental and modelling studies are needed to establish such clarity of evidence. ## Dynamic systems approach to social tipping To address the gaps in the conceptualization and assessment of ST points and interventions, we introduce a three pillared dynamic systems approach with examples developed in an expert workshop that focused on participatory modelling of key social tipping processes[8]. The three pillars refer to delineations of system structures, quantifying and monitoring tipping dynamics, and dynamic modelling to consolidate available empirical knowledge and evaluate potential interventions. ### Systems outlook Understanding potential tipping dynamics for rapid decarbonization can be enhanced by delineating the underlying system structure based on three principles: 1. _Characterize and map the feedback mechanisms in each social tipping system, by taking potential barriers to positive tipping dynamics into account._ ST processes described in many existing studies depict the mental models of experts from physical climate science or social sciences such as transition studies based on sector-specific historical behaviour. 
These mental models often focus on the critical threshold of a tipping process, describe a unidirectional impact from interventions to outcomes, and do not always explicate closed chains of relationships (feedback loops). Delineating the feedback mechanisms, however, can lead to a better understanding of the eventual dynamic system behaviour. ST dynamics are expected to occur as a result of positive (reinforcing) feedback mechanisms[11, 42] that amplify a change in the same direction through a loop of system elements. Existing conceptualizations of ST processes emphasize such feedbacks that positively affect decarbonization and overlook the negative ones (Figure 2). Even though tipping dynamics are characterized by reinforcing feedbacks, dynamic systems are characterized by multiplicity of coupled negative and positive feedbacks[13]. Therefore, considering negative effects and feedbacks can provide a more balanced estimate of the tipping potential and help avoid unintended consequences of interventions. For instance, rapid divestment from fossil fuel assets is considered a financial tipping intervention[1], yet it can lead to financial instability and adverse distributional consequences that can undermine system functioning[8]. Diffusion of ethical values against fossil fuel exploitation through social conformity is another key social tipping process[1]. This reinforcing loop of social conformity, though, is counter-acted in reality by the feedback mechanisms of polarization and industry resistance, which might impede the tipping potential of norm changes. Moreover, what may be considered a positive tipping in the Global North, e.g. rapid and large-scale decarbonization, may trigger unintended negative consequences in the Global South and other marginalized regions, such as the closing down of wealth-generating markets and export opportunities, if not planned for. Multiple methods can be employed in combination to delineate the feedback mechanisms underlying tipping dynamics. For instance, participatory systems mapping methods either based on causal loop diagrams[44] or fuzzy cognitive maps[45] can elicit and align expert views. Qualitative or semi-quantitative models co-developed using these participatory methods can be complemented by literature reviews for quantitative empirical evidence. Box I and Figure 3 exemplify coupled feedback loops delineated in a participatory modelling workshop and supported by empirical studies listed in Supplementary Table 2. **Box I: Multiple positive and negative feedback mechanisms governing norm and value changes** Since social and moral norms are a key driver of human behaviour, shifting towards anti-fossil fuel norms is considered a key social tipping process for rapid decarbonization[1]. Advocacy against fossil fuel extraction even by a small group of thought leaders or influencers can stimulate the diffusion of pro-environmental values[10]. The feedback loop _norm change against fossil fuels_ in Figure 3 depicts this reinforcing mechanism of diffusion: _Thought leaders_ who advocate for anti-fossil-fuel norm changes can be individuals or organizations within civil society, international organizations, state leaders and subnational governments[47]. Their advocacy activities are empirically shown to influence public opinion and mobilization against fossil fuel exploitation, as exemplified by the individual influence of Bill McKibben[48] and Greta Thunberg[49], or the student activists mostly influenced by their leaders[50]. 
As the population against fossil fuel exploitation increases, more thought leaders or norm entrepreneurs emerge from different communities and newly created coalitions[51], closing the loop of diffusion. In contrast, the reinforcing loop of _norm change for fossil fuels_ acts as a primary impediment to anti-fossil fuel norm shifts, since it represents a value polarization cycle commonly observed in the climate debate in multiple countries[52, 53]. Recent lab experiments also show that identity and polarization are strong impediments to tipping dynamics in a broader context[38]. Pro-fossil fuel norms develop similarly to the anti-fossil fuel norms: _Population supporting fossil fuel exploitation_ increases as the _advocacy about the benefits of fossil fuel exploitation_ becomes prevalent, as exemplified by the strong relation between public opposition to one of the major US climate policies and the views of politicians and certain TV channels[54]. In return, political leaders adopt polarizing language to appeal to the increasing fraction of the population supporting their view[55], which enhances advocacy activities and makes fossil fuel policies one of the most politically polarized issues, especially in the US[52]. People who are exposed to opposing views stick to their own view more strongly[86], hence advocacy activities enhance value polarization and reinforce the norm change feedbacks on both sides. The amplifying effect of partisan identification on climate policy support among both Republicans and Democrats in the US[57] exemplifies the role of such feedbacks. A balancing feedback mechanism that affects norm shifts is the _fossil fuel advocacy_ loop. As the population against fossil fuel exploitation increases, the resulting social mobilization leads to policies that restrict fossil fuel extraction and use, as observed in many local and national settings so far[32, 58]. _Regulations restricting fossil fuel use_ are the main driver of _corporate promotion by the fossil fuel industry_[59], which enhances the pro-fossil fuel advocacy activities[60] and eventually reduces the _population against fossil fuel exploitation_. This feedback loop potentially dampens the growth of the _population against fossil fuels_, and hence the norm shifts. A similar balancing loop can be formulated due to the media coverage of climate change leading to more pro-fossil fuel advertisements[59], often triggered by the advocacy activities of influential thought leaders. The real-world example of fossil fuel resurgence following the war in Ukraine provides an opportunity to examine how these dynamics can play out on the world stage. Figure 3: Main feedback loops underlying the social tipping dynamics in the norms and values system, derived from expert elicitation and supported by the empirical studies listed in Supplementary Table 2. A positive causal link implies that a change in variable A changes variable B in the same direction, whereas a negative link implies a change in the opposite direction. A positive feedback loop refers to a closed chain of relationships that includes an even number of negative links, and where a change in any element, either in the positive or negative direction, is reinforced through the loop. A negative feedback loop refers to a closed chain with an odd number of negative links, where a change is balanced through the loop.
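To complement the qualitative loop description in Box I, here is a deliberately stylized system-dynamics sketch in which an anti-fossil-fuel and a pro-fossil-fuel camp both recruit from an undecided population, while regulation triggered by the anti-fossil-fuel camp feeds corporate counter-advocacy; all functional forms and parameter values are illustrative assumptions made for this sketch, not empirically estimated quantities.

```python
import numpy as np

def step(anti, pro, dt=0.1, k_anti=0.8, k_pro=0.5, k_reg=0.6, k_counter=0.4):
    """One Euler step of a stylized norms-and-values model.

    anti, pro: population fractions against / supporting fossil fuel exploitation.
    Reinforcing loops: each camp recruits undecided people in proportion to its
    own size (conformity / polarization). Balancing loop: regulation driven by
    the anti-FF camp triggers corporate promotion, which feeds the pro-FF camp.
    """
    undecided = max(0.0, 1.0 - anti - pro)
    regulation = k_reg * anti                  # social mobilization -> restrictive policy
    counter_advocacy = k_counter * regulation  # industry response to regulation
    d_anti = k_anti * anti * undecided
    d_pro = (k_pro * pro + counter_advocacy) * undecided
    return anti + dt * d_anti, pro + dt * d_pro

anti, pro = 0.05, 0.05   # small camps initially seeded by thought leaders
for _ in range(300):
    anti, pro = step(anti, pro)
print(f"final shares: anti-FF = {anti:.2f}, pro-FF = {pro:.2f}")
```

Varying `k_counter` in this toy model shows how a balancing loop can delay or cap the growth of the anti-fossil-fuel norm even when its own reinforcing loop is stronger, which is precisely the kind of impediment the text argues should not be left out of tipping assessments.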
2. _Identify and map the interactions across multiple systems in the conceptualization of social tipping processes._ The analysis of social tipping processes tends to be system-specific, such as the diffusion of EVs in the transport sector[9]. However, many of these systems are strongly interconnected, as exemplified by the education-society links for shifting to anti-fossil fuel norms[1], or the policy-technology-behaviour connections that tipped the phaseout of CFCs[12]. Growing empirical evidence supports the presence of interactions between public opinion, social norms, individual pro-environmental behaviour, climate policy, climate impacts (and their effects on opinion), and technological change[61, 4]. The dynamic behaviour resulting from these cross-system interactions, which potentially lead to additional feedback mechanisms, might accelerate tipping dynamics and boost the effectiveness of interventions, or vice versa. The examples of cross-system interactions provided so far are limited in scope, and further interconnections can be identified and analysed, for instance, between education, finance and energy systems. Estimating and validating the tipping potential of interventions can benefit from maintaining a feedback perspective in specifying these interconnections, rather than formulating them as linear cascading effects, and from the crucial support of empirical findings. Participatory approaches with experts and stakeholders from different communities can facilitate the interdisciplinary research needed to identify the existing and potential cross-system interactions. While providing quick access to well-informed mental models and available empirical evidence, participatory research can also steer new empirical research for quantifying cross-system interactions. Such participatory approaches themselves can contribute to social tipping through their impact on social movements[62]. Box II and Figure 4 exemplify interactions between energy, finance, urban infrastructure, policy and society delineated in a participatory modelling workshop and supported by empirical studies listed in Supplementary Table 3. **Box II: Cross-system interactions** Energy, finance, policy, societal and urban infrastructure systems involve positive feedback mechanisms that can individually lead to social tipping dynamics[1, 8]. They also interact with each other through linear cascading effects and wider feedback loops that can amplify or dampen the tipping dynamics. Figure 4 depicts those interactions, which are based on an expert co-modelling workshop and the empirical studies listed in Supplementary Table 3. Figure 4: **Main feedback loops resulting from the interactions between energy, finance, urban, social and policy systems,** derived from expert elicitation and supported by 27 empirical studies listed in Supplementary Table 3. Double lines on arrows indicate a delay in the relationship depicted by that arrow. (See the caption of Figure 3 for an explanation of the notation.) The loop _fossil fuel (FF) financing through market presence_ shows a key coupling of the finance and energy systems: the higher the _fossil fuel energy supply_, the higher the demand, leading to a higher _expected value of FF assets_[8], hence more investment and higher FF energy supply[8] (and the reverse applies). This feedback loop is further reinforced by the _credibility of emission reduction commitments_. If investors trust climate policy announcements and their introduction, they revise their risk assessment for FF firms, leading to a higher cost of capital for FF investments, lowering profitability and thus the FF asset value[65, 66]. Credibility of commitments also leads to a lower cost of capital for renewable energy investments, further enabling decarbonization. Credibility of commitments is reduced by a continuing high demand for FFs
Credibility of commitments is reduced by a continuing high demand for FFs Figure 4: **Main feedback loops resulting from the interactions between energy, finance, urban, social and policy systems,** derived from expert elicitation and supported by 27 empirical studies listed in Supplementary Table 3. Double lines on arrows indicate a delay in the relationship depicted by that arrow. (See the caption of Figure 3 for explanation of the notation.) but enhanced by the strength of climate policies itself[77]. The expected value of FF assets is also dependent on perceived _climate change impacts[88]_, which creates the balancing feedback loop of _FF financing through externalities_ as diminishing FF supply would reduce the climate impacts in the long term. Another major driver of expected value of FF assets is the _momentum of international climate policies_. For instance, the Paris Agreement led to a significant reduction in the high-carbon stock values and an increase in the cost of borrowing[99]. International climate policies eventually reduce the FF supply through not only their financial impacts but also their direct impact on national _regulations restricting fossil fuel use[70]_. National policies such as carbon tax or emission trading focus on fossil fuel consumption, yet those restricting supply has gained momentum[71]. Their impact on global FF supply is yet to be achieved[72], as the location of such policies and FF extraction match[73]. The balancing loop _social legitimacy of climate action_ depict the influence of social changes on the FF energy supply through finance and policy: _Population engaged in climate action_ through direct mitigation behaviours such as energy saving or civic action enhances the momentum of international climate policies by putting pressure on negotiations and signalling the readiness for national policies[74]. Worsening _climate change impacts_ increase the engagement in climate action either directly[61, 75], or indirectly via _thought leaders[76]_ who communicate the climate change causes and solutions. Climate impacts are dependent on _fossil fuel energy supply_, which can be traced back to the _momentum of climate policies._ _Enabling social pressure_ loop depicts the connection of urban infrastructure, energy, policy and finance systems: _Provision of low-carbon urban technology_ can facilitate low-carbon behaviours such as reducing household waste and energy use[77] or cycling[78], increasing the population engaged in climate action and eventually lowering fossil fuel energy supply. This in return can reduce the _cost of low-carbon energy_, subsequently the _cost of low-carbon urban technologies_, resulting in further provision of low-carbon urban technology[79]. _Identify and map the interactions across multiple scales in the conceptualization of contagion dynamics that lead to social tipping_ Social contagion among individuals is a strong feedback loop that triggers tipping dynamics[80]. Contagion can also be observed among and across communities, firms, authorities and nations[81], resembling fractals that replicate the same structure[18]. Acknowledgement of different scales of agency and their cross-scale interactions helps to overcome the fractal carbon trap[5], where the decision-making agency is attributed to a single actor or ideology (such as free-market solutions to social, economic and environmental problems) towards diverse, multilevel, catalytic action at different scales. 
System conceptualization can explicate the scale of each tipping mechanism, such as individuals, multinational corporations or national governments, and identify the bi- or multi-lateral interactions between those scales. Box I exemplifies the contagion effects among individuals, and how these relate to firm-level actions and national policies. Such an explicit account of different scales of action and their interactions also helps in formulating tipping interventions that fulfil the dynamic needs of society and capture arising opportunities, beyond achieving a static goal such as emission reductions[82]. ### Data gathering and harmonization Complementing the system conceptualization described above, dedicated data collection efforts are needed to move beyond specific, single-system data from selected literature; to consolidate empirical evidence for the conceptual feedback loops underlying tipping dynamics, as exemplified in Boxes I and II; and to monitor the actual or potential effectiveness of interventions. Data collection requires identifying the key indicators that can represent the dynamics created by coupled feedback mechanisms and interventions. For instance, cost parity between low-carbon and fossil-fuel energy supply combines the dynamics of technological learning and economies-of-scale feedbacks from both the low-carbon and the fossil fuel energy sector. Box III and Figure 5 exemplify two monitoring variables identified in an expert workshop. **Box III: Monitoring social tipping dynamics** Monitoring social tipping dynamics requires operationalizing variables that can capture the cascading feedback dynamics in multiple systems. Below are two examples of such variables, with Figure 5 presenting a stylized potential trajectory of each variable. **Number of systemically important companies calculating climate Value-at-Risk** is an indicator of climate risk perception in the financial markets, hence of the perceived risk of fossil fuel assets. Systemically important companies can be defined as those which have more than $100 billion in assets. We estimate this variable to have been increasing at a growing rate in recent years, but the critical threshold has not yet been reached and is only expected to be crossed in the next few years. **Willingness to pay for climate action** can be used to monitor the population engaged in climate action, for which data is already collected and used in some contextual studies. Willingness to pay is heavily dependent on income level and economic situation, and hence expected to fluctuate over time. In the middle-income group, willingness to pay is expected to have an increasing future trend, whereas it is estimated to be currently well below a critical threshold in the low-income group. Figure 5: Stylized trajectories of monitoring variables, Number of systemically important companies calculating climate Value-at-Risk at left, and Willingness to pay for climate action at right. Monitoring ST dynamics requires harmonizing different sources of time-series data on common time frames to enable detecting cascading cross-system changes. For instance, social media data can be used as a high-frequency, publicly available and low-cost global source[83] to monitor norm and value changes in social systems, in combination with purposeful, lower-frequency data such as the World Values Survey[84]. Harmonizing these data on norm and value changes with records of other systems, such as international and national policy action, energy cost parities, technology adoption levels and financial flows, can help quantify the bilateral cross-system relationships and monitor their cascading effects towards tipping. Sharing these harmonized data on online platforms can facilitate further in-depth collaborative research within scientific communities, whereas public display can demonstrate the importance of rapid action.
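As a small illustration of the harmonization step, the sketch below interpolates two hypothetical indicator series recorded at different frequencies onto a common annual grid and flags whether each has crossed an assumed critical threshold; all series, years and threshold values are invented for illustration and carry no empirical content.

```python
import numpy as np

def harmonize(years, values, grid):
    """Linearly interpolate an irregular time series onto a common time grid."""
    return np.interp(grid, years, values)

grid = np.arange(2015, 2031)   # common annual time frame

# Hypothetical indicator 1: companies calculating climate Value-at-Risk (biennial records).
var_years, var_counts = np.array([2016, 2018, 2020, 2022, 2024]), np.array([2, 5, 11, 23, 40])
# Hypothetical indicator 2: mean willingness to pay for climate action (annual survey waves).
wtp_years, wtp_values = np.arange(2015, 2025), np.linspace(0.21, 0.38, 10)

indicators = {
    "climate_VaR_companies": (harmonize(var_years, var_counts, grid), 50.0),
    "willingness_to_pay":    (harmonize(wtp_years, wtp_values, grid), 0.5),
}

for name, (series, threshold) in indicators.items():
    crossed = grid[series >= threshold]
    status = f"crosses assumed threshold in {crossed[0]}" if crossed.size else "assumed threshold not yet crossed"
    print(f"{name}: {status}")
```

In practice the interesting signal is not only the crossing itself but the change in growth rate as coupled feedbacks engage, which is why the text calls for time series that are long and frequent enough to resolve such changes across systems.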
Harmonizing this data on norm and value changes with records of other systems, such as international and national policy action, energy cost parities, technology adoption levels and financial flows, can help quantifying the bilateral cross-system relationships and monitoring their cascading effects towards tipping. Sharing these harmonized data on online platforms can facilitate further in-depth collaborative research within scientific communities, whereas public display can demonstrate the importance of rapid action. ### Dynamic modelling Modelling is a key tool in analysing and navigating dynamic systems, helping understand how a system works, and bringing rigor to the analysis with an explicit formulation of ideas and assumptions, consolidation of data, and logical tracing of those formulation sequences. Models, either qualitative or quantitative, provide a future outlook by estimating how a variable is likely to evolve, diagnose what factors have the greatest leverage to change outcomes, and assist in ex-ante policy assessments. In the social tipping context, quantitative modelling is commonly used (Figure 2) to demonstrate the conditions under which tipping occurs, yet in stylized cases and mostly from a single system perspective. Dynamic modelling can support the analysis of social tipping dynamics by embedding four key aspects. First, models should enable a systematic demarcation of interconnected feedback mechanisms within multiple systems and their cross-system and cross-scale interactions from micro to meso and macro level. Second, the models should be grounded in representative data and move towards quantitative realm for computational analyses of feedback dynamics. Quantifying social systems at a global level is challenging and aggregation in stylized representations is unavoidable. Still, quantitative methods aligned with the global data can provide actionable evidence for the long-term effectiveness of interventions, while qualitative and participatory approaches facilitate conceptualization and dissemination of such quantitative modelling. Third, to tackle the broad scope of multiple social systems and feedbacks, an iterative modelling approach can help, where broad system boundaries are narrowed down through empirical support and computational diagnostic analyses, and further research efforts are dedicated to those feedbacks that are shown to be more important. Fourth, interdisciplinary modelling can ensure policy relevance through either hard or soft coupling between models of tipping dynamics and the existing climate policy models. A modelling discipline that has widely and influentially guided climate policy assessments is integrated assessment modelling (IAM)[85], yet considered limited in covering nonlinear social and behavioural processes[86, 87]. Social tipping processes can be more suitably captured by an emerging group of simpler, aggregate IAMs developed with descriptive, rather than optimization-based, dynamic modelling methods. Current examples of such models include those developed with agent-based modelling (ABM) of different economic sectors[88, 89], and those developed with system dynamics (SD) modelling based on aggregate representation of cross-sectorial feedbacks[90, 91]. This emerging group of models that incorporate social tipping processes intersects with social climate models that focus on representing human and Earth system feedbacks[92]. 
These simple models represent nonlinear relationships and feedbacks, are more flexible with respect to scope extensions than conventional IAMs, can be more easily calibrated to emerging data from the monitoring systems, and facilitate computational analyses with large numbers of simulations. ABMs are powerful in modelling social contagion, and are often used with threshold models[93] in which the decision of a given actor is formulated conditional on the number of others who make that decision[80]; hence they are also used in modelling social tipping dynamics[94]. Recent evidence, though, shows that threshold models may not represent the nonlinearities of real life[95], and ABMs are often prone to overcomplexity and are weaker in terms of statistical validation of results, sometimes making it hard to identify policy recommendations. Stock-flow consistent (SFC) models[95] merge desirable behavioural features of ABMs with robust balance sheet accounting in which heterogeneous agents, sectors and their financial flows are represented as a network of interconnected balance sheets, allowing for the tracing of causal relations and validation of results, and contributing to overcoming the limitations of ABMs. SD modelling is well suited to exploring global social tipping dynamics, primarily because interconnected feedbacks within and across multiple systems can be better represented in the aggregate, feedback-rich view of SD. ABMs require micro-level data for calibration and validation[96, 97], and the computational requirements of micro-level modelling at the global scale might hinder uncertainty analysis and interactive simulations[98, 66]. Therefore, an aggregate modelling view can better suit the available data at a global level. Since the complexity of micro-phenomena on a global scale impedes relating model behaviour to model structure[97, 99], SD models can also allow deriving cognitively grounded insights. Among previous global modelling studies, one based on coupling social and behavioural feedback mechanisms with those of land use and climate dynamics[27] showed that triggering social norm feedbacks at an early stage of diffusion is the most influential driver of widespread shifts to plant-based diets. Moore _et al._[14] presented a prominent example of cross-system modelling, finding that low-emission trajectories consistent with Paris Agreement targets can emerge through positive tipping dynamics, for which social conformity, technological learning, political responsiveness to public opinion, and cognitive biases in the perception of climate impacts are key. Similar modelling studies can cover additional high-leverage systems and connections, such as energy and finance, with a more nuanced and policy-relevant representation of tipping elements. Quantification of these models with globally representative data[92], including data from the Global South and from disenfranchised populations of the Global North, can enable defining trajectories against which actual change is monitored, so that system structure and behaviour can be better understood. Subsequently, this better understanding and empirical grounding enhance the usefulness of models in analysing the effectiveness of social tipping interventions. ## Way forward Social tipping points have gained wide attention in the scientific and policy debate as high-leverage and cost-efficient options to accelerate emission reductions.
The growing scientific literature on social (positive) tipping points, though, is dominated by a narrative account of social tipping dynamics that lacks a clear empirical basis, and by a focus on single technologies, systems, scales, and feedback mechanisms; this focus originates largely from the Global North and often excludes a social justice perspective. Harnessing the promising potential of social tipping dynamics requires wide-ranging and systematic analyses with multiple empirical methods and with a broad systems outlook that involves multiple systems, agency scales and interconnected feedback loops. Here, we outlined a dynamic systems approach that involves a systems outlook with positive and negative feedback loops within and across multiple systems and scales; concurrent data collection in multiple systems, not only to provide empirical evidence for tipping dynamics but also to monitor them; and dynamic simulation modelling to consolidate conceptual and empirical knowledge and to provide ex-ante analysis of tipping interventions. We argue that it is critical to use such a systems approach to better understand social tipping dynamics and to ensure climate policy relevance. This approach can help translate the popularity of the social tipping concept into better-informed policies and practices.
2309.05869
On dissipation time scales of the basic second-order moments: the effect on the Energy and Flux-Budget (EFB) turbulence closure for stably stratified turbulence
The dissipation rates of the basic turbulent second-order moments are the key parameters controlling turbulence energetics and spectra, turbulent fluxes of momentum and heat, and playing a vital role in turbulence modelling. In this paper, we use the results of Direct Numerical Simulations (DNS) to evaluate dissipation rates of the basic turbulent second-order moments and revise the energy and flux-budget turbulence closure model for stably stratified turbulence. We delve into the theoretical implications of this approach and substantiate our closure hypotheses through DNS data. We also show why the concept of down-gradient turbulent transport becomes incomplete when applied to the vertical turbulent flux of potential temperature under very stable stratification. We reveal essential feedback between turbulent kinetic energy, the vertical flux of buoyancy and turbulent potential energy, which is responsible for maintaining shear-produced stably stratified turbulence up to extreme static stability.
Evgeny Kadantsev, Evgeny Mortikov, Andrey Glazunov, Nathan Kleeorin, Igor Rogachevskii
2023-09-11T23:26:37Z
http://arxiv.org/abs/2309.05869v1
On dissipation time scales of the basic second-order moments: the effect on the Energy and Flux-Budget (EFB) turbulence closure for stably stratified turbulence ###### Abstract The dissipation rates of the basic turbulent second-order moments are the key parameters controlling turbulence energetics and spectra, turbulent fluxes of momentum and heat, and playing a vital role in turbulence modelling. In this paper, we use the results of Direct Numerical Simulations (DNS) to evaluate dissipation rates of the basic turbulent second-order moments and revise the energy and flux-budget turbulence closure model for stably stratified turbulence. We delve into the theoretical implications of this approach and substantiate our closure hypotheses through DNS data. We also show why the concept of down-gradient turbulent transport becomes incomplete when applied to the vertical turbulent flux of potential temperature under very stable stratification. We reveal essential feedback between turbulent kinetic energy, the vertical flux of buoyancy and turbulent potential energy, which is responsible for maintaining shear-produced stably stratified turbulence up to extreme static stability. ## 1 Introduction Turbulence and associated turbulent transport have been studied theoretically, experimentally, observationally and numerically during decades [see books by Batchelor (1953); Monin and Yaglom (1971, 2013); Tennekes and Lumley (1972); Frisch (1995); Pope (2000); Davidson (2013); Rogachevskii (2021), and references therein], but some important questions remain. This is particularly true in applications to atmospheric physics and geophysics where Reynolds and Peclet numbers are very large, so that the governing equations are strongly nonlinear. The classical Kolmogorov's theory (Kolmogorov 1941a,b; 1942; 1991) has been formulated only for a neutrally stratified homogeneous and isotropic turbulence. In atmospheric boundary layers temperature stratification causes turbulence to become anisotropic and inhomogeneous making some assumptions underlying Kolmogorov's theory questionable. Numerous alternative turbulence closure theories [see reviews by Weng and Taylor (2003); Umlauf and Burchard (2005); Mahrt (2014)] have been formulated using the budget equations not only for turbulent kinetic energy (TKE), but also for turbulent potential energy (TPE) (see, e.g., Holloway, 1986; Ostrovsky and Troitskaya, 1987; Dalaudier and Sidi, 1987; Hunt et al., 1988; Canuto and Minotti, 1993; Schumann and Gerz, 1995; Hanazaki and Hunt, 1996; Keller and van Atta, 2000; Canuto et al., 2001; Stretch et al., 2001; Cheng et al., 2002; 2002; Hanazaki and Hunt, 2004; Rehmann and Hwang, 2005; Umlauf, 2005). The budget equations for all three energies, TKE, TPE and total turbulent energy (TTE), were considered by Canuto and Minotti (1993), Elperin et al. (2002, 2006), Zilitinkevich et al. (2007), and Canuto et al. (2008). The energy and flux budget (EFB) turbulence closure theory that is based on the budget equations for the densities of TKE, TPE and turbulent fluxes of momentum and heat, was developed for stably stratified atmospheric flows (Zilitinkevich et al., 2007, 2008, 2009, 2013; Kleeorin et al., 2019), for surface layers in atmospheric convective turbulence (Rogachevskii et al., 2022) and for passive scalar transport (Kleeorin et al., 2021). 
The EFB closure theory has shown that strong atmospheric stably stratified turbulence is maintained by large-scale shear (mean wind) for any stratification, and the "critical" Richardson number, considered many years as a threshold between the turbulent and laminar states of the flow, actually separates two turbulent regimes: the strong turbulence typical of atmospheric boundary layers and the weak three-dimensional turbulence typical of the free atmosphere, and characterized by strong decrease in the heat transfer in comparison to the momentum transfer. Some other turbulent closure models (Mauritsen et al., 2007; Canuto et al., 2008; Sukoriansky and Galperin, 2008, Li et al., 2016) do not imply the critical Richardson number so that shear-generated turbulent mixing may persist for any stratification. In particular, Mauritsen et al. (2007) have developed a turbulent closure based on the budget equation for TTE (instead of TKE) and different observational findings to take into account the mean flow stability. Using this turbulent closure model, Mauritsen et al. (2007) have studied the turbulent transfer of heat and momentum under very stable stratification. In their model, whereas the turbulent heat flux tends toward zero beyond a certain stability limit, the turbulent stress stays finite. However, the model by Mauritsen et al. (2007) has not used the budget equation for TPE and the vertical heat flux. L'vov et al. (2008) have performed detailed analyses of the budget equations for the Reynolds stresses in the turbulent boundary layer (relevant to the strong turbulence regime) taking explicitly into consideration the dissipative effect in the horizontal heat flux budget equation, in contrast to the EFB "effective-dissipation approximation". However, the theory by L'vov et al. (2008) still contains the critical gradient Richardson number for the existence of the shear-produced turbulence. Sukoriansky and Galperin (2008) apply a quasi-normal scale elimination theory that is similar to the renormalization group analysis. Sukoriansky and Galperin (2008) do not use the budget equations for TKE, TPE and TTE in their analysis. This theory correctly describes the dependence of the turbulent Prandtl number versus the gradient Richardson number and does not imply the critical gradient Richardson number for the existence of the turbulence. However, this approach does not allow obtaining detailed Richardson number dependences of the other non-dimensional parameters, like the ratio between TPE and TTE, dimensionless turbulent flux of momentum or dimensionless vertical turbulent flux of potential temperature. Their background non-stratified shear-produced turbulence is assumed to be isotropic and homogeneous. Canuto et al. (2008) have generalized their previous model (see Cheng et al., 2002) introducing the new parameterization for the buoyancy time scale to accommodate the existence of stably stratified shear-produced turbulence at arbitrary Richardson numbers. Li et al. (2016) have developed the co-spectral budget (CSB) closure approach which is formulated in the Fourier space and integrated across all turbulent scales to obtain flow variables in physical space. CSB models allow turbulence to exist at any gradient Richardson number, however, CSB yields different (from EFB) predictions for the vertical anisotropy versus Richardson number. 
All state-of-the-art turbulence closures follow the so-called Kolmogorov hypothesis: all dissipation time scales of the turbulent second-order moments are assumed to be proportional to each other, which at first glance looks reasonable but is, in fact, hypothetical for stably stratified conditions. Our DNS results are limited to gradient Richardson numbers up to Ri = 0.2, but despite this constraint, we aim to disprove this proportionality and instead propose that the stability dependency is inherent in the ratios of dissipation time scales. ## 2 Problem setting and basic equations We consider plane-parallel, stably stratified dry-air flow and employ the familiar budget equations underlying turbulence-closure theory (e.g., Kleeorin et al., 2021; Zilitinkevich et al., 2013; Kaimal and Finnigan, 1994; Canuto et al., 2008) for the Reynolds stress, \(\tau_{ij}=\left\langle u_{i}u_{j}\right\rangle\), the potential temperature flux, \(F_{i}=\left\langle\theta u_{i}\right\rangle\), and the intensity of potential temperature fluctuations, \(E_{\theta}=\left\langle\theta^{2}\right\rangle/2\): \[\frac{\partial\tau_{ij}}{\partial t}+\frac{\partial}{\partial x_{k}}\Phi_{ijk}^{(\tau)}=-\tau_{ik}\frac{\partial U_{j}}{\partial x_{k}}-\tau_{jk}\frac{\partial U_{i}}{\partial x_{k}}-\left[\varepsilon_{ij}^{(\tau)}-\beta\left(F_{j}\delta_{i3}+F_{i}\delta_{j3}\right)-Q_{ij}\right], \tag{1}\] \[\frac{\partial F_{i}}{\partial t}+\frac{\partial}{\partial x_{j}}\Phi_{ij}^{(F)}=\beta\delta_{i3}\left\langle\theta^{2}\right\rangle-\frac{1}{\rho_{0}}\left\langle\theta\frac{\partial p}{\partial x_{i}}\right\rangle-\tau_{ij}\frac{\partial\Theta}{\partial x_{j}}-F_{j}\frac{\partial U_{i}}{\partial x_{j}}-\varepsilon_{i}^{(F)}, \tag{2}\] \[\frac{\partial E_{\theta}}{\partial t}+\frac{\partial}{\partial x_{i}}\Phi_{i}^{(\theta)}=-F_{z}\frac{\partial\Theta}{\partial z}-\varepsilon_{\theta}. \tag{3}\]
Here, \(x_{1}=x\) and \(x_{2}=y\) are horizontal coordinates, \(x_{3}=z\) is the vertical coordinate; \(t\) is time; \(\mathbf{U}=\left(U_{1},U_{2},U_{3}\right)=\left(U,V,W\right)\) is the vector of mean wind velocity; \(\mathbf{u}=\left(u_{1},u_{2},u_{3}\right)=\left(u,v,w\right)\) is the vector of velocity fluctuations; \(\Theta=T(P_{0}/P)^{1-1/\gamma}\) is the mean potential temperature (expressed through the absolute temperature, \(T\), and pressure, \(P\)); \(T_{0}\), \(P_{0}\) and \(\rho_{0}\) are reference values of temperature, pressure and density, respectively; \(\gamma=c_{p}/c_{v}=1.41\) is the specific-heat ratio; \(\theta\) and \(p\) are fluctuations of potential temperature and pressure; \(D/Dt=\partial/\partial t+U_{k}\partial/\partial x_{k}\) is the operator of the full derivative over time \(t\); angle brackets denote averaging; \(\beta=g/T_{0}\) is the buoyancy parameter; \(g\) is the acceleration due to gravity; \(\delta_{ij}\) is the unit tensor (\(\delta_{ij}=1\) for \(i=j\) and \(\delta_{ij}=0\) for \(i\neq j\)); \(\Phi_{ijk}^{(\tau)}\), \(\Phi_{ij}^{(F)}\) and \(\Phi_{i}^{(\theta)}\) are the third-order moments, which define turbulent transports of the second-order moments under consideration: \[\Phi_{ijk}^{(\tau)} = \langle u_{i}u_{j}u_{k}\rangle+\frac{1}{\rho_{0}}\left(\langle pu_{i}\rangle\delta_{jk}+\langle pu_{j}\rangle\delta_{ik}\right)-\nu\left(\left\langle u_{i}\frac{\partial u_{j}}{\partial x_{k}}\right\rangle+\left\langle u_{j}\frac{\partial u_{i}}{\partial x_{k}}\right\rangle\right), \tag{4}\] \[\Phi_{ij}^{(F)} = \langle u_{i}u_{j}\theta\rangle-\nu\left\langle\theta\frac{\partial u_{i}}{\partial x_{j}}\right\rangle-\kappa\left\langle u_{i}\frac{\partial\theta}{\partial x_{j}}\right\rangle, \tag{5}\] \[\Phi_{i}^{(\theta)} = \frac{1}{2}\langle\theta^{2}u_{i}\rangle-\frac{\kappa}{2}\frac{\partial}{\partial x_{i}}\langle\theta^{2}\rangle; \tag{6}\] the \(Q_{ij}\) terms represent the correlations between fluctuations of pressure and the strain-rate tensor, which control the interaction between the Reynolds stress components: \[Q_{ij}=\frac{1}{\rho_{0}}\left\langle p\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)\right\rangle. \tag{7}\] Here, \(\varepsilon_{ij}^{(\tau)}\), \(\varepsilon_{i}^{(F)}\) and \(\varepsilon_{\theta}\) are the dissipation-rate terms of the second-order moments: \[\varepsilon_{ij}^{(\tau)} = 2\nu\left\langle\frac{\partial u_{i}}{\partial x_{k}}\frac{\partial u_{j}}{\partial x_{k}}\right\rangle, \tag{8}\] \[\varepsilon_{i}^{(F)} = (\nu+\kappa)\left\langle\frac{\partial u_{i}}{\partial x_{j}}\frac{\partial\theta}{\partial x_{j}}\right\rangle, \tag{9}\] \[\varepsilon_{\theta} = \kappa\left\langle\left(\frac{\partial\theta}{\partial x_{j}}\right)^{2}\right\rangle, \tag{10}\] where \(\nu\) is the kinematic viscosity and \(\kappa\) is the thermal conductivity. The budgets of the TKE components, \(E_{i}=\langle u_{i}^{2}\rangle/2\) (\(i=1,2,3\)), are expressed by Eq. (1) for \(i=j\); summing them up yields the familiar TKE budget equation: \[\frac{DE_{K}}{Dt}+\frac{\partial}{\partial z}\left(\frac{1}{2}\langle u_{i}^{2}w\rangle+\frac{1}{\rho_{0}}\langle pw\rangle-\frac{\nu}{2}\frac{\partial\langle u_{i}^{2}\rangle}{\partial z}\right)=-\tau\frac{\partial U}{\partial z}+\beta F_{z}-\varepsilon_{K}, \tag{11}\] where \(E_{K}=\sum E_{i}\) is the TKE and \(\varepsilon_{K}=\sum\varepsilon_{ii}^{(\tau)}/2\) is the TKE dissipation rate.
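Definitions (8)-(11) translate directly into a DNS post-processing step. The following minimal sketch (our illustration only; the array names, the uniform-grid assumption and the use of NumPy gradients are ours, not the paper's actual pipeline) evaluates \(\varepsilon_{K}\), \(\varepsilon_{\theta}\) and the corresponding time scales \(t_{K}=E_{K}/\varepsilon_{K}\) and \(t_{\theta}=E_{\theta}/\varepsilon_{\theta}\) from fluctuation fields:

```python
import numpy as np

def dissipation_rates(u, v, w, theta, dx, dy, dz, nu, kappa):
    """Estimate eps_K and eps_theta, Eqs. (8)-(10), together with the
    dissipation time scales t_K = E_K/eps_K and t_theta = E_theta/eps_theta,
    from fluctuation fields given on a uniform grid (illustrative sketch)."""
    # eps_K = nu * < (du_i/dx_k)^2 >, summed over i and k (= sum_i eps_ii / 2)
    vel_grads = [g for f in (u, v, w) for g in np.gradient(f, dx, dy, dz)]
    eps_K = nu * np.mean(sum(g**2 for g in vel_grads))
    # eps_theta = kappa * < (dtheta/dx_j)^2 >
    eps_theta = kappa * np.mean(sum(g**2 for g in np.gradient(theta, dx, dy, dz)))
    E_K = 0.5 * np.mean(u**2 + v**2 + w**2)
    E_theta = 0.5 * np.mean(theta**2)
    return eps_K, eps_theta, E_K / eps_K, E_theta / eps_theta

# synthetic 32^3 fields, purely to demonstrate the call signature
rng = np.random.default_rng(0)
u, v, w, theta = (rng.standard_normal((32, 32, 32)) for _ in range(4))
print(dissipation_rates(u, v, w, theta, dx=0.1, dy=0.1, dz=0.1, nu=1e-3, kappa=1.4e-3))
```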
The sum of the terms \(Q_{ii}\) (the trace of the tensor \(Q_{ij}\)) is equal to zero because of the incompressibility constraint on the flow velocity field, \(\partial u_{i}/\partial x_{i}=0\), i.e. the \(Q_{ij}\) only redistribute energy between the TKE components. Likewise, \(\varepsilon_{\theta}\) is the dissipation rate of the intensity of potential temperature fluctuations, \(E_{\theta}\); and \(\varepsilon_{i}^{(F)}\) are the dissipation rates of the three components of the turbulent flux of potential temperature, \(F_{i}\). Following Kolmogorov (1941, 1942), the dissipation rates \(\varepsilon_{K}\) and \(\varepsilon_{\theta}\) are taken proportional to the dissipating quantities divided by the corresponding time scales, \(\varepsilon_{K}=E_{K}/t_{K}\), \(\varepsilon_{\theta}=E_{\theta}/t_{\theta}\), where \(t_{K}\) is the TKE dissipation time scale and \(t_{\theta}\) is the dissipation time scale of \(E_{\theta}\). Here, the formulation of the dissipation rates is not hypothetical: it merely expresses one unknown (the dissipation rate) through another (the dissipation time scale). In this study, we consider the EFB model in its simplest, algebraic form, neglecting non-steady terms in all budget equations and neglecting the divergence of the fluxes of TKE, TPE and of \(F_{x}\) (determined by third-order moments). This approach is reasonable because, e.g., the characteristic times of variations of the second moments are much larger than the turbulent time scales for large Reynolds and Peclet numbers. We also assume that the terms related to the divergence of the fluxes of TKE and TPE for stably stratified turbulence are much smaller than the rates of production and dissipation in the budget equations (3) and (11). In this case the TKE budget equation, Eq. (11), and the budget equation for \(E_{\theta}\), Eq. (3), become \[0 = -\tau\frac{\partial U}{\partial z}+\beta F_{z}-\varepsilon_{K}, \tag{13}\] \[0 = -F_{z}\frac{\partial\Theta}{\partial z}-\varepsilon_{\theta}. \tag{14}\] \(E_{\theta}\) determines the TPE: \[E_{P}=\frac{\beta E_{\theta}}{\partial\Theta/\partial z}, \tag{15}\] so that Eq. (14) becomes \[0=-\beta F_{z}-\varepsilon_{P}, \tag{16}\] where \(\varepsilon_{P}=E_{P}/t_{\theta}\) is the TPE dissipation rate. The first term on the right-hand side (r.h.s.) of Eq. (13), \(-\tau\,\partial U/\partial z\), is the rate of TKE production, while the second term, \(\beta F_{z}\), is the buoyancy term, which in stably stratified flow causes decay of TKE, i.e., it results in conversion of TKE into TPE. The ratio of these terms is the flux Richardson number: \[\mathrm{Ri}_{f}\equiv-\frac{\beta F_{z}}{\tau\,\partial U/\partial z}, \tag{17}\] and this dimensionless parameter characterises the effect of stratification on turbulence. Taking into account Eq. (17), the steady-state versions of the TKE and TPE budget equations, Eqs. (13) and (14), can be rewritten as \[E_{K} = \tau\frac{\partial U}{\partial z}\left(1-\mathrm{Ri}_{f}\right)t_{K}, \tag{18}\] \[E_{P} = \tau\frac{\partial U}{\partial z}\,\mathrm{Ri}_{f}\,t_{\theta}. \tag{19}\] Thus, the ratio of TPE to TKE is: \[\frac{E_{P}}{E_{K}}=\frac{\mathrm{Ri}_{f}}{1-\mathrm{Ri}_{f}}\,\frac{t_{\theta}}{t_{K}}. \tag{20}\]
Zilitinkevich et al. (2013) suggested the following relation linking \(\mathrm{Ri}_{f}\) with another stratification parameter, \(z/L\): \[\mathrm{Ri}_{f}=\frac{k\,z/L}{1+k\,R_{\infty}^{-1}\,z/L},\qquad\qquad\frac{z}{L}=\frac{R_{\infty}}{k}\,\frac{\mathrm{Ri}_{f}}{R_{\infty}-\mathrm{Ri}_{f}}, \tag{21}\] where \(L=-\tau^{3/2}/(\beta F_{z})\) is the Obukhov length scale, \(k=0.4\) is the von Karman constant, and \(R_{\infty}=0.2\) is the maximum value of the flux Richardson number. On the r.h.s. of Eq. (20), there is an unknown ratio of two dissipation time scales, \(t_{\theta}/t_{K}\). The Kolmogorov hypothesis suggests that it is a universal constant. We do not adopt this assumption, but instead investigate a possible stability dependency of the dissipation time scale ratios and improve the EFB turbulence closure model accounting for it. ## 3 Methods and data used for empirical validation For the purpose of our study, we performed a series of DNS of stably stratified turbulent plane Couette flow. In Couette flow, the total (turbulent plus molecular) vertical fluxes of momentum and potential temperature are constant (i.e., they are independent of height), which, in particular, ensures a fixed value of the Obukhov length scale, \(L\). We recall that all our derivations are relevant to the well-developed turbulence regime, where molecular transports are negligible compared to turbulent transports so that turbulent fluxes practically coincide with the total fluxes. This is the case in our DNS, except for the narrow near-wall viscous-turbulent transition layers. Data from these layers, obviously irrelevant to the turbulence regime we consider, are shown by grey points in the figures and are ignored in the fitting procedures. Numerical simulation of stably stratified turbulent Couette flow was performed using the unified DNS, LES and RANS code developed at the Moscow State University (MSU) and the Institute of Numerical Mathematics (INM) of the Russian Academy of Sciences (see Mortikov, 2016; Mortikov et al., 2019; Bhattacharjee et al., 2022; Debolskiy et al., 2023; Gladskikh et al., 2023), designed for high-resolution simulations on modern-day HPC systems. The DNS part of the code solves the finite-difference approximation of the incompressible Navier-Stokes system of equations under the Boussinesq approximation. Conservative schemes on a staggered grid (Morinishi et al., 1998; Vasilyev, 2000) of \(4^{\mathrm{th}}\)-order accuracy are used in the horizontal directions, while in the vertical direction the spatial approximation is restricted to \(2^{\mathrm{nd}}\)-order accuracy, with near-wall grid refinement sufficient to resolve the near-wall viscous region. The projection method (Brown et al., 2001) is used for the time advancement of the momentum equations coupled with the incompressibility condition, while the multigrid method is applied to solve the Poisson equation to ensure that the velocity is divergence-free at each time step. For the Couette flow, periodic boundary conditions are used in the horizontal directions, and no-slip/no-penetration conditions are set on the channel walls for the velocity. The stable stratification is maintained by prescribed Dirichlet boundary conditions on the potential temperature. In all experiments the molecular Prandtl number (the ratio of the kinematic viscosity and the thermal diffusivity of the fluid) was fixed at 0.7. The simulations were performed for a wide range of Reynolds numbers, Re, defined by the wall velocity difference, channel height and kinematic viscosity: from 5200 up to 120 000. All experiments were carried out using the resources of the MSU and CSC HPC centers. For the maximum Re values achieved, the numerical grid consisted of more than \(2\times 10^{8}\) cells and the calculations used about 10 000 CPU cores. For a fixed value of the Reynolds number, we considered a series of experiments: starting from neutral conditions (no imposed stability), the stable stratification was increased gradually in each subsequent experiment, eventually resulting in flow laminarization. For each stability condition the turbulent flow was allowed sufficient time to develop and reach statistically steady-state conditions (e.g., the total momentum flux is constant and the TKE balance is in steady state), while all the terms in the second-order moment budget equations, Eqs. (1)-(3), were evaluated in a manner consistent with the finite-difference approximation, resulting in negligible residual. This allowed us to study the features of shear-produced stably stratified turbulence up to extreme static stability, explicitly resolving all dissipation time scales of the turbulent second-order moments.
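As a concrete illustration of how the stability parameters used below can be diagnosed from such data, the following sketch (our own illustrative post-processing with hypothetical variable names, not the MSU/INM code) computes \(z/L\) and \(\mathrm{Ri}_f\) from horizontally averaged fluxes via Eq. (17), together with the two-way relation (21):

```python
import numpy as np

K_VON_KARMAN, R_INF = 0.4, 0.2  # constants of Eq. (21)

def stability_parameters(z, tau, F_z, dUdz, beta):
    """z/L and Ri_f (Eq. 17) from horizontally averaged momentum flux tau (> 0),
    vertical heat flux F_z (< 0 in stable stratification) and shear dU/dz."""
    L = -tau**1.5 / (beta * F_z)       # Obukhov length, height-independent in Couette flow
    Ri_f = -beta * F_z / (tau * dUdz)  # flux Richardson number, Eq. (17)
    return z / L, Ri_f

def Ri_f_from_zeta(zeta):
    """Eq. (21): Ri_f as a function of z/L."""
    return K_VON_KARMAN * zeta / (1.0 + K_VON_KARMAN * zeta / R_INF)

def zeta_from_Ri_f(Ri_f):
    """Inverse form of Eq. (21): z/L as a function of Ri_f (requires Ri_f < R_INF)."""
    return (R_INF / K_VON_KARMAN) * Ri_f / (R_INF - Ri_f)

# round-trip consistency check of the two forms of Eq. (21)
zeta = np.linspace(0.0, 10.0, 5)
print(np.allclose(zeta_from_Ri_f(Ri_f_from_zeta(zeta)), zeta))  # True
```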
## 4 Novel formulation for the steady-state regime of turbulence In the steady state, Eq. (1) for the vertical component of the turbulent flux of momentum, \(\tau\), becomes \[0=-2E_{z}\frac{\partial U}{\partial z}-\left[\varepsilon_{\tau}-\beta F_{x}-Q_{13}\right]. \tag{22}\] Following Zilitinkevich et al. (2007) we define the sum of all terms in the square brackets on the r.h.s. of Eq. (22) as the "effective dissipation": \[\varepsilon_{\tau}^{(eff)}=\varepsilon_{\tau}-\beta F_{x}-Q_{13}\equiv\frac{\tau}{t_{\tau}}. \tag{23}\] Thus, Eq. (22) becomes \[0=-2E_{z}\frac{\partial U}{\partial z}-\frac{\tau}{t_{\tau}}, \tag{24}\] yielding the well-known down-gradient formulation of the vertical turbulent flux of momentum: \[\tau=-K_{M}\frac{\partial U}{\partial z},\quad K_{M}=2A_{z}E_{K}t_{\tau}, \tag{25}\] where \(A_{z}\equiv E_{z}/E_{K}\) is the vertical share of TKE (the vertical anisotropy parameter). Substituting Eq. (25) into Eq. (18), we obtain \[\left(\frac{\tau}{E_{K}}\right)^{2}=\frac{2A_{z}}{1-\mathrm{Ri}_{f}}\,\frac{t_{\tau}}{t_{K}}. \tag{26}\] In Eq. (26) all the variables are exactly resolved numerically in DNS, making a detailed investigation of \(t_{\tau}/t_{K}\) possible. Figure 1 demonstrates \(t_{\tau}/t_{K}\) to be a function of the stratification parameter \(z/L\) rather than a constant. We propose to approximate this function with a ratio of two first-order polynomials: \[\frac{t_{\tau}}{t_{K}}=\frac{c_{1}^{\tau K}\,z/L+c_{2}^{\tau K}}{z/L+c_{3}^{\tau K}}. \tag{27}\] Here, the dimensionless empirical constants are obtained from the best fit of Eq. (27) to the DNS data: \(c_{1}^{\tau K}=0.08\), \(c_{2}^{\tau K}=0.4\), \(c_{3}^{\tau K}=2\); the fitting is done using a rational regression model (a minimal fitting sketch is given after Figure 1). Proceeding to the vertical flux of potential temperature, \(F_{z}\), we derive its budget equation from Eq. (2): \[\frac{\partial}{\partial z}\Phi_{33}^{(F)}=\beta\langle\theta^{2}\rangle-\frac{1}{\rho_{0}}\left\langle\theta\frac{\partial p}{\partial z}\right\rangle-2E_{z}\frac{\partial\Theta}{\partial z}-\varepsilon_{F}. \tag{28}\] Figure 1: The ratio of the effective dissipation time scale of \(\tau\) and the dissipation time scale of TKE, \(t_{\tau}/t_{K}\), versus \(z/L\). Empirical data used for the calibration are obtained in DNS experiments employing the MSU/INM unified code (red dots). Dark grey dots belong to the viscous sub-layer (the very narrow near-surface layer essentially affected by molecular viscosity): \(0<z<50\nu/\tau^{1/2}\). The black solid line shows Eq. (27) with empirical constants \(c_{1}^{\tau K}=0.08\), \(c_{2}^{\tau K}=0.4\) and \(c_{3}^{\tau K}=2\), obtained from the best fit of Eq. (27) to DNS data in the turbulent layer: \(z>50\nu/\tau^{1/2}\).
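A minimal sketch of such a rational-regression fit (assuming SciPy is available; the data points below are synthetic stand-ins for the actual DNS-derived \((z/L,\,t_{\tau}/t_{K})\) pairs):

```python
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(zeta, c1, c2, c3):
    """Eq. (27): ratio of two first-order polynomials in z/L."""
    return (c1 * zeta + c2) / (zeta + c3)

# synthetic stand-in for the DNS-derived (z/L, t_tau/t_K) points
zeta = np.linspace(0.0, 20.0, 50)
rng = np.random.default_rng(1)
data = ratio_model(zeta, 0.08, 0.4, 2.0) + 0.005 * rng.standard_normal(zeta.size)

(c1, c2, c3), _ = curve_fit(ratio_model, zeta, data, p0=(0.1, 0.5, 1.0))
print(f"fitted constants: c1={c1:.3f}, c2={c2:.3f}, c3={c3:.3f}")  # close to (0.08, 0.4, 2)
```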
DNS modelling showed the \(\frac{\partial}{\partial z}\Phi_{33}^{(F)}\) term to be of the same order of magnitude as \(\varepsilon_{F}\) and of the same sign, so we introduce the 'effective dissipation rate' \(\varepsilon_{F}^{(eff)}\): \[\varepsilon_{F}^{(eff)}=\varepsilon_{F}+\frac{\partial}{\partial z}\Phi_{33}^{(F)}\equiv\frac{F_{z}}{t_{F}}. \tag{29}\] Consequently, Eq. (28) reduces to \[0=\beta\langle\theta^{2}\rangle-\frac{1}{\rho_{0}}\left\langle\theta\frac{\partial p}{\partial z}\right\rangle-2E_{z}\frac{\partial\Theta}{\partial z}-\frac{F_{z}}{t_{F}}. \tag{30}\] Traditionally, the pressure term was either assumed to be negligible or declared to be proportional to the \(\beta\langle\theta^{2}\rangle\) term. Unfortunately, our DNS data proved it to be neither negligible nor proportional to any other term in the budget equation (30). Instead, we found it to be well approximated by a linear combination of the production and transport terms of Eq. (30) (see Figure 2): \[\frac{1}{\rho_{0}}\left\langle\theta\frac{\partial p}{\partial z}\right\rangle=C_{\theta}\,\beta\langle\theta^{2}\rangle+C_{\triangledown}\,2E_{z}\frac{\partial\Theta}{\partial z}. \tag{31}\] The dimensionless constants \(C_{\theta}=0.76\) and \(C_{\triangledown}=0.78\) are obtained from the best fit of Eq. (31) to the DNS data. Substituting Eq. (31) into Eq. (30), we rewrite the budget equation as \[0=(1-C_{\theta})\,\beta\langle\theta^{2}\rangle-(1+C_{\triangledown})\,2E_{z}\frac{\partial\Theta}{\partial z}-\frac{F_{z}}{t_{F}}. \tag{32}\] Substituting Eq. (15) for \(\langle\theta^{2}\rangle\) into Eq. (32) allows expressing \(F_{z}\) through the familiar temperature-gradient expression: \[F_{z}=-K_{H}\frac{\partial\Theta}{\partial z},\quad K_{H}=\left[(1+C_{\triangledown})-(1-C_{\theta})\frac{E_{P}}{E_{K}}\frac{1}{A_{z}}\right]2A_{z}E_{K}t_{F}. \tag{33}\] Then substituting Eq. (33) into Eq. (14) gives \[\frac{F_{z}^{2}}{E_{\theta}E_{K}}=2\left[(1+C_{\triangledown})A_{z}-(1-C_{\theta})\frac{E_{P}}{E_{K}}\right]\frac{t_{F}}{t_{\theta}}. \tag{34}\] Similarly to the \(t_{\tau}/t_{K}\) approximation (27), we approximate \(t_{F}/t_{\theta}\) as a universal function of \(z/L\) (see Figure 3): \[\frac{t_{F}}{t_{\theta}}=\frac{C_{1}^{F\theta}\,z/L+C_{2}^{F\theta}}{z/L+C_{3}^{F\theta}}. \tag{35}\] Here, the dimensionless empirical constants are obtained from the best fit of Eq. (35) to the DNS data just like before: \(C_{1}^{F\theta}=0.015\), \(C_{2}^{F\theta}=0.7\), \(C_{3}^{F\theta}=2.7\). The turbulent Prandtl number, defined as \(\Pr_{T}=K_{M}/K_{H}\), is given by \[\Pr{}_{T}=\frac{t_{\tau}}{t_{F}}\Big/\left[(1+C_{\triangledown})-(1-C_{\theta})\frac{E_{P}}{A_{z}E_{K}}\right]. \tag{36}\] As shown, e.g., by Zilitinkevich et al. (2013), \(\Pr_{T}|_{z/L=0}=0.8\) and \(\Pr_{T}|_{z/L\to\infty}\to\mathrm{Ri}/R_{\infty}\). This leads to the following equations: \[\frac{t_{\tau}}{t_{F}}\Big|_{z/L=0}=(1+C_{\triangledown})\Pr{}_{T}|_{z/L=0}\approx 1.4, \tag{37}\] \[\left[(1+C_{\triangledown})-(1-C_{\theta})\left(\frac{E_{P}}{A_{z}E_{K}}\right)\Big|_{z/L\to\infty}\right]=0. \tag{38}\] Figure 3: The ratio of the effective dissipation time scale of \(F_{z}\) and the dissipation time scale of \(\langle\theta^{2}\rangle\), \(t_{F}/t_{\theta}\), versus \(z/L\). Empirical data are from the same sources as in Figure 1. The black solid line shows Eq. (35) with empirical constants \(C_{1}^{F\theta}=0.015\), \(C_{2}^{F\theta}=0.7\) and \(C_{3}^{F\theta}=2.7\), obtained from the best fit of Eq. (35) to DNS data in the turbulent layer: \(z>50\nu/\tau^{1/2}\).
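A back-of-the-envelope check of the neutral-limit constraint (37), using the constants quoted above (purely illustrative arithmetic, not part of the closure):

```python
C_THETA, C_NABLA = 0.76, 0.78   # constants of Eq. (31)
PR_T_NEUTRAL = 0.8              # Pr_T at z/L = 0 (Zilitinkevich et al., 2013)

# Eq. (37): neutral-limit value of t_tau/t_F
print(round((1.0 + C_NABLA) * PR_T_NEUTRAL, 2))   # 1.42, i.e. approximately 1.4

# neutral limits of the fits (27) and (35) are c2/c3 in each case
print(0.4 / 2.0, round(0.7 / 2.7, 3))             # 0.2 and 0.259
```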
To proceed further, it is important to point out that we currently lack any additional information or constraints regarding the energetics of the \(z/L\to\infty\) asymptotic regime. Therefore, to close our system of equations, we have to make certain assumptions. Based on the available DNS data, we assume that the vertical share of TKE, \(A_{z}\), either remains constant or undergoes minimal changes as the stratification increases (see Figure 4). The available data suggest an average value of \(A_{z}=0.17\). Consequently, the asymptotic value of the TPE to TKE ratio would be \(\left(\frac{E_{P}}{E_{K}}\right)\Big|_{z/L\to\infty}\approx 1.26\), corresponding to extremely strong stratification. If future modelling results or natural observations reliably indicate a different value for this asymptote, it would imply that assuming a constant \(A_{z}\) is an oversimplified approximation. In such a case, a parameterization for \(A_{z}\) would need to be introduced. However, since we currently lack evidence to support any alternative scenarios, we have chosen the simplest option available. Figure 4: The vertical share of TKE, \(A_{z}\), versus the stratification parameter \(z/L\). Empirical data are from the same sources as in Figure 1. The black solid line corresponds to \(A_{z}=0.17\), which is the average value of \(A_{z}\) in the turbulent layer, \(z>50\nu/\tau^{1/2}\). Now we may revisit the ratio between the dissipation time scale of TKE, \(t_{K}\), and the dissipation time scale of \(\langle\theta^{2}\rangle\), \(t_{\theta}\): \[\frac{t_{K}}{t_{\theta}}=\frac{t_{\tau}}{t_{F}}\,\frac{t_{F}}{t_{\theta}}\Big/\frac{t_{\tau}}{t_{K}}, \tag{39}\] where \(t_{\tau}/t_{K}\) and \(t_{F}/t_{\theta}\) are defined by Eqs. (27) and (35). We approximate \(t_{K}/t_{\theta}\) with a ratio of two first-order polynomials as before, \[\frac{t_{K}}{t_{\theta}}=\frac{C_{1}^{K\theta}\,z/L+C_{2}^{K\theta}C_{3}^{K\theta}}{z/L+C_{3}^{K\theta}}. \tag{40}\] Here we have only one unknown dimensionless empirical constant, \(C_{3}^{K\theta}\), since we know that \(C_{1}^{K\theta}=(t_{K}/t_{\theta})|_{z/L\to\infty}\approx 0.2\) and \(C_{2}^{K\theta}=(t_{K}/t_{\theta})|_{z/L=0}\approx 1.85\) from Eqs. (37) and (38). The best fit to the DNS data gives \(C_{3}^{K\theta}=11\) (see Figure 5). Figure 5: The ratio of the TKE and \(\langle\theta^{2}\rangle\) dissipation time scales, \(t_{K}/t_{\theta}\), versus \(z/L\). Empirical data are from the same sources as in Figure 1. The black solid line shows Eq. (40) with empirical constant \(C_{3}^{K\theta}=11\) obtained from the best fit of Eq. (40) to DNS data in the turbulent layer: \(z>50\nu/\tau^{1/2}\). With the inclusion of Eq. (40), our turbulence closure is now complete, allowing us to proceed with the validation against independent energetic dimensionless ratios from the DNS results. Figure 6 provides empirical evidence supporting the stability dependencies given by Eqs. (27) and (35).
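Collecting Eqs. (20), (21), (26), (27), (34), (35), (36) and (40), the revised closure reduces to explicit functions of \(z/L\). The sketch below gathers them in one place (our own illustrative implementation using the fitted constants quoted above and a constant \(A_{z}=0.17\); it is not the authors' code):

```python
import numpy as np

K, R_INF, A_Z = 0.4, 0.2, 0.17
C_THETA, C_NABLA = 0.76, 0.78              # constants of Eq. (31)
TAU_K = (0.08, 0.4, 2.0)                   # constants of Eq. (27)
F_TH = (0.015, 0.7, 2.7)                   # constants of Eq. (35)
K_TH = (0.2, 1.85, 11.0)                   # constants of Eq. (40)

def rational(zeta, c):
    return (c[0] * zeta + c[1]) / (zeta + c[2])

def efb_ratios(zeta):
    """Dimensionless ratios of the revised EFB closure as functions of z/L."""
    zeta = np.asarray(zeta, dtype=float)
    Ri_f = K * zeta / (1.0 + K * zeta / R_INF)                         # Eq. (21)
    t_tau_tK = rational(zeta, TAU_K)                                   # Eq. (27)
    t_F_tTh = rational(zeta, F_TH)                                     # Eq. (35)
    t_K_tTh = (K_TH[0] * zeta + K_TH[1] * K_TH[2]) / (zeta + K_TH[2])  # Eq. (40)
    Ep_Ek = Ri_f / (1.0 - Ri_f) / t_K_tTh                              # Eq. (20)
    tau_Ek_sq = 2.0 * A_Z / (1.0 - Ri_f) * t_tau_tK                    # Eq. (26)
    Fz_sq = 2.0 * ((1.0 + C_NABLA) * A_Z - (1.0 - C_THETA) * Ep_Ek) * t_F_tTh  # Eq. (34)
    t_tau_tF = t_tau_tK * t_K_tTh / t_F_tTh                            # identity (39)
    Pr_T = t_tau_tF / ((1.0 + C_NABLA) - (1.0 - C_THETA) * Ep_Ek / A_Z)        # Eq. (36)
    return dict(Ri_f=Ri_f, Ep_Ek=Ep_Ek, tau_Ek_sq=tau_Ek_sq, Fz_sq=Fz_sq, Pr_T=Pr_T)

print({name: round(float(val), 3) for name, val in efb_ratios(0.0).items()})
```

At \(z/L=0\) this sketch reproduces \(\Pr_T\approx 0.8\), and for \(z/L\to\infty\) it yields \(E_P/E_K\to 1.25\), consistent with the asymptote discussed above.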
For practical reasons, most operational numerical weather prediction and climate models parameterize these dimensionless ratios as functions of the gradient Richardson number rather than of \(z/L\). This preference arises from the fact that the gradient Richardson number is defined by mean quantities only, e.g., the squares of the buoyancy and shear frequencies, which in practice imposes weaker computational restrictions on the model's time step. Since \(\mathrm{Ri}=\Pr{}_{\mathrm{T}}\,\mathrm{Ri}_{\mathrm{f}}\) and both \(\Pr{}_{\mathrm{T}}\) and \(\mathrm{Ri}_{\mathrm{f}}\) are known functions of \(z/L\), \(\mathrm{Ri}\) is also a known function of \(z/L\). Unfortunately, solving this dependency explicitly at every time step and at every grid point might be computationally expensive (it is a polynomial equation of the 5th degree), so we propose to use yet another approximation. Figure 6: **Resulting energetic dimensionless ratios. Panel (a) shows the TPE to TKE ratio, \(E_{P}/E_{K}\), versus \(z/L\). The black solid line (Eq. 20) shows a good agreement with the DNS data in the turbulent layer: \(z>50\nu/\tau^{1/2}\). Panel (b) shows the squared dimensionless turbulent flux of momentum, \((\tau/E_{K})^{2}\), versus \(z/L\). The black solid line (Eq. 26) fits the DNS data in the turbulent layer \(z>50\nu/\tau^{1/2}\) very well. Panel (c) shows the squared dimensionless turbulent flux of potential temperature, \(F_{z}^{2}/E_{\theta}E_{K}\), versus \(z/L\). The black solid line (Eq. 34) shows an agreement with the DNS data in the turbulent layer: \(z>50\nu/\tau^{1/2}\). Panel (d) shows the turbulent Prandtl number, \(\Pr{}_{\mathrm{T}}\), versus \(z/L\). The black solid line (Eq. 36) shows a good agreement with the DNS data in the turbulent layer: \(z>50\nu/\tau^{1/2}\). Empirical data are from the same sources as in Figure 1. There has been no fitting here.** Zilitinkevich et al. (2013) demonstrated that in near-neutral stratification \(\Pr{}_{\mathrm{T}}\) can be treated as constant, meaning that \(\mathrm{Ri}_{\mathrm{f}}\sim\mathrm{Ri}\), while in the strong-turbulence regime \(\mathrm{Ri}_{\mathrm{f}}\) is limited by its maximum value of 0.2. We propose to link these regimes through the following interpolation: \[\mathrm{Ri}_{\mathrm{f}}=\left(\frac{1}{(a\,\mathrm{Ri})^{n}}+\frac{1}{R_{\infty}^{n}}\right)^{-1/n}, \tag{41}\] where \(a\) and \(n\) are fitting constants. Figure 7 shows the best fit with \(a=1.2\) and \(n=5.5\). The relative error of this approximation does not exceed 5%, which allows one to cut down the computational expenses considerably. Figure 7: Proposed \(\mathrm{Ri}_{\mathrm{f}}\) vs \(\mathrm{Ri}\) approximation, Eq. (41), compared to the exact solution (panel a), and the relative error of this approximation as a function of the gradient Richardson number, \(\mathrm{Ri}\) (panel b).
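A short sketch checking the two limiting behaviours that Eq. (41) is designed to interpolate between (illustrative only):

```python
import numpy as np

A_FIT, N_FIT, R_INF = 1.2, 5.5, 0.2

def Ri_f_approx(Ri):
    """Eq. (41): interpolation between Ri_f ~ a*Ri (near-neutral) and Ri_f -> R_inf."""
    Ri = np.asarray(Ri, dtype=float)
    return ((A_FIT * Ri) ** -N_FIT + R_INF ** -N_FIT) ** (-1.0 / N_FIT)

print(float(Ri_f_approx(0.01) / 0.01))  # ~1.2 = a   (near-neutral slope)
print(float(Ri_f_approx(100.0)))        # ~0.2 = R_inf (strong-stratification limit)
```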
## 5 Concluding remarks For many years, our understanding of the dissipation rates of turbulent second-order moments has been hindered by a lack of direct observations in fully controlled conditions, particularly in very stable stratification. To address this limitation, we conducted DNS (Direct Numerical Simulation) of stably stratified Couette flows. This allowed us to show that the ratios of the dissipation time scales depend on the static stability (e.g., characterized by the gradient Richardson number), contrary to the traditional assumption of them being proportional to one master scale. Subsequently, we proposed empirical approximations for these ratios, which serve as simple universal functions of stability parameters across a range of stratifications from neutral to extremely stable conditions. This allowed us to revise the EFB turbulence closure, accounting for the dissipation time scales shown to be inherent to the basic second-order moments. This approach follows the methodology initially introduced by Zilitinkevich et al. (2007, 2013, 2019). As a result, the revised formulations for eddy viscosity and eddy conductivity reveal greater physical consistency in strongly stratified conditions, thereby enhancing the representation of turbulence in numerical weather prediction and climate modelling. It is important to note that our DNS experiments were limited to gradient Richardson numbers up to \(\mathrm{Ri}=0.2\). Any data reliably indicating different asymptotic values of the dimensionless time-scale ratios, or demonstrating a different dependency on the static stability, would pose the need for readjusting the proposed parameterization. Moving forward, the most challenging step will be to explicitly explore the transitional region between traditional weakly stratified turbulence and extremely stable stratification, where the behaviour of the turbulent Prandtl number shifts from nearly constant to linear with respect to the gradient Richardson number. Investigating this phenomenon would require unprecedented computational resources for DNS or specialized in-situ or laboratory experiments. ## Acknowledgements This paper was not only inspired by, but also conducted under the supervision of, the esteemed Prof. Sergej Zilitinkevich, who unfortunately is no longer with us. We wish to express our profound gratitude to Sergej for the incredible honour of collaborating with him and for the immense inspiration he generously bestowed upon us. The authors would like to acknowledge the following funding sources for their support in conducting this research: the project "Research Infrastructures Services Reinforcing Air Quality Monitoring Capacities in European Urban & Industrial AreaS" (RI-URBANS, grant no. 101036245) and the Academy of Finland project HEATCOST (grant no. 334798). This work was also partially supported by the FSTP project "Study of processes in the boundary layers of the atmosphere, ocean and inland water bodies and their parameterization in Earth system models", RSCF grant no. 21-71-30003 (development of the DNS model), and by MESRF as part of the program of the Moscow Center for Fundamental and Applied Mathematics under agreement no. 075-15-2022-284 (DNS of stably stratified Couette flow). DNS experiments were carried out using the CSC HPC center infrastructure and the shared research facilities of the HPC computing resources at MSU. ## Data availability The datasets generated during and/or analysed during the current study are available from the corresponding author upon reasonable request.
2306.17430
Multi-votes Election Control by Selecting Rules
We study the election control problem with multi-votes, where each voter can present a separate vote according to different views (or layers; we use "layer" to represent "view"). For example, according to attributes of the candidates, such as education, hobby, or their relationship to the candidates, a voter may present different preferences over the same candidate set. Here, we consider a new model of election control in which, by assigning different rules to the votes from different layers, a special candidate p is made the winner of the election (a rule can be assigned to different layers). Given a set of candidates C including a special candidate "p", a set of voters V, and t layers, where each voter gives t votes over all candidates, one for each layer, and a set of voting rules R, the task is to find an assignment of rules to the layers such that p is acceptable to the voters (a possible winner of the election). Three models are considered (denoted as sum-model, max-model, and min-model) to measure the satisfaction of each voter. In this paper, we analyze the computational complexity of finding such a rule assignment, including classical complexity and parameterized complexity. It is interesting to find out that 1) the problem is NP-hard even if there are only two voters in the sum-model, or only two rules in the sum-model and max-model; 2) it is intractable with the number of layers as parameter for all three models; 3) even when the satisfaction of each vote is dichotomous, 1 or 0, it remains hard to find an acceptable rule assignment. Furthermore, we also obtain some other intractable and tractable results.
Fengbo Wang, Aizhong Zhou, Jianliang Xu
2023-06-30T07:03:20Z
http://arxiv.org/abs/2306.17430v1
# Multi-votes Election Control by Selecting Rules ###### Abstract We study the election control problem with multi-votes, where each voter can present a single vote according different views (or layers, we use "layer" to represent "view"). For example, according to the attributes of candidates, such as: education, hobby or the relationship of candidates, a voter may present different preferences for the same candidate set. Here, we consider a new model of election control that by assigning different rules to the votes from different layers, makes the special candidate p being the winner of the election (a rule can be assigned to different layers). Assuming a set of candidates C among a special candidate "p", a set of voters V, and t layers, each voter gives t votes over all candidates, one for each layer, a set of voting rules R, the task is to find an assignment of rules to each layer that p is acceptable for voters (possible winner of the election). Three models are considered (denoted as sum-model, max-model, and min-model) to measure the satisfaction of each voter. In this paper, we analyze the computational complexity of finding such a rule assignment, including classical complexity and parameterized complexity. It is interesting to find out that 1) it is NP-hard even if there are only two voters in the sum-model, or there are only two rules in sum-model and max-model; 2) it is intractable with the number of layers as parameter for all of three models; 3) even the satisfaction of each vote is set as dichotomous, 1 or 0, it remains hard to find out an acceptable rule assignment. Furthermore, we also get some other intractable and tractable results. Keywords:Multi-votes Computational complexity Control. ## 1 Introduction Elections are a commonly used mechanism to achieve preference aggregation and have applications in multi-agent settings and political domains. This problem also plays a fundamental role in artificial intelligence and social choice [1, 2]. Most cases studied are set to find out a single winner, voting can also be used to select a fixed-size set of winners (multi-winner), called committee. The first innovation of our work is that we consider the condition where each voter can present multi-votes. The traditional election only allows each voter to provide a single vote, which is insufficient in many real applications. For example, a single person has different attributes in different scenes, such as, he is a husband when he accompanies with his wife, and is a teacher when he faces to the students. It is natural for us to present different preferences among agents from the different viewpoints. Such as, from a romantic perspective, giving rose to others is better than a candy, while from a perspective of filling the stomach, we often prefer candy to rose. The conditions where each voter is allowed to present multi-votes studied in Aziz et al. [3], Chen et al. [4], Miyazaki and Okamoto [5] and Robert Bredereck et al. [25]. Wen et al. [6] studied the multi-preference model of matching. A related work of Jain and Talmon [7] studied committee selection with multi-modal preferences, which assuming a set of candidates \(A\), a set of voters \(V\), and \(\ell\) layers, where each voter \(v\in V\) has ordinal preferences over the alternatives for each layer separately, the task is to select an acceptable committee \(S\subset A\) of size \(k\). We also consider the election with uncertainty, which is another hot topic in the research of social choice. 
In the context of winner determination, perhaps the most prominent problem in this category is vote uncertainty, the possible/necessary winner problem [8], where the voting rule is public information, but for each voter, only a partial order over the candidates is known; the goal is to determine if a candidate wins the election for some way (the possible winner) or for every way (the necessary winner) of completing the voters' preferences; a probabilistic variant of this problem has been considered [9]. Kocot considered if there is a committee that meets or exceeds the respective lower bound with respect to each of the rules [10]. Uncertainty about the voting rules has been recently investigated by Baumeister et al. [11], who also consider the situation where the voting rule will be chosen from a fixed set. Maciej Kocot et al. [10] has studied winner determination and voting manipulation under uncertainty. Edith Elkind and Gabor Erdelyi [12] studied the complexity of manipulation for the setting where there is uncertainty about the voting rule: the manipulator(s) know that the election will be conducted using a voting rule from a given list, and need to select their votes so as to succeed no matter which voting rule will eventually be chosen. A similar work has been in Conitzer et al [13]. We follow this line and continue to consider the scene where the voting rules are uncertain, and our work is to find a set of satisfying rules assigned to each layer. Another contribution of this paper is that we consider a new model of election control where assigning rules to each layer to determine the election winner (the satisfaction of the vote is achieved by the assigned rule). It can be seen as an attack to control the winner of the election. The computational complexity of elections under attacks has been studied extensively, since Bartholdi et al. [14] introduced the usage of computational complexity as a barrier to protect elections against different manipulative actions. The common attacks among manipulation, control, and bribery. See the book chapters [15, 16] for recent surveys of related results. Here, we focus on the control attacks on elections, where an election chair attempts by doing some operations to make a special candidate win the election, the _constructive control_ model [14], or lose the election, the _destructive control_ model [17]. The common operations include adding candidates, adding votes, deleting candidates, deleting votes, partition of candidates, and partition of votes et al. Complexity results of control problems have been obtained for many election systems such as Plurality, Condorcet, Approval Voting, Copeland, Schulze Voting and Borda [15, 18, 19, 20]. Operations such as partitioning the candidates or votes have also been consider [20, 21]. Furthermore, control problems have also been studied in connection to some special vote structures such as single-peaked or single-dived [22]. In this paper, we consider another operation that by selecting rules to different votes to make a special candidate being the winner of the election (the constructive control case). Considering the computational complexity of this new model of election control can reduce the impact on the fairness of election and ensure the rationality of the winner. In summary, our work combines the characteristic of multi-votes, uncertainty and control together, and studies the computational complexity of the problem, called _Multi-votes Election Control By Selecting Rules, MECSR_ for short. 
The multi-votes election provides rule uncertainty with a well existence opportunity. When each voter provides a single vote, only a rule is applied to this voter. So, the task is to chose a rule from the rule set or to determine which vote is applied with the chosen rule. However, when each voter provides multi-votes, it presents a possibility that multi-votes are applied with different rules, and the task is to achieve a satisfying rule assignment (the satisfaction of voters is enough) to different rules. To measure the satisfaction of each voter, we consider three models, _Sum-Model_, _Max-Model_, and _Min-Model_, and find out that the MECSR problem is NP-hard for all of the three models. We continue to study the parameterized complexity with the three models, and get some tractable and intractable results (shown in table.1). It is interesting to find out that 1) it is NP-hard even if there are only two voters in sum-model, or two rules in sum-model or max-model, 2) it is intractable with the number of layers as parameter for all of the three rules, 3) even the satisfaction of each vote is set as dichotomous, 1 or 0, it is still hard to find out an acceptable rule assignment. In the following of this paper, we first present the preliminaries in section 2, and show the details of classical and parameterized complexity results in section 3. Finally, we summarize our work and present some interesting future work in section 4. ## 2 Preliminaries A traditional election denoted as \(E=(C,V)\) among \(m\) candidates in \(C\) and \(n\) votes in \(V\) from \(n\) voters. The aim of the election is to select a single satisfied candidate \(c\) from \(C\) to be the winner, according to the votes in \(V\). Here, we analyse a special model of election where each voter gives \(t\) votes over all candidates \(C\) from in \(t\) layers, such as experience or education, with each vote corresponding to one layer. The vote set \(V\) contains \(n\) subset \(V_{i}(1\leq i\leq n)\), where each subset corresponds to a voter, \(V=\bigcup_{1\leq i\leq n}V_{i}\); and each subset \(V_{i}\) contains \(t\) votes \(v_{i}^{j}(1\leq j\leq t)\), each vote corresponds to a layer of the voter, \(V_{i}=\bigcup_{1\leq j\leq t}v_{i}^{j}\). The vote \(v_{i}^{j}\) is presented by the \(i-\)th voter from the \(j-\)th layer. To measure the satisfaction of vote \(v\) with the chosen winner \(c\) being the winner, we often think about the rules (such as: Borda) which can calculate a value, \(\mathtt{Sat}(c,v,r)\) with rule. The satisfaction of a voter \(\mathtt{Sat}(V)\) is obtained according to the \(t\) vote satisfactions \(\mathtt{Sat}(c,v,r)\) with \(v\in V\). When \(\mathtt{Sat}(\mathrm{V})\) reaches a given threshold \(\mathrm{d}\), we call the voter _accepts_ the winner \(c\), otherwise, we call the voter _rejects_ the winner \(c\). The satisfaction of each voter is determined by combination of his \(t\) votes and the chosen winner \(c\). Hereby, we consider the condition called as _Multi-votes Election Control By Selecting Rules(MECSR)_ that given a set of rules \(R=\{r_{1},\cdots,r_{\ell}\}\), the satisfaction threshold \(d\) and a special candidate \(p\in C\), is there an assignment of rules to each layer to make sure that the satisfaction of each voter is at least \(d\) with \(p\) being the winner? There are some notes about our work described as follows: 1). Same layers of different voters share common rules, and a single rule can be assigned to different layers; 2). 
We do not require \(p\) to be the unique winner, which means the rule assignment may potentially result in another candidate being the winner, such an outcome is acceptable; 3). Here, we consider the rules which can calculate \(\mathtt{Sat}(c,v)\) in polynomial time with given candidate \(c\) and vote \(v\); 4). Although we just consider the special candidate \(p\) being the winner here, our work can be applied to the committee election with the rules which can get a value of each candidate from the votes directly, such as Borda, plurality or veto. ### Problem Definition Here, We define the central problem of this paper. **Multi-votes Election Control By Selecting Rules(MECSR)** **Input**: An election \(E=(C,V,R,t)\), each voter provides \(t\) votes over all \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Model & Classical Complexity & \multicolumn{4}{|c|}{Parameterized Complexity} \\ \hline & & \(n\) & \(t\) & \(\ell\) & \(\alpha\) \\ \hline \multirow{3}{*}{Sum} & **NP-hard** & \(n\leq t:\textbf{W[2]-hard}\) & **W[2]-hard** & \(\ell=2\)**:NP-hard** & **W[1]-hard** \\ & **[Theorem 1]** & **[Theorem 4]** & **[Theorem 1]** & **[Theorem 2]** & **[Theorem 3]** \\ \hline \multirow{3}{*}{Max} & **NP-hard** & \(n\leq t:\textbf{W[2]-hard}\) & **W[2]-hard** & \(\ell=2\)**:NP-hard** & **W[1]-hard** \\ & **[Theorem 5]** & **[Theorem 8]** & **[Theorem 5]** & **[Theorem 7]** & **[Theorem 6]** \\ \hline \multirow{2}{*}{Min} & **NP-hard** & \multirow{2}{*}{FPT} & **W[1]-hard** & \(\ell\leq t:\textbf{W[1]-hard}\) & **W[1]-hard** \\ & **[Theorem 9]** & & **[Theorem 9]** & **[Theorem 10]** & **[Theorem 9]** \\ \hline \end{tabular} \end{table} Table 1: In this table, we summarize our results including classical and parameterized complexity. The \(n\) denotes the number of voters, \(t\) denotes the number of votes presented by each voter (the number of layers), \(\ell\) denotes the number of rules, and \(\alpha\) denotes the number of satisfied voters. It is trivially in P when there is only one voter, one layer, or one rule. And when all of the voters all voters accept the winner with min-model (\(n=\alpha\)), we can obviously find out an acceptable rule assignment in polynomial-time. Therefore, it is FPT of MECSR problem with the number of voters as parameter. All of the results shown in this table are reached, even if the satisfaction of each vote is set as dichotomous, 0 or 1. candidates in \(C\) where each vote for each layer, a set \(R\) of rules, a special candidate \(p\in C\), and two positive integers \(d\) and \(\alpha\). **Question**: Is there an assignment of rules in \(R\) for each layer that the at least \(\alpha\) voters accept the winner \(p\) (\(\mathtt{Sat}(V)\geq d\))? Since the special candidate \(p\in C\), each vote \(v\in V\), and each rule \(r\in R\) are part of the input, we can calculate \(\mathtt{Sat}(p,v,r)\) in polynomial time, denoted as \(\mathtt{Sat}(v,r)\). Therefore, we can consider \(\mathtt{Sat}(v,r)\) as part of the input directly, without specifying the formats of each vote \(v\) and each rule \(r\). In this paper, we investigate three models of calculating voter satisfaction, which have also been studied by Aziz et al. 
[3]: * Sum-Model: The satisfaction of each voter is the total satisfaction of all \(t\) layers, \(\mathtt{Sat}(V_{i})=\sum_{j=1}^{t}\mathtt{Sat}(v_{i}^{j})\); * Max-Model: The satisfaction of each voter is the maximal satisfaction among the \(t\) layers, \(\mathtt{Sat}(V_{i})=\max\{\mathtt{Sat}(v_{i}^{j})|1\leq j\leq t\}\); * Min-Model: The satisfaction of each voter is the minimal satisfaction among the \(t\) layers, \(\mathtt{Sat}(V_{i})=\min\{\mathtt{Sat}(v_{i}^{j})|1\leq j\leq t\}\). The sum-model measures the total satisfaction of a voter and does not consider the individual satisfaction of each vote; in the max-model, voters accept the chosen candidate when the satisfaction is enough from at least one vote; and for the min-model, voters accept the chosen candidate only if the satisfaction is enough from all votes. ### Parameterized Complexity Parameterized complexity allows us to give a more refined analysis of computational problems and in particular, can provide a deep exploration of the connection between the problem complexity and various problem-specific parameters. A fixed-parameter tractable (FPT) problem admits an \(O(f(k)\cdot|I|^{O(1)})\)-time algorithm, where \(I\) denotes the whole input instance, \(k\) is the parameter, and \(f\) can be any computable function. Fixed-parameter intractable problems can be classified into many complexity classes, where the most fundamental ones are W[1] and W[2]. A problem is para-NP-hard with respect to parameter \(k\), when the problem remains NP-hard even if \(k\) is a fixed constant. For more details on parameterized complexity, we refer to [23, 24]. ## 3 Classical and parameterized complexity In this section, we show the computational complexity of MECSR problem with sum-model, max-model, or min-model. It is trivial MECSR problem is in P when there is only one rule, one layer, or one voter. Otherwise, MECSR problem is NP-hard for all of the three models. Furthermore, we achieve some intractable results and two tractable results. For ease of the description, we use \(j\in[n]\) to represent \(1\leq j\leq n\), use \(j\in[n_{1},n_{2}]\) to represent \(n_{1}\leq j\leq n_{2}\), where \(j\) is a non-negative integer; use \(N[v]\) to represent the neighborhood set of \(v\), which includes the vertex \(v\) itself along with its adjacent vertices. ### Complexity with Sum model In this section, we present the complexity results for the MECSR problem with sum-model. In sum-model, the satisfaction of each voter is calculated by the sum of satisfactions from all \(t\) layers, denoted as \(\mathtt{Sat}(V_{i})=\sum_{j=1}^{t}\mathtt{Sat}(v_{i}^{j})\). We achieve 1) MECSR problem with sum-model is NP-hard even when there are only two rules; 2) it is W[2]-hard with respect to the number of layers \(t\); 3)it is W[1]-hard with the number of satisfied voters \(\alpha\) as the parameter. Theorem 3.1: _The MECSR problem with sum-model is NP-hard and is W[2]-hard with respect to the number of layers \(t\)._ Proof: We prove the theorem by reducing from Dominating Set problem, which given a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) and an integer \(k^{\prime}\) where \(|\mathcal{V}|=m^{\prime}\) and \(|\mathcal{E}|=n^{\prime}\), asks for a size-\(\leq k^{\prime}\) vertex subset \(\mathcal{V}^{\prime}\subseteq\mathcal{V}\) where \(\forall v\in\mathcal{V},\exists v^{\prime}\in\mathcal{V}^{\prime},v\in N[v^{ \prime}]\). It is known that Dominating set problem is NP-hard and is W[2]-hard with respect to the size of \(\mathcal{V}^{\prime}\)[24]. 
We construct an MECSR instance (\(E=(C,V,R,t),\alpha,d\)) from \((\mathcal{G}=(\mathcal{V},\mathcal{E}),k^{\prime})\) as follows. For each vertex \(v_{i}\in\mathcal{V},i\in[m^{\prime}]\), we construct a voter \(V_{i}\) and a rule \(r_{i}\), \(V=\bigcup_{i=1}^{m^{\prime}}V_{i}\), \(R=\bigcup_{i=1}^{m^{\prime}}r_{i}\). There are \(k^{\prime}\) layers in total, \(t=k^{\prime}\). For each voter \(V_{i},i\in[m^{\prime}]\), we construct \(k^{\prime}\) votes, \(V_{i}=\bigcup_{j=1}^{k^{\prime}}v_{i}^{j}\). The satisfaction of vote \(v_{i}^{j}(j\in[k^{\prime}])\) with rule \(r_{k}\) is set to 1, if the corresponding vertices \(v_{i}\) and \(v_{k}\) satisfy \(v_{i}\in N[v_{k}]\) in \(\mathcal{G}\), \(\mathtt{Sat}(v_{i}^{j},r_{k})=1\); otherwise, the satisfaction is set to 0, \(\mathtt{Sat}(v_{i}^{j},r_{k})=0\). \[\mathtt{Sat}(v_{i}^{j},r_{k})=\left\{\begin{array}{rl}&1,\quad v_{i}\in N[ v_{k}],\\ &0,\quad v_{i}\notin N[v_{k}].\end{array}\right.\] Note that all \(t=k^{\prime}\) votes of one voter are the same. Let \(d:=1\), \(\alpha:=n^{\prime}\). Now we prove that there is a size-\(k^{\prime}\) dominating set in \(\mathcal{G}\) if and only if there is a rule assignment solution of MECSR problem with sum-model. "\(\Longrightarrow\)": If there is a size-\(\leq k^{\prime}\) dominating set \(DS\) in \(\mathcal{G}\), \(|DS|\leq k^{\prime}\) and \(\forall v\in\mathcal{V},\exists v^{\prime}\in DS,v\in N[v^{\prime}]\). Let \(R^{\prime}\) be the set of rules corresponding to the vertices in \(DS\), \(|R^{\prime}|\leq k^{\prime}\), that is, \(\forall v_{i}\in DS,r_{i}\in R^{\prime}\). For each vertex \(v_{i}\in DS\), \(r_{i}\) is assigned to one layer. It means each rule in \(R^{\prime}\) is assigned to one layer. Since all the \(k^{\prime}\) votes of one voter are same, the assignment order of the chosen \(k^{\prime}\) rules has no effect on the satisfaction of each voter. Since each vertex \(v_{i}\in\mathcal{V}\) is adjacent to at least one vertex \(v_{k}\in DS\), it means for each voter the satisfaction of at least one layer with the rule \(r_{k}\) is 1, that is \(\exists r_{k}\in R^{\prime},\mathtt{Sat}(p,v_{i}^{j},r_{k})=1\). So, for each voter \(V_{i}\), the total satisfaction of \(V_{i}\) is at least \(d=1\): \[\mathtt{Sat}(V_{i})=\sum_{j=1}^{k^{\prime}}\mathtt{Sat}(v_{i}^{j},r_{j^{ \prime}})\geq\mathtt{Sat}(v_{i}^{j},r_{k})=1.\] Therefore, the MECSR instance has a rule assignment to make \(p\) being the possible winner of the election. "\(\Longleftarrow\)": Suppose there is a rule assignment of MECSR, where the satisfaction of each voter \(V_{i}\) is at least \(d=1\). Let \(R^{\prime}\) be the set of rules of the rule assignment, \(|R^{\prime}|\leq k^{\prime}\) and \(r_{j^{\prime}}\in R^{\prime}\) is the rule assigned to \(j-\)th layer. Then, for each voter \(V_{i}\), it must hold \(\mathtt{Sat}(V_{i})=\sum_{j=1}^{k^{\prime}}\mathtt{Sat}(v_{i}^{j},r_{j^{ \prime}})\geq d=1\). Since the satisfaction for a voter of each layer can only be 1 or 0, there must be a layer where the satisfaction is 1 with the assigned rule \(r_{j^{\prime}}\). It means: \[\exists r_{j^{\prime}}\in R^{\prime},\mathtt{Sat}(v_{i}^{j},r_{j^{\prime}})=1.\] So, for each voter \(V_{i}\), the corresponding vertex \(v_{i}\) must hold \(v_{i}\in[v_{j^{\prime}}]\) with satisfying \(\mathtt{Sat}(v_{i}^{j},r_{j^{\prime}})=1\). Therefore, the set of vertices corresponding to the rules in \(R^{\prime}\) is a size-\(\leq k^{\prime}\) dominating set of \(\mathcal{G}\). This completes the proof of this theorem. 
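To make the construction above concrete, the sketch below (illustrative Python, not from the paper) builds the satisfaction table of the reduction from a graph, evaluates the sum-model satisfaction for a given rule assignment, and brute-forces all rule assignments on a small instance, requiring every voter to accept p with threshold d = 1; the brute force is exponential and is meant only to check the equivalence argued in the proof.

```python
from itertools import product

def build_instance(n_vertices, edges, k):
    """Construction of the theorem: voter i <-> vertex i, rule r <-> vertex r,
    t = k layers, and Sat(v_i^j, r) = 1 iff vertex i lies in the closed
    neighbourhood N[r]; all k votes of a voter are identical."""
    closed_nb = {v: {v} for v in range(n_vertices)}
    for a, b in edges:
        closed_nb[a].add(b)
        closed_nb[b].add(a)
    sat = [[1 if i in closed_nb[r] else 0 for r in range(n_vertices)]
           for i in range(n_vertices)]
    return sat, k

def satisfied_voters_sum(sat, assignment, d=1):
    """Number of voters whose total (sum-model) satisfaction over all layers is >= d."""
    return sum(1 for row in sat if sum(row[r] for r in assignment) >= d)

def exists_assignment(sat, t, n_rules, alpha, d=1):
    """Brute force over all n_rules**t rule assignments (small instances only)."""
    return any(satisfied_voters_sum(sat, asg, d) >= alpha
               for asg in product(range(n_rules), repeat=t))

# Example: the path 0-1-2-3 has a dominating set of size 2 (e.g. {1, 3} or {0, 2}),
# so with t = 2 layers some assignment satisfies all four voters with d = 1.
sat, t = build_instance(4, [(0, 1), (1, 2), (2, 3)], k=2)
print(exists_assignment(sat, t, n_rules=4, alpha=4, d=1))  # True
```

For the 4-vertex path in the example, the assignment of rules corresponding to the dominating set {1, 3} (or {0, 2}) satisfies all four voters, matching the forward direction of the proof.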
Theorem 3.1: _The MECSR problem with sum-model is NP-hard even if there are only two rules._ Proof: We prove the theorem by reducing from Dominating Set problem. We construct an MECSR instance (\(E=(C,V,R,t),\alpha,d\)) from \((\mathcal{G}=(\mathcal{V},\mathcal{E}),k^{\prime})\) as follows. For each vertex \(v_{i}\in\mathcal{V}(i\in[m^{\prime}])\), we construct a voter \(V_{i}\). We also construct another voter \(V_{m^{\prime}+1}\), \(V=\bigcup_{i\in[m^{\prime}+1]}V_{i}\). There are two rules, \(r_{1}\) and \(r_{2}\), \(R=\{r_{1},r_{2}\}\), and \(2m^{\prime}\) layers, \(t=2m^{\prime}\). The satisfaction of each vote is set as follows: * For the \(j-\)th layer with \(j\in[m^{\prime}]\): * \(\mathtt{Sat}(v_{i}^{j},r_{1})=1\), \(i\in[m^{\prime}]\) and \(v_{i}\in N[v_{j}]\); * \(\mathtt{Sat}(v_{i}^{j},r_{1})=0\), \(i\in[m^{\prime}]\) and \(v_{i}\notin N[v_{j}]\); * \(\mathtt{Sat}(v_{i}^{j},r_{2})=0\), \(i\in[m^{\prime}]\); * \(\mathtt{Sat}(v_{i}^{j},r_{1})=0\), \(\mathtt{Sat}(v_{i}^{j},r_{2})=1\), \(i=m^{\prime}+1\). * For the \(j-\)th layer with \(j\in[m^{\prime}+1,2m^{\prime}-1]\): * \(\mathtt{Sat}(v_{i}^{j},r_{1})=\mathtt{Sat}(v_{i}^{j},r_{2})=1\), \(i\in[m^{\prime}]\); * \(\mathtt{Sat}(v_{i}^{j},r_{1})=\mathtt{Sat}(v_{i}^{j},r_{2})=1\), \(i=m^{\prime}+1\) and \(j\in[m^{\prime}+1,m^{\prime}+k^{\prime}]\); * \(\mathtt{Sat}(v_{i}^{j},r_{1})=\mathtt{Sat}(v_{i}^{j},r_{2})=0\), \(i=m^{\prime}+1\) and \(j\in[m^{\prime}+k^{\prime}+1,2m^{\prime}-1]\); * For the \(j-\)th layer with \(j=2m^{\prime}\): * \(\mathtt{Sat}(p,v_{i}^{2m^{\prime}},r_{1})=\mathtt{Sat}(p,v_{i}^{2m^{\prime}}, r_{2})=0\), \(i\in[m^{\prime}+1]\). Let \(\alpha:=m^{\prime}+1\), \(d:=m^{\prime}\). Now, we show that the Dominating set instance has a size-\(\leq k^{\prime}\) dominating set if and only if there is a rule assignment of \(r_{1}\) and \(r_{2}\) of MECSR instance with sum-model such that \(\forall i\in[m^{\prime}+1]\), \(\sum_{j=1}^{2m^{\prime}}\mathtt{Sat}(v_{i}^{j})\geq m^{\prime}\). "\(\Longrightarrow\)": Suppose that there exists a size-\(\leq k^{\prime}\) dominating set \(\mathcal{V}^{\prime}\) in \(\mathcal{G}\). For \(j\in[m^{\prime}]\), \(r_{1}\) is assigned to the \(k^{\prime}\) layers corresponding to the vertices in \(\mathcal{V}^{\prime}\), while \(r_{2}\) is assigned to the other layers. For example, if \(v_{2}\) is in \(V^{\prime}\), \(r_{1}\) is assigned to the second layer; otherwise, \(v_{2}\) is not in \(V^{\prime}\), \(r_{2}\) is assigned to the second layer. For \(j\in[m^{\prime}+1,2m^{\prime}]\), the allocation of \(r_{1}\) and \(r_{2}\) is random. Since \(k^{\prime}\) layers corresponding to the vertices in \(\mathcal{V}^{\prime}\) are assigned with rule \(r_{1}\), and \(m-k^{\prime}\) layers are assigned with rule \(r_{2}\), it holds: * When \(j\in[m^{\prime}]\): * \(\sum\limits_{v_{j}\in V^{\prime}}\mathtt{Sat}(v_{i}^{j},r)=\sum\limits_{v_{j}\in V ^{\prime}}\mathtt{Sat}(v_{i}^{j},r_{1})+\sum\limits_{v_{j}\notin V^{\prime}} \mathtt{Sat}(v_{i}^{j},r_{2})\geq 1+0=1\), \(i\in[m^{\prime}]\); * \(\sum\limits_{v_{j}\in V^{\prime}}\mathtt{Sat}(v_{i}^{j},r)=\sum\limits_{v_{j} \in V^{\prime}}\mathtt{Sat}(v_{i}^{j},r_{1})+\sum\limits_{v_{j}\notin V^{\prime }}\mathtt{Sat}(v_{i}^{j},r_{2})=0+(m^{\prime}-k^{\prime})=m^{\prime}-k^{\prime}\), \(i=[m^{\prime}+1]\). 
* When \(j\in[m^{\prime}+1,2m^{\prime}]\): * \(\sum\limits_{j\in[m^{\prime}+1,2m^{\prime}]}\mathtt{Sat}(v_{i}^{j},r)=\sum \limits_{j\in[m^{\prime}+1,2m^{\prime}-1]}\mathtt{Sat}(v_{i}^{j},r)+\mathtt{ Sat}(v_{i}^{2m^{\prime}},r)=m^{\prime}-1+0=m^{\prime}-1\), \(i\in[m^{\prime}]\); * \(\sum\limits_{j\in[m^{\prime}+1,2m^{\prime}]}\mathtt{Sat}(v_{i}^{j},r)=\sum \limits_{j\in[m^{\prime}+1,m^{\prime}+k]}\mathtt{Sat}(v_{i}^{j},r)+\sum \limits_{j\in[m^{\prime}+k^{\prime}+1,2m^{\prime}]}\mathtt{Sat}(v_{i}^{j},r)=k ^{\prime}+0=k^{\prime}\), \(i=m^{\prime}+1\). * For the \(j-\)th layer with \(j=2m^{\prime}\): * \(\mathtt{Sat}(p,v_{i}^{2m^{\prime}},r_{1})=\mathtt{Sat}(p,v_{i}^{2m^{\prime}}, r_{2})=0\), \(i\in[m^{\prime}+1]\). Note that, the satisfaction \(v_{i}^{j}\) remains constant regardless of the \(r_{1}\) or \(r_{2}\) is assigned to \(j-\)th layer, when \(j\in[m^{\prime}+1,2m^{\prime}]\). So, we can get: * \(\mathtt{Sat}(V_{i})=\sum\limits_{j\in[m^{\prime}]}\mathtt{Sat}(v_{i}^{j},r)+ \sum\limits_{j\in[m^{\prime}+1,2m^{\prime}]}\mathtt{Sat}(v_{i}^{j},r)\geq 1+m^{ \prime}-1=m^{\prime}=d\), \(i\in[m^{\prime}]\) * \(\mathtt{Sat}(V_{i})=\sum\limits_{j\in[m^{\prime}]}\mathtt{Sat}(v_{i}^{j},r)+ \sum\limits_{j\in[m^{\prime}+1,2m^{\prime}]}\mathtt{Sat}(v_{i}^{j},r)\geq 1+m^{ \prime}-1=m^{\prime}=d\), \(i=[m^{\prime}+1]\) Therefore, the total satisfaction of \(v_{i}^{j}\) is at least \(m^{\prime}\) for all \(i\in[m^{\prime}+1]\), \(j\in[m^{\prime}+1,2m^{\prime}]\). "\(\Longleftarrow\)": Suppose there is a rule assignment of MECSR, where the satisfaction of each voter \(V_{i}\) is at least \(m^{\prime}\). According to the votes of \(V_{m^{\prime}+1}\), the total satisfaction of \(v_{m^{\prime}+1}^{j}\) is always \(k^{\prime}\) regardless of \(r_{1}\) or \(r_{2}\) is assigned to the \(j\)-th layer, \(j\in[m^{\prime}+1,2m^{\prime}]\). That is: \[\sum\limits_{j\in[m^{\prime}+1,2m^{\prime}]}\mathtt{Sat}(v_{m^{ \prime}+1}^{j},r_{1})=\sum\limits_{j\in[m^{\prime}+1,2m^{\prime}]}\mathtt{Sat} (v_{m^{\prime}+1}^{j},r_{2})\] \[=\sum\limits_{j\in[m^{\prime}+1,m^{\prime}+k^{\prime}]}\mathtt{ Sat}(v_{m^{\prime}+1}^{j},r_{1})+\sum\limits_{j\in[m^{\prime}+k^{\prime}+1,2m^{ \prime}]}\mathtt{Sat}(v_{m^{\prime}+1}^{j},r_{2})\] \[=k^{\prime}+0=k^{\prime}.\] \(\mathtt{Sat}(p,v_{m^{\prime}+1}^{j},r_{1})=\mathtt{Sat}(p,v_{m^{\prime}+1}^{j},r_{2})\). To reach the threshold \(d=m^{\prime}\), for the \(j\)-th layer, \(j\in[m^{\prime}]\), at most \(k^{\prime}\) layers can be assigned with \(r_{1}\). For the voters of \(V_{i}\), \(i\in[m^{\prime}]\), it always holds: * \(\sum\limits_{j\in[m^{\prime}+1,2m^{\prime}]}\mathtt{Sat}(v_{i}^{j},r)+\sum \limits_{j\in[m^{\prime}+1,2m^{\prime}-1]}\mathtt{Sat}(v_{i}^{2m^{\prime}},r)= m^{\prime}-1\), \(i\in[m^{\prime}]\) * \(\sum\limits_{j\in[m^{\prime}+1,2m^{\prime}]}\mathtt{Sat}(v_{i}^{j},r)=\sum \limits_{j\in[m^{\prime}+1,m^{\prime}+k^{\prime}]}\mathtt{Sat}(v_{i}^{j},r)+ \sum\limits_{j\in[m^{\prime}+k^{\prime}+1,2m^{\prime}]}\mathtt{Sat}(v_{i}^{j},r)\) \(=k^{\prime}+0=k^{\prime}\), \(i=[m^{\prime}+1]\) Therefore, for each \(j\)-th layer with \(j\in[m^{\prime}]\), at least one layer is assigned with \(r_{1}\) to receive the satisfaction of \(1\) for \(V_{i}(i\in[m^{\prime}])\), and at most \(k^{\prime}\) layers can be assigned with \(r_{1}\) to ensure that the total satisfaction \(\sum\limits_{j\in[m^{\prime}]}\mathtt{Sat}(v_{m^{\prime}+1}^{j},r)\geq m^{ \prime}-k^{\prime}\). 
Since the \(j\)-th layer (\(j\in[m^{\prime}]\)) corresponds to the vertex \(v_{j}\) in the graph \(\mathcal{G}\), we have \(\mathtt{Sat}(v_{i}^{j},r_{1})=1\) only when the corresponding vertex \(v_{i}\in N[v_{j}]\), where \(N[v_{j}]\) denotes the closed neighborhood of \(v_{j}\). Hence the at most \(k^{\prime}\) layers assigned \(r_{1}\) correspond to a dominating set of size at most \(k^{\prime}\). Therefore, the instance \((E=(C,V,R,t),\alpha,d)\) has an acceptable rule assignment if and only if the graph \((\mathcal{G}=(\mathcal{V},\mathcal{E}),k^{\prime})\) has a dominating set of size at most \(k^{\prime}\).

Theorem 3.1: _The MECSR problem with sum-model is W[1]-hard with respect to the number of satisfied voters \(\alpha\)._

Proof: We prove this theorem by giving a reduction from 3-Set Packing, in which we are given a set of elements \(X=\bigcup_{i\in[m^{\prime}]}\{x_{i}\}\) and a collection of sets \(\mathcal{S}=\bigcup_{i\in[n^{\prime}]}\{\mathcal{S}_{i}\}\) with \(\mathcal{S}_{i}\subset X\) and \(|\mathcal{S}_{i}|=3\) for all \(i\in[n^{\prime}]\), and are asked for a size-\(k^{\prime}\) subcollection \(\mathcal{S}^{\prime}\) of \(\mathcal{S}\) such that \(\mathcal{S}_{i}\cap\mathcal{S}_{i^{\prime}}=\emptyset\) for all distinct \(\mathcal{S}_{i},\mathcal{S}_{i^{\prime}}\in\mathcal{S}^{\prime}\). It is known that 3-Set Packing is NP-hard and W[1]-hard with respect to \(k^{\prime}\). We construct an MECSR instance \((E=(C,V,R,t),\alpha,d)\) from \(((X,\mathcal{S}),k^{\prime})\), where \(|X|=m^{\prime}\) and \(|\mathcal{S}|=n^{\prime}\), as follows. For each element \(x_{i}\in X\), we construct a voter \(V_{i}\), \(V=\bigcup_{i\in[m^{\prime}]}V_{i}\). There are \(k^{\prime}\) layers in total, \(t=k^{\prime}\), and \(V_{i}=\bigcup_{j\in[k^{\prime}]}v_{i}^{j}\). For each set \(\mathcal{S}_{k}\in\mathcal{S}\), we construct a rule \(r_{k}\), \(R=\bigcup_{k\in[n^{\prime}]}\{r_{k}\}\). For each vote \(v_{i}^{j}\) (\(j\in[k^{\prime}]\)) and rule \(r_{k}\), if the corresponding element \(x_{i}\in\mathcal{S}_{k}\), then \(\mathtt{Sat}(v_{i}^{j},r_{k})\) is set to \(1\); otherwise, \(\mathtt{Sat}(v_{i}^{j},r_{k})\) is set to \(0\):

\[\mathtt{Sat}(v_{i}^{j},r_{k})=\left\{\begin{array}{rl}&1,\;\;\;x_{i}\in\mathcal{S}_{k},\\ &0,\;\;\;x_{i}\notin\mathcal{S}_{k}.\end{array}\right.\]

Note that the \(k^{\prime}\) votes of one voter are all the same. Let \(\alpha:=3k^{\prime}\) and \(d:=1\). Now we prove that there is a size-\(k^{\prime}\) subcollection \(\mathcal{S}^{\prime}\) of pairwise disjoint sets if and only if the MECSR instance with sum-model has a solution.

"\(\Longrightarrow\)": Suppose that there is a size-\(k^{\prime}\) subcollection \(\mathcal{S}^{\prime}\subset\mathcal{S}\) with \(\mathcal{S}_{i}\cap\mathcal{S}_{i^{\prime}}=\emptyset\) for all distinct \(\mathcal{S}_{i},\mathcal{S}_{i^{\prime}}\in\mathcal{S}^{\prime}\). Let \(\mathcal{S}^{\prime}=\{\mathcal{S}_{\beta_{1}},\mathcal{S}_{\beta_{2}},\cdots,\mathcal{S}_{\beta_{k^{\prime}}}\}\). For the \(j\)-th layer, the rule \(r_{\beta_{j}}\) is assigned; let \(R^{\prime}\) be this set of rules corresponding to the sets in \(\mathcal{S}^{\prime}\). In this way, for every \(x_{i}\in\mathcal{S}_{\beta_{j}}\) (\(j\in[k^{\prime}]\)), the satisfaction of vote \(v_{i}^{j}\) with rule \(r_{\beta_{j}}\) is \(1\), \(\mathtt{Sat}(v_{i}^{j},r_{\beta_{j}})=1\). So, the satisfaction of every voter \(V_{i}\) whose corresponding element \(x_{i}\) is covered by \(\mathcal{S}^{\prime}\) is at least \(1\), that is, \(\mathtt{Sat}(V_{i})\geq\mathtt{Sat}(v_{i}^{j},r_{\beta_{j}})\geq 1=d\).
The sets in \(\mathcal{S}^{\prime}\) cover exactly \(3k^{\prime}\) elements, because no two sets in \(\mathcal{S}^{\prime}\) share a common element and each set contains exactly \(3\) elements, \(|\mathcal{S}_{i}|=|\mathcal{S}_{i^{\prime}}|=3\). That is, there are at least \(\alpha=3k^{\prime}\) voters whose satisfaction is at least \(d=1\). Therefore, if \(((X,\mathcal{S}),k^{\prime})\) has a size-\(k^{\prime}\) subcollection \(\mathcal{S}^{\prime}\) of pairwise disjoint sets, the MECSR instance has an acceptable rule assignment \(R^{\prime}\).

"\(\Longleftarrow\)": Suppose that there is an assignment of rules to the layers under which there are at least \(\alpha=3k^{\prime}\) voters whose satisfaction is at least \(d=1\). Let \(r_{\gamma_{j}}\in R\) be the rule assigned to the \(j\)-th layer, \(j\in[k^{\prime}]\). For each satisfied voter \(V_{i}\), it must hold that \(\mathtt{Sat}(V_{i})=\sum_{j\in[k^{\prime}]}\mathtt{Sat}(v_{i}^{j},r_{\gamma_{j}})\geq 1=d\). This means that there is at least one vote \(v_{i}^{j}\) with \(\mathtt{Sat}(v_{i}^{j},r_{\gamma_{j}})=1\). Without loss of generality, let the \(j^{\prime}\)-th layer satisfy \(\mathtt{Sat}(v_{i}^{j^{\prime}},r_{\gamma_{j^{\prime}}})=1\) for some \(j^{\prime}\in[k^{\prime}]\). Then the corresponding element \(x_{i}\) must be in the set \(\mathcal{S}_{\gamma_{j^{\prime}}}\). Since there are \(k^{\prime}\) layers and at least \(\alpha=3k^{\prime}\) satisfied voters, the elements corresponding to the satisfied voters all lie in the \(k^{\prime}\) sets \(\mathcal{S}_{\gamma_{1}},\ldots,\mathcal{S}_{\gamma_{k^{\prime}}}\). Because each set \(\mathcal{S}_{i}\) contains exactly \(3\) elements, \(|\mathcal{S}_{i}|=3\) for all \(i\in[n^{\prime}]\), these \(k^{\prime}\) sets can cover at most \(3k^{\prime}\) elements; hence they cover exactly \(3k^{\prime}\) distinct elements and must be pairwise disjoint. Denote this collection of \(k^{\prime}\) sets by \(\mathcal{S}^{\prime}\). Therefore, if the MECSR instance has an acceptable rule assignment, then \(((X,\mathcal{S}),k^{\prime})\) has a size-\(k^{\prime}\) subcollection \(\mathcal{S}^{\prime}\) of \(\mathcal{S}\) with \(\mathcal{S}_{i}\cap\mathcal{S}_{i^{\prime}}=\emptyset\) for all distinct \(\mathcal{S}_{i},\mathcal{S}_{i^{\prime}}\in\mathcal{S}^{\prime}\). This completes the proof of this theorem.

Next, we consider the parameterized complexity of MECSR with sum-model with the number of voters \(n\) as parameter. When the satisfaction of each vote is dichotomous, \(0\) or \(1\), then, following the proof of Theorem 3.1, we can modify the construction of the layers and obtain an intractable result when \(n\leq t\); when \(n>t\), we obtain a tractable result by constructing an ILP. Moreover, we show that it is NP-hard to find an acceptable rule assignment even when there are only two voters. So, we get the following theorem.

Theorem 4.1: _The MECSR problem with sum-model is NP-hard even if there are only \(2\) voters, \(n=2\). When the satisfaction of each vote is dichotomous, \(0\) or \(1\), the MECSR problem with sum-model is W[2]-hard when \(n\leq t\), and is FPT when \(n>t\), with the number of voters \(n\) as parameter._

Proof: We first prove that the MECSR problem with sum-model is NP-hard even if there are only \(2\) voters, \(n:=2\), by reducing the Partition problem to MECSR.
Given a set \(X\) of \(n^{\prime}\) elements, where each element \(x_{i}\in X\) is associated with a value \(s_{i}\in S\), the Partition problem asks for a partition of the elements of \(X\) into two disjoint subsets \(X_{1}\) and \(X_{2}\), \(X=X_{1}\cup X_{2}\), \(X_{1}\cap X_{2}=\emptyset\), such that the sum of the values of the elements in \(X_{1}\) equals the sum of the values of the elements in \(X_{2}\), \(\sum_{x_{i}\in X_{1}}s_{i}=\sum_{x_{i^{\prime}}\in X_{2}}s_{i^{\prime}}\). It is well known that the Partition problem is NP-hard [26]. We construct an MECSR instance \((E=(C,V,R,t),\alpha,d)\) from \((X,S)\) as follows. We construct two voters \(V_{1}\), \(V_{2}\) and two rules \(r_{1}\), \(r_{2}\), \(V=V_{1}\cup V_{2}\), \(R=\{r_{1},r_{2}\}\). There are \(n^{\prime}\) layers in total, \(t:=n^{\prime}\). For each layer \(j\in[n^{\prime}]\), the satisfaction of vote \(v_{1}^{j}\) of voter \(V_{1}\) with \(r_{1}\) is set to \(s_{j}\) and with \(r_{2}\) to \(0\); the satisfaction of vote \(v_{2}^{j}\) of voter \(V_{2}\) with \(r_{1}\) is set to \(0\) and with \(r_{2}\) to \(s_{j}\):

\[\mathtt{Sat}(v_{i}^{j},r_{k})=\left\{\begin{array}{rl}s_{j},&i=1,r_{k}=r_{1},j\in[n^{\prime}],\\ 0,&i=1,r_{k}=r_{2},j\in[n^{\prime}],\\ 0,&i=2,r_{k}=r_{1},j\in[n^{\prime}],\\ s_{j},&i=2,r_{k}=r_{2},j\in[n^{\prime}].\end{array}\right.\]

Note that for the \(j\)-th layer, if rule \(r_{1}\) is chosen, then the satisfaction of \(V_{1}\) increases by \(s_{j}\); otherwise, rule \(r_{2}\) is chosen and the satisfaction of \(V_{2}\) increases by \(s_{j}\). We set \(\alpha:=2\) and \(d:=\frac{1}{2}N\), where \(N=\sum_{x_{i}\in X}s_{i}\). Each layer is assigned either \(r_{1}\) or \(r_{2}\), corresponding to assigning the respective element to either \(X_{1}\) or \(X_{2}\); and \(V_{1}\) and \(V_{2}\) are both satisfied with a value of at least \(d=\frac{1}{2}N\) exactly when \(X\) is partitioned into \(X_{1}\), \(X_{2}\) whose total values are both \(\frac{1}{2}N\). Therefore, there is a partition for \((X,S)\) if and only if there is a solution of \((E=(C,V,R,t),\alpha,d)\).

Next, we argue why the MECSR problem is W[2]-hard with respect to the number of voters \(n\) when \(n\leq t\). According to the proof of Theorem 1, when \(n=m^{\prime}\) (\(m^{\prime}\) is the number of vertices) and \(t=k^{\prime}\) (\(k^{\prime}\) is the size of the dominating set), the MECSR problem is W[2]-hard with respect to the number of layers \(t\). We can apply the following modifications to the proof of Theorem 1:

* Add \(\lambda\) layers for each voter (\(\lambda\geq m^{\prime}-k^{\prime}\));
* Set the satisfaction of the votes in the added layers to \(0\) for every rule, \(\mathtt{Sat}(v_{i}^{j},r)=0\), \(i\in[m^{\prime}]\), \(j\in[k^{\prime}+1,k^{\prime}+\lambda]\), \(r\in R\).

Let \(R^{\prime}:=R\), \(d^{\prime}:=d\), \(t^{\prime}=t+\lambda\geq m^{\prime}\), and let \((E^{\prime}=(C^{\prime},V^{\prime},R^{\prime},t^{\prime}),\alpha^{\prime},d^{\prime})\) be the modified MECSR instance. It holds that \(n=m^{\prime}\leq k^{\prime}+\lambda=t^{\prime}\). Since the added layers have no influence on the solution of this problem (the satisfaction of every added vote is \(0\)), the modified instance \((E^{\prime}=(C^{\prime},V^{\prime},R^{\prime},t^{\prime}),\alpha^{\prime},d^{\prime})\) has a solution if and only if the original instance \((E=(C,V,R,t),\alpha,d)\) has an acceptable rule assignment.
This means MECSR is W[2]-hard with respect to \(t\) even when \(n\leq t\). Moreover, if a problem is FPT with respect to \(n\), then it is also FPT with respect to \(t\) whenever \(n\leq t\); hence the W[2]-hardness with respect to \(t\) carries over to \(n\). Therefore, the MECSR problem is W[2]-hard with respect to \(n\) when \(n\leq t\).

In the following, we show the FPT result when \(n>t\). First, we enumerate all subsets of voters that are required to reach the threshold \(d\); there are \(2^{n}\) such subsets, and we only consider those containing at least \(\alpha\) voters. For each such subset, we construct an ILP formulation. Let \(V^{\prime}\) be the set of voters whose satisfaction has to be at least \(d\). We say two rules are of the same type if they satisfy the same voters in the \(j\)-th layer (\(j\in[t]\)). Let \(RT\) be the set of all rule types. There are \(n\) voters and \(t\) layers in total, so there are at most \(2^{n}\times t\) different rule types, \(|RT|\leq 2^{n}\times t\). For each rule type \(rt\in RT\), let \(n_{rt}\) be the number of rules in \(R\) of type \(rt\). Let \(f(j,rt)\) be the set of indices of the voters whose vote \(v_{i}^{j}\) has satisfaction \(1\) under a rule \(r\) of type \(rt\), i.e., \(i\in f(j,rt)\) if and only if \(\mathtt{Sat}(v_{i}^{j},r)=1\). If \(i\in f(j,rt)\), we define \(h(i,j,rt)=1\); otherwise \(h(i,j,rt)=0\). We now define the variables of the ILP. For each layer \(j\in[t]\) and rule type \(rt\), we define a binary variable \(x_{j,rt}\in\{0,1\}\), where \(x_{j,rt}=1\) means that a rule of type \(rt\) is assigned to the \(j\)-th layer. The ILP instance consists of the following constraints:

\[1.\quad\sum_{rt\in RT}x_{j,rt}=1,\quad j\in[t];\]
\[2.\quad\sum_{rt\in RT}\sum_{j\in[t]}x_{j,rt}\times h(i,j,rt)\geq d,\quad\forall V_{i}\in V^{\prime};\]
\[3.\quad x_{j,rt}\in\{0,1\},\quad j\in[t],\;\forall rt\in RT.\]

The first constraint guarantees that exactly one rule is assigned to each layer. The second constraint ensures that the chosen rules make all voters in \(V^{\prime}\) satisfied. Therefore, a solution of the ILP instance gives a rule assignment that satisfies all voters in \(V^{\prime}\). The number of variables is in \(O(2^{n}\times t\times t)\). We construct such an ILP for each of the \(2^{n}\) voter subsets. Since \(n>t\), we can solve this problem in \(O^{*}(2^{n}\times 2^{n}\times n\times n)\) time. Therefore, the MECSR problem is FPT with respect to \(n\) when \(n>t\). This completes the proof of this theorem.

### Complexity with Max-model

In this section, we show the complexity results of the MECSR problem with max-model. In the max-model, the satisfaction of a voter is the maximal satisfaction over all \(t\) votes, \(\mathtt{Sat}(V_{i})=\max_{j\in[t]}\{\mathtt{Sat}(v_{i}^{j})\}\). Therefore, by comparing the satisfaction value \(s\) of each vote to the threshold \(d\), we can replace the satisfaction values as follows: if \(s\geq d\), the satisfaction value is set to \(1\); if \(s<d\), it is set to \(0\); the threshold is then set to \(d=1\). With this transformation, we directly obtain the following: when the satisfaction of each vote is either \(1\) or \(0\) and the threshold \(d\) is set to \(1\), the MECSR problem with the max-model has a solution if and only if the MECSR problem with the sum-model has a solution.
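To make the rule-assignment problem targeted by these reductions concrete, the following is a small, self-contained Python sketch (our own illustration; the function names and the toy instance are hypothetical and not part of the paper's formal model). It evaluates a rule assignment under the sum-, max-, and min-models and searches for an acceptable assignment by brute force; note that with dichotomous \(0/1\) satisfactions and \(d=1\), the sum- and max-model checks coincide, matching the observation above.

```python
from itertools import product

def voter_satisfaction(sat, assignment, model="sum"):
    """sat[i][j][r]: satisfaction of voter i's vote in layer j under rule r.
    assignment[j]: index of the rule assigned to layer j."""
    n, t = len(sat), len(assignment)
    totals = []
    for i in range(n):
        vals = [sat[i][j][assignment[j]] for j in range(t)]
        if model == "sum":
            totals.append(sum(vals))
        elif model == "max":
            totals.append(max(vals))
        else:  # min-model
            totals.append(min(vals))
    return totals

def is_acceptable(sat, assignment, alpha, d, model="sum"):
    """True iff at least alpha voters reach the satisfaction threshold d."""
    return sum(s >= d for s in voter_satisfaction(sat, assignment, model)) >= alpha

def brute_force(sat, num_rules, alpha, d, model="sum"):
    """Try all num_rules**t rule assignments (exponential; for illustration only)."""
    t = len(sat[0])
    for assignment in product(range(num_rules), repeat=t):
        if is_acceptable(sat, assignment, alpha, d, model):
            return assignment
    return None

# Toy instance: 2 voters, 2 layers, 2 rules with 0/1 satisfactions.
sat = [
    [{0: 1, 1: 0}, {0: 0, 1: 1}],  # votes of voter 1
    [{0: 0, 1: 1}, {0: 1, 1: 0}],  # votes of voter 2
]
print(brute_force(sat, num_rules=2, alpha=2, d=1, model="max"))  # -> (0, 0)
```

The brute-force search is of course exponential in the number of layers and is only meant to make the acceptance condition \((\alpha,d)\) explicit, not to serve as an efficient algorithm.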
Combined with the corresponding hardness results for the sum-model, this observation directly yields the following two theorems.

Theorem 3.1: _The MECSR problem with max-model is NP-hard and is W[2]-hard with respect to the number of layers \(t\)._

Theorem 3.2: _The MECSR problem with max-model is W[1]-hard with respect to the number of satisfied voters \(\alpha\)._

In the following, we continue to analyze the effect of the number of rules \(\ell\), the number of satisfied voters \(\alpha\), and the number of voters \(n\) on the complexity of the MECSR problem with max-model, and obtain the following result.

Theorem 3.3: _The MECSR problem with max-model is NP-hard even if there are only two rules._

Proof: We prove this theorem by giving a reduction from the 3-SAT problem, in which we are given a set of Boolean variables \(X\) and a set of clauses \(\mathcal{C}\), where each clause \(\mathcal{C}_{i}\in\mathcal{C}\) is of the form \(\mathcal{C}_{i}=x_{j}\lor x_{j^{\prime}}\lor x_{j^{\prime\prime}}\) with each literal \(x_{j},x_{j^{\prime}},x_{j^{\prime\prime}}\in\{x,\overline{x}\}\) for some \(x\in X\), and are asked for an assignment of all variables that makes all clauses true. It is well known that the 3-SAT problem is NP-hard. We construct an MECSR instance \((E=(C,V,R,t),\alpha,d)\) from \((X,\mathcal{C})\), where \(|X|=m^{\prime}\) and \(|\mathcal{C}|=n^{\prime}\), as follows. For each clause \(\mathcal{C}_{i}\in\mathcal{C}\) (\(i\in[n^{\prime}]\)), we construct a voter \(V_{i}\), \(V=\bigcup_{i\in[n^{\prime}]}V_{i}\). There are \(m^{\prime}\) layers in total, \(t:=m^{\prime}\). For each voter \(V_{i}\), we construct \(m^{\prime}\) votes \(v_{i}^{j}\) (\(j\in[m^{\prime}]\)), corresponding one-to-one to the \(m^{\prime}\) Boolean variables \(x_{j}\in X\). There are two rules \(r_{1}\) and \(r_{2}\), \(R=\{r_{1},r_{2}\}\). For each vote \(v_{i}^{j}\): if the literal \(x_{j}\) occurs in clause \(\mathcal{C}_{i}\), the satisfaction of \(v_{i}^{j}\) is set to \(1\) with \(r_{1}\) and to \(0\) with \(r_{2}\); if the literal \(\overline{x_{j}}\) occurs in clause \(\mathcal{C}_{i}\), the satisfaction of \(v_{i}^{j}\) is set to \(0\) with \(r_{1}\) and to \(1\) with \(r_{2}\); if neither \(x_{j}\) nor \(\overline{x_{j}}\) occurs in \(\mathcal{C}_{i}\), the satisfaction of \(v_{i}^{j}\) is set to \(0\) with both \(r_{1}\) and \(r_{2}\).

\[\mathtt{Sat}(v_{i}^{j},r)=\begin{cases}1,&r=r_{1},\ x_{j}\in\mathcal{C}_{i}\ \text{or}\ r=r_{2},\ \overline{x_{j}}\in\mathcal{C}_{i},\\ 0,&r=r_{1},\ \overline{x_{j}}\in\mathcal{C}_{i}\ \text{or}\ r=r_{2},\ x_{j}\in\mathcal{C}_{i},\\ 0,&x_{j}\notin\mathcal{C}_{i}\ \text{and}\ \overline{x_{j}}\notin\mathcal{C}_{i}.\end{cases}\]

This means that if setting \(x_{j}\) to true makes \(\mathcal{C}_{i}\) true, then \(\mathtt{Sat}(v_{i}^{j},r_{1})=1\) and \(\mathtt{Sat}(v_{i}^{j},r_{2})=0\); if setting \(x_{j}\) to false makes \(\mathcal{C}_{i}\) true, then \(\mathtt{Sat}(v_{i}^{j},r_{2})=1\) and \(\mathtt{Sat}(v_{i}^{j},r_{1})=0\); if the value of \(x_{j}\) has no influence on \(\mathcal{C}_{i}\), then \(\mathtt{Sat}(v_{i}^{j},r_{1})=\mathtt{Sat}(v_{i}^{j},r_{2})=0\). Let \(d:=1\) and \(\alpha:=n^{\prime}\). Now we prove that there is an assignment of the Boolean variables in \(X\) that makes all clauses in \(\mathcal{C}\) true if and only if there is an acceptable rule assignment of the MECSR instance with max-model.

"\(\Longrightarrow\)": Suppose there is an assignment of all variables which makes all clauses true.
For the \(j\)-th layer (\(j\in[m^{\prime}]\)), if the corresponding Boolean variable \(x_{j}\) is set to true, \(r_{1}\) is assigned to this layer; otherwise, if \(x_{j}\) is set to false, \(r_{2}\) is assigned to this layer. We denote this rule assignment as \(R^{\prime}\). Since the assignment of \(X\) makes all clauses true, for each clause \(\mathcal{C}_{i}=x_{j}\lor x_{j^{\prime}}\lor x_{j^{\prime\prime}}\), at least one of the literals \(x_{j}\), \(x_{j^{\prime}}\), \(x_{j^{\prime\prime}}\) is true. Without loss of generality, say the literal \(x_{k}\) (\(k\in\{j,j^{\prime},j^{\prime\prime}\}\)) is true. If \(x_{k}\) is a positive literal, then the rule \(r_{1}\) is assigned to the \(k\)-th layer, and the satisfaction of \(v_{i}^{k}\) is \(1\). Otherwise, if \(x_{k}\) is a true negative literal \(\overline{x}\), then the rule \(r_{2}\) is assigned to the \(k\)-th layer, and the satisfaction of \(v_{i}^{k}\) is \(1\) as well. Since all clauses are true under the assignment of \(X\), for each voter \(V_{i}\) (corresponding to a clause), it holds:

\[\mathtt{Sat}(V_{i})=\max_{j\in[m^{\prime}]}\{\mathtt{Sat}(v_{i}^{j})\}\geq\mathtt{Sat}(v_{i}^{k},r)=1=d.\]

Therefore, \(R^{\prime}\) is a solution of MECSR with max-model.

"\(\Longleftarrow\)": Suppose there is an acceptable rule assignment \(R^{\prime}\) of the MECSR instance. Since \(\alpha=n^{\prime}=|V|\), the satisfaction of each voter is at least \(d=1\); that is, for each voter \(V_{i}\), it holds that \(\mathtt{Sat}(V_{i})=\max_{j\in[m^{\prime}]}\{\mathtt{Sat}(v_{i}^{j})\}\geq d=1\). For the \(j\)-th layer (\(j\in[m^{\prime}]\)), if \(r_{1}\) is assigned to this layer in \(R^{\prime}\), the corresponding Boolean variable \(x_{j}\) is set to true; otherwise (\(r_{2}\) is assigned), the corresponding variable is set to false. Let \(X^{\prime}\) be this assignment of the Boolean variables. Since \(\mathtt{Sat}(V_{i})\geq d=1\), there is at least one layer \(j^{\prime}\) such that the satisfaction of \(v_{i}^{j^{\prime}}\) is \(1\). For each voter \(V_{i}\), whose corresponding clause is \(\mathcal{C}_{i}=x_{j}\lor x_{j^{\prime}}\lor x_{j^{\prime\prime}}\), only the \(j\)-th, \(j^{\prime}\)-th, and \(j^{\prime\prime}\)-th layers can yield a satisfaction of value \(1\). Without loss of generality, let the satisfaction of vote \(v_{i}^{k}\) (\(k\in\{j,j^{\prime},j^{\prime\prime}\}\)) be \(1\). That is:

* If the satisfaction of vote \(v_{i}^{k}\) with rule \(r_{1}\) is \(1\), then the positive literal \(x_{k}\) occurs in \(\mathcal{C}_{i}\) and \(x_{k}\) is set to true, so the clause \(\mathcal{C}_{i}\) must be true;
* If the satisfaction of vote \(v_{i}^{k}\) with rule \(r_{2}\) is \(1\), then the negative literal \(\overline{x_{k}}\) occurs in \(\mathcal{C}_{i}\) and \(x_{k}\) is set to false, so the clause \(\mathcal{C}_{i}\) must be true.

So, the clause \(\mathcal{C}_{i}\) corresponding to voter \(V_{i}\) must be true. Therefore, \(X^{\prime}\) is an assignment for \((X,\mathcal{C})\) that makes all clauses true. This completes the proof of this theorem.

Next, we consider the parameterized complexity of MECSR with max-model with the number of voters \(n\) as parameter. Following the ILP construction in the proof of Theorem 4.1, we can change the second constraint to \(\sum_{rt\in RT}\sum_{j\in[t]}x_{j,rt}\times h(i,j,rt)\geq 1\), \(\forall V_{i}\in V^{\prime}\), to make all voters in \(V^{\prime}\) satisfied. So, we get the following theorem.
Theorem 3.2: _The MECSR problem with max-model is W[2]-hard when \(n\leq t\), and is FPT when \(n>t\), with the number of voters \(n\) as parameter._

### Complexity with Min-model

In this section, we show the complexity results of the MECSR problem with min-model. In the min-model, the satisfaction of a voter is the minimal satisfaction over all \(t\) votes, \(\mathtt{Sat}(V_{i})=\min_{j\in[t]}\{\mathtt{Sat}(v_{i}^{j})\}\). When all voters have to accept the winner with min-model (\(\alpha=n\)), we can find an acceptable rule assignment in polynomial time: for each layer, we examine all \(\ell\) rules and check whether some rule makes the satisfaction of every vote in this layer reach \(d\). This runs in \(O(n\times t\times\ell)\) time. So, the MECSR problem with min-model is in P when \(\alpha=n\), and in the following we consider the case \(\alpha<n\). For the parameterized complexity with the number of voters \(n\) as parameter, we can enumerate all subsets of voters that are to be satisfied; there are \(O(2^{n})\) such subsets. For each subset, we only need to make the satisfaction of every vote of the selected voters reach \(d\). So, we either obtain a rule assignment or conclude that no acceptable rule assignment exists in \(O(2^{n}\times n\times t\times\ell)\) time. Therefore, MECSR with min-model is FPT with the number of voters \(n\) as parameter. In the following, we show the details of the intractable and tractable results with the number of satisfied voters \(\alpha\), the number of layers \(t\), or the number of rules \(\ell\) as parameter.

Theorem 3.3: _The MECSR problem with min-model is NP-hard when \(\alpha\leq n\), and is W[1]-hard with respect to the number of satisfied voters \(\alpha\) and the number of layers \(t\)._

Proof: We prove this theorem by reducing from the Multi-color Clique problem, in which we are given a multi-colored graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where each vertex is assigned a color, there are \(k^{\prime}\) colors in total and \(q\) vertices of each color, and no two vertices of the same color are adjacent. The aim is to find a size-\(k^{\prime}\) clique \(CL\), i.e., \(k^{\prime}\) vertices that are pairwise adjacent. It is known that the Multi-color Clique problem is NP-hard and is W[1]-hard with respect to the clique size \(k^{\prime}\). We construct an MECSR instance \((E=(C,V,R,t),\alpha,d)\) from \((\mathcal{G}=(\mathcal{V},\mathcal{E}),k^{\prime})\), where each vertex is denoted as \(v_{i,i^{\prime}}\), meaning the \(i^{\prime}\)-th vertex of the \(i\)-th color, \(\mathcal{V}=\bigcup_{i\in[k^{\prime}],i^{\prime}\in[q]}\{v_{i,i^{\prime}}\}\), \(|\mathcal{V}|=q\times k^{\prime}\) and \(|\mathcal{E}|=n^{\prime}\). The main ideas of the construction are as follows:

* Construct a voter for each vertex;
* Construct a layer for each color;
* Construct a rule for each index \(i^{\prime}\in[q]\) of a vertex within its color class.

Now, we give the details of the construction. For each vertex \(v_{i,i^{\prime}}\in\mathcal{V}\) (\(i\in[k^{\prime}]\), \(i^{\prime}\in[q]\)), we construct a voter \(V_{\sigma}\) with \(\sigma=q\times(i-1)+i^{\prime}\), \(V=\bigcup_{\sigma=1}^{k^{\prime}\times q}V_{\sigma}\). There are \(q\) rules in total, \(R=\bigcup_{i\in[q]}\{r_{i}\}\). There are \(k^{\prime}\) layers in total, \(t:=k^{\prime}\). For each voter \(V_{\sigma}\), we construct \(k^{\prime}\) votes, \(V_{\sigma}=\bigcup_{j\in[k^{\prime}]}v_{\sigma}^{j}\).
For each vote \(v_{\sigma}^{j}\) and rule \(r_{k}\) with \(\sigma=q\times(i-1)+i^{\prime}\), if the corresponding vertices \(v_{i,i^{\prime}}\) and \(v_{j,k}\) satisfy \(v_{i,i^{\prime}}\in N[v_{j,k}]\), then \(\mathtt{Sat}(v_{\sigma}^{j},r_{k})\) is set to \(1\); otherwise, \(\mathtt{Sat}(v_{\sigma}^{j},r_{k})\) is set to \(0\):

\[\mathtt{Sat}(v_{\sigma}^{j},r_{k})=\begin{cases}&1,\quad v_{i,i^{\prime}}\in N[v_{j,k}],\\ &0,\quad v_{i,i^{\prime}}\notin N[v_{j,k}].\end{cases}\]

Let \(\alpha:=k^{\prime}\) and \(d:=1\). At most one rule can be assigned to each layer, corresponding to choosing at most one vertex of each color in \(\mathcal{G}\); at least \(\alpha=k^{\prime}\) voters need to be satisfied, corresponding to choosing at least \(k^{\prime}\) vertices to constitute a clique; and the minimal satisfaction over the votes of each satisfied voter is at least \(d=1\), corresponding to every two chosen vertices being adjacent in \(\mathcal{G}\). Therefore, there is a size-\(k^{\prime}\) clique in \(\mathcal{G}\) if and only if there is a rule assignment solution of \((E=(C,V,R,t),\alpha,d)\).

Next, we consider the parameterized complexity of MECSR with min-model with the number of rules \(\ell\) as parameter. Following the reduction above, we can modify the construction of the layers and obtain an intractable result when \(\ell<t\); when \(\ell\geq t\), we obtain a tractable result. So, we get the following theorem.

Theorem 3.1: _The MECSR problem with min-model is W[1]-hard when \(\ell<t\), and is FPT when \(\ell\geq t\), with the number of rules \(\ell\) as parameter._

## 4 Conclusion

In this paper, we study the _Multi-votes Election Control By Selecting Rules_ problem, which allows each voter to cast a different vote in each layer over the set of candidates. We study the computational complexity of this problem from the viewpoint of constructive control, i.e., assigning rules to the layers so as to make a special candidate \(p\) an acceptable winner of the election. We show that this problem is NP-hard for the sum-model, the max-model, and the min-model. Furthermore, it is NP-hard even if there are only two voters in the sum-model, or only two rules in the sum-model or the max-model; it is intractable with the number of layers as parameter for all three models; and even when the satisfaction of each vote is dichotomous, either \(1\) or \(0\), it remains hard to find an acceptable rule assignment. We also obtain further tractable and intractable results, including fixed-parameter tractability, W[1]-hardness, and W[2]-hardness. As for future work, we only consider the constructive case here, so studying destructive control may be meaningful. It is also interesting to analyze the complexity of making a special candidate \(p\) the unique winner of the election; this requires taking the satisfaction of each candidate and the format of the votes into account, which are ignored in this paper. Moreover, it is interesting to make a fixed-size set of candidates an acceptable committee under other rules, such as PAV, CCAV, and SAV for approval voting. Another promising direction for future work is to embed the uncertainty of rules into other models, such as iterative elections.
2309.08395
Learning by Self-Explaining
Much of explainable AI research treats explanations as a means for model inspection. Yet, this neglects findings from human psychology that describe the benefit of self-explanations in an agent's learning process. Motivated by this, we introduce a novel workflow in the context of image classification, termed Learning by Self-Explaining (LSX). LSX utilizes aspects of self-refining AI and human-guided explanatory machine learning. The underlying idea is that a learner model, in addition to optimizing for the original predictive task, is further optimized based on explanatory feedback from an internal critic model. Intuitively, a learner's explanations are considered "useful" if the internal critic can perform the same task given these explanations. We provide an overview of important components of LSX and, based on this, perform extensive experimental evaluations via three different example instantiations. Our results indicate improvements via Learning by Self-Explaining on several levels: in terms of model generalization, reducing the influence of confounding factors, and providing more task-relevant and faithful model explanations. Overall, our work provides evidence for the potential of self-explaining within the learning phase of an AI model.
Wolfgang Stammer, Felix Friedrich, David Steinmann, Manuel Brack, Hikaru Shindo, Kristian Kersting
2023-09-15T13:41:57Z
http://arxiv.org/abs/2309.08395v3
# Learning by Self-Explaining ###### Abstract Artificial intelligence (AI) research has a long track record of drawing inspirations from findings from biology, in particular human intelligence. In contrast to current AI research that mainly treats explanations as a means for model inspection, a somewhat neglected finding from human psychology is the benefit of self-explaining in an agents' learning process. Motivated by this, we introduce a novel learning paradigm, termed **Learning by Self-Explaining** (LSX). The underlying idea is that a learning module (learner) performs a base task, e.g. image classification, and provides explanations to its decisions. An internal critic module next evaluates the quality of these explanations given the original task. Finally, the learner is refined with the critic's feedback and the loop is repeated as required. The intuition behind this is that an explanation is considered "good" if the critic can perform the same task given the respective explanation. Despite many implementation possibilities the structure of any LSX instantiation can be taxonomized based on four learning modules which we identify as: Fit, Explain, Reflect and Revise. In our work, we provide distinct instantiations of LSX for two different learner models, each illustrating different choices for the various LSX components. We broadly evaluate these on several datasets and show that Learning by Self-Explaining not only boosts the generalization abilities of AI models, particularly in small-data regimes, but also aids in mitigating the influence of confounding factors, as well as leading to more task-specific and faithful model explanations. Overall, our results provide experimental evidence of the potential of self-explaining within the learning phase of an AI model. ## 1 Introduction Self-reflection is considered an important building block of human intelligence and a crucial component in the learning process and knowledge development of humans (Glaser-Zikuda, 2012; Ellis et al., 2014). In fact, one aspect of self-reflection--self-explaining--has been identified in several psychological studies as greatly beneficial for the overall learning, problem-solving and comprehension abilities of human subjects (Chi, 2018; Chi et al., 1981; 1994; Chamberland & Mamede, 2015; Belobrovy, 2018; Larsen et al., 2013; Kwon & Jonassen, 2011; Bisra et al., 2018). Accordingly, self-explanations act as a means of making initially implicit knowledge explicit and thereby allow for iterative and critical _self-refinement_. Indeed, recent works in AI research have picked up on the idea of self-refining, either directly inspired by findings from human studies (Madaan et al., 2023) or otherwise motivated _e.g._, by the potential of pre-trained large language models (LLMs), _e.g._, on the topics of self-debiasing (Schick et al., 2021) and self-instructing (Wang et al., 2023). Although these works are quite specific in their form of self-refinement (_cf._Pan et al. (2023) for a recent survey) and far from the general idea of self-reflection from human psychology, they provide valuable first steps for building more _reflective AI_. However, none of these focus on the value and potential of (self-)explanations as the basis and means of such reflective processes. 
On the other hand, research on interactive machine learning (Teso et al., 2023; Gao et al., 2022) such as explanatory interactive learning (XIL) (Teso and Kersting, 2019; Schramowski et al., 2020) has long identified the value of explanations as a means of communication between human users and AI models and particularly as a means for model refinement. However, hereby explanations are only leveraged for refinement through human guidance. In this work, we therefore introduce a novel machine learning paradigm called Learning by Self-Explaining (LSX) which leverages explanations in the learning phase of an AI model prior to any form of explanatory human guidance. The main idea is that an AI model consists of two submodels, a _learner_ and a _critic_. The learning process in LSX is characterized by four learning modules sketched in Fig. 1. The learner is trained on a base task in Fit, after which it provides explanations for that task in Explain. The critic next performs the same task as the learner, but receives the input _and_ corresponding explanations, thereby assessing the quality of these explanations for performing the original task. Intuitively, a "usefull" explanation should thereby provide important information for the task at hand. Finally, the critic's feedback is returned to the learner for revision in the Revise module and the Explain, Reflect and Revise loop is repeated, if needed. In the context of introducing LSX, we further present two instantiations of LSX for training a convolutional neural network (CNN) and the neuro-symbolic concept learner of Stammer et al. (2021) (NeSyCL), thereby illustrating specific configurations of the submodels and learning modules of LSX as well as the flexibility of the paradigm. In the context of our experimental evaluations on multiple datasets, we show that LSX boosts the generalisability of base learners, particularly in the small data regime. Moreover, we show that LSX helps mitigate the influence of confounding factors and leads to more consolidated, task-specific and faithful model explanations. In summary, our contributions are the following: (i) We introduce a novel learning paradigm for machine learning, based on a model self-refining itself by evaluating its own explanations. (ii) We introduce two different instantiations of LSX, illustrating different submodel and learning module configuration choices. (iii) We provide extensive experimental evidence on various datasets and evaluation metrics, illustrating the benefits and potential of LSX. We proceed as follows. In section 2, we formally introduce the LSX paradigm. In section 3, we next introduce two specific instantiations that integrate LSX into their training procedure. In our experimental evaluations in section 4, we provide results on various datasets and metrics illustrated via both LSX instantiations. We finish our work with an extensive discussion on related work and leave the reader with a final conclusion. 1 Footnote 1: Code will be made available soon. Figure 1: Learning by Self-Explaining (LSX) is a general learning framework that can be integrated into any base learning task, _e.g._, image classification. It is characterized by two submodels, a _learner_ and _critic_, and four distinct training modules: Fit, Explain, Reflect and Revise. Briefly, the learner is optimized for a base task in Fit, after which it provides explanations to its decisions in Explain. 
In the Reflect module these explanations are passed to the critic, which “reflects” on the quality of these explanations. In other words, the critic evaluates how useful the explanations are for performing the base task. The resulting feedback from the critic, score, is used to update the learner’s representations in Revise. This loop can be repeated as needed. ## 2 Learning by Self-Explaining (LSX) LSX is not explicitly bound to any one type of model implementation, data domain or base learning task, but can be considered a general learning approach for any base learner. In this section, we therefore give an overview of the basic set of modules that characterize LSX before continuing with two specific instantiations in the next section. Let us first provide the background notations. For simplicity, we introduce LSX here in the context of supervised image classification as base task. More formally, let \(x_{i}\in X\) be an image, with the full dataset \(X:=[x_{1},...,x_{N}]\in\mathcal{R}^{N\times L\times M}\), and with corresponding class label to each \(x_{i}\) defined as \(y_{i}\in\{1,...K\}\). Hereby, \(N\) represents the number of data samples, \(L\) and \(M\) the image dimensions and \(K\) the maximal number of image classes. Furthermore, let \(X\) be split into a train and test split \(X=\{(X_{train},y_{train}),(X_{test},y_{test})\}\) with \(y_{train}\) representing the set of corresponding image class labels of the training set and \(y_{test}\) those of test set. The learner is provided the tuple set \(\tilde{X}_{f}=(X_{f},y_{f})=(X_{train},y_{train})\) where we denote \(y_{f}=y_{train}\) specifically as the label set provided to the learner. The critic set, \(\tilde{X}_{c}\), can represent a subset of \(\tilde{X}_{f}\)_e.g._, \(\tilde{X}_{c}\subseteq\tilde{X}_{f}\), but also a previously separated set from the test set (details below). Also for the critic we specifically denote \(y_{c}\) as the label set of the \(\tilde{X}_{c}\). The basic idea of LSX (_cf._ Fig. 1) is the following. An AI model consists of two submodels: a _learner_, \(f\), and an internal _critic_, \(c\). The learner can represent any desired learning model, e.g. a convolutional neural network (CNN). Its initial step is to perform a base task, _e.g._, supervised image classification. Next, the critic reflects on the learner's explanations by evaluating the quality of these explanations for performing the initial task itself. If an explanation is "good", having this explanation up front will help solving the same task. Hereby, \(c\) can represent any AI model. After the critic has given feedback on the explanations, the learner revises its representations accordingly. This general procedure can be described via four modules Fit, Explain, Reflect and Revise, where the last three modules describe the core of LSX and can be repeated until an iteration budget \(T\) is reached. Let us now describe these four modules (presented in pseudo-code in Alg. 1) in more detail. **Base task** (Fit). The Fit module describes the underlying, base learning task in which the learner is optimized to solve a particular problem, _e.g._, supervised image classification. Overall, this module is independent of LSX as it represents the standard, data-driven machine learning approach. More specifically, in Fit, \(f\) is provided with the training dataset, \(\tilde{X}_{f}\), makes predictions given the base task, \(\hat{y}_{f}=f(X_{f})\), and optimizes its latent representations given the corresponding loss function, \(l_{\text{B}}\). 
This loss function can _e.g._, correspond to the cross-entropy loss, \(l_{\text{B}}=l_{\text{CE}}(\hat{y}_{f},y_{f})\) when considering supervised classification. However, the underlying task can correspond to any form of learning (_e.g._, other forms of supervision) and can contain any bells and whistles of modern machine learning setups (_e.g._, hyperparameter optimization, learning rate schedulers, etc.). Finally, Fit returns a model optimized for the base task, \(f=\textsc{Fit}(f,\tilde{X}_{f})\). **Obtain explanations** (Explain). Explain represents the first of three core modules of LSX. In this module \(f\) provides explanations to its decisions given a set of data samples, \(X_{c}\). This is achieved via a pre-defined explanation method which returns an explanation for each sample, \(E_{c}=\textsc{Explain}(f,X_{c})\), where \(E_{c}:=\{e_{1},...,e_{|X_{c}|}\}\). If \(\tilde{X}_{c}\) contains ground-truth annotations, explanations can be queried given the ground-truth labels (\(E_{c}=\textsc{Explain}(f,X_{c}|y_{c})\)), otherwise the learner provides explanations given its predictions. Hereby \(\tilde{X}_{c}\) can be a subset of \(\tilde{X}_{f}\) (_i.e._, \(\tilde{X}_{c}\subseteq\tilde{X}_{f}\)), but can also represent a separate set that is with-held during the initial optimization of \(f\) (_i.e._, \(\bar{X}_{c}\cap\bar{X}_{f}=\emptyset\)). Overall, this explanation module can be realized with any explanation method from the vast literature of XAI (_cf._Guidotti et al. (2018); Ras et al. (2022b); Liao and Varshney (2021); Carvalho et al. (2019)). Given the architectural constraints of the learner, it can, _e.g._, be a post-hoc explanation method, but also an inherent explanation method if \(f\) is designed as such. Notably, this module is commonly also found in XIL (Friedrich et al., 2023) approaches. In summary, this module returns explanations, \(E_{c}\), from the learner. **Reflect on explanations (Reflect).** In the second core module, and arguably most distinctive module of LSX, the high-level role of the critic is to "reflect" on the quality of the explanations. This is an abstract measure and in LSX is quantified by the ability of the critic to perform the base task, given the learner's explanations. This explanation evaluation is performed in the Reflect module whereby the critic returns a score of the learner's explanations, given \(E_{c}\) and the corresponding original data that was used for generating these explanations, \(\bar{X}_{c}\). In other words, score represents an indication of how well the critic performs the base task on the data \(\bar{X}_{c}\) given the additional knowledge of the provided explanations, \(E_{c}\) (an idea that is related to (Pruthi et al., 2022)). What score exactly represents depends very much on the model type of the critic. For example, it can represent the signal from a classification loss function over \(\bar{X}_{c}\) and \(E_{c}\) or a scoring of how probable an explanation is given the evidence of \(\bar{X}_{c}\). By evaluating the quality of explanations based on their benefit in solving a task, the Reflect module represents one of the core aspects of LSX. Overall, explanations are thus treated not just as a one-way approach for indicating the importance of features when making a prediction (as often done in XAI). Rather, LSX specifically contributes to the view of explanations as verifiable rationales as in Fok and Weld (2023). 
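To make the Reflect scoring more concrete, the following is a minimal, hypothetical PyTorch sketch of one possible realization of the Explain and Reflect steps (anticipating the CNN-based instantiation described in the next section): the learner's InputXGradient explanations are scored by the critic's cross-entropy loss on those explanations alone. All function names and the toy models are our own illustration under these assumptions, not the reference implementation.

```python
import torch
import torch.nn.functional as F

def explain_input_x_gradient(learner, x, y):
    """InputXGradient explanation: input times the gradient of the class score.
    (For end-to-end revision one would keep the graph, e.g. via create_graph=True,
    instead of detaching as done here for a pure scoring pass.)"""
    x = x.clone().requires_grad_(True)
    logits = learner(x)
    class_scores = logits.gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(class_scores, x)
    return (x * grad).detach()

def reflect_score(critic, explanations, y):
    """Score: the critic's classification loss given only the explanations;
    a low loss indicates explanations that are useful for the base task."""
    return F.cross_entropy(critic(explanations), y)

if __name__ == "__main__":
    torch.manual_seed(0)
    learner = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
    expl = explain_input_x_gradient(learner, x, y)
    print(reflect_score(critic, expl, y))
```

In such a realization, the resulting score could then enter the learner's training objective as the additional explanation loss described in the Revise module below.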
**Integrate feedback on explanations (Revise).** In the last module of a Learning by Self-Explaining loop, the feedback signal, score, obtained from the critic in the Reflect module, is used to refine the learner within Revise. This revision module can be realized in one of the many ways provided in interactive learning settings (Teso and Kersting, 2019; Friedrich et al., 2023). Standard revision tools entail loss-based methods (Ross et al., 2017; Selvaraju et al., 2019) as well as more parameter-efficient finetuning alternatives (Houlsby et al., 2019; Hu et al., 2022). But also non-differentiable revision approaches can be considered, _e.g._, data augmentation approaches (Teso and Kersting, 2019) or retrieval-based setups that store the revisory feedback in an explicit revision engine (Friedrich et al., 2022; Tandon et al., 2022). In our LSX instantiations, we implement the revision step by adding an additional explanation loss, \(l_{\text{expl}}\) (_e.g._, a HINT-like loss (Selvaraju et al., 2019) and a cross-entropy loss of the critic's classification) to the optimization process. The learner is thus jointly optimized in Revise via \(L=l_{\text{B}}+\lambda l_{\text{expl}}\), where \(\lambda\in\mathbb{R}\) represents a scaling factor. ## 3 Instantiating LSX Many implementation details in the above overview depend strongly on the exact choice of the LSX components. Importantly, these are interdependent, _e.g._, the choice of explanation method influences how the critic can evaluate the explanations, the choice of critic module influences the way the feedback score is computed, but also how this is Figure 2: CNN-LSX: Learning by Self-Explaining instantiation for training CNNs for supervised image classification. Here CNNs represent both the _learner_ and _critic_. Explanations are generated via InputXGradient. The score represents the classification loss signal of the critic on these explanations. integrated into the learner, etc. To give a better understanding of these abstract modules, in this section, we introduce two instantiations of how to integrate different base setups into LSX, where we refer to App. A.1 and A.2 for further details. These instantiations will also be the point of investigation in our experimental evaluations. Both differ in several LSX components. Importantly, these two instantiations are by no means conclusive in how to integrate a base task into LSX, but serve as illustrations of how to integrate LSX into different models. We conclude this section with a more general perspective of Learning by Self-Explaining. ### Neural and Neurosymbolic Instantiations **CNN-LSX via differentiable feedback score.** We begin with a CNN-based setup shown in Fig. 2 (_cf._ App. A.1 for further details). This instantiation which we denote as CNN-LSX consists of a standard CNN as learner, \(f\), and a duplicate of the learner as critic, \(c\). Specifically, in this instantiation the learner is trained on raw images to predict the corresponding class labels and is optimized via a cross-entropy loss as \(l_{\text{B}}:=l_{\text{CE}}^{f}(f(X_{f}),y_{f})\), thus representing the Fit module. The explain module is realized with the post-hoc, differentiable InputXGradient method (Shrikumar et al., 2017; Hechthinger, 2016) and we compute \(E_{c}=\textsc{Explain}(X_{c}|y_{c})\) with \(\tilde{X}_{c}\subseteq\tilde{X}_{f}\). As an explanation that is based on InputXGradient also contains the input, it is not required to separately pass \(X_{c}\) to the critic. 
In the Reflect module, the critic therefore only receives these explanations as input and predicts the corresponding class labels from these \(\hat{y}_{c}=c(E_{c})\). The quality of the predictions is quantified via a second cross-entropy loss averaged over all samples in \(E_{c}\), denoted as \(l_{\text{CE}}^{c}(c(E_{c}),y_{c})\). Next, in Revise, \(f\) is optimized via the joint loss \(L=l_{\text{CE}}^{f}(f(X_{f}),y_{f})+\lambda l_{\text{CE}}^{c}(c(E_{c}),y_{c})\). In other words, the model parameters are updated based on both classification losses: the one from the learner given the training images \(X_{f}\)_and_ that from the critic given the explanations of \(X_{c}\). Lastly, we perform the final Revise step (_i.e._, when iteration budget \(T\) has been reached) with both loss functions evaluated on \(X_{f}\): \(L=l_{\text{CE}}^{f}(f(X_{f}),y_{f})+\lambda l_{\text{CE}}^{c}(c(E_{f}),y_{f})\). **NeSyCL-LSX via non-differentiable feedback score.** Next, we will introduce an instantiation around the neuro-symbolic concept learner (NeSyCL) of Stammer et al. (2021) as base model (learner). We denote this instantiation as NeSyCL-LSX (_cf._ Fig. 3 and App. A.2 for further details). The NeSyCL consists of a slot attention-based perception module (Locatello et al., 2020) and set-transformer-based reasoning module (Lee et al., 2019). An image, \(x_{i}\), is first processed by a pretrained perception module into a symbolic representation, \(z_{i}\in[0,1]^{O\times A}\), indicating the presence of objects and their attributes in the corresponding image. Here, \(O\) indicates the number of objects (or slots) and \(A\) the number of predicted attributes. The reasoning module next makes a final class prediction given \(z_{i}\). As in the CNN-LSX instantiation, NeSyCL-LSX performs supervised image classification. This is the base task in Figure 3: NeSyCL-LSX: Learning by Self-Explaining instantiation for supervised image classification via neuro-symbolic concept learner. The critic represents a neuro-symbolic forward reasoner, which computes all possible consequences in a differentiable manner given visual and logical input. An explanation represents a logical statement that is derived from concept-level saliency maps. The feedback score represents a probabilistic ranking of these logical explanations with which we identify the most likely explanation per image class and revise the learner to only produce this explanation. the Fit module with \(l_{\text{B}}\) representing a cross-entropy loss. Obtaining explanations (Explain) is based on the approach of (Stammer et al., 2021) where saliency maps of \(z_{i}\) are created via Integrated Gradients (Sundararajan et al., 2017). These thus indicate which objects and which of their attributes are relevant for a prediction, denoted here as \(e_{i}^{\prime}\in[0,1]^{O\times A}\). By thresholding via an additional hyperparameter, \(\delta\in[0,1]\), these saliency maps are binarized to \(e_{i}^{\prime\prime}\in\{0,1\}^{O\times A}\). We next propositionalize \(e_{i}^{\prime\prime}\) by representing the explanation as a set of logical rules that are present in \(e_{i}^{\prime\prime}\). These rules consist of conjunctive combinations of the important attributes and objects. We denote this set of candidate explanatory rules as \(e_{i}\). Finally, by iterating over all samples in \(X_{c}\) and grouping the resulting candidate rules by image class we receive a set of candidate rules per class as \(E_{c}=\{e^{1},...,e^{K}\}\). 
Moving on to the next modules, the critic in NeSyCL-LSX is represented by the neuro-symbolic forward reasoner of (Shindo et al., 2021, 2023), which computes all possible consequences in a differentiable manner given visual and logical input. In the Reflect module the forward reasoner evaluates the explanations, \(E_{c}\), by estimating the validity of each candidate explanation rule given the data in \(\bar{X}_{c}\). Finally, based on the resulting estimated probabilities (score), the most probable rule per class \(j\) is chosen, denoted as \(e_{\text{max}}^{\prime}\), and \(E_{\text{max}}=\{e_{\text{max}}^{1},...,e_{\text{max}}^{K}\}\) is passed back to the learner (Revise). Here, these logical explanations are next transformed back into binary vector form in the dimensions of the learner's original symbolic representation, \(\tilde{E}_{\text{max}}=\{\bar{e}_{\text{max}}^{1},...,\bar{e}_{\text{max}}^{K}\}\) with \(\bar{e}_{\text{max}}^{j}\in\{0,1\}^{O\times A}\). Finally, \(f\) is updated via an additional mean-squared-error loss between the learner's explanations and these transformed explanations leading to \(L=l_{\text{CE}}^{f}(f(X_{f}),y_{f})+\lambda l_{\text{MSE}}(\tilde{E}_{\text{ max}},E_{f})\). **Configuration choices.** As mentioned, many instantiations of LSX are possible, each with their own specific module and configuration choices. The instantiations introduced in this work, however, already sketch some interesting aspects and differences which we wish to highlight here. The most fundamental difference between CNN-LSX and NeSyCL-LSX lies within their Reflect modules, specifically how a score of the learner's explanations are computed. In CNN-LSX the score represents a differentiable signal, where in NeSyCL-LSX this represents a probabilistic ranking of logical explanations. This second form of critiquing allows to weigh explanations and _e.g._, identify the most "useful" explanation. The first form of critiquing, on the other hand, allows to fine-tune explanations. Related to this and concerning the Explain modules, where in CNN-LSX _continuous_ input-level explanations are being processed, the logical explanations in NeSyCL-LSX are _discrete_. As an effect of this, the form of revision in the Revise module differs. In CNN-LSX we can simply pass the backpropagated signal from the critic to the learner via a classification loss. In NeSyCL-LSX we identify the most-likely explanation. Additionally, in CNN-LSX the critic represents a duplicate CNN of its learner. In NeSyCL-LSX, on the other hand, the critic represents a different model type altogether compared to its learner. ### General Perspective In the following, we wish to provide a general perspective for LSX on important current challenges of AI. **Human-machine interactions.** Accurate and trustworthy human-machine interactions have been identified as important criteria for the future deployability of AI systems (Friedrich et al., 2023; Teso et al., 2023; Holzinger, 2021; Angerschmild et al., 2022). Whereas much of ML research is not developed with this in mind the LSX framework automatically facilitates the development and integration of mechanisms that allow for fruitful human-machine interactions. _E.g._, via the Explain module a human user can query the learner's reasons for a prediction and via the Revise module integrate feedback on these explanations into the model. 
As explanations in LSX explicitly act as the means for revision and bidirectional communication between the two submodels, LSX fits well into the line of research that has identified the importance of leveraging explanations for bi-directional communication between human user and AI model (Teso et al., 2023). Finally, LSX does not ultimately remove the need for human-machine interactions for a sustainable model deployment, rather it can be used for an AI model to reflect on its learned reasons before receiving important human feedback. Deploying LSX in this way potentially reduces the amount and costs of human resources required (Friedrich et al., 2023). **System 1 and 2 processing.** A prominent and well-studied hypothesis from cognitive psychology is that human cognition is mainly composed of two systems: an approximate, fast processing system (system 1) that handles the majority of familiar, daily situations and an embedded, slower, yet more exact system (system 2) that handles processing in more unfamiliar settings (Kahneman, 2011). This idea has recently gained much interest in the AI community (Goyal & Bengio, 2022; Kautz, 2022; Ganapini et al., 2022; Booch et al., 2021) and we here highlight several connections that can be made between the system 1 and 2 processing framework and Learning by Self-Explaining. Similar to the system 1 and 2 framework, within LSX there are two processing systems where one is embedded in the other. This differentiation is not necessarily based on the two submodels (_learner_ and _critic_, _cf._ Fig. 1), but rather on the learning modules (_cf._ Alg. 1). Hereby, Fit takes over the fast, initial processing phase, and the triad consisting of Explain, Reflect and Revise results in a slower, embedded processing phase. A major open question, particularly in AI research on system 1 and 2 processing, is what form of communication the two systems should be using (Goyal & Bengio, 2022; Kautz, 2022). In our work, we consider explanations to represent a valuable means of communication between the different systems. Specifically, the learner provides initial, associative explanations to its decisions which, in turn, are either directly interpreted as (_cf._ CNN-LSX) or transformed (_cf._ NeSyCL-LSX) into verifiable and refinable rationales. In line with Goyal & Bengio (2022), the process of explaining and reflecting on the learner's explanations can thus be seen as making the implicit knowledge of the learner explicit. At the same time, system 2 can also influence the processing of system 1. Evidence for this can be seen in our findings on explanation consolidation (_cf._ Tab. 3), where the critic's feedback leads to the learner adapting its explanations to represent more task-specific rationales. Lastly, relating to Henry Kautz's proposed Neuro[Symbolic] approach (Kautz, 2022) for AI system 1 and 2 processing, our NeSyCL-LSX has many parallels concerning the integration of neural and symbolic components. LSX in general, however, does not necessarily require a specific model type, but rather aims for an integration of any type of model via explanations. **Learning by self-explaining and causal inference.** Another important viewpoint to consider is the connection between Learning by Self-Explaining and (the many different notions of) causal inference. Explanations in the general context of XAI and causality have been discussed in several recent works (Zecevic et al., 2021; Heskes et al., 2020; Schwab & Karlen, 2019; Galhotra et al., 2021). 
However, in LSX they particularly play a role in querying and transforming implicit, associative knowledge into explicit knowledge. Through the explanation evaluation step in Reflect LSX is based on the idea that an explanation is "useful" if it represents a true rationale that, if applied, allows for an internal critic to perform a given task. This idea is related to the formalization of causal explanations by Woodward (2005) and picked up by Beckers (2022). In short, the main component of Woodward's formalization is that a "successful explanation" should provide us with information to perform interventional and counterfactual experiments. Lastly, the inherent idea of true rationale generation from explanation refinement found in LSX is closely related to findings of Fok & Weld (2023) which argue that the largest benefit of AI model explanations in hybrid decision making is given when the model's explanation allows the human decision maker to explicitly verify the model's prediction. ## 4 Experimental Evidence In the following experiments, we investigate the benefits of Learning by Self-Explaining with the help of two instantiations, CNN-LSX and NeSyCL-LSX. Specifically, we compare the performances of LSX to the standard training setup (_i.e._, supervised learning). Over the course of our evaluations, we will investigate the potential benefits of LSX concerning test-set generalization, explanation consolidation, explanation faithfulness and shortcut learning mitigation. ### Experimental Setup **Data.** We provide experimental results on four different datasets. Particularly, we evaluate CNN-LSX on the MNIST (LeCun et al., 1989) and ChestMNIST (Yang et al., 2023; Wang et al., 2017) datasets and NeSyCL-LSX on the concept-based datasets CLEVR-Hans3 (Stammer et al., 2021) and a variant of Caltech-UCSD Birds-200-2011 dataset (Wah et al., 2011), CUB-10 (_cf._ App. B). Overall, the number of training images in MNIST corresponds to 60k, in ChestMNIST to 78k, in CLEVR-Hans3 to 9k and in CUB-10 to 300 images. Finally, for investigating the effect of confounding factors as a form of shortcut learning (Geirhos et al., 2020), we evaluate accuracies on CLEVR-Hans3 and DecoyMNIST (Ross et al., 2017), a confounded variant of the MNIST dataset. **Metrics.** We provide evaluations of LSX based on five metrics where we briefly describe these here and refer to App. C for details. **(1)** The first metric is the standard _balanced classification accuracy_ on a held-out test set. **(2)** For investigating the revised explanations via LSX we provide the classification accuracy of a linear, ridge regression model. This model is optimized to classify a set of the learner's explanations, given their corresponding ground-truth class labels and evaluated on a held-out set of explanations. **(3)** We further provide a cluster analysis based metric over all explanations, similar to the Dunn index (Dunn, 1973; 1974). This metric, which we denote as _Inter-vs. Intraclass Explanation Similarity_ (IIES), quantifies how similar explanations are within one class, but dissimilar between classes (lower values indicate better separability). For investigating whether the learner in fact makes a decision based on the reported explanations, we analyse the faithfulness (Hooker et al., 2019; Chan et al., 2022) of the learner's explanations via two metrics as introduced by (DeYoung et al., 2020), namely **(4)**_sufficiency_ and **(5)**_comprehensiveness_. 
Both metrics measure the impact on the model's performances of removing specific parts of the input based on the explanations. For comprehensiveness, parts of the input are removed which correspond to important features as identified by the explanation. For sufficiency, parts of the input are removed which correspond to unimportant features as identified by the explanation. For continuous input settings (MNIST and ChestMNIST) we modify the computation of these two metrics slightly by subtracting the impact when randomly chosen features are removed. This way we compensate for the potential influence of such out-of-distribution input. Notably, this can lead to negative values. In both formulations, however, higher comprehensiveness and lower sufficiency scores are better. Both metrics are not normalized and provide a relative comparison. **Setup.** In all evaluations, we compare the performances of the LSX instantiations with the performances of the base learners that were trained for the same overall number of epochs, but only in the standard supervised manner. These are denoted as CNN and NeSyCL. We evaluate CNN-LSX on the MNIST and ChestMNIST datasets and NeSyCL-LSX on the CLEVR-Hans3 and CUB-10 datasets. The baselines were trained on the same data as the LSX versions, _i.e._, \(\bar{X}_{f}^{\text{baseline}}=\bar{X}_{f}^{\text{LSX}}\cup\bar{X}_{c}^{\text{ LSX}}\). Note that for NeSyCL-LSX for CUB-10, we replaced the slot-attention perception module with a pretrained Inception-v3 network (Szegedy et al., 2016) and the reasoning module with a single linear layer as in (Koh et al., 2020). We provide all results as mean values with standard deviations over five runs with different random seeds. Lastly, for investigating shortcut behavior we provide balanced accuracy scores on the unconfounded held-out test sets of Decoy-MNIST and CLEVR-Hans3 while being trained on the confounded train set. In the CLEVR-Hans3 evaluations that do not target evaluating the effect of confounding factors, the original confounded evaluation set of CLEVR-Hans3 was used as held-out test set. ### Experimental Results **Improved (few-shot) generalisation.** An intuitive aspect of psychological findings on learning via self-explaining is that reflecting on one's learned explanations leads to improved generalizable knowledge (Chi et al., 1994). We investigate LSX in our first evaluation by measuring the held-out test set accuracy of CNN-LSX on the MNIST and ChestMNIST datasets and of NeSyCL-LSX on the CLEVR-Hans3 and CUB-10 datasets. In the rightmost column of Tab. 1, we present the respective test set accuracies for all learning configurations when trained on the full training size of each dataset. We can observe that on average, _i.e._, over all datasets and over both LSX instantiations, there is a substantial boost in test set accuracy (last row). In the remaining columns of Tab. 
1, we present the test-set accuracy in the smaller-data regime, _i.e._, when the models were trained on different-sized subsets of the original training set 2. We can observe large performance gains with LSX for all configurations and datasets. Particularly, these improvements are on average greater than those observed on the full training set sizes. Altogether these results suggest that learning via self-explaining leads to improved test-set generalization performances, with larger improvements particularly in small-data regimes.

Footnote 2: Hereby, the size of these subsets varies over the different datasets due to different specifics of each dataset, e.g. the original training set sizes.

\begin{table} \begin{tabular}{l|c c c} \hline \hline & \multicolumn{3}{c}{MNIST} \\ & 1.2k & 3k & full (60k) \\ CNN & \(89.83\pm 0.2\) & \(93.83\pm 0.08\) & **98.79\(\pm 0.1\)** \\ CNN-LSX & **91.59\(\pm 0.91\)** & **94.31\(\pm 0.43\)** & 98.03\(\pm 0.2\) \\ \multicolumn{3}{c}{ChestMNIST} \\ & 1.6k & 4k & full (78k) \\ CNN & \(58.68\pm 0.15\) & \(58.49\pm 0.31\) & 60.86\(\pm 0.08\) \\ CNN-LSX & **61.16\(\pm 0.54\)** & **61.77\(\pm 0.75\)** & **63.41\(\pm 1.3\)** \\ \hline \multicolumn{3}{c}{CLEVR-Hans3} \\ & 150 & 375 & full (9k) \\ NeSyCL & \(91.40\pm 1.80\) & \(96.81\pm 0.94\) & \(99.00\pm 0.28\) \\ NeSyCL-LSX & **94.51\(\pm 1.94\)** & **97.34\(\pm 0.44\)** & **99.08\(\pm 0.17\)** \\ \multicolumn{3}{c}{CUB-10} \\ & 100 & 150 & full (300) \\ NeSyCL & \(83.57\pm 1.67\) & \(87.14\pm 0.4\) & \(93.13\pm 0.4\) \\ NeSyCL-LSX & **84.49\(\pm 1.18\)** & **93.05\(\pm 1.72\)** & **96.33\(\pm 0.31\)** \\ \hline avg. improvement & **2.07** & **2.55** & **1.29** \\ \hline \hline \end{tabular} \end{table} Table 1: Improved (few-shot) generalization via LSX on various datasets and models. We here present the accuracy in % on a held-out test set across varying training set sizes.

\begin{table} \begin{tabular}{l|c c} \hline \hline & \multicolumn{2}{c}{DecoyMNIST} \\ & w/ conf. & w/ deconf. \\ CNN & \(63.52\pm 1.39\) & \(86.88\pm 0.68\) \\ CNN-LSX & **78.99\(\pm 2.71\)** & **88.43\(\pm 2.34\)** \\ \hline \multicolumn{3}{c}{CLEVR-Hans3} \\ & w/ conf. & w/ deconf. \\ NeSyCL & \(85.96\pm 4.20\) & \(91.23\pm 1.2\) \\ NeSyCL-LSX & **90.90\(\pm 4.38\)** & **95.64\(\pm 2.21\)** \\ \hline \hline \end{tabular} \end{table} Table 2: Mitigating confounders via LSX: Test set performances on confounded datasets, both with deconfounded samples during training (_w/ deconf._) and without (_w/ conf._).

**Self-unconfounding.** In the second set of evaluations, we are interested in how far LSX can help mitigate shortcut behaviour (Geirhos et al., 2020). We particularly focus on confounded behaviour as a form of shortcut learning in which a learner picks up on spurious correlations within the training dataset that are not present in the test set (Schramowski et al., 2020). We investigate two settings. (i) With the first (denoted as _w/ conf._), we investigate the performance of LSX in mitigating confounding behaviour without additional knowledge of the confounding factors. To this end, we train the two LSX instantiations (and baselines) as in the previous setups with \(\bar{X}_{c}\subseteq\bar{X}_{f}\), and \(\bar{X}_{f}\) representing a confounded dataset. (ii) In the second setting, \(\bar{X}_{c}\) is a dataset that is withheld from training the learner (_i.e._, \(\bar{X}_{c}\cap\bar{X}_{f}=\emptyset\)) and that is explicitly deconfounded (_cf._ App. C.3 for details on the sizes of \(\bar{X}_{c}\)).
In other words, the spurious correlation found in \(\bar{X}_{f}\) is not present in \(\bar{X}_{c}\). We denote this case as _w/ deconf._ For the standard training scheme the models have access to \(\bar{X}_{c}\) within their training phase. We evaluate the CNN-LSX configuration on the Decoy-MNIST dataset and the NeSyCL-LSX configuration on the CLEVR-Hans3 dataset. In Tab. 2, we present the held-out test set accuracies of all configurations. We observe a strong improvement in performances when training LSX on the deconfounded critic sets (_w/ deconf._), indicating that reflecting on the explanations of an explicitly deconfounded critic set can lead to much improved shortcut mitigation behaviour compared to the baseline learning setup. Even more interesting is the result of the _w/ conf._ setting. We observe that the LSX-trained models, though never having seen deconfounded data, lead to strong mitigation improvements. This result suggests great practical implications, as it does not require prior knowledge of the confounding factors. Overall, our results suggest a large beneficial effect of Learning by Self-Explaining in mitigating the issue of shortcut learning, specifically confounded behaviour.

**Explanation consolidation.** In the next evaluation, we wish to analyze how the critic's feedback signal influences the learner's representations, specifically its explanations. Based on the intuition behind the Reflect module concerning "good" explanations, we hypothesize that the explanations of an LSX-trained model represent more task-specific rationales. We present our results based on the Inter- vs. Intraclass Explanation Similarity (IIES) and the accuracy of a ridge regression model that was trained to classify a set of explanations and evaluated on a second, held-out set. Both of these metrics measure the class-based separability of a model's explanations. In Tab. 3, one can observe that over both metrics training via LSX leads to much more separable and distinct explanations. This effect appears less pronounced for the NeSy datasets, which is likely due to the sparseness and low dimensionality of their concept-level data and therefore also of the explanations. In Fig. 4, we also provide qualitative results of the explanation consolidation for MNIST, where explanations from four randomly sampled input samples are presented for each digit class. These visualizations underline the quantitative results of Tab. 3 and particularly indicate how distinct the explanations of an LSX-trained model within one data class are from those of other classes. Overall, we observe that LSX leads to more consistent explanations across samples of one class, yet distinctly separate explanations from samples of other classes. We refer to such an effect as _explanation consolidation_.

**Explanation faithfulness.** Although the performance improvements of the first evaluation suggest that LSX learners do make use of the critic's _explanatory feedback_, and the evaluations regarding explanation consolidation indicate that models learn more distinct explanations via LSX, an open question remains whether the learner in fact makes use of these _explanations_ for making its decisions. In other words: are a learner's explanations faithful to its decision process?
This is a relevant question on its own, particularly in the field of XAI (Hooker et al., 2019; DeYoung et al., 2020) and XIL (Schramowski et al., 2020), as models that produce unfaithful explanations are detrimental for building trust between human users and machines and at the same time make potential revisions via these explanations difficult (Schramowski et al., 2020; Teso et al., 2023). To investigate the faithfulness of LSX-learned explanations we turn to established faithfulness metrics of the AI literature. Specifically, we use the _sufficiency_ and _comprehensiveness_ metrics of DeYoung et al. (2020). In Tab. 4, we present the results of these metrics over all four datasets and both LSX implementations3. One can observe a strong improvement via LSX in both metrics across all models and datasets. Specifically, the comprehensiveness results indicate that the information considered as relevant by the explanations learned via LSX is indeed important for the model to make its prediction. At the same time, the sufficiency results indicate that less important information based on the explanations has a decreased impact on the learner's decisions. Overall, we conclude that training via LSX indeed leads to more faithful explanations.

Footnote 3: For CNN-LSX, we adapt these for handling continuous data (_cf_. App. C).

\begin{table} \begin{tabular}{l|c c} \hline \hline & Comp. (\(\uparrow\)) & Suff. (\(\downarrow\)) \\ \hline \hline \multicolumn{3}{c}{MNIST} \\ CNN & \(-1.34\pm 0.39\) & \(23.11\pm 1.18\) \\ CNN-LSX & **16.49\(\pm 2.79\)** & **-0.21\(\pm 4.18\)** \\ & ChestMNIST & \\ CNN & \(13.98\pm 0.43\) & \(-4.2\pm 1.84\) \\ CNN-LSX & **18.84\(\pm 0.38\)** & **-8.55\(\pm 1.92\)** \\ \hline \hline \end{tabular} \end{table} Table 4: Explanation faithfulness via LSX: Comprehensiveness and sufficiency results of explanations for models trained on all training samples.

Overall, our evaluations demonstrate the benefits of training via LSX on a variety of important tasks and metrics that go beyond standard evaluations of ML research.

## 5 Related Works

LSX is related to research in explainable AI (XAI), leveraging explanations in ML and, importantly, model refinement via forms of self-refinement or feedback from a second model. Let us highlight these works in the following.

### (Leveraging) Explanations in ML

Receiving explanations to an AI model's decision has become a heavily advocated and researched topic in recent years, culminating in the field of _explainable AI_ (XAI) (_cf_. Guidotti et al. (2019); Ras et al. (2022); Roy et al. (2022); Saeed & Omlin (2023) for valuable overviews) and _interpretable AI_, which focuses on developing models that are _explicitly_ interpretable by design (Rauker et al., 2023; Li et al., 2018; Rudin et al., 2022; Rudin, 2019). An additional branch of research can be placed between explainable AI and interpretable AI, namely that of _self-explaining models_ (Alvarez-Melis & Jaakkola, 2018; Lee et al., 2022; Roy et al., 2022; Camburu et al., 2018; Bastings et al., 2019; Majumder et al., 2022). In all of these works and in contrast to LSX, explanations are only provided in a one-way communication as a means of model inspection for humans and not considered as a means of model refinement. The idea of leveraging explanations in the training process has only recently been picked up by parts of the ML community.
In the field of explanatory interactive learning (XIL) (Teso & Kersting, 2019; Schramowski et al., 2020; Stammer et al., 2021; Friedrich et al., 2023), human users provide revisory feedback on the explanations of an ML model. Similar ideas can also be identified in other works of human-machine interactive learning (Teso et al., 2023; Gao et al., 2022), _e.g._, in preference-selection based interactions for learning vision language models (Brack et al., 2022). Compared to these, we argue for the importance of leveraging explanations in the training loop even before the necessity of human-machine interactions and advocate for the potential of explanations in a form of self-refinement in a model's initial learning process. In contrast, several works have identified the value of leveraging explanations outside of human-interactive learning (_e.g._, Giunchiglia et al., 2022; Lampinen et al., 2021, 2022; Norelli et al., 2022). In the works of Lei et al. (2016) and Bastings et al. (2019) (later categorized under the term _explain-then-predict models_ by Zhang et al. (2021)), the goal is for one model to learn to extract the rationale4 from an input and a second model to learn to predict the final class from these rationales. Similar ideas were picked up by Zhang et al. (2021) and Krishna et al. (2023). None of these works evaluate the correctness of explanations and particularly none use explanations as a means to _revise_ a model.

Footnote 4: Here we mean the term "rationale" as adopted in research on explainability in NLP.

### (Self-)Refinement in ML

A recent, but quickly growing field of research related to our work is that which we categorize under the term of _self-refining AI_. This roughly encompasses research that investigates forms of self-supervised refinement of an AI model, _e.g._, Wang et al. (2023) propose an approach for instruction-tuning. In the self-alignment approach of Sun et al. (2023), an LLM is aligned with a few human-provided principles. Schick et al. (2021), on the other hand, identify that LLMs can, to a certain degree, identify biases in their own generations, and the authors leverage this characteristic in a finetuning process to mitigate biased generation in future prompts. In the work of Madaan et al. (2023) a model is used to provide feedback to its initial generations, where the feedback generation is guided via targeted, few-shot prompting. Zelikman et al. (2022), on the other hand, investigate finetuning a model based on generated "chain-of-thought" rationales that lead to correct task predictions. Lastly, Paul et al. (2023) propose an approach in which a model learns to provide explicit intermediate reasoning steps for an answer via feedback from a critic model. Importantly, in this work the critic is specifically trained to identify false reasoning steps. In contrast to LSX, only a few of these approaches focus on refinement via explanations, and those that do require specifically trained modules for providing feedback on the explanations. In LSX, in contrast, explanations are quantified by how far they can help perform a task. Thus, the evaluation and refinement of a model are performed without specific pretraining or prompt specification. In contrast to self-refining AI, a different branch of research focuses on revising a model based on forms of feedback from a second model. An example is Such et al.
(2020), which represents a meta-learning training data generation process in which a data generator and a learner model are optimized for the same goal of improving the learner's performance on a given task. Nair et al. (2023) propose a general chat framework that leverages two agents, _researcher_ and _decider_, to iteratively work through a task. The researcher plays the role of making task-specific suggestions to the decider, and the decider responds to the information provided by the researcher. In the student-teacher framework (Wang & Yoon, 2022) the goal is knowledge distillation, _i.e._, learned knowledge from a trained model should be conveyed to a second model, the student model. Somewhat related to this is the concept of self-paced learning within the field of curriculum learning (Kumar et al., 2010; Wang et al., 2022), in which a model provides a signal on how "fast" to learn. Interestingly, Pruthi et al. (2022) frame the utility of an explanation in a student-teacher setup in which the goal is for a student model to simulate a teacher's behaviour as well as possible. Schneider & Vlachos (2023) also argue for the importance of explanations in reflective processes. However, the authors only propose an approach where a model makes a final prediction based on the input and an explanation that is estimated by a second model, similar to Lei et al. (2016), Bastings et al. (2019), Zhang et al. (2021), and Krishna et al. (2023). Overall, these approaches have a different target and motivation than our work. Particularly, in LSX the role of the critic submodel is to represent an internal optimization loop based on whether the explanations provided by the learner are beneficial in performing the original task.

## 6 Conclusion

In this work, we have introduced a novel learning framework, Learning by Self-Explaining (LSX), with which we argue for a novel perspective on the role of self-explaining in the process of learning in AI models. Thus, we claim that explanations are important not just for human users to understand or revise an AI model, but that they can play an even more important role in a form of self-reflection in which an agent assesses its own learned knowledge via its explanations. Our experimental evaluations highlight several benefits of training via LSX in the context of generalization, knowledge consolidation, explanation faithfulness and shortcut mitigation. In conclusion, with this work we argue and provide evidence for the potential of explanations within a model's learning process and, ultimately, for developing _reflective_ AI. Despite the promising results of our instantiations, there is still great potential for other design choices. Investigating such instantiations and their benefits is an essential avenue for future research, _e.g._, applying LSX to settings with other modalities, such as natural language, but also with several modalities, _i.e._, based on current multimodal models (Radford et al., 2021; Li et al., 2022; Rombach et al., 2021; Alayrac et al., 2022). A more conceptual direction is the integration of a memory buffer of past LSX-optimized explanations, allowing models to re-iterate over previous generations of explanations (Chi et al., 1994). Another interesting point lies in adding a priori constraints onto the learner's explanations, _e.g._, by integrating background knowledge into the explanation reflection process.
In this way, explanations can be assessed not just based on their usefulness to the critic for performing the base task, but also based on the agreement of the true rationale with verified background knowledge. Related to this and to the discussion on the system 1 and 2 processing framework, future research should investigate the potential of LSX, _e.g._, via a memory of explanations, for detecting and handling out-of-distribution situations (Goyal and Bengio, 2022). As a first step, in the formulation of LSX and in our specific instantiations we have focused on supervised learning as the base task. Another crucial avenue going forward is to apply LSX to other forms of supervision, such as self-supervised learning or reinforcement learning, _e.g._, by integrating it into actor-critic approaches or using it to guide curiosity-driven replay (Kauvar et al., 2023).

## Broader Impact Statement

Our work aims to improve an AI model's performance based on internal evaluations of its explanations. It is thereby applicable to any base learning task. Apart from performance benefits for the AI model itself, the integration of explanations into the learning process carries the benefit that human users are able to obtain and evaluate the model's explanations. Additionally, LSX also incorporates mechanisms for revising the model via human explanatory feedback. Integrating such modules and mechanisms into AI agents is a necessary step for fruitful and reliable human-machine interactions. Obviously, our work is not unaffected by the dual-use dilemma of foundational (AI) research, and a watchful eye should be kept on the implications of research with LSX. However, our work or implications thereof do not, to the best of our knowledge, pose an obvious direct threat to any individuals or society in general.

## Acknowledgments

This work benefited from the Hessian Ministry of Science and the Arts (HMWK) projects "The Third Wave of Artificial Intelligence - 3AI", "The Adaptive Mind" and Hessian.AI, the "ML2MT" project from the Volkswagen Stiftung as well as from the ICT-48 Network of AI Research Excellence Center "TAILOR" (EU Horizon 2020, GA No 952215). Furthermore, the authors thank Magdalena Wache and Alina Bohm for their preliminary results and insights on this research.
2307.16407
An analytical solution for supersonic flow over a circular cylinder using an optimized shock shape
An analytical solution for high supersonic flow over a circular cylinder based on Schneider's inverse method has been presented. In the inverse method, a shock shape is assumed and the corresponding flow field and the shape of the body producing the shock are found by integrating the equations of motion using the stream function. A shock shape theorised by Moeckel has been assumed and it is optimized by minimising the error between the shape of the body obtained using Schneider's method and the actual shape of the body. A further improvement in the shock shape is also found by using the Moeckel's shock shape in a small series expansion. With this shock shape, the whole flow field in the shock layer has been calculated using Schneider's method by integrating the equations of motion. This solution is compared against a fifth order accurate numerical solution using the discontinuous Galerkin method (DGM) and the maximum error in density is found to be of the order of 0.001 which demonstrates the accuracy of the method used for both plane and axisymmetric flows.
S R Siva Prasad Kochi, M Ramakrishna
2023-07-31T05:15:45Z
http://arxiv.org/abs/2307.16407v1
# An analytical solution for supersonic flow over a circular cylinder using an optimized shock shape ###### Abstract An analytical solution for high supersonic flow over a circular cylinder based on Schneider's inverse method [1] has been presented. In the inverse method, a shock shape is assumed and the corresponding flow field and the shape of the body producing the shock are found by integrating the equations of motion using the stream function. A shock shape theorised by Moeckel [2] has been assumed and it is optimized by minimising the error between the shape of the body obtained using Schneider's method and the actual shape of the body. A further improvement in the shock shape is also found by using the Moeckel's shock shape in a small series expansion. With this shock shape, the whole flow field in the shock layer has been calculated using Schneider's method by integrating the equations of motion. This solution is compared against a fifth order accurate numerical solution using the discontinuous Galerkin method (DGM) and the maximum error in density is found to be of the order of \(10^{-3}\) which demonstrates the accuracy of the method used for both plane and axisymmetric flows. **Keywords:** cylinder, supersonic flow, shock wave, shock shape, inverse method, discontinuous Galerkin method ## 1 Introduction Analytical methods for hypersonic flows past blunt bodies in plane and axisymmetric flows have been investigated quite extensively in the past. These analyses can be found in papers like those of Chester [3], [4] and Freeman [5] and in the books by Hayes and Probstein [6] and Rasmussen [7]. There are two different problems that are examined by them. The first one is the so called direct problem where the shape of the body is given and the shock shape and the flow field are to be found. The second one is the inverse problem where the shape of the shock wave is given, and it is required to find the corresponding flow field and the shape of the body producing the shock. We are interested in the inverse problem and particularly the method devised by Schneider in [1] for flow past blunt bodies to solve the problem. We use the shock shape theorised by Moeckel [2] for blunt bodies along with the method of Schneider [1] to find an accurate analytical solution for plane flow past a circular cylinder for high supersonic velocities. In a way, this procedure is similar to the shock fitting technique [8] used by numerical solvers where the shock shape is assumed and improved further based on the solution. The analysis developed by Schneider allows a very elegant treatment of the inviscid hypersonic blunt body problem. The fundamental advantage of this method is its applicability in the flow field from the stagnation region up to large distances from the nose of the body. This method for the solution in shock layer is based on two main assumptions. First, it is assumed that the density immediately behind a strong shock is much larger than in front of the shock. The second assumption is based on the pressure along a streamline. With these two assumptions, Schneider obtained the solution by integrating the equations of motion using the stream function. The flow properties immediately behind the shock are obtained using the Rankine-Hugoniot equations. As a result of these assumptions and the subsequent solution, it is not necessary to have a thin shock layer for this solution to hold. 
Due to these advantages, this method has been used extensively in various aerospace applications like in [9], [10], and [11]. Recently, this method has also been used for astrophysical calculations in [12] and [13]. This method was also extended to three-dimensional flows by Schwarze in [14]. Though this method was developed for hypersonic flows, we show that this works well even in the high supersonic flow regime. Our method of solution is as follows. We assume a shock shape in the form given by Moeckel in [2] as a single parameter family of hyperbolas. We determine that parameter by minimising the error between the shape of the body obtained using Schneider's method and the actual shape of the body. This gives a better shock shape \(f\). Now, we improve this shock shape further by writing it as a series expansion (\(af+bf^{2}\)) with unknown coefficients. We determine these coefficients using the same optimization and get a much better approximation for the shock shape. We use this shock shape to find the solution of the flow in the shock layer and compare the solution obtained with a numerical solution calculated using a fifth order accurate discontinuous Galerkin method (DGM) and overset grids where the shock has been captured within a grid line [15]. We find that the maximum error in density is of the order of \(10^{-3}\) (for the shock shape \(af+bf^{2}\)) demonstrating the accuracy of the solution method. We also found that using more terms in the series expansion for the shock shape (like \(af+bf^{2}+cf^{3}\), \(af+bf^{2}+cf^{3}+df^{4}\)) does not improve the accuracy of the solution. The paper is organized as follows. We describe the formulation of the Schneider's method used for all our results in Section 2, approximation of the shock shape is described in Section 3, the results obtained are given in Section 4 and we conclude the paper in Section 5. We also give the formulation and validation of the numerical method used (discontinuous Galerkin method) in Appendix A. ## 2 Formulation We now describe the formulation of Schneider's method in brief. We have mostly followed the notation used in [12] for the description of this method for the plane or axisymmetric supersonic flow past a blunt body. The shape of the bow shock wave (and hence the shock angle \(\beta\)) is given, and it is required to find the corresponding flow field and the shape of the body producing the shock. Let the flow be inviscid and without heat conduction, and let the gas be perfect. \(z\) and \(r\) are the Cartesian coordinates for plane flow, and also the cylindrical coordinates for axisymmetric flow. We also use a shock oriented curvilinear coordinate system, where \(x\) is the distance along the shock surface in the plane formed by the shock normal and the direction of the uniform fluid flow, and \(y\) is the distance normal to the shock surface as shown in Figure 1. Consider an arbitrary point Q in the shock layer. A streamline \(\psi=c\) passes through this point. The streamline intersects the shock at the point S. The point N on the shock is determined such that a normal to the shock at N also intersects the streamline, \(\psi=c\), at the point Q. We can now determine the \(x\) and \(y\) coordinates of Q and the corresponding velocity components \(u\) and \(v\). The \(z\)-axis is taken to be parallel to the direction of the incident flow. We emphasize that this is only assumed for convenience, but is not a general restriction. 
It can be deduced from Figure 1 that \[z=\hat{z}+y\sin\hat{\beta} \tag{1}\] \[r=\hat{r}-y\cos\hat{\beta} \tag{2}\] where \(\hat{\beta}\) is the shock inclination angle at the point N (\(\hat{z}\), \(r\)). The flow quantities immediately behind the shock at the point N are denoted by a hat (\(\wedge\)), and at the point S by an asterisk (*). Undisturbed flow quantities far upstream are denoted by the subscript \(\infty\). Since the functions \(\hat{z}(x)\) and \(\hat{r}(x)\) are known for a given shock shape, equations (1) and (2) can be used to calculate the coordinates \(z\) and \(r\) of a point Q from its coordinates \(x\) and \(y\). The curvature of the shock contour at the point N is denoted by \(\hat{\kappa}(x)\), defined as positive when the surface is concave on the side of positive \(y\). Now, the continuity equation can be written as \[\frac{\partial(r^{j}\rho u)}{\partial x}+\frac{\partial[(1-\hat{\kappa}y)r^{j} \rho v]}{\partial y}=0 \tag{3}\] where \(\rho\) is the fluid density. The parameter j is 0 for plane flow and 1 for axisymmetric flow. We now define a stream function \(\psi\) which satisfies (3) by \[\frac{\partial\psi}{\partial x}=(1-\hat{\kappa}y)r^{j}\rho v, \tag{4}\] Figure 1: The shock, body and a streamline are shown with a shock oriented coordinate system. Here, \(z\) and \(r\) are the Cartesian coordinates for plane flow with origin O, \(x\) and \(y\) are the distances along the shock surface and normal to it and Q is an arbitrary point in the shock layer. \(u\) and \(v\) are the velocities in \(x\) and \(y\) directions. \(U_{\infty}\) is the freestream velocity. \(\beta\) is the shock angle. \(\psi\) is the stream function. Flow quantities immediately behind the shock at the point N (point where a normal to the shock intersects Q) are denoted by a hat (\(\wedge\)), and at the point S (intersection of the streamline through Q with the shock) by an asterisk (*). This diagram is adapted from Schneider’s paper [1]. \[\frac{\partial\psi}{\partial y}=-r^{j}\rho u \tag{5}\] The stream function defined by (4) and (5) contains a constant of integration. This constant is chosen such that \(\psi=0\) is the body stream line. Then the stream function represents the mass flow between the streamline \(\psi=\) constant. and the surface of the projectile, per unit depth for plane flows, and per unit azimuthal angle (in radians) for axisymmetric flows. At the point N, the stream function therefore is \[\hat{\psi}_{N}=\rho_{\infty}U_{\infty}\frac{\hat{r}^{1+j}}{1+j} \tag{6}\] Also, \(\psi\) is connected to the coordinate \(r_{*}\) of the point S by \[\psi_{*}=\rho_{\infty}U_{\infty}\frac{r_{*}^{1+j}}{1+j} \tag{7}\] We now introduce a new coordinate system with \(\psi\) and \(\bar{x}=x\) as variables. With these new coordinates, the equations of energy, entropy, and momentum conservation can written as: \[u^{2}+v^{2}+2h=u_{*}^{2}+v_{*}^{2}+2h_{*}=\mbox{constant.}, \tag{8}\] \[\frac{\partial S}{\partial\bar{x}}=0\quad\mbox{or}\quad S=S_{*}(\psi) \tag{9}\] \[u\frac{\partial u}{\partial\bar{x}}+v\frac{\partial v}{\partial\bar{x}}+ \frac{1}{\rho}\frac{\partial P}{\partial\bar{x}}=0 \tag{10}\] \[(1-\hat{\kappa}y)(1-j\frac{y}{\hat{r}}\cos\hat{\beta})\hat{r}^{j}\frac{ \partial P}{\partial\psi}=\hat{\kappa}u+\frac{\partial v}{\partial\bar{x}} \tag{11}\] where \(P\) is the fluid pressure, \(S\) the entropy, and \(h\) is the specific enthalpy. 
The variable \(y\) is now a dependent variable and we now get the transformation equations using (4) and (5) as \[\frac{\partial y}{\partial\bar{x}}=(1-\hat{\kappa}y)\frac{v}{u} \tag{12}\] \[\frac{\partial y}{\partial\psi}=-\frac{1}{[1-j(y/\hat{r})\cos\hat{\beta}]\hat {r}^{j}\rho u} \tag{13}\] Now, the equations (10), (11), (12), (13), and (8) (or (9)) provide a set of five equations for the five unknown dependent variables \(u\), \(v\), \(y\), \(P\) and \(h\) (or \(S\)). The flow quantities immediately behind the shock may be obtained from the Rankine-Hugoniot jump conditions in terms of the inverse compression ratio across the shock \(\hat{\chi}=\rho_{\infty}/\hat{\rho}\). They are \[\hat{u}=U_{\infty}\cos\hat{\beta} \tag{14}\] \[\hat{v}=U_{\infty}\hat{\chi}\sin\hat{\beta} \tag{15}\] \[\hat{P}=P_{\infty}+\rho_{\infty}U_{\infty}^{2}(1-\hat{\chi})\sin^{2}\hat{\beta} \tag{16}\] \[\hat{h}=h_{\infty}+\frac{1}{2}U_{\infty}^{2}(1-\hat{\chi}^{2})\sin^{2}\hat{\beta} \tag{17}\] These equations maintain their validity even if the hats are replaced by asterisks. Schneider's method for the solution in shock layer is based on two main assumptions. First, it is assumed that the density immediately behind a strong shock is much larger than in front of the shock; i.e., \[\hat{\chi}=\frac{\rho_{\infty}}{\rho}\ll 1\quad\mbox{and}\quad\chi_{*}=\frac{ \rho_{\infty}}{\rho_{*}}=O(\hat{\chi}) \tag{18}\] Second, the pressure \(P\) at the point Q of the disturbed flow field is not much smaller than the pressure \(\hat{P}\) at the point N (see Figure 1); i.e., \[\frac{\hat{P}}{P}=O(1)\quad(\text{on $x=\text{const.}$, $y>0$}) \tag{19}\] The symbol \(f(x)=O(g(x))\) means that \(|f(x)|\) is not very large in comparison with \(|g(x)|\). Using these assumptions, Schneider in [1] obtained the pressure at an arbitrary point Q approximately as \[P=\hat{P}-\frac{\hat{\kappa}}{\hat{r}^{j}}\int_{\psi}^{\hat{\psi}}\left[u_{*}^ {2}+2\left(h_{*}-h(\hat{P},S_{*})\right)\right]^{1/2}d\psi^{\prime} \tag{20}\] Here, we can note that all quantities on the right hand side of equation (20) are given by the boundary conditions at the shock or by the equation of state. The terms excluded in making the approximation are coming from the stagnation region, as well as the region near the stagnation region where \(u\gg\hat{u}\) - are of the order of \(\hat{\chi}\) and hence they contribute only negligibly to the integral in equation (20). Therefore, the whole state of the gas is known in the streamline coordinate system \((\bar{x},\psi)\) after evaluating \(S=S_{*}(\psi)\) and \(P\) from equation (20). The actual location of the body in space can be determined by solving the differential equation (13). We do this by separation of variables giving the distance from the shock surface \(y\) as a function of \(\bar{x}\) and \(\psi\) as \[y\left(1-\frac{j\cos\hat{\beta}}{2\hat{r}}y\right)=\frac{1}{\hat{r}^{j}}\int_ {\psi}^{\hat{\psi}}\frac{d\psi^{\prime}}{\rho u} \tag{21}\] Neglecting errors of \(O(\chi)\) and following [6], we may replace the velocity component \(u\) by \[u^{2}=u_{*}^{2}+2[h_{*}-h(P,S_{*})] \tag{22}\] This is obtained from the energy equation (8). The integral in equation (21) can now be written as \[Y=\int_{\psi}^{\hat{\psi}}\frac{d\psi^{\prime}}{\rho(P,S_{*})[u_{*}^{2}+2(h_{* }-h(P,S_{*}))]^{1/2}} \tag{23}\] Solving the quadratic equation in \(y\) on the left hand side of equation (21), we have to distinguish between plane (\(j=0\)) and axisymmetric (\(j=1\)) flows. 
This gives \[\text{for $j=0$}:\quad y=Y; \tag{24}\] \[\text{for $j=1$}:\quad y=\frac{\hat{r}}{\cos\hat{\beta}}\left[1-\left(1-\frac{2Y \cos\hat{\beta}}{\hat{r}^{2}}\right)^{1/2}\right] \tag{25}\] Equations (23), (24), and (25) give us the required results in \(\bar{x}\) and \(\psi\) coordinates. We can now use the transformations (1) and (2) to get the required results in \(z\) and \(r\) coordinates. For a perfect gas with constant specific heats, the inverse compression ratio is given by \[\chi_{*}=\frac{\gamma-1}{\gamma+1}+\frac{2}{(\gamma+1)M_{\infty}^{2}\sin^{2} \beta_{*}} \tag{26}\] We point out that an analogous relation is valid for \(\hat{\chi}\), if all asterisks are replaced by hats in equation (26). Using the shock conditions (14)-(17), together with (26), the two integrals (20) and (23), which have to be evaluated, become \[P=\hat{P}-\frac{U_{\infty}\hat{\kappa}}{\hat{r}^{j}}\int_{\psi}^{\hat{\psi}} \left[\cos^{2}\beta_{*}+\left(\frac{2}{(\gamma-1)M_{\infty}^{2}}+\sin^{2} \beta_{*}\right)\left(1-\left(\frac{\sin^{2}\hat{\beta}}{\sin^{2}\beta_{*}} \right)^{\frac{\gamma-1}{\gamma}}\right)\right]^{1/2}d\psi^{\prime} \tag{27}\] \[Y=\frac{1}{\rho_{\infty}U_{\infty}}\int_{\psi}^{\hat{\psi}}\chi_{*}\left( \frac{\hat{P}\sin^{2}\beta_{*}}{P\sin^{2}\hat{\beta}}\right)^{1/\gamma}\left[ \cos^{2}\beta_{*}+\left(\frac{2}{(\gamma-1)M_{\infty}^{2}}+\sin^{2}\beta_{*} \right)\left(1-\left(\frac{P\sin^{2}\hat{\beta}}{\hat{P}\sin^{2}\beta_{*}} \right)^{\frac{\gamma-1}{\gamma}}\right)\right]^{-1/2}d\psi^{\prime} \tag{28}\] The curvature of a curve in space can be calculated using the formula \[\kappa=\frac{|d^{2}r/dz^{2}|}{\left[1+\left(dr/dz\right)^{2}\right]^{3/2}} \tag{29}\] The density is obtained from the formula \[\rho=\rho_{*}\left(\frac{P}{P_{*}}\right)^{1/\gamma}=\left(\frac{\gamma-1}{ \gamma+1}+\frac{2}{(\gamma+1)M_{\infty}^{2}\,\sin^{2}\beta_{*}^{2}}\right)^{-1 }\rho_{\infty}\left(\frac{P\sin^{2}\hat{\beta}}{\hat{P}\sin^{2}\beta_{*}} \right)^{1/\gamma} \tag{30}\] For simplicity we introduce dimensionless units following [12]. The normalized stream function then becomes \(\Psi=\psi/\rho_{\infty}U_{\infty}L^{j+1}\), where \(L\) is a characteristic length. On the surface of the body we need \(\psi=0\), so the pressure on the body surface \(P_{b}(x)\), as well as the shock layer thickness can be obtained by replacing the lower limits in equations (27) and (28) by zero as: \[\frac{P_{b}}{\rho_{\infty}U_{\infty}^{2}}=\frac{1}{\gamma M_{\infty}^{2}}+(1- \hat{\chi})\sin^{2}\hat{\beta}-\frac{\hat{\kappa}}{\hat{r}j}\int_{0}^{\hat{ \Phi}}\left[\cos^{2}\beta_{*}+\left(\frac{2}{(\gamma-1)M_{\infty}^{2}}+\sin^{ 2}\beta_{*}\right)\left(1-\left(\frac{\sin^{2}\hat{\beta}}{\sin^{2}\beta_{*}} \right)^{\frac{\gamma-1}{\gamma}}\right)\right]^{1/2}d\Psi \tag{31}\] where the relation \[\frac{\hat{P}}{\rho_{\infty}U_{\infty}^{2}}=\frac{P_{\infty}}{\rho_{\infty}U _{\infty}^{2}}+(1-\hat{\chi})\sin^{2}\hat{\beta}=\frac{1}{\gamma M_{\infty}^{ 2}}+(1-\hat{\chi})\sin^{2}\hat{\beta} \tag{32}\] has been used. 
Also, we get \[\mbox{for }j=0:\quad\Delta=Y\Big{|}_{\psi=0} \tag{33}\] \[\mbox{for }j=1:\quad\Delta=\frac{\hat{r}}{\cos\hat{\beta}}\left[1-\left(1- \frac{2Y\Big{|}_{\psi=0}\cos\hat{\beta}}{\hat{r}^{2}}\right)^{1/2}\right] \tag{34}\] with \[Y\Big{|}_{\psi=0}=\int_{0}^{\hat{\Psi}}\chi_{*}\left(\frac{\hat{P}\sin^{2} \beta_{*}}{P\sin^{2}\hat{\beta}}\right)^{1/\gamma}\left[\cos^{2}\beta_{*}+ \left(\frac{2}{(\gamma-1)M_{\infty}^{2}}+\sin^{2}\beta_{*}\right)\left(1- \left(\frac{P\sin^{2}\hat{\beta}}{\hat{P}\sin^{2}\beta_{*}}\right)^{\frac{ \gamma-1}{\gamma}}\right)\right]^{-1/2}d\Psi \tag{35}\] As soon as the bow shock wave is parameterized, the required quantities can be computed by evaluating two integrals (namely equations (31) and (33) or (34) and (35), together with the boundary values (9) and (14)-(17)). In deriving this result, Schneider [1] emphasizes that it is _not_ necessary to have a thin shock layer for these equations to hold. Also, throughout the rest of the paper, we consider only plane flow with \(j=0\). ## 3 Approximation of shock shape We now need an appropriate shock shape to use the results of Section 2. For this, we use the approximate relation given by Moeckel in [2]. This has the desirable property of yielding a single relation between freestream Mach number and distance of the shock wave from the body. Detached shock waves are normal to the freestream direction on the axis of symmetry, and are asymptotic to the freestream Mach lines at great distances from the axis of symmetry. Hence it seems plausible that the shape of the wave may be approximated by a hyperbola. Moeckel therefore postulated that the equation of the shock is \[\beta r=\sqrt{z^{2}-z_{0}^{2}} \tag{36}\] where \(\beta=\sqrt{M_{\infty}^{2}-1}\) and \(z_{0}\) is the location of the vertex of the wave. Transforming the coordinate system so that the origin is located on the shock, we get the shock shape to be \[\beta r=\sqrt{(z+z_{0})^{2}-z_{0}^{2}} \tag{37}\] Here \(z_{0}\) is the only unknown and it represents the shock offset distance (which is the distance between the stagnation point on the body and the vertex of the shock wave). Starting from a reasonable approximation for this \(z_{0}\), we can calculate the shape of the body using equation (35) (for plane flow). By using the error between the obtained shape of the body and the actual shape of the body (in this case, one quarter of a circle), we optimize the parameter \(z_{0}\) so that the error is minimised using secant method. After doing this, the shock, the body obtained from (35) (for plane flow), and the actual shape of the body for Mach number \(M=4.0\) are shown in Figure 2 where the radius of the circular cylinder is taken to be 0.5. To get a better approximation for the shock shape, we use the relation given by Moeckel as follows. We first define \[f=\frac{1}{\beta}\sqrt{(z+z_{0})^{2}-z_{0}^{2}} \tag{38}\] Now, we approximate the shape of the shock again by using two more parameters \(a\) and \(b\) as \[r=af+bf^{2} \tag{39}\] We now have three unknown parameters which are \(z_{0}\), \(a\) and \(b\). We again determine these three parameters by optimizing them while minimising the error between the obtained shape of the body (using equation (35)) and the actual shape of the body. After optimizing the three parameters \(z_{0}\), \(a\), and \(b\), the shock, the body obtained from (35) (for plane flow), and the actual shape of the body for Mach number \(M=4.0\) are shown in Figure 3. 
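To make the shock parameterization concrete, the following sketch (Python with NumPy; an illustration of ours rather than code from the paper) evaluates equations (38) and (39) together with the two quantities that Schneider's integrals require from the shock, the inclination \(\hat{\beta}\) and the curvature \(\hat{\kappa}\) of equation (29). The shock inclination is taken as \(\arctan(dr/dz)\), measured from the freestream direction, and the parameter values are those obtained for \(M=4.0\) (\(z_{0}=17.615\), \(a=0.998\), \(b=-0.045\)); the grid and helper names are our own choices.

```python
import numpy as np

# Illustrative sketch (not code from the paper): evaluate the assumed shock
# shape r = a f + b f^2, eqs. (38)-(39), together with the local inclination
# beta_hat (taken here as arctan(dr/dz), measured from the freestream
# direction) and the curvature kappa_hat of eq. (29).
M_inf = 4.0
beta_inf = np.sqrt(M_inf**2 - 1.0)      # the "beta" of eq. (36), not a shock angle
z0, a, b = 17.615, 0.998, -0.045        # optimized values quoted for M_inf = 4

def shock_r_and_derivatives(z):
    """Return r(z), dr/dz and d^2r/dz^2 for the shock curve r = a f + b f^2."""
    f = np.sqrt((z + z0)**2 - z0**2) / beta_inf                 # eq. (38)
    df = (z + z0) / (beta_inf**2 * f)                           # f'(z)
    d2f = 1.0 / (beta_inf**2 * f) - (z + z0)**2 / (beta_inf**4 * f**3)
    r = a * f + b * f**2                                        # eq. (39)
    dr = (a + 2.0 * b * f) * df
    d2r = 2.0 * b * df**2 + (a + 2.0 * b * f) * d2f
    return r, dr, d2r

# Sample the shock downstream of its vertex (z = 0 itself is excluded,
# since f'(z) is singular there).
z = np.linspace(1e-6, 2.0, 400)
r, dr, d2r = shock_r_and_derivatives(z)

beta_hat = np.arctan(dr)                            # shock inclination angle
kappa_hat = np.abs(d2r) / (1.0 + dr**2)**1.5        # eq. (29)

print(f"shock angle near the vertex: {np.degrees(beta_hat[0]):.2f} deg (should be ~90)")
print(f"shock angle at z = 2.0     : {np.degrees(beta_hat[-1]):.2f} deg")
print(f"freestream Mach angle      : {np.degrees(np.arcsin(1.0 / M_inf)):.2f} deg")
```

In the full inverse procedure these quantities are evaluated inside the optimization loop: for each trial \((z_{0},a,b)\) the body shape implied by equation (35) is computed and compared with the quarter circle of radius 0.5, and the parameters are updated (with the secant iteration mentioned above, or any standard minimiser) until this mismatch is smallest.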
We can now clearly see the improvement over the previous result as the predicted body shape is very close to the actual shape of the body. Figure 2: The shock using equation (37) with \(z_{0}=17.615\) (thick solid line), the body obtained from (35) (for plane flow) (thin dashed line), and the actual shape of the body (thin solid line) after minimising the error between the obtained shape of the body and the actual shape of the body for Mach number \(M=4.0\). A zoomed-in view of the shock shape at two different locations ([0.24,0.275]\(\times\)[0,0.05], [0.58,0.62]\(\times\)[0.45,0.5]) is also shown in order to visualize the error clearly. Radius of the circular cylinder is 0.5 We can continue this procedure with higher degree approximations in \(f\) (i.e., \(af+bf^{2}+cf^{3}\), \(af+bf^{2}+cf^{3}+df^{4}\) and so on) by optimizing each of the coefficients of \(f\) by minimising the error between the obtained shape of the body and the actual shape of the body. We show the results obtained for body shape using such approximations for shock shapes in Figure 4 for Mach number \(M=4.0\). We can see from Figure 4 that using higher degree approximations (more than 2) does not really improve the body shape that much. Even the solution obtained using such higher degree approximations is not much better as will be explained in Section 4. For this reason, we have used equation (39) as the shock shape for all the remaining calculations unless otherwise specified. After obtaining the shock shape, we can now get the full flow field between the shock and the body on the streamlines by evaluating equations (27), (28), and (30). We present all our results for flow over a circular cylinder in the next section. ## 4 Results ### Plane flow: We consider the plane flow past a circular cylinder for high supersonic Mach numbers and approximate the shape of the shock using equation (39) as discussed in Section 3. We consider five different Mach numbers \(M=4.0,5.0,6.0,7.0\), and \(8.0\) to illustrate our results. For all these Mach numbers, the parameters \(z_{0}\), \(a\), and \(b\) are optimized by minimising the error between the obtained shape of the body (using equation (35)) and the actual shape of the body. The parameters obtained for the considered Mach numbers are tabulated in Table 1. After using this shock shape, we now calculate the full flow field between the shock and the body on the streamlines by evaluating equations (27), (28), and (30). We have used 200 streamlines for each Mach number. The density variations for Mach numbers 4.0, 5.0, 6.0, 7.0, and 8.0, obtained with this shock shape are shown in Figures 5, 6, 7, 8, and 9 respectively. All the plots are obtained by plotting the point data and using a delaunay triangulation to convert them into cells to cover the whole domain using par Figure 3: The shock using equation (39) with \(z_{0}=17.615\), \(a=0.998\), and \(b=-0.045\) (thick solid line), the body obtained from (35) (for plane flow) (thin dashed line), and the actual shape of the body (thin solid line) after minimising the error between the obtained shape of the body and the actual shape of the body for Mach number \(M=4.0\). A zoomed-in view of the shock shape at two different locations ([0.24,0.275]\(\times\)[0,0.05], [0.58,0.62]\(\times\)[0.45,0.5]) is also shown in order to visualize the error clearly. 
Radius of the circular cylinder is 0.5

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Mach Number & \(z_{0}\) & a & b \\ \hline 4.0 & 17.615 & 0.998 & -0.045 \\ \hline 5.0 & 26.755 & 0.998 & -0.050 \\ \hline 6.0 & 38.495 & 0.998 & -0.052 \\ \hline 7.0 & 51.982 & 0.998 & -0.054 \\ \hline 8.0 & 67.984 & 0.998 & -0.058 \\ \hline \end{tabular} \end{table} Table 1: The optimized parameters \(z_{0}\), \(a\), and \(b\) in (39) obtained by minimising the error between the obtained shape of the body (using equation (35)) and the actual shape of the body.

Figure 4: The body obtained from (35) (for plane flow) for shock shapes given by \(f\) (dashed line), \(af+bf^{2}\) (dotted line), \(af+bf^{2}+cf^{3}\) (dash and dot line), \(af+bf^{2}+cf^{3}+df^{4}\) (dash and double dot line) and the actual shape of the body (thin solid line) after minimising the error between the obtained shape of the body and the actual shape of the body for Mach number \(M=4.0\). Radius of the circular cylinder is 0.5

We now validate our procedure by using a numerically calculated solution. We have used a fifth order accurate (formally called \(\mathbf{P}^{4}\) based) discontinuous Galerkin method (DGM) along with overset grids with 9976 elements for accurate shock capturing [15] to solve the two-dimensional Euler equations over a circular cylinder, and used this solution to calculate the density error in our solution. The formulation and validation of our solver using DGM, along with a brief outline of the shock capturing procedure, are given in Appendix A. The density errors obtained using this fifth order accurate numerical solution for Mach numbers 4.0, 5.0, 6.0, 7.0, and 8.0 are shown in Figures 10, 11, 12, 13, and 14, respectively. We do not show the solution obtained by the discontinuous Galerkin method as it is quite similar to the solution obtained using our procedure and shown in Figures 5 - 9. We now explain the choice of the selected Mach numbers. We have calculated the maximum density error obtained using the procedure in Section 2 and tabulated these errors in Table 2. We also show the value of \(\rho_{\infty}/\rho\) across a normal shock in Table 2 for reference. We can clearly see that the error increases quite fast when we go below Mach 4.0. In fact, it increases by almost two orders of magnitude when we go from Mach 4.0 to Mach 3.5. The reason for this is likely the first assumption of this procedure, given by equation (18), where we have assumed that the density immediately behind a strong shock is much larger than in front of the shock. The values of \(\rho_{\infty}/\rho\) (across a normal shock) given in Table 2 decrease as the Mach number increases, and a threshold apparently has to be crossed for this procedure to work well, which lies between Mach numbers 3.5 and 4.0 as seen from Table 2. For this reason, we have selected Mach numbers \(\geq 4\) to illustrate our solution. To further validate the solution obtained, we calculate the functional, the integrated surface pressure on the body of the cylinder, and compare it with the numerical solution obtained. The error in integrated surface pressure is tabulated in Table 3. From the values obtained, we can see that the error in the integrated surface pressure is smaller than the maximum density error. This shows that the procedure is quite accurate near the body (except at some locations, as will be discussed later) in predicting the surface pressure, which is an important quantity.
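The Mach-number threshold discussed above is easy to check: setting \(\beta_{*}=90^{\circ}\) in equation (26) gives the inverse compression ratio across a normal shock directly. The short script below (Python; a check of ours, not part of the paper) reproduces the last column of Table 2 for \(\gamma=1.4\), up to rounding in the last digit.

```python
# Inverse compression ratio chi = rho_inf / rho across a normal shock,
# eq. (26) with beta_* = 90 degrees, for the Mach numbers listed in Table 2.
gamma = 1.4
for M in (3.5, 3.8, 3.9, 4.0, 5.0, 6.0, 7.0, 8.0):
    chi = (gamma - 1.0) / (gamma + 1.0) + 2.0 / ((gamma + 1.0) * M**2)
    print(f"M = {M:3.1f}:  rho_inf/rho = {chi:.4f}")
# This yields 0.2347, 0.2244, 0.2215, 0.2188, 0.2000, 0.1898, 0.1837, 0.1797,
# which agrees with the last column of Table 2 to within rounding.  The
# limiting value for M -> infinity is (gamma - 1)/(gamma + 1) = 1/6, the
# quantity that assumption (18) requires to be small.
```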
**Discussion of results:** We can see that the results obtained for each Mach number are very accurate Figure 5: Density variation for Mach number 4.0 obtained using (27), (28), and (30) with shock shape given in (39) with parameters given in Table 1 for 200 streamlines. Figure 6: Density variation for Mach number 5.0 obtained using (27), (28), and (30) with shock shape given in (39) with parameters given in Table 1 for 200 streamlines. Figure 7: Density variation for Mach number 6.0 obtained using (27), (28), and (30) with shock shape given in (39) with parameters given in Table 1 for 200 streamlines. Figure 8: Density variation for Mach number 7.0 obtained using (27), (28), and (30) with shock shape given in (39) with parameters given in Table 1 for 200 streamlines. Figure 9: Density variation for Mach number 8.0 obtained using (27), (28), and (30) with shock shape given in (39) with parameters given in Table 1 for 200 streamlines. Figure 11: Density error obtained for Mach number 5.0 using a fifth order accurate (formally called \(\mathbf{P}^{4}\) based) discontinuous Galerkin method along with overset grids using 9976 elements for accurate shock capturing [15]. Figure 10: Density error obtained for Mach number 4.0 using a fifth order accurate (formally called \(\mathbf{P}^{4}\) based) discontinuous Galerkin method along with overset grids using 9976 elements for accurate shock capturing [15]. Figure 12: Density error obtained for Mach number 6.0 using a fifth order accurate (formally called \(\mathbf{P^{4}}\) based) discontinuous Galerkin method along with overset grids using 9976 elements for accurate shock capturing [15]. Figure 13: Density error obtained for Mach number 7.0 using a fifth order accurate (formally called \(\mathbf{P^{4}}\) based) discontinuous Galerkin method along with overset grids using 9976 elements for accurate shock capturing [15]. \begin{table} \begin{tabular}{|c|c|c|} \hline Mach Number & Maximum Density Error & \(\rho_{\infty}/\rho\) across a normal shock \\ \hline 3.5 & 1.1E-01 & 0.2347 \\ \hline 3.8 & 8.9E-02 & 0.2244 \\ \hline 3.9 & 2.1E-02 & 0.2214 \\ \hline 4.0 & 1.9E-03 & 0.2188 \\ \hline 5.0 & 2.0E-03 & 0.2 \\ \hline 6.0 & 3.8E-03 & 0.1898 \\ \hline 7.0 & 5.1E-03 & 0.1837 \\ \hline 8.0 & 5.9E-03 & 0.1797 \\ \hline \end{tabular} \end{table} Table 2: Maximum density error obtained using the procedure given in Section 2 for various Mach numbers. For these Mach numbers, the density ratio (\(\rho_{\infty}/\rho\)) across a normal shock is also shown. The error is calculated using a fifth order accurate discontinuous Galerkin method along with overset grids using 9976 elements for accurate shock capturing [15] Figure 14: Density error obtained for Mach number 8.0 using a fifth order accurate (formally called \(\mathbf{P^{4}}\) based) discontinuous Galerkin method along with overset grids using 9976 elements for accurate shock capturing [15]. \begin{table} \begin{tabular}{|c|c|} \hline Mach Number & Error in integrated surface pressure \\ \hline 4.0 & 3.45E-04 \\ \hline 5.0 & 2.75E-04 \\ \hline 6.0 & 2.15E-04 \\ \hline 7.0 & 2.03E-04 \\ \hline 8.0 & 1.88E-04 \\ \hline \end{tabular} \end{table} Table 3: Error in the functional - integrated surface pressure for various Mach numbers using the shock shape given by \(af+bf^{2}\). 
The error is calculated using a fifth order accurate discontinuous Galerkin method along with overset grids using 9976 elements for accurate shock capturing [15] and the maximum error in comparison with a numerical solution for each case is only about the order of \(10^{-3}\) (using a shock shape given by \(af+bf^{2}\)). For all the Mach numbers, the error pattern is similar and the maximum error occurs near the upper portion of the body. To explain this, we show the pressure solution for Mach 4.0 in Figure 15. We also remember the assumption that the pressure at the point Q (Figure 1) of the disturbed flow field is not much smaller than the pressure at point N (see Figure 1). This assumption is given by equation (19). From Figure 15, we can see that for the points that are near the upper portion of the body, the pressure ratio \(\hat{P}/P\) is larger compared to the other points. Though we showed the pressure variation for Mach 4.0 only, this happens to be true for Mach numbers 5.0, 6.0, 7.0, and 8.0 as well. This can be why the error obtained is the largest in that region as we are slightly violating an assumption with which we derived the solution. We have also changed the shock shape further by looking at equations of the form \(af+bf^{2}+cf^{3},af+bf^{2}+cf^{3}+df^{4}\) with \(f\) defined by (38). We have optimized the parameters \(z_{0}\), \(a\), \(b\), \(c\), and \(d\) as mentioned in Section 2. We tabulate the maximum density errors for shock shapes with polynomial expansions in \(f\) till fourth degree in Table 3. From this table, we can see that the solution accuracy increases only till the second degree approximation in \(f\) and doesn't improve much for higher degree approximations. As a final shock shape, we consider the accurate numerical solution obtained in [15] where the shock has been captured aligned to a grid line. We use that solution and obtain a cubic spline for the shock shape. Using the cubic spline as the shock shape, we obtain the body and the full solution in the shock layer using the method given in Section 2. The shape of the body obtained along with the shock and the exact shape of the body are shown in Figure 16 for Mach 4.0 flow. We can see that the shape of the body obtained in this fashion is quite accurate. The density error for this solution is shown in Figure 17. We can see that the maximum density error is about \(1.4\times 10^{-3}\). We also tabulate the maximum density error obtained in this fashion for various Mach numbers in Table 4. From these results, we can conclude that using this kind of a shock shape, this might be the best accuracy we can get using this analytical method with the stated approximations and assumptions. Figure 15: Pressure variation for Mach number 4.0 with shock shape given in (39) with parameters given in Table 1 for 200 streamlines. Figure 16: The shock using the cubic spline from the numerical solution (thick solid line), the body obtained from (35) (for plane flow) (thin dashed line), and the actual shape of the body (thin solid line) for Mach number \(M=4.0\). Radius of the circular cylinder is 0.5 Figure 17: Density error obtained for Mach number 4.0 using the cubic spline shock shape from the numerical solution using a fifth order accurate (formally called \(\mathbf{P}^{4}\) based) discontinuous Galerkin method along with overset grids using 9976 elements for accurate shock capturing [15]. 
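For the spline-based shock shape described above, the only ingredient that changes is how \(\hat{\beta}\) and \(\hat{\kappa}\) are obtained: both now follow from derivatives of the fitted spline instead of equation (39). The sketch below (Python with SciPy) indicates one way this might be organized; since the shock coordinates extracted from the DG solution are not reproduced here, the sampled points are generated from the Moeckel-type shape of Section 3 purely as stand-in data, and all names are our own.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Represent the bow shock by a cubic spline and recover the shock inclination
# and curvature (eq. (29)) from the spline's derivatives.  The points below
# are stand-in data sampled from the Moeckel-type shape; in the procedure
# described above they would be the shock points taken from the DG solution.
M_inf = 4.0
beta_inf = np.sqrt(M_inf**2 - 1.0)
z0, a, b = 17.615, 0.998, -0.045

z_pts = np.linspace(0.0, 1.5, 60)
f = np.sqrt((z_pts + z0)**2 - z0**2) / beta_inf
r_pts = a * f + b * f**2                  # stand-in for the extracted shock points

# Spline z(r): near the vertex the shock is normal to the flow, so r is the
# better independent variable (dz/dr = 0 there, whereas dr/dz blows up).
spline = CubicSpline(r_pts, z_pts)
r = np.linspace(r_pts[0], r_pts[-1], 300)
dz_dr = spline(r, 1)
d2z_dr2 = spline(r, 2)

beta_hat = np.arctan2(1.0, dz_dr)                          # inclination to the freestream
kappa_hat = np.abs(d2z_dr2) / (1.0 + dz_dr**2)**1.5        # curvature of the shock curve

print(f"shock angle at the vertex : {np.degrees(beta_hat[0]):.1f} deg")
print(f"shock angle at r = {r[-1]:.2f} : {np.degrees(beta_hat[-1]):.1f} deg")
```

With \(\hat{\beta}(x)\) and \(\hat{\kappa}(x)\) available along the spline, the integrals (27), (28) and (31)-(35) are evaluated exactly as before.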
To see how well the shock shapes assumed work in a typical shock fitting solver, we use the shock shapes \(f\), \(af+bf^{2}\), \(af+bf^{2}+cf^{3}\), \(af+bf^{2}+cf^{3}+df^{4}\) and optimise the various parameters \((z_{0},a,b,c,d\) as required) based on the boundary condition at the circular cylinder (the no-penetration condition). We get different values of the optimised parameters to those obtained using Schneider's method (as expected) and we tabulate the maximum density error for each of the shock shapes in Table 5. Here unlike the case for Schneider's method, we see that the maximum density error decreases reasonably as we expand the shock shape in series expansion. The decrease is quite prominent (two orders of magnitude) when we go from \(f\) to \(af+bf^{2}\). The final column of the table is given just to show that the shock shape obtained from numerical solution as a cubic spline solves the problem very accurately. All these results show that the choice of shock shape used for our solution is quite good. ### Axisymmetric Flow: We have also tested Schneider's method given in Section 2 for the axisymmetric case. To obtain an accurate numerical solution for this case, we have converted our plane flow solver into an axisymmetric flow solver using the method given by Yu [17]. This gives us a fifth order accurate discontinuous Galerkin solution for axisymmetric flow. We have used this numerical solution with 7500 elements to calculate the errors in the analytical solution. We used the same shock shape given by Moeckel with \[r=f \tag{40}\] where \(f\) is given by equation (38). Here, \(r\) is the radial coordinate and \(z\) is the longitudinal coordinate. We now use the shock shapes \(f\), \(af+bf^{2}\), \(af+bf^{2}+cf^{3}\), \(af+bf^{2}+cf^{3}+df^{4}\) and optimise the various parameters \((z_{0},a,b,c,d\) as required) again using the error between the exact body shape and the body shape obtained using the method given in Section 2. We get different values of the optimised \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \multicolumn{3}{|c|}{Maximum density error with shock shape using Schneider’s method for plane flow:} \\ \cline{2-5} Mach number & \(f\) & \(\begin{array}{c}af+bf^{2}\\ (shock shape used for all \\ our calculations)\end{array}\) & \(af+bf^{2}+cf^{3}\) & \(af+bf^{2}+cf^{3}+df^{4}\) & cubic spline obtained from numerical solution \\ \hline 4.0 & 1.8E-02 & 1.9E-03 & 1.8E-03 & 1.8E-03 & 1.4E-03 \\ \hline 5.0 & 1.9E-02 & 2.0E-03 & 1.7E-03 & 1.7E-03 & 1.1E-03 \\ \hline 6.0 & 2.1E-02 & 3.8E-03 & 3.5E-03 & 3.4E-03 & 2.4E-03 \\ \hline 7.0 & 2.2E-02 & 5.1E-03 & 4.9E-03 & 4.8E-03 & 3.6E-03 \\ \hline 8.0 & 2.8E-02 & 5.9E-03 & 5.8E-03 & 5.5E-03 & 4.3E-03 \\ \hline \end{tabular} \end{table} Table 4: Plane flow: Maximum density error obtained for various Mach numbers with various shock shapes obtained using an expansion in terms of \(f\) and also using a cubic spline obtained from the numerical solution. 
The error is calculated using a fifth order accurate discontinuous Galerkin method along with overset grids using 9976 elements for accurate shock capturing [15] \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \multicolumn{3}{|c|}{Maximum density error with shock shape with shock fitting using DGM for plane flow:} \\ \cline{2-5} Mach number & \(f\) & \(af+bf^{2}\) & \(af+bf^{2}+cf^{3}\) & \(af+bf^{2}+cf^{3}+df^{4}\) & cubic spline obtained from numerical solution \\ \hline 4.0 & 3.3E-06 & 5.5E-08 & 3.7E-09 & 4.1E-10 & 1.8E-16 \\ \hline 5.0 & 2.4E-06 & 2.3E-08 & 4.4E-09 & 3.6E-10 & 2.1E-16 \\ \hline 6.0 & 4.1E-06 & 4.2E-08 & 3.2E-09 & 3.9E-10 & 1.3E-16 \\ \hline 7.0 & 5.2E-06 & 6.3E-08 & 8.3E-09 & 6.7E-10 & 2.2E-16 \\ \hline 8.0 & 4.8E-06 & 7.7E-08 & 6.9E-09 & 7.8E-10 & 3.2E-16 \\ \hline \end{tabular} \end{table} Table 5: Plane flow: Maximum density error obtained for various Mach numbers using shock fitting with DGM with various shock shapes obtained using an expansion in terms of \(f\) and also using a cubic spline obtained from the numerical solution. The error is calculated using a fifth order accurate discontinuous Galerkin method along with overset grids using 9976 elements for accurate shock capturing [15] parameters to those obtained for plane flow (as expected) and we tabulate the maximum density error for each of the shock shapes in Table 6. Here, we can see that the pattern for maximum density error is similar to that of plane flow decreasing from shock shapes \(f\) to \(af+bf^{2}\) and remaining almost the same for further series expansions. We also show the error obtained using a shock shock obtained from numerical solution in the last column of Table 6. From these results, we can again conclude that using this kind of a shock shape, this might be the best accuracy we can get for axisymmetric flow using the analytical method given in Section 2. ## 5 Conclusion We have calculated the high supersonic flow over a circular cylinder using Schneider's inverse method [1]. We assumed the shock shape of a hyperbola in the form given by Moeckel in [2] and optimized the shock shape by minimising the error between the shape of the body obtained using Schneider's method and the actual shape of the body. We have improved this shock shape further by writing it as a series expansion (\(af+bf^{2}\), where \(f\) gives the shock shape of Moeckel) with unknown coefficients. We have determined these coefficients using the same optimization and obtained a much better approximation for the shock shape. We have used this shock shape to find the solution of the flow in the shock layer using Schneider's method by integrating the equations of motion using the stream function. We have compared the solution obtained with a numerical solution calculated using a fifth order accurate discontinuous Galerkin method which captures the shock very accurately using overset grids [15]. We have found that the maximum error in density is only of the order of \(10^{-3}\) (with the shock shape \(af+bf^{2}\)) demonstrating the accuracy of the solution method. We also found that using more terms in the series expansion for the shock shape (like \(af+bf^{2}+cf^{3}\), \(af+bf^{2}+cf^{3}+df^{4}\)) does not improve the accuracy of the solution. We also used a cubic spline obtained from the numerical solution to find the solution and the maximum error in density is again of the order of \(10^{-3}\). This suggests that this might be the best accuracy we can get using this analytical method. 
We have also used the shock shape given by Moeckel and used it in a series expansion for axisymmetric flow case and optimised the parameters. The density errors again follow the same pattern of plane flow and the maximum density error is again of the order of \(10^{-3}\). ## Appendix A Formulation and Validation of discontinuous Galerkin Method Consider the Euler equations in conservative form as given by \[\frac{\partial\mathbf{Q}}{\partial t}+\frac{\partial\mathbf{F(Q)}}{\partial x }+\frac{\partial\mathbf{G(Q)}}{\partial y}=0\quad\text{in the domain}\quad\Omega \tag{41}\] where \(\mathbf{Q}=\left(\rho,\rho u,\rho v,E\right)^{T}\), \(\mathbf{F(Q)}=u\mathbf{Q}+(0,P,0,Pu)^{T}\) and \(\mathbf{G(Q)}=v\mathbf{Q}+(0,0,P,Pv)^{T}\) with \(P=(\gamma-1)(E-\frac{1}{2}\rho(u^{2}+v^{2}))\) and \(\gamma=1.4\). Here, \(\rho\) is the density, \((u,v)\) is the velocity, \(E\) is the total energy and \(P\) is the pressure. We approximate the domain \(\Omega\) by \(K\) non overlapping elements given by \(\Omega_{k}\). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{3}{*}{ \begin{tabular}{c} Mach \\ number \\ \end{tabular} } & \multicolumn{3}{c|}{Maximum density error with shock shape using Schneider’s method for axisymmetric flow:} \\ \cline{2-6} & \multirow{2}{*}{\(f\)} & \multirow{2}{*}{\(af+bf^{2}\)} & \multirow{2}{*}{\(af+bf^{2}+cf^{3}\)} & \multirow{2}{*}{\(af+bf^{2}+cf^{3}+df^{4}\)} & cubic spline \\ & & & & & obtained from \\ & & & & & numerical solution \\ \hline 4.0 & 8.5E-02 & 4.3E-03 & 4.2E-03 & 3.8E-03 & 1.6E-03 \\ \hline 5.0 & 8.3E-02 & 3.4E-03 & 3.1E-03 & 2.8E-03 & 1.3E-03 \\ \hline 6.0 & 7.4E-02 & 3.2E-03 & 2.8E-03 & 2.6E-03 & 1.1E-03 \\ \hline 7.0 & 7.3E-02 & 2.8E-03 & 2.5E-03 & 2.3E-03 & 1.2E-03 \\ \hline 8.0 & 6.6E-02 & 2.6E-03 & 2.3E-03 & 2.2E-03 & 1.7E-03 \\ \hline \end{tabular} \end{table} Table 6: Axisymmetric Flow: Maximum density error obtained for various Mach numbers with various shock shapes obtained using an expansion in terms of \(f\) and also using a cubic spline obtained from the numerical solution. The error is calculated using a fifth order accurate discontinuous Galerkin solver for axisymmetric flow with 7500 elements. We look at solving (41) using the discontinuous Galerkin method (DGM). We approximate the local solution in an element \(\Omega_{k}\), where \(k\) is the element number, as a polynomial of order \(N\) which is given by: \[Q_{h}^{k}(r,s)=\sum_{i=0}^{N_{p}-1}Q_{i}^{k}\psi_{i}(r,s) \tag{42}\] where \(N_{p}=(N+1)(N+1)\) and \(r\) and \(s\) are the local coordinates. Here, the subscript \(i\) represents the particular degree of freedom, \(h\) represents the grid size, and the superscript \(k\) is the element number. The polynomial basis used \((\psi_{i}(r,s))\) is the tensor product orthonormalized Legendre polynomials of degree \(N\). The number of degrees of freedom are given by \(N_{p}=(N+1)(N+1)\). Now, using \(\psi_{j}(r,s)\) as the test function, the weak form of the equation (41) is obtained as \[\sum_{i=0}^{N_{p}-1}\frac{\partial Q_{i}^{k}}{\partial t}\int_{\Omega_{k}}\psi _{i}\psi_{j}d\Omega+\int_{\partial\Omega_{k}}\hat{F}\psi_{j}ds-\int_{\Omega_{k }}\vec{F}.\nabla\psi_{j}d\Omega=0\quad j=0,\ldots,N_{p}-1 \tag{43}\] where \(\partial\Omega_{k}\) is the boundary of \(\Omega_{k}\), \(\vec{F}=(\mathbf{F(Q),G(Q)})\) and \(\hat{F}=\bar{F^{*}}.\hat{n}\) where \(\bar{F^{*}}\) is the monotone numerical flux at the interface which is calculated using an exact or approximate Riemann solver and \(\hat{n}\) is the unit outward normal. 
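To make the basis in (42) concrete: the tensor-product orthonormalized Legendre polynomials are orthonormal on the reference square, so the element mass matrix \(\int_{\Omega_{k}}\psi_{i}\psi_{j}\,d\Omega\) appearing in (43) reduces, up to the element Jacobian, to the identity. The short check below, on the reference element \([-1,1]^{2}\) with Gauss-Legendre quadrature, is an illustration of the basis only and is not part of the solver.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

N = 4                       # polynomial order (P^4)
Np = (N + 1) * (N + 1)      # degrees of freedom per element

def psi(i, r, s):
    """Tensor-product orthonormalized Legendre basis on [-1, 1]^2."""
    m, n = divmod(i, N + 1)
    Pm = Legendre.basis(m)(r) * np.sqrt((2 * m + 1) / 2.0)
    Pn = Legendre.basis(n)(s) * np.sqrt((2 * n + 1) / 2.0)
    return Pm * Pn

# (N+1)-point Gauss-Legendre quadrature is exact for the degree-2N products below.
q, w = leggauss(N + 1)
R, S = np.meshgrid(q, q, indexing="ij")
W = np.outer(w, w)

M = np.zeros((Np, Np))
for i in range(Np):
    for j in range(Np):
        M[i, j] = np.sum(W * psi(i, R, S) * psi(j, R, S))

print(np.allclose(M, np.eye(Np)))   # True: the basis is orthonormal on the reference element
```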
We have used the Lax-Friedrichs numerical flux for all our calculations unless otherwise specified. This is termed the \(\mathbf{P}^{N}\) based discontinuous Galerkin method. Equation (43) is integrated using an appropriate Gauss-Legendre quadrature and is discretized in time using the fifth order Runge-Kutta time discretization given in [18] unless otherwise specified. To control spurious oscillations which occur near discontinuities, a limiter is used with a troubled cell indicator (a shock detector). We have used the KXRCF troubled cell indicator [19] and the compact subcell WENO (CSWENO) limiter proposed in [20] for all our calculations. To capture the shock accurately, we have used overset grids as proposed in [15]. The communication between overset grids occurs using the procedure given in [21]. The shock capturing procedure with overset grids is briefly given below.

**Shock capturing procedure with overset grids:** The step by step procedure is as follows:

**Step 1:** Run the solver on a coarse grid with a given troubled cell indicator and limiter to steady state and obtain the solution.

**Step 2:** Look at the troubled cells (with a reliable shock detector [19]) to locate the discontinuities (shocks) that occur in the solution. The troubled cells give us a good idea of the location of the discontinuities.

**Step 3:** Construct an overset grid conforming to the computational domain which is refined in a direction perpendicular to the discontinuities such that they are approximately parallel to a grid line.

**Step 4:** Using this overset grid, we rerun the solver with the coarse grid solution as the initial condition. While running the solver, we use the troubled cell indicator and the limiter only on the overset grid. We also use a high resolution numerical flux on the overset grid to capture the shock accurately. We have used the SLAU2 [22] numerical flux in the overset grid and the less expensive Lax-Friedrichs flux elsewhere.

Using this procedure, we obtain a more accurate solution with the discontinuities approximately aligned to a grid line.

**Validation of the solver using the Isentropic Vortex Problem:** Consider the two-dimensional Euler equations given by equation (41) in the domain \([0,10]\times[-5,5]\) for the Isentropic Euler Vortex problem suggested in [23] as a test case. The analytical solution is given by: \(\rho=\left(1-\left(\frac{\gamma-1}{16\gamma\pi^{2}}\right)\beta^{2}e^{2(1-r^{2})}\right)^{\frac{1}{\gamma-1}}\), \(u=1-\beta e^{(1-r^{2})}\frac{y-y_{0}}{2\pi}\), \(v=\beta e^{(1-r^{2})}\frac{x-x_{0}-t}{2\pi}\), and \(p=\rho^{\gamma}\), where \(r=\sqrt{(x-x_{0}-t)^{2}+(y-y_{0})^{2}}\), \(x_{0}=5\), \(y_{0}=0\), \(\beta=5\) and \(\gamma=1.4\). Initialization is done with the exact solution at \(t=0\) and periodic boundary conditions are used at the edges of the domain. The solution is obtained for various orders making sure that the spatial (DGM) and temporal (Runge-Kutta) schemes are of the same order. The errors in density and numerical orders of accuracy are calculated at \(t=2\) for various grid sizes and are presented in Table 7. We can clearly see that the theoretical order of accuracy of the scheme is obtained by the DG solver.
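For reference, the exact vortex fields above are cheap to evaluate; the snippet below is an independent sketch (not the DG solver itself, and the grid resolution is an arbitrary choice) that samples them on a grid. The \(L_{\infty}\) density error at \(t=2\) is then the maximum of \(|\rho_{\mathrm{numerical}}-\rho_{\mathrm{exact}}|\) over such sample points.

```python
import numpy as np

gamma, beta, x0, y0 = 1.4, 5.0, 5.0, 0.0

def vortex_exact(x, y, t):
    """Exact isentropic-vortex fields used in the accuracy study."""
    r2 = (x - x0 - t) ** 2 + (y - y0) ** 2
    e = np.exp(1.0 - r2)
    rho = (1.0 - (gamma - 1.0) / (16.0 * gamma * np.pi ** 2) * beta ** 2 * e ** 2) ** (1.0 / (gamma - 1.0))
    u = 1.0 - beta * e * (y - y0) / (2.0 * np.pi)
    v = beta * e * (x - x0 - t) / (2.0 * np.pi)
    p = rho ** gamma
    return rho, u, v, p

# Sample the exact density at t = 2 on a uniform grid over [0, 10] x [-5, 5];
# the L_inf density error is max|rho_numerical - rho_exact| at these points.
x = np.linspace(0.0, 10.0, 201)
y = np.linspace(-5.0, 5.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
rho_exact, _, _, _ = vortex_exact(X, Y, t=2.0)
print(rho_exact.min(), rho_exact.max())
```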
2301.13829
On the Deepest Cycle of a Random Mapping
Let $\mathcal{T}_n$ be the set of all mappings $T:\{1,2,\ldots,n\}\to\{1,2,\ldots,n\}$. The corresponding graph of $T$ is a union of disjoint connected unicyclic components. We assume that each $T\in\mathcal{T}_n$ is chosen uniformly at random (i.e., with probability $n^{-n}$). The cycle of $T$ contained within its largest component is called the deepest one. For any $T\in\mathcal{T}_n$, let $\nu_n=\nu_n(T)$ denote the length of this cycle. In this paper, we establish the convergence in distribution of $\nu_n/\sqrt{n}$ and find the limits of its expectation and variance as $n\to\infty$. For $n$ large enough, we also show that nearly $55\%$ of all cyclic vertices of a random mapping $T\in\mathcal{T}_n$ lie in the deepest cycle and that a vertex from the longest cycle of $T$ does not belong to its largest component with approximate probability $0.075$.
Ljuben Mutafchiev, Steven Finch
2023-01-31T18:23:21Z
http://arxiv.org/abs/2301.13829v3
# The Deepest Cycle of a Random Mapping: a Problem Proposed by Steven Finch ###### Abstract Let \(\mathcal{T}_{n}\) be the set of all mappings \(T:\{1,2,\ldots,n\}\to\{1,2,\ldots,n\}\). The corresponding graph of \(T\) is a union of disjoint connected unicyclic components. We assume that each \(T\in\mathcal{T}_{n}\) is chosen uniformly at random (i.e., with probability \(n^{-n}\)). The deepest cycle of \(T\) is contained within its largest component. Let \(\nu_{n}=\nu_{n}(T)\) denote the length of the deepest cycle in \(T\in\mathcal{T}_{n}\). In this paper, we find the limits of the expectation and variance of \(\nu_{n}/\sqrt{n}\) as \(n\to\infty\). For \(n\) large enough, we also show that nearly \(55\%\) of all cyclic vertices of a random mapping \(T\in\mathcal{T}_{n}\) lie in its deepest cycle and that a vertex from the longest cycle of \(T\) does not belong to its largest component with approximate probability \(0.075\). **Mathematics Subject Classifications:** 60C05, 05C80 **Key words:** random mapping, functional graph, deepest cycle ## 1 Introduction and Statement of the Main Result We start with some notation that will be used freely in this note. For a positive integer \(n\), let \(\mathcal{T}_{n}\) denote the set of all mappings \(T:[n]\to[n]\), where \([n]:=\{1,2,\ldots,n\}\). It is clear that the cardinality \(|\mathcal{T}_{n}|\) of \(\mathcal{T}_{n}\) is \(n^{n}\). A mapping \(T\in\mathcal{T}_{n}\) corresponds to a directed graph \(G_{T}\), called functional digraph, with edges \((i,T(i)),i\in[n]\), where every vertex \(i\in[n]\) has out-degree \(1\). \(G_{T}\) is a union of disjoint connected components. A vertex \(i\) is called cyclic if, for the \(m\)-fold composition \(T^{(m)}\) of \(T\), we have \(T^{(m)}(i)=i\) for some \(m\geq 1\). Since the vertices of \(G_{T}\) have out-degree \(1\), each component contains a unique directed cycle and directed trees attached to the cyclic vertices. Let \(\lambda_{n}=\lambda_{n}(T),T\in\mathcal{T}_{n}\), denote the number of cyclic vertices in \(G_{T}\). We introduce the uniform probability measure \(\mathbb{P}\) on the set \(\mathcal{T}_{n}\). That is, we assign the probability \(n^{-n}\) to each \(T\in{\cal T}_{n}\). In this way, \(\lambda_{n}\), as well as any other numerical characteristic of \(G_{T}\), becomes a random variable (or, a statistic in the sense of random generation of mappings from \({\cal T}_{n}\)). The size of the largest component of \(G_{T}\) will be further denoted by \(\mu_{n}=\mu_{n}(T)\). The cycle contained within the largest component of \(G_{T}\) is called the _deepest_ one. Let \(\nu_{n}=\nu_{n}(T)\) denote its length. In [5], Finch suggests to study the asymptotic behavior of \(\nu_{n}\) as \(n\rightarrow\infty\). The main goal of this work is to find the asymptotic of the mean and variance of the size of the deepest cycle \(\nu_{n}\). There is a substantial probabilistic literature on random mappings. Here we give only references to the popular monographs [18, 12, 2]. For large \(n\), some properties of the functional digraphs \(G_{T},T\in{\cal T}_{n}\), are also used in analysis of algorithms. For example, the cyclic structure of random mappings is closely related to algorithms for integer factorization and, in particular, to the Pollard's \(\rho\)-algorithm; see, e.g., [15, 4, 7, 13]. Random mapping statistics are also relevant to some algorithms for generic attacks on iterated hash constructions; see [3]. 
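To make the objects above concrete, the deepest-cycle length \(\nu_{n}\) is easy to simulate: draw \(T\) uniformly from \(\mathcal{T}_{n}\), take the largest component of \(G_{T}\), and follow \(T\) from any of its vertices until a vertex repeats. The small Monte Carlo sketch below is our own illustration (the sample sizes are arbitrary); for large \(n\), the sample mean of \(\nu_{n}/\sqrt{n}\) should approach the limiting constant \(\approx 0.6884\) obtained in Theorem 1(i) below.

```python
import numpy as np

def deepest_cycle_length(T):
    """nu_n(T): length of the unique cycle lying in the largest component of G_T."""
    n = len(T)
    # Union-find on the edges {i, T(i)} gives the connected components of G_T.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        ri, rj = find(i), find(int(T[i]))
        if ri != rj:
            parent[ri] = rj
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    largest_root = max(sizes, key=sizes.get)
    # Iterate T from any vertex of the largest component; the walk ends on its cycle.
    v = next(i for i in range(n) if find(i) == largest_root)
    seen, step = {}, 0
    while v not in seen:
        seen[v] = step
        v = int(T[v])
        step += 1
    return step - seen[v]

rng = np.random.default_rng(1)
n, trials = 2000, 300
sample = [deepest_cycle_length(rng.integers(0, n, size=n)) / np.sqrt(n) for _ in range(trials)]
print(np.mean(sample))   # slowly approaches ~0.6884 as n grows (Theorem 1(i) below)
```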
Throughout the paper the notation \(\mathbb{E}\) and \(\mathbb{V}ar\) stand for the expectation and variance with respect to the uniform probability measure \(\mathbb{P}\) on the set \({\cal T}_{n}\), respectively. Below we state our main result on the deepest cycle of a random mapping. **Theorem 1**: _Let \(E_{1}(s)=\int_{s}^{\infty}\frac{e^{-t}}{t}dt,s>0,\) be the exponential integral function. Then we have_ _(i)_ \[\lim_{n\rightarrow\infty}\frac{1}{\sqrt{n}}\mathbb{E}(\nu_{n})=\frac{1}{\sqrt{ 2}}\int_{0}^{\infty}\frac{\exp{(-s-\frac{1}{2}E_{1}(s))}}{\sqrt{s}}ds\approx 0.6884;\] _(ii)_ \[\lim_{n\rightarrow\infty}\frac{1}{n}\mathbb{V}ar(\nu_{n})\approx 0.2839.\] The proof of this theorem is given in Section 2. In Section 3, we consider a sampling experiment: we assume that a vertex of the random functional digraph \(G_{T},T\in{\cal T}_{n}\), is chosen uniformly at random from the set \([n]\). We give interpretations arising from this random choice. We conclude Section 3 with an open problem. ## 2 Proof of Theorem 1 A mapping is said indecomposable (or connected) if it possesses exactly one component. Our proof is based on an asymptotic result for the cycle length of a random indecomposable mapping due to R\(\acute{e}\)nyi [17] and on properties of the limiting distribution function of the size of the largest component \(\mu_{n}\). Consider the subset \(\mathcal{T}_{n}^{\prime}\subset\mathcal{T}_{n}\) of the indecomposable mappings. The cardinality \(|\mathcal{T}_{n}^{\prime}|\) of the set \(\mathcal{T}_{n}^{\prime}\) was determined by Katz [10], who showed that \[|\mathcal{T}_{n}^{\prime}|=(n-1)!\sum_{k=0}^{n-1}\frac{n^{k}}{k!}.\] Using the normal approximation of the Poisson distribution with mean \(n\) (see, e.g., [9, Section 5.10]), it is easy to see that \[\lim_{n\to\infty}e^{-n}|\mathcal{T}_{n}^{\prime}|=1/2. \tag{1}\] We introduce the uniform probability measure \(\mathcal{P}\) on the set \(\mathcal{T}_{n}^{\prime}\). Let \(\nu_{n}^{\prime}=\nu_{n}^{\prime}(T),T\in\mathcal{T}_{n}^{\prime}\), denote the count of the cyclic vertices in \(T\). R\(\acute{e}\)nyi [17] showed that, with respect to \(\mathcal{P}\), \(\nu_{n}^{\prime}/\sqrt{n}\) converges in distribution to the random variable \(|\xi|\), where \(\xi\) has standard normal distribution. In addition, with the aid of (1), he established the following local limit theorem: \[\mathcal{P}\left(\frac{\nu_{n}^{\prime}}{\sqrt{n}}=u\right)=\sqrt{\frac{2}{n \pi}}e^{-u^{2}/2}(1+o(1)),\quad n\to\infty, \tag{2}\] where \(0<u=o(n^{1/6})\). Let \(\mathcal{E}\) denote the expectation with respect to the measure \(\mathcal{P}\). Using (2), one can represent \(\mathcal{E}(\nu_{n}^{\prime})\) and \(\mathcal{E}(\nu_{n}^{\prime 2})\) as Riemann sums with step size \(1/\sqrt{n}\) of the integrals \[\sqrt{\frac{2}{\pi}}\int_{0}^{\infty}ue^{-u^{2}/2}du=\sqrt{\frac{2}{\pi}} \quad\text{and}\quad\sqrt{\frac{2}{\pi}}\int_{0}^{\infty}u^{2}e^{-u^{2}/2}du=1,\] respectively. Hence \[\mathcal{E}(\nu_{n}^{\prime})\sim\sqrt{\frac{2n}{\pi}},\quad\text{and}\quad \mathcal{E}(\nu_{n}^{\prime 2})\sim n. \tag{3}\] Now, we proceed to the preliminaries concerning the largest component of a random mapping. The limiting distribution of the size \(\mu_{n}\) of the largest component was first determined by Kolchin [11] (see also [12, Section 1.13]). Arratia et al. [2] developed a unifying approach to the study of the component spectrum of a large parametric class of decomposable combinatorial structures, called logarithmic structures. 
These structures satisfy a condition, called there logarithmic. It introduces a dependency on a parameter \(\theta>0\). It is shown [2, Section 6.1] that random mappings satisfy the logarithmic condition with \(\theta=1/2\). Therefore, in the material from [2, Sections 4.2 and 5.5] that we shall present further, we restrict ourselves to this value of \(\theta\). Consider the random variable \(\eta\) whose probability density function \(p(x),x>0\), is given by \[p(x)=\frac{e^{-\gamma x}}{\sqrt{\pi x}}\left(1+\sum_{k=1}^{\infty}\frac{(-1)^{k }}{2^{k}k!}\int\ldots\int_{I_{k}(x)}(1-\sum_{j=1}^{k}y_{j})^{-1/2}\frac{dy_{1} \ldots dy_{k}}{y_{1}\ldots y_{k}}\right), \tag{4}\] where \[I_{k}(x):=\{(y_{1},\ldots,y_{k}):y_{1}>x^{-1},\ldots,y_{k}>x^{-1},\sum_{j=1}^{k}y_ {j}<1\},\] and \(\gamma\approx 0.5772\) denotes Euler's constant. From [2, Theorem 4.6] it follows that \[\varphi(s):=\int_{0}^{\infty}e^{-sx}p(x)dx=\exp\left(-\frac{1}{2}\int_{0}^{1} \frac{1-e^{-sy}}{y}dy\right)=\frac{e^{-\gamma/2}}{\sqrt{s}}e^{-\frac{1}{2}E_{1 }(s)}, \tag{5}\] where \(E_{1}(s)\) denotes the exponential integral function introduced in Theorem 1(i). The last equality in (5) follows from the classical identity \[\int_{0}^{s}\frac{1-e^{-y}}{y}dy=E_{1}(s)+\log s+\gamma,\quad s>0;\] see, e.g., [1, Section 5.1]. Let us now state the integral limit theorem for the size of the largest component of a random mapping: [2, Lemma 5.7]: **Proposition 1**: _As \(n\to\infty\), \(\mu_{n}/n\) converges in distribution to a random variable with distribution function \(F\) given by_ \[F(x)=e^{\gamma/2}\sqrt{\pi/x}p(1/x),\quad x>0, \tag{6}\] _where \(p(x)\) is as defined in (4)._ Watterson [20] observed that \(p(x)\) satisfies the delay differential equation \[xp^{\prime}(x)+\frac{1}{2}p(x)+\frac{1}{2}p(x-1)=0\qquad\mbox{for}\quad x>1, \quad p(x)=\frac{e^{-\gamma/2}}{\sqrt{\pi x}}\quad\mbox{for}\quad 0<x\leq 1. \tag{7}\] From (6) and (7) one can easily deduce the limiting probability density function \(f(x)\) of \(\mu_{n}/n\). We have \[f(x)=\frac{1}{2}e^{\gamma/2}\sqrt{\pi}x^{-3/2}p\left(\frac{1}{x}-1\right), \quad 0<x\leq 1. \tag{8}\] Arratia et al. [2, Lemma 5.9] derive also a local limit theorem for \(\mu_{n}\). It is stated as follows. **Proposition 2**: _Suppose that \(m\leq n\) satisfies \(\frac{m}{n}\to x\in(0,1)\) as \(n\to\infty\). Then_ \[\mathbb{P}(\mu_{n}=m)=\frac{1}{n}f(x)(1+o(1)), \tag{9}\] _where \(f(x)\) is given by (8)._ _Proof of part (i)_. Recall that \(\nu_{n}^{\prime}\) was the count of the cyclic vertices of a random indecomposable mapping of \([n]\) into itself. In the computation of \(\mathbb{E}(\nu_{n})\), we shall use conditional expectations. First, we note that, for \(m\leq n\), we have \(\mathbb{E}(\nu_{n}|\mu_{n}=m)=\mathcal{E}(\nu_{m}^{\prime})\). Decomposing the expectation of \(\nu_{n}\) into a weighted sum of conditional expectations (see, e.g., [9, Section 3.7]), we obtain \[\mathbb{E}(\nu_{n})=\sum_{m=1}^{n}\mathbb{E}(\nu_{n}|\mu_{n}=m)\mathbb{P}(\mu_ {n}=m)=\sum_{m=1}^{n}\mathcal{E}(\nu_{m}^{\prime})\mathbb{P}(\mu_{n}=m). \tag{10}\] Dividing both sides of (10) by \(\sqrt{n}\) and setting \(m=\lfloor xn\rfloor\), \(0_{\mbox{\rm i}}\mbox{\rm x}_{\mbox{\rm i}}1\), from the first asymptotic equivalence in (3) and (9) we observe that the right-hand side of (10) represents the Riemann sum of the integral \[I:=\sqrt{\frac{2}{\pi}}\int_{0}^{1}\sqrt{x}f(x)dx \tag{11}\] with step size \(1/n\). It can be easily seen that the error term in this representation becomes negligible as \(n\to\infty\). 
Hence \[\frac{1}{\sqrt{n}}\mathbb{E}(\nu_{n})=I+o(1). \tag{12}\] To complete the proof, it remains to evaluate the integral \(I\). We first replace \(f(x)\) by its expression (8). Then, we set in (11) \(y=\frac{1}{x}-1\). Since \(p(x)\) is the probability density function of the random variable \(\eta>0\), we can rewrite (11) as follows: \[I=\frac{e^{\gamma/2}}{\sqrt{2}}\int_{0}^{\infty}\frac{p(y)}{1+y}dy=\frac{e^{ \gamma/2}}{\sqrt{2}}\mathbf{E}((1+\eta)^{-1}), \tag{13}\] where \(\mathbf{E}\) denotes the expectation with respect to the Lebesgue measure on \([0,\infty)\). Recall that the Laplace transform \(\varphi(s)\) of \(\eta\) is given by both right side expressions of (5). Furthermore, an obvious computation shows that \[\int_{0}^{\infty}e^{-s}\varphi(s)ds=\mathbf{E}((1+\eta)^{-1}). \tag{14}\] Combining the last expression for \(\varphi(s)\) in (5) with (13) and (14), we obtain the required representation of \(I\). Using Mathematica, version 12.0, Finch [6] shows that \[I=0.6884050874956\ldots. \tag{15}\] Combining (12) and (15) completes the proof of part (i). _Proof of part(ii)_. In the proof we use again the local limit approximation (9) and the second asymptotic equivalence of (3). In the same way as in part (i), we obtain \[\lim_{n\to\infty}\frac{1}{n}\mathbb{E}(\nu_{n}^{2})=\int_{0}^{1}xf(x)dx.\] Since \(f(x)\) is the limiting probability density function of \(\mu_{n}/n\), we observe that \[\lim_{n\to\infty}\frac{1}{n}\mathbb{E}(\nu_{n}^{2})=\lim_{n\to\infty}\frac{1}{n} \mathbb{E}(\mu_{n})=0.7578230112\ldots,\] where the last numerical value was found by Gourdon [8, p.152] (see also [2, Table 5.1]). Since \(\frac{1}{n}\mathbb{V}ar(\nu_{n})=\frac{1}{n}(\mathbb{E}(\nu_{n}^{2})-( \mathbb{E}(\nu_{n}))^{2})\), the numerical result of part (ii) follows from (15) and the proof is complete. ## 3 Concluding Remarks Suppose that a vertex \(i\in[n]\) of the graph \(G_{T},T\in\mathcal{T}_{n}\), is chosen uniformly at random. The probability that \(i\) possesses a certain property (e.g., \(i\) is a cyclic vertex, \(i\) belongs to the largest component of \(G_{T}\), etc.) can be computed directly, using the total probability formula. For example, the probability that a randomly chosen vertex is cyclic equals \(\sum_{k=1}^{n}\frac{k}{n}\mathbb{P}(\lambda_{n}=k)=\frac{1}{n}\mathbb{E}( \lambda_{n})\) (recall that \(\lambda_{n}\) is the total number of cyclic vertices in \(G_{T}\)). In a similar way, one can interpret the ratio \(\mathbb{E}(\nu_{n})/\mathbb{E}(\lambda_{n})\) as the limiting conditional probability that a randomly chosen cyclic vertex belongs to the largest component (deepest cycle). It is well-known that \(\mathbb{E}(\lambda_{n})\sim\sqrt{\pi n/2}\) as \(n\to\infty\); for example see [18, Section 6.3]. Combining this asymptotic equivalence with the numerical result of Theorem 1(i), we obtain the approximate value of this probability, namely, \[\lim_{n\to\infty}\frac{\mathbb{E}(\nu_{n})}{\mathbb{E}(\lambda_{n})}=\lim_{n \to\infty}\sqrt{\frac{2}{\pi n}}\mathbb{E}(\nu_{n})\approx 0.5493. \tag{16}\] Now, consider the length \(\kappa_{n}\) of the longest cycle of a random mapping from \(\mathcal{T}_{n}\). Purdom and Williams [16] showed that \(\lim_{n\to\infty}\frac{1}{\sqrt{n}}\mathbb{E}(\kappa_{n})\approx 0.7825\). Hence the limiting conditional probability that a randomly chosen cyclic vertex belongs to the longest cycle is \[\lim_{n\to\infty}\frac{\mathbb{E}(\kappa_{n})}{\mathbb{E}(\lambda_{n})}\approx 0.6243. 
\tag{17}\] The difference between (17) and (16) is approximately equal to \(0.075\). It can be interpreted as the approximate limiting probability that the longest cycle and the largest component of \(G_{T}\) are disjoint. Finch [5] called the component containing the longest cycle of a random mapping _the richest_ component. In this terminology, the difference \(0.075\) equals the approximate limiting probability that the richest component is not the largest one. The problem concerning the average size of the richest component still remains unsolved. In our last remark, we propose another open problem related to the size \(\tau_{n}\) of the largest tree in a random mapping from \(\mathcal{T}_{n}\). Since \(\tau_{n}\) does not exceed the size of the component to which the largest tree belongs and \(\mu_{n}\) is the maximum component size of \(T\in\mathcal{T}_{n}\), for all \(n\geq 1\), we have \(\tau_{n}\leq\mu_{n}\). The limiting distribution function of \(\tau_{n}/n\) as \(n\to\infty\) was first determined by Stepanov [19]. There is another probabilistic proof of this result due to Pavlov [14] (see also [12, Section 3.3]). The following natural question arises: what can be said about the probability that the largest tree is a subgraph of the largest component of a random mapping? It seems the limit theorems from [19, 14] would be helpful to obtain an asymptotic estimate for this probability.

## Acknowledgements

I would like to thank Steven Finch for his support in this study. I am especially grateful to him for the numerical evaluation of the integral in Theorem 1(i) and for bringing to my attention a numerical result obtained by X. Gourdon [8]. I am also grateful to Emil Kamenov and Mladen Savov for helpful discussions. This work was partially supported by Project KP-06-N32/8 with the Bulgarian Ministry of Education and Science.
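As a quick numerical cross-check of the constants used above, the integral of Theorem 1(i) and the variance limit of Theorem 1(ii) can be evaluated directly. The sketch below uses SciPy's `exp1` and adaptive quadrature (our own choice of tools, not part of the paper's derivation), together with Gourdon's value \(0.7578230112\ldots\) for \(\lim_{n\to\infty}\mathbb{E}(\mu_{n})/n\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

# Theorem 1(i): (1/sqrt(2)) * int_0^inf exp(-s - E_1(s)/2) / sqrt(s) ds
integrand = lambda s: np.exp(-s - 0.5 * exp1(s)) / np.sqrt(s)
I, _ = quad(integrand, 0.0, np.inf)
I /= np.sqrt(2.0)
print(I)                       # ~0.68840508...

# Theorem 1(ii): lim Var(nu_n)/n = lim E(mu_n)/n - I^2, with Gourdon's constant
print(0.7578230112 - I ** 2)   # ~0.2839
```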
2309.07918
Unified Human-Scene Interaction via Prompted Chain-of-Contacts
Human-Scene Interaction (HSI) is a vital component of fields like embodied AI and virtual reality. Despite advancements in motion quality and physical plausibility, two pivotal factors, versatile interaction control and the development of a user-friendly interface, require further exploration before the practical application of HSI. This paper presents a unified HSI framework, UniHSI, which supports unified control of diverse interactions through language commands. This framework is built upon the definition of interaction as Chain of Contacts (CoC): steps of human joint-object part pairs, which is inspired by the strong correlation between interaction types and human-object contact regions. Based on the definition, UniHSI constitutes a Large Language Model (LLM) Planner to translate language prompts into task plans in the form of CoC, and a Unified Controller that turns CoC into uniform task execution. To facilitate training and evaluation, we collect a new dataset named ScenePlan that encompasses thousands of task plans generated by LLMs based on diverse scenarios. Comprehensive experiments demonstrate the effectiveness of our framework in versatile task execution and generalizability to real scanned scenes. The project page is at https://github.com/OpenRobotLab/UniHSI .
Zeqi Xiao, Tai Wang, Jingbo Wang, Jinkun Cao, Wenwei Zhang, Bo Dai, Dahua Lin, Jiangmiao Pang
2023-09-14T17:59:49Z
http://arxiv.org/abs/2309.07918v4
# Unified Human-Scene Interaction via ###### Abstract Human-Scene Interaction (HSI) is a vital component of fields like embodied AI and virtual reality. Despite advancements in motion quality and physical plausibility, two pivotal factors, versatile interaction control and the development of a user-friendly interface, require further exploration before the practical application of HSI. This paper presents a unified HSI framework, _UniHSI_, which supports unified control of diverse interactions through language commands. This framework is built upon the definition of interaction as _Chain of Contacts_ (CoC): steps of human joint-object part pairs, which is inspired by the strong correlation between interaction types and human-object contact regions. Based on the definition, UniHSI constitutes a _Large Language Model (LLM) Planner_ to trans Figure 1: UniHSI supports unified and long-horizon control following language commands, enjoying impressive features like fine-granularity control, diverse interactions with the same object, and multi-obj interaction. late language prompts into task plans in the form of CoC, and a _Unified Controller_ that turns CoC into uniform task execution. To facilitate training and evaluation, we collect a new dataset named _ScenePlan_ that encompasses thousands of task plans generated by LLMs based on diverse scenarios. Comprehensive experiments demonstrate the effectiveness of our framework in versatile task execution and generalizability to real scanned scenes. The project page is at [https://github.com/OpenRobotLab/UniHSI](https://github.com/OpenRobotLab/UniHSI). ## 1 Introduction Human-Scene Interaction (HSI) constitutes a crucial element in various applications, including embodied AI and virtual reality. Despite the great efforts in this domain to promote motion quality (Holden et al., 2017; Starke et al., 2019, 2020; Hassan et al., 2021; Zhao et al., 2022; Hassan et al., 2021; Wang et al., 2022) and physical plausibility (Holden et al., 2017; Starke et al., 2019, 2020; Hassan et al., 2021; Zhao et al., 2022; Hassan et al., 2021; Wang et al., 2022), two key factors, versatile interaction control and the development of a user-friendly interface, are yet to be explored before HSI can be put into practical usage. This paper aims to provide an HSI system that supports versatile interaction control through language commands, one of the most uniform and accessible interfaces for users. Such a system requires: 1) Aligning language commands with precise interaction execution, 2) Unifying diverse interactions within a single model to ensure scalability. To achieve this, the initial effort involves the uniform definition of different interactions. Inspired by Hassan et al. (2021), we propose that interaction itself contains a strong prior in the form of human-object contact regions. For example, in the case of "lie down on the bed", it can be interpreted as "first the pelvis contacting the mattress of the bed, then the head contacting the pillow". To this end, we formulate interaction as steps of human joint-object part contact pairs, which we refer to as _Chain of Contacts_. Unlike previous contact-driven methods, which are limited to supporting specific interactions through manual design, our interaction definition is generalizable to versatile interactions and capable of modeling multi-round transitions. The recent advancements in Large Language Models have made it possible to translate language commands into the Chain of Contacts. 
The structured formulation then can be uniformly processed for the downstream controller to execute. Following the above formulation, we propose **UniHSI**, the first **Un**ified physical **HSI** framework with language commands as inputs. UniHSI consists of a high-level **LLM Planner** to translate language inputs into the task plans in the form of Chain of Contacts and a low-level **Unified Controller** for executing these plans. Combining language commands and background information such as body joint names and object part layout, we harness prompt engineering techniques to instruct LLMs to plan interaction step by step. To facilitate the unified execution, we devise the TaskParser as the core of the Unified Controller. Following the Chain of Contacts, the TaskParser collects information including joint poses and object point clouds from the physical environment, then formulates them into uniform task observations and task objectives. As illustrated in Fig. 1, the design of the Unified Controller models whole-body joints and arbitrary parts of objects in the scenarios to enable fine-granularity control and multi-object interaction. With different language commands, we can generate diverse interactions with the same object. Unlike previous methods that only model a limited horizon of interactions, like "sitting down", we design the TaskParser to evaluate the completion of the current steps and sequentially fetch the next step, resulting in multi-round and long-horizon transition control. The Unified control leverages the adversarial motion prior framework (Peng et al., 2021) that uses a motion discriminator for realistic motion synthesis and a physical simulation (Makoviychuk et al., 2021) to ensure physical plausibility. Another impressive feature of our framework is the training is interaction annotation-free. Previous methods typically require datasets that capture both target objects and the corresponding motion sequences, which demand numerous laboring. In contrast, we leverage the interaction knowledge of LLMs to generate interaction plans. It significantly reduces the annotation requirements and makes versatile interaction training feasible. To this end, we create a novel dataset named **ScenePlan**. It encompasses thousands of interaction plans based on scenarios constructed from PartNet (Mo et al., 2019) and ScanNet (Dai et al., 2017) datasets. We conduct comprehensive experiments on ScenePlan. The results illustrate the effectiveness of the model in versatile interaction control and good generalizability on real scanned scenarios. ## 2 Related Works **Human motion synthesis.** How to synthesize realistic human behavior is a long-standing topic. Most existing methods focus on promoting the quality and diversity of humanoid movements (Barsoum et al., 2018; Harvey et al., 2020; Pavllo et al., 2018; Yan et al., 2019; Zhang et al., 2022; Tevet et al., 2023; Zhang et al., 2023b) but do not consider scene influence. Recently, there has been a growing interest in synthesizing motion with human-scene interactions, driven by its applications in various applications like embodied AI and virtual reality. Many previous methods (Holden et al., 2017; Starke et al., 2019; Zhang et al., 2021; Hassan et al., 2021; Zhao et al., 2022; Hassan et al., 2021; Wang et al., 2022; Zhang et al., 2022b; Wang et al., 2022b) use data-driven kinematic models to generate static or dynamic interactions. 
These methods are typically inferior in physical plausibility and prone to synthesizing motions with artifacts, such as penetration, floating, and sliding. The need for additional post-processing to mitigate these artifacts hinders the real-time applicability of these frameworks. On the other hand, recent advancements in physics-based methods (Peng et al., 2021; Peng et al., 2022; Hassan et al., 2023; Juravsky et al., 2022; Pan et al., 2023) show promising potential in ensuring physical plausibility through the utilization of physics-aware simulators. However, these methods exhibit limitations in the following aspects: 1) Previous physics-based methods for human-scene interactions (Hassan et al., 2023; Pan et al., 2023) require distinct policy networks for each task, hindering their ability to learn versatile interactions within a unified controller. 2) These methods primarily focus on simple action-based control, such as walking and sitting, while overlooking finer-grained details of interaction control. 3) These methods heavily rely on motion sequences with annotated human-scene interactions, posing challenges in acquiring high-quality motion sequences of this nature. In contrast, our UniHSIre-designs different types of human-scene interactions into a uniform representation that can be prompted by world knowledge extracted from our high-level LLM Planner. This design enables us to train a unified controller with versatile interaction skills without the need for annotated human-scene interaction motion sequences. The detailed comparisons of key features are listed in Tab. 1. **Languages in motion synthesis.** Incorporating language understanding into human motion control has become a recent research focus. Natural language serves as a universal and human-friendly interface for motion animation, significantly improving efficiency for using motion generation models in real-world applications. Zhang et al. (2022a); Chen et al. (2023) incorporated a diffusion model into the text-to-motion generation framework, achieving remarkable results. Tevet et al. (2022; 2023); Zhang et al. (2023a) leveraged CLIP (Radford et al., 2021) as a robust text encoder in the generation process. Furthermore, Zhang et al. (2023b); Jiang et al. (2023) integrated large language models for unified motion synthesis and multimodal controls. However, these methods primarily focus on scene-agnostic motion synthesis. Generating human-scene interactions using language commands poses additional challenges because the output movements must align with the commands and be coherent with the environment. Zhao et al. (2022) generates static interaction gestures through rule-based mapping of language commands to specific tasks. Juravsky et al. (2022) utilized BERT (Devlin et al., 2019) to infer language commands, but \begin{table} \begin{tabular}{c c c c c c} \hline \hline Methods & \begin{tabular}{c} Unified \\ Interaction \\ \end{tabular} & \begin{tabular}{c} Language \\ Input \\ \end{tabular} & \begin{tabular}{c} Lang-horizon \\ Transition \\ \end{tabular} & \begin{tabular}{c} Interaction \\ Attention-free \\ \end{tabular} & \begin{tabular}{c} Control \\ Joints \\ \end{tabular} & \begin{tabular}{c} Multi-object \\ Interaction \\ \end{tabular} \\ \hline NSM Starke et al. (2019) & & & ✓ & & 3 (belvis, hands) & ✓ \\ SAM Hassan et al. (2021) & & & & 1 (pelvis) & \\ COLICM Zhang et al. (2022b) & & & & & 3 (pelvis, hands) & ✓ \\ HUMANSE Wang et al. (2022b) & ✓ & ✓ & & & - \\ ScenrollDiffer Huang et al. 
(2023) & ✓ & ✓ & & & - \\ PADL Juravsky et al. (2022) & & ✓ & ✓ & ✓ & \\ InterPhys Hassan et al. (2023) & & & & & 4 (pelvis, head, hands) & \\ \hline Ours & ✓ & ✓ & ✓ & ✓ & 15 (whole-body) & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Key Feature Comparisons between UniHSI and previous methods their method requires pre-defined tasks and different low-level policies for task execution. Wang et al. (2022b) unified various tasks in a CVAE (Yao et al., 2022) network with a language interface, but their performance was limited due to challenges in grounding target objects and contact areas for the characters. In contrast, UniHSI utilizes large language models to transfer language commands into the formation of _Chain of Contacts_ and design a robust unified controller to execute versatile interaction based on the structured formation. ## 3 Methodology As shown in Fig. 2, UniHSI supports versatile human-scene interaction control following language commands. In the following subsections, we first illustrate how we design the unified interaction formulation as a Chain of Contacts (Sec. 3.1). Then we show how we translate language commands into the unified formulation by the LLM Planner (Sec. 3.2). Finally, we elaborate on the construction of the Unified Controller (Sec. 3.3). ### Chain of Contacts The initial effort of UniHSI lies in the unified formulation of interaction. Inspired by Hassan et al. (2021b) which infers contact regions of humans and objects based on the interaction gestures of humans, there exists high coherence between the contact regions and the interaction types. To this end, we can universally define interaction as a Chain of Contacts \(\mathcal{C}\), with the formulation as \[\mathcal{C}=\{\mathcal{S}_{1},\mathcal{S}_{2},...\}, \tag{1}\] where \(\mathcal{S}_{i}\) is the \(i^{th}\) contact step. Each step \(\mathcal{S}\) includes several contact pairs. For each contact, we control whether a joint contact the corresponding object part and the direction of the contact. We construct each contact pair with five elements: an object \(o\), an object part \(p\), a humanoid joint \(j\), the contact type \(c\) of \(j\) and \(p\), and the relative direction \(d\) from \(j\) to \(p\). The contact type includes "contact", "not contact", and "not care". The relative direction includes "up", "down", "front", "back", "left" and "right". For example, one contact unit \(\{o,p,j,c,d\}\) could be {chair, seat surface, pelvis, contact, up}. In this way, we can formulate each \(\mathcal{S}\) as \[\mathcal{S}=\{\{o_{1},p_{1},j_{1},c_{1},d_{1}\},\{o_{2},j_{2},p_{2},c_{2},d_{2 }\},...\}. \tag{2}\] The chain of contacts is the output of the LLM Planner and the input of the Unified Controller. ### Large Language Model Planner We leverage LLMs as our planners to infer language commands \(\mathcal{L}\) into manageable plans \(\mathcal{C}\). As shown in Fig. 3, the inputs of the LLM Planner include language commands \(\mathcal{L}\), background scenario information \(\mathcal{B}\), humanoid joint information \(\mathcal{J}\) together with pre-set instructions, rules and examples. Specifically, \(\mathcal{B}\) includes several objects \(\mathcal{O}\) and their optional spatial layouts. Each object consists of several parts \(\mathcal{P}\), _i.e._ a chair could consist of arms, the back, and the seat. The humanoid joint information is pre-defined for all scenarios. We use prompt engineering to combine these elements together and instruct ChatGPT (OpenAI, 2020) to output task plans. 
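As an illustration of the formulation in (1) and (2), a task plan in the form of a Chain of Contacts can be held in a very small data structure. The sketch below is our own minimal example (field names, object part names, and the example plan are hypothetical, not taken from the released code); it encodes a "sit down on the chair" style plan built around the contact unit {chair, seat surface, pelvis, contact, up}.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ContactPair:
    obj: str        # o: object in the scene
    part: str       # p: object part
    joint: str      # j: humanoid joint
    contact: str    # c: "contact" | "not contact" | "not care"
    direction: str  # d: "up" | "down" | "front" | "back" | "left" | "right"

# A step S is a list of contact pairs; a plan C is a list of steps.
Step = List[ContactPair]
Plan = List[Step]

# Example plan (illustrative): approach the chair, then sit with the back supported.
plan: Plan = [
    [ContactPair("chair", "seat surface", "pelvis", "not contact", "front")],
    [ContactPair("chair", "seat surface", "pelvis", "contact", "up"),
     ContactPair("chair", "back surface", "torso", "contact", "front")],
]

for step_id, step in enumerate(plan):
    for pair in step:
        print(step_id, pair)
```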
By modifying instructions in the prompts, we can generate specified numbers of plans for diverse ways of interactions. We can also let LLMs automatically generate plausible commands given the scenes. In this way, we build our interaction datasets for training and evaluation of the Unified Controller. ### Unified Controller The Unified Controller takes multi-step plans \(\mathcal{C}\) and background scenarios in the form of meshes and point clouds as input, and outputs realistic movements coherent to the environments. **Preliminary.** We build the controller upon the Adversarial Motion Priors (AMP) method (Peng et al., 2021). AMP is a goal-conditioned reinforcement learning framework incorporated with an adversarial discriminator to model the motion prior. Its objective is defined by a reward function \(R(\cdot)\) as \[R(\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t+1},\mathcal{G})=w^{G}R^{G}(\mathbf{s}_{t},\mathbf{a} _{t},\mathbf{s}_{t+1},\mathcal{G})+w^{S}R^{S}(\mathbf{s}_{t},\mathbf{s}_{t+1}). \tag{3}\] The task reward \(R^{G}\) defines the high-level goal \(\mathcal{G}\) an agent should achieve. The style reward \(R^{S}\) encourages the agent to imitate low-level behaviors from motion datasets. \(w^{G}\) and \(w^{S}\) are empirical weights of \(R^{G}\) and \(R^{S}\), respectively. \(\mathbf{s_{t}}\), \(\mathbf{a_{t}}\), \(\mathbf{s_{t+1}}\) are the state at time \(t\), the action at time \(t\), the state at time \(t+1\), respectively. The style reward \(R^{S}\) is modeled using an adversarial discriminator \(D\), which is trained according to the objective: \[\begin{split}\operatorname*{arg\,min}_{D}\;-\mathbb{E}_{d^{ \mathcal{M}}(\mathbf{s_{t}},\mathbf{s_{t+1}})}\left[\log\left(D(\mathbf{s_{t}^{A}},\mathbf{s_{t +1}^{A}})\right)\right]-\mathbb{E}_{d^{\mathbf{s}}(\mathbf{s},\mathbf{s_{t+1}})}\left[ \log\left(1-D(\mathbf{s^{A}},\mathbf{s_{t+1}^{A}})\right)\right]\\ +w^{\text{SP}}\;\mathbb{E}_{d^{\mathcal{M}}(\mathbf{s},\mathbf{s_{t+1}})} \left[\left|\left|\nabla_{\phi}D(\phi)\right|_{\phi=(\mathbf{s^{A}},\mathbf{s_{t+1}^{ A}})}\right|\right|^{2}\right],\end{split} \tag{4}\] where \(d^{\mathcal{M}}(\mathbf{s},\mathbf{s_{t+1}})\) and \(d^{\mathbf{s}}(\mathbf{s},\mathbf{s_{t+1}})\) denote the likelihood of a state transition from \(\mathbf{s_{t}}\) to \(\mathbf{s_{t+1}}\) in the dataset \(\mathcal{M}\) and the policy \(\pi\) respectively. \(w^{\text{SP}}\) is an empirical coefficient to regularize gradient penalty. \(\mathbf{s^{A}}=\Phi(\mathbf{s})\) is the observation for discriminator. The style reward \(r^{S}=R^{S}(\cdot)\) for the policy is then formulated as: \[R^{S}(\mathbf{s}_{t},\mathbf{s_{t+1}})=-\log(1-D(\mathbf{s}_{t}^{A},\mathbf{s}_{t+1}^{A})). \tag{5}\] We adopt the key design of motion discriminator for realistic motion modeling. Our main contribution to the controller parts lies in the unification of different tasks. As shown in the left part of Fig. 4 (a), AMP (Peng et al., 2021), as well as most of the previous methods (Juravsky et al., 2022; Figure 2: **An overview of UniHSI. The whole pipeline consists of two major components: the LLM Planner and the Unified Controller. The LLM planner takes language inputs \(\mathcal{L}\) and background scenario information \(\mathcal{B}\) as inputs and outputs multi-step plans \(\mathcal{C}\) in the form of a Chain of Contacts. 
The Unified Controller then executes \(\mathcal{C}\) step-by-step and outputs interaction movements.** Zhao et al., 2023), design specified task observations, task objectives, and hyperparameters to train task-specified control policy. In contrast, we unify different tasks into Chains of Contacts and devise a TaskParser to process the uniform representation. **TaskParser.** As the core of the Unified Controller, the TaskParser is responsible for formulating a Chain of Contacts into uniform task observations and task objectives, and sequentially fetching steps for multi-round interaction execution. Given one specific contacting pair \(\{o,p,j,c,d\}\), for task observation, the TaskParser collects the corresponding position \(\mathbf{v}^{j}\in\mathbb{R}^{3}\) of the joint \(j\), and point clouds \(\mathbf{v}^{p}\in\mathbb{R}^{m\times 3}\) of the object part \(p\) from the simulation environment, where \(m\) is the point number of point clouds. It selects the nearest point \(\mathbf{v}^{np}\in\mathbf{v}^{p}\) from \(\mathbf{v}^{p}\) to \(\mathbf{v}^{j}\) as the target point for contact. Then we formulate task observation of the single pair as \(\{\mathbf{v}^{np}-\mathbf{v}^{j},c,d\}\). For the task observation in the network, we map \(c\) and \(d\) into digital numbers, but here we still use the same notation for simplicity. Combining these contact pairs together, we get the uniform task observations \(s^{U}=\{\{\mathbf{v}_{1}^{np}-\mathbf{v}_{1}^{j},c_{1},d_{1}\},\{\mathbf{v}_{2}^{np}-\mathbf{v }_{2}^{j},c_{2},d_{2}\},...,\{\mathbf{v}_{n}^{np}-\mathbf{v}_{n}^{j},c_{n},d_{n}\}\}\). The task reward \(r^{G}=R^{G}(\cdot)\) is the summarization of all contact pair rewards: \[R^{G}=\sum_{k}w_{k}R_{k},\ k=1,2,...,n. \tag{6}\] We model each contact reward \(R_{k}\) according to the contact type \(c_{k}\). When \(c_{k}=\mathrm{contact}\), the contact reward encourages the joint \(j\) to be close to the part \(p\), satisfying the specified direction \(d\). When \(c_{k}=\mathrm{notcontact}\), we hope the joint \(j\) is not close to the part \(p\). If \(c_{k}=\mathrm{notcare}\), we directly set the reward to max. Following the idea, the \(k^{th}\) contact reward \(R_{k}\) is defined as \[R_{k}=\begin{cases}w_{\mathrm{dis}}\mathrm{exp}(-w_{dk}||\mathbf{d}_{k}||)+w_{ \mathrm{dir}}\mathrm{min}(\overline{\mathbf{d}}_{k}\hat{\mathbf{d}}_{k},0),&c_{k}= \mathrm{contact}\\ 1-\mathrm{exp}(-w_{dk}||\mathbf{d}_{k}||),&c_{k}=\mathrm{not\ contact}\\ 1,&c_{k}=\mathrm{not\ care}\end{cases} \tag{7}\] where \(\mathbf{d}_{k}=\mathbf{v}^{np}-\mathbf{v}^{j}\) indicates the \(k^{\mathrm{th}}\) distance vector, \(\overline{\mathbf{d}}_{k}\) is the normalized unit vector of \(\mathbf{d}_{k}\), \(\hat{\mathbf{d}}_{k}\) is the unit direction vector specified by direction \(d_{k}\), and \(c_{k}\) is the \(k^{\mathrm{th}}\) contact type. \(w_{dis}\), \(w_{dir}\), \(w_{dk}\) are corresponding weights. Here we set the scale interval of \(R_{k}\) as \([0,1]\). We use _exp_ to ensure the scale interval. 
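A compact sketch of the per-pair reward in (7) and the weighted sum in (6) is given below. It is our own illustrative re-implementation in NumPy; the weight values, the distance scale, and the example distance vectors are assumptions, not the released configuration.

```python
import numpy as np

DIRS = {"up": (0, 0, 1), "down": (0, 0, -1), "front": (1, 0, 0),
        "back": (-1, 0, 0), "left": (0, 1, 0), "right": (0, -1, 0)}

def contact_reward(d_vec, contact, direction, w_dis=0.7, w_dir=0.3, w_d=5.0):
    """Per-pair reward R_k of Eq. (7); d_vec = v^{np} - v^{j} for this pair."""
    d_norm = np.linalg.norm(d_vec)
    if contact == "contact":
        d_unit = d_vec / (d_norm + 1e-8)
        d_hat = np.asarray(DIRS[direction], dtype=float)
        return w_dis * np.exp(-w_d * d_norm) + w_dir * min(float(d_unit @ d_hat), 0.0)
    if contact == "not contact":
        return 1.0 - np.exp(-w_d * d_norm)
    return 1.0  # "not care"

def task_reward(pairs, weights):
    """Weighted sum of Eq. (6) over all contact pairs of the current step."""
    rewards = [contact_reward(*p) for p in pairs]
    return float(np.dot(weights, rewards)), rewards

pairs = [
    (np.array([0.0, 0.0, 0.05]), "contact", "up"),        # e.g. pelvis vs. seat surface
    (np.array([0.4, 0.0, 0.30]), "not contact", "front"),  # e.g. head kept away from the backrest
]
print(task_reward(pairs, weights=[0.5, 0.5]))  # fixed weights here for illustration only
```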
Similar to the formulation of contact reward, the TaskParser considers a step to be completed if All \(k=1,2,...,n\) satisfy \[\begin{cases}||\mathbf{d}_{k}||<0.1\ \mathrm{and}\ \overline{\mathbf{d}}_{k}\hat{\mathbf{d} }_{k}>0.8,&c_{k}=\mathrm{contact}\\ ||\mathbf{d}_{k}||>0.1,&c_{k}=\mathrm{not\ contact}\\ \mathrm{True}.&c_{k}=\mathrm{not\ care}\end{cases} \tag{8}\] Figure 3: The process of translating language commands to Chains of Contacts **Adaptive Contact Weights.** The formulation of 6 includes lots of weights to balance different contact parts of the rewards. Empirically setting them requires much laboring and is not generalizable to versatile tasks. To this end, we adaptively set these weights based on the current optimization process. The basic idea is to give parts of rewards that are hard to optimize high rewards while lowering the weights of easier parts. Given \(R_{1}\), \(R_{2}\),..., \(R_{n}\), we set their weights to \[w_{k}=(1-R_{k})/(n-\sum_{k=1,2,\ldots,n}R_{k}+e), \tag{9}\] **Ego-centric Heightmap.** To avoid collision when navigating or interacting in a scene, the humanoid must be scene-aware. Here we adopt similar approaches in Wang et al. (2022); Won et al. (2022); Starke et al. (2019) that sample surrounding information as the humanoid's observation. We build a square ego-centric heightmap that samples the height of surrounding objects (Fig. 4 (b)). It is rather important when we extend our methods into real scanned scenarios such as ScanNet (Dai et al., 2017) in which various objects are densely distributed and easy to be collided. ## 4 Experiments Existing methods and datasets related to human-scene interactions mainly focus on short and limited tasks (Hassan et al., 2021; Peng et al., 2021; Hassan et al., 2023; Wang et al., 2022; Araujo et al., 2023). To the best of our knowledge, we are the first method that supports arbitrary horizon interactions with language commands as input. To this end, we construct a novel dataset for training and evaluation. We also conduct various ablations with vanilla baselines and key components of our framework. ### Datasets and Metrics To facilitate the training and evaluation of UniHSI, we construct a novel dataset named ScenePlan, that comprises various indoor scenarios and interaction plans. The indoor scenarios are collected and constructed from object datasets and scanned scene datasets. We leverage our LLM Planner to generate interaction plans based on these scenarios. The training of our model also requires motion datasets to train the motion discriminator which constrains our agents to interact in natural ways. We follow the practice of Hassan et al. (2023) to evaluate the performance of our method. 
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Source} & \multicolumn{3}{c|}{Success Rate (\%)} & \multicolumn{3}{c|}{Contact Error \(\downarrow\)} & \multicolumn{3}{c}{Success Steps} \\ & Simple & Mid & Hard & Simple & Mid & Hard & Simple & Mid & Hard \\ \hline PartNet (Mo et al., 2019) & 91.1 & 63.2 & 39.7 & 0.038 & 0.073 & 0.101 & 2.3 & 4.5 & 6.1 \\ \multicolumn{1}{c|}{wo Adaptive Weights} & 21.2 & 5.3 & 0.1 & 0.181 & 0.312 & 0.487 & 0.7 & 1.2 & 0.0 \\ \multicolumn{1}{c|}{wo Heightmap} & 61.6 & 45.7 & 0.0 & 0.068 & 0.076 & - & 1.8 & 3.4 & 0.0 \\ \hline ScanNet (Dai et al., 2017) & 76.1 & 43.5 & 32.2 & 0.067 & 0.101 & 0.311 & 1.8 & 2.9 & 4.9 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance on ScenePlan Figure 4: **Design Visualization.** (a) Our framework unifies different tasks into uniform designs through the unified interface of CoC and the TaskParser. (b) An illustration of ego-centric heightmap in one ScanNet scene in the form of green dots. Darker green indicates a larger height. **ScenePlan.** we collect scenarios for ScenePlan from the PartNet (Mo et al., 2019) and ScanNet (Dai et al., 2017) datasets. PartNet (Mo et al., 2019) provides various indoor objects with fine-grained part annotations, which are suitable for LLM Planners. We collect diverse objects from PartNet and composite them into scenarios. ScanNet (Dai et al., 2017) involves diverse scenes scanned from real indoor rooms. We collect scenes and annotate key object parts based on their fragmented area annotations. We then use LLM Planner to generate various interaction plans based on these scenarios. Specifically, the training set comprises 40 objects from PartNet. We generate 5\(\sim\)20 plausible interaction steps for each of these objects. During training, we randomly select 1\(\sim\)4 objects from the training set for each scenario and randomly pick the steps of these objects as interaction plans. The evaluation set comprises 40 objects from PartNet and 10 scenarios from ScanNet. We manually or randomly construct objects from PartNet into scenarios. We generated a total of 1,040 interaction plans on PartNet scenarios and 100 interaction plans on ScanNet scenarios for evaluation. These plans involve a diverse range of interactions, including versatile interaction types, various horizons, and multiple objects. **Motion Datasets.** We use the SAMP dataset (Hassan et al., 2021) and CIRCLE (Araujo et al., 2023) as our motion dataset. SAMP includes 100 minutes of MoCap clips, covering common walking, sitting, and lying down behaviors. CIRCLE contains 10 hours (more than 7000 sequences) of both right and left-hand reaching data. We use all clips in SAMP and pick 20 representative clips in CIRCLE for training. **Metrics.** We follow Hassan et al. (2023) that uses _Success Rate_ and _Contact Error_ (_Precision_ in Hassan et al. (2023)) as the main metrics to quantitatively measure the quality of interactions. Success Rate records the percentage of trials that humanoids successfully complete every step of the whole plan. In our experiments, we consider a trial of \(n\) steps to be successfully completed if humanoids finish it in \(n\times 10\) seconds. 
We also record the average error of all contact pairs, which can be formulated as \[\mathrm{Contact}\;\mathrm{Error}=\sum_{i,c_{i}\neq 0}er_{i}/\sum_{i,c_{i}\neq 0}1, \tag{10}\] where \[er_{i}=\begin{cases}||\mathbf{d}_{i}||,&c_{i}=\mathrm{contact}\\ \max(0.3-||\mathbf{d}_{i}||,0),&c_{i}=\mathrm{not\;contact}\end{cases} \tag{11}\] We further record the convergence time of different training settings: we test the success rate every 5k steps and consider training to have converged once the success rate exceeds 80%. In the following experiments we additionally report _Success Steps_ and _Convergence Time_ as supplementary metrics to evaluate our model in more detail.

### Implementation Details

We follow Peng et al. (2021) to construct the low-level controller, which includes a policy network and a discriminator network. The policy network comprises a critic network and an actor network, both modeled as a CNN layer followed by MLP layers with [1024, 1024, 512] units. The discriminator is modeled with MLP layers having [1024, 1024, 512] units. We use PPO (Schulman et al., 2017) as the base reinforcement learning algorithm for policy training and employ the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 2e-5. Our experiments are conducted on the IsaacGym (Makoviychuk et al., 2021) simulator using a single Nvidia A100 GPU with 8192 parallel environments.

Figure 5: Visual examples of tasks of different difficulty

### Performance on ScenePlan

We first conduct experiments on our ScenePlan dataset. To measure performance in detail, we categorize task plans into three levels: simple, medium, and hard. We classify plans with at most 3 steps as simple tasks, those with more than 3 steps but a single object as medium-level tasks, and those with multiple objects as hard tasks. Simple task plans typically involve straightforward interactions, such as getting close to a bed, sitting on the bed, and then lying on the bed. Medium-level plans encompass more diverse interactions with multiple rounds of transitions, like resting hands on armrests or placing the left foot on the right knee. Hard task plans introduce multiple objects, requiring agents to navigate between these objects and interact with one or more objects simultaneously. Examples of tasks with varying difficulty are illustrated in Fig. 5.

As shown in Table 2, UniHSI performs well on simple task plans, exhibiting a high Success Rate and low error. However, as task plans become more diverse and complex, the performance of our model experiences a noticeable decline. Nevertheless, the Success Steps metric continues to increase, indicating that our model still performs well on parts of the plans. It is important to note that the scenarios in the ScenePlan test set are unseen during training, and scenes from ScanNet even exhibit a modality gap with the training set. The overall performance on the test set demonstrates the versatile capability, robustness, and generalization ability of UniHSI.

### Ablation Studies

We first perform ablations on the key components of our model (Sec. 4.4.1). Next, we validate the superiority of our unified design compared to previous methods.

#### 4.4.1 Key Components Ablation

**Adaptive Weights.** As shown in Table 2, the removal of Adaptive Weights from our controller results in a significant drop in performance across all levels of tasks. This is because Adaptive Weights are essential for balancing the optimization of different contact pairs.
When certain pairs are not used or are easy to learn in specific interactions, Adaptive Weights automatically reduce their weights and increase the weights of less straightforward pairs, making it particularly important as tasks become more complex. **Ego-centric Heightmap.** Eliminating the Ego-centric Heightmap also leads to performance degradation, particularly for hard tasks. The Ego-centric Heightmap plays a crucial role in agent navigation within scenes. It enables the perception of surroundings and prevents agents from colliding with other objects. This explains why our model faces difficulties with hard tasks that involve complex \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{Success Rate (\%) \(\uparrow\)} & \multicolumn{3}{c}{Contact Error \(\downarrow\)} \\ & Sit & Lie Down & Reach & Sit & Lie Down & Reach \\ \hline NSM - Sit (Starke et al., 2019) & 75.0 & - & - & 0.19 & - & - \\ SAMP - Sit (Hassan et al., 2021) & 75.0 & - & - & 0.06 & - & - \\ SAMP - Lie Down(Hassan et al., 2021) & - & 50.0 & - & - & 0.05 & - \\ InterPhys - Sit (Hassan et al., 2023) & 93.7 & - & - & 0.09 & - & - \\ InterPhys - Lie Down(Hassan et al., 2023) & - & 80.0 & - & - & 0.30 & - \\ \hline AMP (Peng et al., 2021)-Sit & 77.3 & - & - & 0.090 & - & - \\ AMP-Lie Down & - & 21.3 & - & - & 0.112 & - \\ AMP-Reach & - & - & **98.1** & - & - & 0.016 \\ AMP-Vanilla Combination (VC) & 62.5 & 20.1 & 90.3 & 0.093 & 0.108 & 0.032 \\ \hline UniHSI & **94.3** & **81.5** & 97.5 & **0.032** & **0.061** & **0.016** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation on baseline models and vanilla implementations scenarios with numerous objects. The Ego-centric Heightmap is also the key factor enabling our model to generalize to real scanned scenes. #### 4.4.2 Design Comparison with Previous Methods **Baseline Settings.** In comparison with previous methods, we select simple interaction tasks that can be accomplished by those methods, including "Sit," "Lie Down," and "Reach". Due to differences in training datasets and the unavailability of code for the most related method (Hassan et al., 2023), direct comparisons are challenging. We report their results from (Hassan et al., 2023) for a rough comparison. To ensure fairness and clarity in the comparison, we integrate the key designs of Hassan et al. (2023) into our baseline model (Peng et al., 2021). We manually formulate task observations and task objectives for different tasks following Hassan et al. (2023). These task objectives can be generally expressed as: \[R^{G}=\begin{cases}0.7R^{\mathrm{near}}+0.3R^{\mathrm{far}},&\text{if distance}>0.5\text{m}\\ 0.7R^{\mathrm{near}}+0.3,&\text{otherwise}\end{cases} \tag{12}\] Here \(R^{\mathrm{far}}\) encourages the character to move towards the object, and \(R^{\mathrm{near}}\) encourages the character to perform specific tasks once it is close, requiring task-specific designs. We also implement a vanilla baseline that combines different tasks within the same model. In this setup, we concatenate the task observations of various tasks and encode task choices within the task observations. During training, we randomly select tasks and train them with their corresponding rewards. **Quantitative Comparison.** As shown in Table 3, UniHSI achieves higher or comparable performance across all these metrics. The performance gain increases as tasks become more complex, with the most significant improvement observed in the "Lie Down" task, the most complex among them. 
The primary reason our model outperforms the baseline implementations is the decomposition of tasks into multi-step plans, which significantly reduces the complexity of interaction tasks. Moreover, different tasks often share similar motion transitions. For instance, the "Lie Down" task typically involves first sitting down on the target object. As a result, our model can learn complex tasks more easily and adapt to different tasks more efficiently. Figure 6 (b) shows that our method not only achieves a higher success rate but also converges more quickly than the baseline implementations.

Figure 6: **Visual Ablations.** (a) Our model performs more naturally and accurately than the baselines in tasks like "Sit" and "Lie Down". (b) The training of our model is more efficient and effective.

Another noteworthy result is that the vanilla combination of AMP (Peng et al., 2021) leads to a noticeable drop in performance across all tasks, whereas our method remains effective. This is because the vanilla combination of different tasks results in interference and inefficient training. In contrast, our approach unifies these tasks into uniform representations and optimizes them with consistent objectives, which effectively benefits multi-task learning.

**Qualitative Comparison.** In Figure 6 (a), we qualitatively visualize the performance of the baseline methods and our model. Our model performs more naturally and accurately than the baselines in tasks like "Sit" and "Lie Down". This is primarily attributed to the differences in task objectives. The baseline objectives (Eq. 12) model the combination of sub-tasks, such as walking close and sitting down, as simultaneous processes. Consequently, agents tend to pursue these different goals at the same time. For example, they may attempt to sit down even if they are not in the correct position, or throw themselves like a projectile onto the bed, disregarding the natural task progression. In contrast, our method decomposes tasks into natural movements through the language planner, resulting in more realistic interactions.

## 5 Discussion and Conclusion

In this work, we take a step toward a unified HSI system that supports versatile interactions and language commands. By defining interaction as a Chain of Contacts, we model different interactions as sequences of human-joint-to-object-part contact pairs. Following this definition, we build the UniHSI framework, which involves an LLM Planner to translate language commands into prompted CoC and a Unified Controller that turns the CoC into uniform task execution. We train and evaluate our controller on our newly developed dataset, ScenePlan, which includes thousands of task plans in diverse scenarios. We validate the effectiveness as well as the generalizability of our framework through comprehensive experiments on ScenePlan, which should benefit future work toward a more versatile and user-accessible HSI system.

**Limitations and Future Work.** Apart from the advantages of our framework, there are a few limitations. First, our framework can only control humanoids to interact with fixed objects; we do not take moving or carrying objects into consideration. Enabling humanoids to interact with movable objects is an important future direction. Second, we do not integrate the LLM seamlessly into the training process: in the current design, we use pre-generated plans. Involving the LLM in the training pipeline would improve the scalability of interaction types and make the whole framework more integrated.
## Appendix A Detailed prompting example of the LLM Planner As shown in Table. 4. We present the full prompting example of the input and output of the LLM Planner that is demonstrated in Fig. 2 and Fig. 3. The output is generated by OpenAI (2020). In the future study, we will collect a list of language commands and integrate ChatGPT OpenAI (2020) and GPT OpenAI (2023) into the loop to evaluate the performance of the whole framework of UniHSI. ## Appendix B Details of the ScenePlan We present three examples of different levels of interaction plans in the ScenePlan in Table. 5, 6, and 7, respectively. Simple-level interaction plans involve interactions within 3 steps and with 1 object. Medium-level interaction plans involve interactions of more than 3 steps and with 1 object. Hard-level interaction plans involve interactions of more than 3 steps and more than 1 object. Specifically, each interaction plan has an item number and two subitems named "obj" and "chain_of_contacts". The "obj" item includes information about objects like object id, name, and transformation parameters. The "chain_of_contacts" item includes steps of contact pairs in the form of CoC. We provide the list of interaction types that are included in the training and evaluation of our framework in Table. 8 and 9. Figure 8: An example of multi-step interaction with the same object Figure 7: An example of multi-obj interaction ## Appendix C More Visualizations We further provide more quantitative results in Fig. 7, 8, 9. \begin{table} \begin{tabular}{l} \hline \hline Input \\ \hline Instruction: I want to play video games for a while, then go to sleep. \\ Given the instruction, generate 1 task plan according to the following background information, rules, and examples. \\ [start of background Information] \\ The room has OBJECTS: [bed, chair, table, laptop]. \\ The [OBJECT: laptop] is upon the [OBJECT: table]. The [OBJECT: table] is in front of the [OBJECT: chair]. The [OBJECT: bed] is several meters away from [OBJECT: table]. The human is several meters away from these objects. \\ The [OBJECT: bed] has PATS: [pilot], matreress. The [OBJECT: chair] has PATS: [pilot], matreress. The [OBJECT: table] has PATS: [backward], The [OBJECT: pict]. [OBJECT: pict]. [OBJECT: pict]. [OBJECT: pict]. [nob] has PATS: [sorex], keyboard. The human has JOINTS: [pevlis, left hip, left knee, left foot, right hip, right knee, right foot, torso, head, left shoulder, left elbow, left hand, right shoulder, right elbow, right hand]. \\ [end of background Information] \\ \hline [start of rules] \\ \hline 1. Each task plan should be composite into detailed steps. If the human is not close to the target object, the first step should be to get close to the object. \\ 2. Each step should contain meaningful joint-part pairs. \\ 3. Each joint-part pair should be formatted into {JOINT, PART of OBJECT, Contact type, Contact Direction}. Or if the step is getting close to an object, the step should be formatted into {none, none, none, none, relative direction of the target object}. JOINT in the format should be replaced by JOINT in the background information. Important: PART in the format should be replaced by PART or JOINT in the background information. The choices of Contact type include [contact, not contact]. The choices of Contact Direction include [front, back, left, right, up, down, none]. \\ 4. Be plausible. Do not generate uncommon interactions. \\ 5. Only interact with still objects. Do not move objects. 
\\ [end of rules] \\ \hline [start of an example] \\ \hline [start of background Information] \\ The room has a {OBJECT: chair}. \\ The [OBJECT: chair] has PATS: [back\_soft\_surface, armrest\_hard\_surface, armrest\_hard\_surface, seat\_surface]. \\ The human has JOINTS: [pelvis, left hip, left knee, left foot, right hip, right knee, right foot, torso, head, left shoulder, left elbow, left hand, right shoulder, right elbow, right hand, right shoulder, right elbow, right hand]. \\ The human is several meters away from the [OBJECT: chair]. \\ [end of background Information] \\ Step 1: Get close to the chair. \\ Pair 1: {none, none, none, none, front} \\ Step 2: Sit on the chair. \\ Pair 1: {chair, seat\_surface, pelvis, contact, up}. \\ Step 3: Rest the right hand on the right arm of the chair. \\ Pair 1: {chair, seat\_surface, pelvis, contact, up}. \\ Pair 2: {chair, armrest\_hard\_surface, right hand, contact, up}. \\ Step 4: Place the right foot on the left knee. \\ Pair 1: {chair, seat\_surface, pelvis, contact, up}. \\ Pair 2: {chair, left knee, right foot, contact, up}. \\ Step 5: Stand up. \\ Pair 1: {none, none, none, none, front} \\ Step 6: Sit on the chair. \\ Pair 1: {chair, seat\_surface, pelvis, contact, up}. \\ [end of an example] \\ \hline Output \\ \hline Step 1: Get close to the chair. \\ Pair 1: {pelvis, none, none, none, front} \\ Step 2: Sit on the chair. \\ Pair 1: {chair, seat\_surface, pelvis, contact, up} \\ Step 3: {Use the laptop to play video games. \\ Pair 1: {chair, seat\_surface, pelvis, contact, up} \\ Pair 2: {laptop, keyboard, left hand, contact, up} \\ Pair 3: {laptop, keyboard, right hand, contact, up} \\ Step 4: Stand up from the chair. \\ Pair 1: {chair, none, none, none, front} \\ Step 5: Get close to the bed. \\ Pair 1: {bed, none, none, none, front} \\ Step 6: Sit on the bed. \\ Pair 1: {bed, matreress, pelvis, contact, down} \\ Step 7: Lie down on the bed. \\ Pair 1: {bed, matreress, pelvis, contact, down} \\ Pair 2: {bed, matreress, torso, contact, down} \\ Pair 3: {bed, pillow, head, contact, up} \\ \hline \hline \end{tabular} \end{table} Table 4: **Detailed prompting example of the LLM Planner.** It shows a full example of the input and output of the LLM Planner demonstrated in Fig. 2 and Fig. 3. \begin{table} \begin{tabular}{l} \hline \{ "0000": \\ \{ "obj": \{ "000": ["000":[ "id": "45005", "name": "chair", "rotate": [[1.5707963267948966, 0, 0], [0, 0, -1.5707963267948966]], "scale": 1.5. "transfer": [0,-2.0], } \\ \} \\ \}, \\ \end{tabular} \end{table} Table 6: **An example of medium-level interaction plans in ScenePlan.** Medium-level interaction plans involve interactions of more than 3 steps and with 1 object. \begin{table} \begin{tabular}{l} \hline \{ \{ "0000": \\ \{ "obj": \{ "000":[ "id": "45005", "name": "chair", "rotate": [[1.5707963267948966, 0, 0], [0, 0, -1.5707963267948966]], "scale": 1.5. "transfer": [0,-2.0], } \\ \} \\ \}, \\ \end{tabular} \end{table} Table 5: **An example of simple-level interaction plans in ScenePlan.** Simple-level interaction plans involve interactions within 3 steps and with 1 object. 
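To connect the prompting example in Table 4 with the plan entries shown in the ScenePlan tables below, a rough parsing sketch is given here. It turns the planner's textual output ("Step i: ..." lines followed by "Pair j: {object, part, joint, contact type, direction}" lines) into structured contact pairs. The regular expression, field order, and function name are inferred from the printed examples and are assumptions, not the authors' actual TaskParser implementation.

```python
import re

PAIR_RE = re.compile(r"\{([^}]*)\}")

def parse_plan(text):
    """Parse LLM Planner output into a list of steps, each holding a
    description and a list of (object, part, joint, contact, direction) tuples."""
    steps, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.lower().startswith("step"):
            current = {"description": line.split(":", 1)[-1].strip(), "pairs": []}
            steps.append(current)
        elif line.lower().startswith("pair") and current is not None:
            m = PAIR_RE.search(line)
            if m:
                fields = tuple(f.strip() for f in m.group(1).split(","))
                # e.g. ('chair', 'seat_surface', 'pelvis', 'contact', 'up')
                current["pairs"].append(fields)
    return steps

plan = parse_plan("""Step 1: Get close to the chair.
Pair 1: {none, none, none, none, front}
Step 2: Sit on the chair.
Pair 1: {chair, seat_surface, pelvis, contact, up}""")
assert plan[1]["pairs"][0][3] == "contact"
```

The resulting tuples follow the same (object, part, joint, contact type, direction) convention used in the "chain_of_contacts" items of the ScenePlan entries shown in Tables 5-7.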
\begin{table} \begin{tabular}{l} \hline \hline \{ \\ "0000": \\ \(\)"obj": \\ \(\) \(\{\) "000": \\ \(\) \(\{\) "id": "37825": \\ "name": "chair", \\ "rotate": [1.5707963267948966, 0, 0], [0, 0, -.15707963267948966], \\ "scale": 1.5, \\ "transfer": [0,-2,0] \\ \(\}\), \\ \(\{\) "001": \\ \(\{\) "id": "21980", \\ "name": "table", \\ "rotate": [[1.5707963267948966, 0, 0], [0, 0, 1.5707963267948966]], \\ "scale": 1.8, \\ "transfer": [1,-2,0] \\ \(\}\), \\ \(\{\) "id": "11873", \\ "name": "laptop", \\ "rotate": [1.5707963267948966, 0, 0], [0, 0, 1.5707963267948966], \\ "scale": 0.6, \\ "transfer": [0.8,-2,0.65] \\ \(\}\), \\ \(\{\) "003": \\ \(\{\) "id": "10873", \\ "name": "bed", \\ "rotate": [[1.5707963267948966, 0, 0], [0, 0, -.15707963267948966]], \\ "scale": 3, \\ "transfer": [-0.2,-4,0] \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(}\), \\ \(\}\), \\ \(}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(}\), \\ \(\}\), \\ \(\}\), \\ \(\}\), \\ \(}\ \begin{table} \begin{tabular}{l l} \hline \hline **Interaction Type** & **Contact Formation** \\ \hline \hline Get close to xxx & \{xxx, none, none, none, dir\} \\ \hline Stand up & \{xxx, none, none, none, dir\} \\ \hline Left hand reaches xxx & \{xxx, part, left\_hand, contact, dir\} \\ \hline Right hand reaches xxx & \{xxx, part, right\_hand, contact, dir\} \\ \hline Both hands reaches xxx & \{xxx, part, left\_hand, contact, dir\}, \\ \hline Sit on xxx & \{xxx, seat\_surface, pelvis, contact, up\} \\ \hline Sit on xxx, left hand on left arm & \{xxx, seat\_surface, pelvis, contact, up\}, \\ \hline Sit on xxx, right hand on right arm & \{xxx, seat\_surface, pelvis, contact, up\}, \\ \hline Sit on xxx, right hand on right arm & \{xxx, right\_hand, contact, up\} \\ \hline Sit on xxx, hands on arms & \{xxx, seat\_surface, pelvis, contact, up\}, \\ \hline Sit on xxx, hands away from arms & \{xxx, left\_arm, left\_hand, not contact, none\}, \\ \hline Sit on xxx, left elbow on left arm & \{xxx, seat\_surface, pelvis, contact, up\}, \\ \hline Sit on xxx, right elbow on right arm & \{xxx, right\_arm, right\_elbow, contact, up\} \\ \hline Sit on xxx, elbows on arms & \{xxx, seat\_surface, pelvis, contact, up\}, \\ \hline Sit on xxx, left arm, left\_elbow, contact, none\} \\ \hline Sit on xxx, right arm, right\_elbow, contact, none\} \\ \hline Sit on xxx, left hand on left knee & \{xxx, seat\_surface, pelvis, contact, up\}, \\ \hline Sit on xxx, right hand on right knee & \{xxx, right\_knee, right\_hand, contact, up\} \\ \hline Sit on xxx, right hand on right knee & \{xxx, seat\_surface, pelvis, contact, up\}, \\ \hline Sit on xxx, hands on knees & \{xxx, left\_knee, right\_hand, contact, none\}, \\ \hline Sit on xxx, left hand on stomach & \{xxx, right\_knee, right\_hand, contact, none\} \\ \hline Sit on xxx, right hand on stomach & \{xxx, seat\_surface, pelvis, contact, up\}, \\ \hline Sit on xxx, hands on stomach & \{xxx, pelvis, left\_hand, contact, none\}, \\ \hline Sit on xxx, pelvis, right hand, contact, none\} \\ \hline Sit on xxx, left foot on right knee & \{xxx, right\_knee, left\_foot, contact, none\} \\ \hline Sit on xxx, right foot on left knee & \{xxx, seat\_surface, pelvis, contact, up\}, \\ \hline Sit on xxx, right 
foot on left knee & \{xxx, left\_knee, right\_foot, contact, none\} \\ \hline \hline \end{tabular} \end{table} Table 8: List of Interactions in ScenePlan-1

\begin{table} \begin{tabular}{l l} \hline \hline **Interaction Type** & **Contact Formation** \\ \hline Lie on xxx & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, pillow, head, contact, up\} \\ \hline Lie on xxx, left knee up & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, pillow, head, contact, up\}, \\ & \{xxx, mattress, left\_knee, not contact, none\} \\ \hline Lie on xxx, right knee up & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, pillow, head, contact, up\}, \\ & \{xxx, mattress, right\_knee, not contact, none\} \\ \hline Lie on xxx, knees up & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, pillow, head, contact, up\}, \\ & \{xxx, mattress, left\_knee, not contact, none\}, \\ & \{xxx, mattress, right\_knee, not contact, none\} \\ \hline Lie on xxx, left hand on pillow & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, pillow, head, contact, up\}, \\ & \{xxx, pillow, left\_hand, contact, none\} \\ \hline Lie on xxx, right hand on pillow & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, pillow, head, contact, up\}, \\ & \{xxx, pillow, right\_hand, contact, none\} \\ \hline Lie on xxx, hands on pillow & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, pillow, left\_hand, contact, none\}, \\ & \{xxx, pillow, right\_hand, contact, none\} \\ \hline Lie on xxx, on left side & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, pillow, head, contact, up\}, \\ & \{xxx, mattress, right\_shoulder, not contact, none\} \\ \hline Lie on xxx, on right side & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, mattress, left\_shoulder, not contact, none\} \\ \hline Lie on xxx, left foot on right knee & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, pillow, head, contact, up\}, \\ & \{xxx, right\_knee, left\_foot, contact, up\} \\ \hline Lie on xxx, right foot on left knee & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, left\_knee, right\_foot, contact, up\} \\ \hline Lie on xxx, head up & \{xxx, mattress, pelvis, contact, up\}, \\ & \{xxx, pillow, head, not contact, none\} \\ \hline \hline \end{tabular} \end{table} Table 9: List of Interactions in ScenePlan-2
2307.16361
Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks for Defending Adversarial Examples
Deep Neural Networks (DNNs) for 3D point cloud recognition are vulnerable to adversarial examples, threatening their practical deployment. Despite the many research endeavors have been made to tackle this issue in recent years, the diversity of adversarial examples on 3D point clouds makes them more challenging to defend against than those on 2D images. For examples, attackers can generate adversarial examples by adding, shifting, or removing points. Consequently, existing defense strategies are hard to counter unseen point cloud adversarial examples. In this paper, we first establish a comprehensive, and rigorous point cloud adversarial robustness benchmark to evaluate adversarial robustness, which can provide a detailed understanding of the effects of the defense and attack methods. We then collect existing defense tricks in point cloud adversarial defenses and then perform extensive and systematic experiments to identify an effective combination of these tricks. Furthermore, we propose a hybrid training augmentation methods that consider various types of point cloud adversarial examples to adversarial training, significantly improving the adversarial robustness. By combining these tricks, we construct a more robust defense framework achieving an average accuracy of 83.45\% against various attacks, demonstrating its capability to enabling robust learners. Our codebase are open-sourced on: \url{https://github.com/qiufan319/benchmark_pc_attack.git}.
Qiufan Ji, Lin Wang, Cong Shi, Shengshan Hu, Yingying Chen, Lichao Sun
2023-07-31T01:34:24Z
http://arxiv.org/abs/2307.16361v2
# Benchmarking and Analyzing Robust Point Cloud Recognition: ###### Abstract Deep Neural Networks (DNNs) for 3D point cloud recognition are vulnerable to adversarial examples, threatening their practical deployment. Despite the many research endeavors have been made to tackle this issue in recent years, the diversity of adversarial examples on 3D point clouds makes them more challenging to defend against than those on 2D images. For examples, attackers can generate adversarial examples by adding, shifting, or removing points. Consequently, existing defense strategies are hard to counter unseen point cloud adversarial examples. In this paper, we first establish a comprehensive, and rigorous point cloud adversarial robustness benchmark to evaluate adversarial robustness, which can provide a detailed understanding of the effects of the defense and attack methods. We then collect existing defense tricks in point cloud adversarial defenses and then perform extensive and systematic experiments to identify an effective combination of these tricks. Furthermore, we propose a hybrid training augmentation methods that consider various types of point cloud adversarial examples to adversarial training, significantly improving the adversarial robustness. By combining these tricks, we construct a more robust defense framework achieving an average accuracy of 83.45% against various attacks, demonstrating its capability to enabling robust learners. Our code-base are open-sourced on: [https://github.com/qiufan319/benchmark_pc_attack.git](https://github.com/qiufan319/benchmark_pc_attack.git). ## 1 Introduction As an prominent form of 3D data representation, point clouds are extensively employed in various real-world sensing applications, such as autonomous driving [36], robotics [14], and healthcare [1]. To achieve precise perceive 3D objects, prior studies [15, 16, 29] have investigated the development of deep neural networks (DNNs) capable of detecting, segmenting, and identifying objects from point cloud data. While these DNN-based methods have exhibited notable success, recent studies have exposed their susceptibility to adversarial examples [33, 38, 30]. Specifically, the addition, removal, or shifting of a small proportion of 3D points from an object can significantly degrade the performance of the DNNs. To mitigate the risk of adversarial examples, several de Figure 1: Point Cloud defense’s adversarial robustness to various attacks in a radar chart. We evaluate the defense under 9 attack methods, including PGD [9], SIA [7], L3A [23], Drop [38], AOF [8], KNN [26],GeoA3 [30], ADVPC [5], Add [33], IFGM [9], Perturb [33]. Our method achieve good adversarial robustness against all attacks. fense strategies have been proposed to enhance the robustness of point cloud DNNs [32, 39, 9]. For example, pre-processing techniques are applied to remove the points perturbed by adversarial examples [39, 32]. In addition, adversarial training [9, 27, 11], which incorporates adversarial examples in the model training process, is designed to improve the adversarial robustness. Despite the initial success of investigating the adversarial robustness of point cloud DNNs, there are three obvious limitations for existing attacks and defenses: **L1: Unrealistic attack and defense scenarios.** The current state-of-the-art (SOTA) adversarial learning has primarily focused on white-box attacks and defenses [33, 30, 39], where the attacker has complete knowledge of the model architecture and paramteres. 
While these scenarios are useful for testing the limits of existing methods and understanding their vulnerabilities, they do not reflect the real-world security threat landscape. In many security-critical applications, such as autonomous driving and financial systems, attackers may not have access to the model parameters. **L2: Lack of a unified and comprehensive adversarial robustness benchmark.** While several studies [18, 28, 25, 19] have been proposed to evaluate the robustness of point cloud DNNs, they all focus on benchmarking robustness under diverse types of corruptions; a benchmark for studying adversarial robustness remains unexplored. Compared with corruption-oriented methods, adversarial examples are difficult to detect for both humans and machines. Moreover, perturbations generated using gradient descent are more effective than random corruptions, resulting in higher error rates and better imperceptibility. Although recent studies have explored adversarial examples and defenses on point cloud DNNs [33, 26], most of them employ substantially different evaluation settings, such as datasets, attacker capabilities, perturbation budgets, and evaluation metrics. The lack of a unified evaluation framework makes it challenging to fairly quantify adversarial robustness. Additionally, current adversarial robustness evaluations only focus on one or a few attacks, defenses, and victim models, limiting the generalization and comparability of the results. For instance, the effectiveness of point cloud attack methods [33, 30] is typically evaluated under a limited set of defenses and models. Similarly, defense strategies are often evaluated against only a few early attacks, making it difficult to capture their strengths and weaknesses based on such incomplete evaluations. **L3: Poor generalization of defense strategies.** Unlike 2D image attacks, which modify pixel values within a fixed data size, adversarial examples on point clouds offer a wider attack space and an arbitrary data size. For instance, an attacker can generate adversarial examples by adding, shifting, or removing points of the original point cloud. Unfortunately, most existing defense strategies only consider one or two of these types and therefore cannot handle unseen adversarial examples. In this paper, we propose the first comprehensive and systematic point cloud adversarial robustness benchmark. Our benchmark provides a unified adversarial setting and comprehensive evaluation metrics that enable a fair comparison of both attacks and defenses. By analyzing the quantitative results, we propose a hybrid training strategy and construct a more robust defense framework by combining effective defense tricks. Our main contributions are summarized below: **1) Practical Scenario.** To evaluate the real-world performance of attacks and defenses, we refine the capabilities of both the attacker and the defender. For example, we limit the maximum number of added points and the knowledge level of the attacker and defender. In our benchmark, all attacks are performed under the black-box setting, where the attacker does not have any additional knowledge about the model parameters, model structure, or training dataset. **2) Unified Evaluation Pipeline.** Our benchmark provides a comprehensive and standardized evaluation methodology, enabling fair comparison and reproducibility of the proposed methods.
For example, we evaluate the attack from attack success rate, transferability, and imperceptible, which are essential metrics for assessing the effectiveness, imperceptibility, and generalization of the attacks. **3) Bag-of-tricks for Defending Adversarial Examples.** Based on our adversarial robustness analyses with our benchmark, we proposed a hybrid training approach that jointly consider different types of adversarial examples, including adding, shifting, and removing points, to perform adversarial training. Through analysis of experiment result, we further construct a more robust defense framework by combining the effective defense tricks. As shown in Figure 1, our framework achieve the SOTA adversarial robustness under various attacks. ## 2 Related works **3D Point Cloud Recognition.** In contrast to 2D image data, 3D point cloud data is irregular and unordered, making it hard to be consumed by the neural networks designed for the 2D domain. PointNet [15] is the pioneering work that directly consumes point cloud. It achieves permutation invariance of points by learning each point independently and subsequently using a symmetric function to aggregate features. Due to its high accuracy and efficiency, it has been widely used as the backbone for 3D point cloud recognition. As the update of PointNet, PointNet++ [16] improves point cloud learning by capturing local information from the neighborhood of each point. Another representative work is DGCNN [29], which enhances the representation capability by building neighbor graphs between adjacent points and using a convolution-like operation (EdgeConv) on each connecting edge to capture local information. Recently, some transformer-based methods [12, 4, 34] have been proposed, achieving good performance. **Robustness Benchmark for Point Cloud.** Several benchmarks [21, 20, 2, 22, 6] have been built for studying the robustness of point cloud learning. [17] build a real-world dataset to evaluate the gap between simulation and real-world. To evaluate the corruption robustness, ModelNet-C [19] categorizes common corruptions and builds a new corruption dataset ModelNet-C by corrupting the ModelNet40 test set with simulated corruptions like occlusion, scale, and rotation. RobustPointset [25] evaluates the robustness of point cloud DNNs and shows that existing data augmentation methods can not work well to unseen corruptions. However, little attention has been paid to adversarial examples of point cloud recognition. In this paper, we aim to present the first comprehensive, systematic benchmark to evaluate the point cloud adversarial examples and defenses. ## 3 Benchmark ### Preliminaries **Problem Formulation.** We defined the point cloud as \(X\in\mathbb{R}^{N\times 3}\), where \(N\) is the number of points. Each point \(x_{i}\in\mathbb{R}^{3}\) indicates the 3D coordinate \((x_{i},y_{i},z_{i})\). Formally, a classifier \(f_{\theta}(X)\to Y\) maps the input point cloud \(X\) to its corresponding label \(y\in Y\) with parameter \(\theta\). For adversarial examples on point cloud DNNs, an attacker generates an adversarial example \(\hat{X}\), which makes the classifier \(f_{\theta}\) output an incorrect label \(\hat{Y}\). Generally, the objective function of generating adversarial examples can be formulated as: \[\min D(X,\hat{X}),\qquad\mathrm{s.t.}\ f_{\theta}(\hat{X})=\hat{Y}, \tag{1}\] where \(D(\cdot,\cdot)\) is the distance function measuring similarity between \(X\) and \(\hat{X}\). 
The distance is normally constrained to a small budget \(\rho\) ensuring the imperceptibility. Because the equation (1) is non-convex, according to [33] we reformulated it as gradient-based optimization algorithms: \[\min f_{adv}(X,\hat{X})+\lambda*D(X,\hat{X})\qquad\mathrm{s.t.}\ D(X,\hat{X})<\rho, \tag{2}\] where \(f_{adv}\) is the adversarial loss function, including logits loss and cross-entropy loss, and \(\lambda\) is a hyperparameter to balance distance and adversarial loss. **Attack Types.** An attacker can have different targets of generating adversarial examples. In our benchmark, we divided the attacks into targeted and untargeted. Targeted: A targeted attack tries to make the victim model outputs a result that it desired, as \(f_{\theta}(\hat{X})=Y^{*}\), where \(y^{*}\) is the target label. Untargeted: an untargeted attack only aims to make the victim model outputs a wrong result, as \(f_{\theta}(\hat{X})\neq Y\), where \(Y\) is the true label. **Attack Knowledge.** The attacker can have different levels of knowledge of the victim model. Based on the knowledge level, the attacks can be divided into Black-Box and White-Box. Black-Box: The attacker can not get any information about the victim model, such as gradient information, model structure, and parameters. However, they have limited chances to query the victim model and obtain the output. White-Box: The attacker can get any information about the victim model. In both knowledge settings, the attacker can access the training dataset. **Attack Stage.** Based on the stage where the attacks happened, we divided the attacks into Poisoning and Evasion. Poisoning: The attacker generate the adversarial examples and inject them into the training dataset. Once the attacker change the training dataset, the victim model will be retrained on changed dataset to get a worse model. Evasion: The parameter of the victim model is fixed, and attackers inject adversarial perturbation into testing data. ### Practice Scenario In real-world, the victim model is usually trained in a confidential manner, and the attacker is hard to modified the model meaning that white-box and poisoning setting are normally infeasible. In our benchmark, we make the following assumptions for a unified and practical adversarial robustness evaluation protocol: (1) Black-box: the attacker does not know the defender's strategies, and vice versa. (2) Evasion: The point cloud DNNs are trained with trusted data and training model is inaccessible to the attacker. (3) Untargeted: in our benchmark, we select untargeted attacks for the evaluation of adversarial robustness. Because untargeted attack is easier than targeted attacks for attacker, thus the untargeted attack is the upper bound of attack intensities and more difficult for defense strategies. We define the full capabilities of attackers and defenders in our benchmark: **Attacker**: 1) The attacker can access the testing point cloud data to produce adversarial examples, but they should not have knowledge about the victim model or defense mechanism. 2) To preserve adversarial examples imperceptible, the attacker is only allowed to add or delete a limited number of points in the point cloud. 3) The attacker can not modify the training dataset. 4) The attacker can only query the victim model with limited times. **Defender**: 1) The defender has full access to the training dataset. 2) The defender can use any solution to improve the robustness without additional knowledge about the attack. 
**Both sides**: We assume that attackers know the architecture of victim model (e.g., PointNet, PointNet++), and then they can train a corresponding surrogate model to generate adversarial examples. Similarly, the defender can have some assumptions on the effects of adversarial examples (e.g., point cloud adversarial examples usually exist some outliers). For both the attacker and the defender, the generalization (e.g., an attack can bypass multiple defense techniques) is an important factor for adversarial robustness quantifica tion. Thus, we evaluate the effectiveness of SOTA attacks against various defense techniques and model architectures. We also conduct similar quantifications for the defenses in our benchmark. By following the above rules, we provide a unified evaluation scenario for attacks and defenses in a principled way. It is worth nothing that the unified scenario is not the only valid ones, our benchmark will include more scenarios as this field evolved over time. As shown in Figure 2, the attack and defense pool include all attack methods and defense strategies in our benchmark. Our evaluation metrics incorporate three attack metrics, namely, attack success rate, distance, and transferability, to assess the performance of attack methods. Additionally, we use one defense accuracy metric to evaluate the effectiveness. We construct attack and defense leaderboards based on the metrics values. Further, we conduct modular analysis on each defense strategy, and subsequently integrate effective modules to construct a more robust defense framework. ### Generating Adversarial Examples Adversarial examples were first discovered by [24] in 2D image classification tasks. With the development of adversarial learning, some works [33, 7, 30] proved that point clouds also be vulnerable to adversarial examples. The adversarial examples on point cloud can be divided into adding points, removing points, and shifting points attacks. **Adding Points.** The attacker generate adversarial examples by adding a set of adversarial points \(Z\in\mathbb{R}^{k\times 3}\) where \(k\) is the number of modified points in different attack settings. Given the adversarial perturbations \(\rho\in\mathbb{R}^{k\times 3}\) on added points, the objective function of adding points attacks can be formulated as: \[\min f_{adv}(X,X\cup(Z+\rho))-\lambda D(X,X\cup(Z+\rho)), \tag{3}\] Adding independent points [33] chooses the critical points that are still active after max-pooling operation, as the initialized positions, and then uses C&W [3] to output their final coordinates. Although other adding points attacks exist, such as add clusters [33] and adversarial sticks [10]. These methods are not practical because they create a noticeable continuous deformation and then produce large perturbations. Consequently, for the purpose of adding points attack, only independent points are considered. **Removing Points.** The attacker remove some points to spoof the classifier. As the representative work of removing points attack, saliency map [38] constructs a saliency map to measure the contribution of each point and then removes the points based on the saliency score. In our benchmark, we limit the number of dropped points to keep the drop attack imperceptible. **Shifting Points.** The attacker perturbs the coordinates of a set of points to implement an attack. 
The objective function of shifting points attacks can be formulated as: \[\min f_{adv}(X,(X+\rho))-\lambda D(X,(X+\rho)), \tag{4}\] The iterative fast gradient method (IFGM) [9] is an extension of the fast gradient method (FGSM) that repeats FGSM multiple times to generate better adversarial examples. The project gradient descent method (PGD) [9] projects the perturbed point onto the triangle plane where the points are sampled. Perturb [33] proposes a C&W based algorithm to generate adversarial examples. To reduce the outliers, KNN [26] incorporates Chamfer measurement and KNN distance to encourages the compactness of local neighborhoods in the generated adversarial examples. GeoA3 [30] perturbs points in a geometrically aware way by adding local curvatures to the loss function, thereby making the adversarial examples more imperceptible. L3A [23] proposes Figure 2: The pipeline of our benchmark. a novel optimization method to avoid local optima, making the attack more efficient. AdvPC [5] utilizes a point cloud auto-encoder (AE) during the generation, improving the transferability of adversarial examples. SIA [7] builds a tangent plane to each point and limits the point perturbation along the optimal direction on the tangent plane, making the adversarial examples more imperceptible. AOF [8] proposes a more robust and transferable attack by perturbing the low-frequency in the frequency domain. ## 4 Analysis and Bag-of-Tricks for Defending Adversarial Examples To alleviate the adversarial behaviors, the most popular defending techniques can be divided into three directions, i.e., pre-processing, reconstruction, and augmentation, as shown on Figure 3. **Pre-processing.** Advanced pre-processing aims to reduce the noise before inference. A straightforward approach is Simple Random Sampling (SRS), which random samples a subset of points from the original point cloud as input. Statistical Outlier Removal (SOR) [39] computes KNN distance and removes the points with a large KNN distance. **Reconstruction.** Adversarial examples often result in the absence or alteration of geometric features in the original point cloud. With the development of 3D point cloud reconstruction, some works employed 3D reconstruction networks to improve robustness. We consider two 3D Reconstruction networks in our benchmark: **DUP-Net**[39]: DUP-Net employs the PU-Net [35] as its reconstruction network. The PU-Net utilizes point cloud up-sampling to reconstruct the surface and generate a high-resolution point cloud that captures the missing structures of the object's surface. More experiment results of DUP-Net can be found in the appendix. **IF-Defense**[32]: In contrast to DUP-Net, IF-Defense employs the ConvONet [31] as its reconstruction network. The ConvONet uses the point cloud as input for shape latent code extraction, while the encoder produces a learned implicit function network. By pre-training the implicit model on clean data, the decoder's output space situates on the accurate and complete shape manifold. We present the results of the adversarial robustness evaluation of IF-Defense and ConvONet in the appendix. We find that both reconstruction networks can improve adversarial robustness. Especially, ConvONet, with its superior 3D reconstruction performance, exhibits better adversarial robustness in most attacks. **Augmentation.** The principle of augmentation is aimed at enhancing the robustness of the model when encountering minor noise. 
One notable approach is adversarial training [9], which incorporates adversarial examples into the training phase. Another augmentation method is PointCutmix [37], which utilizes mix-based point cloud to enhance the model's robustness. However, due to the variety of attack types, existing augmentation methods performance poorly against adversarial attacks in point cloud. **Hybrid Training.** To enhance the performance of augmentation, we propose a hybrid training method that leverages multiple attack approaches. Especially, hybrid training selects \(k\) attack approaches, including adding, removing, and shifting attacks. For each class in the dataset, the proposed method divides the samples equally into \(k\) parts and applies different attack approaches to each part. Finally, all generated adversarial examples are integrated to augment the training data. The result of adversarial robustness of augmentation methods is reported in the appendix. Our hybrid training achieves the highest level of adversarial robustness among all augmentation methods. In Table 1, we show the accuracy of defense strategies. We find the three components can improve the robustness of adversarial robustness. Based on the aforementioned analyses, we propose a robust defense framework that integrates SOR, hybrid training, and ConvONet. In Table 2, our defense framework demonstrates superior robustness compared to other existing defense strategies. Moreover, in the ablation study presented in the appendix, we demonstrate that all aforementioned modules contribute significantly to the adversarial robustness of our defense framework, with hybrid training being the critical component for enhancing adversarial robustness. These results substantiate our conclusions from the modular analysis and establish our framework as a solid baseline for future research on adversarial robustness. \begin{table} \begin{tabular}{l|c c c} \hline \hline & AOF & GEO3 & SIA \\ \hline w/o defense & 54.54 & 61.26 & 31.40 \\ SOR (Pre-processing) & 68.48 & 73.22 & 59.12 \\ IF-Defense (Reconstruction) & 66.99 & 65.68 & 43.76 \\ Hybrid Training (Augmentation) & 73.43 & 75.45 & 76.26 \\ \hline \hline \end{tabular} \end{table} Table 1: The accuracy of defense strategies. Figure 3: Robust defense framework paradigm. The adversarial robustness of point clouds against various attacks is influenced by three critical components, including pre-processing, reconstruction, and augmentation methods. ## 5 Experiments ### Experimental Setup **Dataset and DNNs.** All of our experiments are conducted commonly on ModelNet40 dataset. ModelNet40 consists of 123,11 CAD models for 40 object classes. In our experiments, we split ModelNet40 dataset into two parts: 9,843 and 2,468 samples for training and testing, respectively. Following [16], we use farthest points sampling (FPS) to uniformly sample 1024 points from the surface of each object as input data. We adopt eight widely used point cloud DNNs as victim models, including PointNet [15], PointNet++ [16], DGCNN [29], PointConv [31], PCT [4], Curvenet [12], PRC [18], and GDANet [34]. For PointNet++ and PointConv, we use the single scale grouping (SSG) strategy. All models are trained without data augmentation. **Attack Settings.** According to the attacker capability setting, we implemented all attacks on the testing dataset and using a PointNet model as surrogate model. It should be note that the hyperparameters of the surrogate model differed from those of the victim models. 
Specifically, We employed 11 different attacks. In the adding points attack [33], we added 100 points, and in the removing points attack [38], we removed 200 points. Regarding the shifting points attack, we utilized a range of methods, including SIA [7], L3A [23], KNN [26], GeoA3 [30], IFGM [9], PGD [9], Perturb [33], AOF [8] and AdvPC [5]. To ensure fair verification, we constrained all Shifting points adversarial examples equally using an \(l_{\infty}-\) normal ball with a radius of 0.16, and we performed untargeted attacks under the same \begin{table} \begin{tabular}{c c|c c c c c c c c c c c} \hline \hline Defense \& (\&\)Acc) & Model & Clean & PGD & SIA & L3A & Drop & AOF & KNN & GeoA3 & AdvPC & Add & IFGM & Perturb \\ \hline \multirow{8}{*}{Ours} & PointNet & 87.36 & 76.70 & 76.26 & 75.04 & 80.79 & 81.60 & 85.53 & 84.04 & 86.22 & 87.28 & 86.26 & 86.35 \\ & PointNet++ & 88.67 & 71.56 & 78.61 & 76.70 & 83.14 & 84.12 & 86.87 & 85.78 & **87.40** & 88.45 & 88.17 & 88.70 \\ & DGCNN & 87.97 & 72.93 & 81.81 & 77.796 & 82.94 & 84.08 & 86.39 & 84.81 & 86.71 & 88.17 & 87.76 & 87.32 \\ & Pointconv & 87.12 & 70.14 & 80.67 & 76.50 & 83.79 & 81.77 & 86.30 & 85.53 & 86.83 & 87.88 & 87.52 & 86.95 \\ & PCT & 83.27 & 73.91 & 78.32 & 75.77 & 76.86 & 79.66 & 82.41 & 80.71 & 81.60 & 82.78 & 82.21 & 83.71 \\ & Curvenet & 88.57 & 76.13 & 80.26 & 78.16 & 83.67 & 83.59 & 87.20 & 84.97 & 86.14 & 88.05 & 86.99 & 87.88 \\ & RPC & 88.86 & 74.11 & **82.66** & **78.20** & 82.98 & **84.48** & **87.93** & 85.49 & 87.38 & 88.01 & 88.45 & 88.13 \\ & GDANet & 88.57 & 75.32 & 80.79 & 77.31 & 83.59 & 83.75 & 86.59 & 84.44 & 86.63 & 88.65 & 86.95 & 87.28 \\ \hline \hline \multirow{8}{*}{Hybrid Training} & PointNet & 88.57 & 80.15 & 53.08 & 50.28 & 77.55 & 73.34 & 64.71 & 75.45 & 84.12 & 83.39 & 85.17 & 87.64 \\ & PointNet++ & 89.75 & 77.39 & 52.80 & 57.74 & 85.74 & 79.85 & 74.68 & 82.74 & 86.08 & 85.45 & 88.29 & 89.00 \\ & DGCNN & 89.47 & **81.40** & 66.29 & 61.14 & 86.14 & 80.19 & 80.59 & 82.74 & 87.28 & 87.76 & 88.53 & 89.95 \\ & Pointconv & 90.19 & 80.06 & 45.30 & 53.77 & **86.47** & 69.65 & 81.69 & 83.31 & 85.13 & 88.82 & **89.93** & **90.28** \\ & PCT & 87.44 & 45.95 & 75.97 & 76.70 & 72.69 & 78.57 & 85.86 & 83.02 & 86.35 & 87.12 & 87.86 & 87.24 \\ & Current & 87.16 & 44.57 & 76.22 & 76.00 & 74.15 & 81.48 & 85.45 & 83.43 & 86.14 & 87.40 & 87.84 & 86.35 \\ & RPC & 85.45 & 53.85 & 76.46 & 74.80 & 70.30 & 80.31 & 83.95 & 82.17 & 84.60 & 84.52 & 85.90 & 84.68 \\ & GDANet & 87.24 & 38.74 & 79.01 & 75.04 & 72.16 & 80.71 & 85.41 & 82.86 & 85.21 & 86.43 & 86.87 & 86.67 \\ \hline \hline \multirow{8}{*}{IF-Defense} & PointNet & 85.33 & 44.89 & 68.60 & 68.76 & 65.19 & 75.28 & 82.46 & 82.74 & 83.79 & 85.41 & 82.86 & 85.01 \\ & PointNet++ & 87.52 & 38.61 & 72.45 & 72.33 & 73.01 & 78.28 & 85.56 & 81.77 & 85.66 & 87.20 & 85.37 & 87.12 \\ & DGCNN & 87.88 & 40.32 & 77.92 & 76.05 & 71.48 & 80.35 & 85.78 & 85.41 & 85.49 & 86.75 & 85.86 & 87.32 \\ & Pointconv & 85.53 & 28.61 & 78.24 & 73.66 & 73.87 & 75.89 & 84.85 & 84.24 & 84.48 & 85.58 & 84.20 & 85.37 \\ & PCT & 88.33 & 45.83 & 75.24 & 73.45 & 72.85 & 79.42 & 85.86 & 85.29 & 86.35 & 87.16 & 85.94 & 86.83 \\ & Curvenet & 88.33 & 45.38 & 76.18 & 75.45 & 74.19 & 80.51 & 85.45 & 86.02 & 86.14 & 88.01 & 86.75 & 86.67 \\ & RPC & 88.05 & 40.44 & 76.13 & 73.62 & 73.58 & 77.43 & 83.95 & **86.06** & 84.60 & 87.72 & 85.90 & 87.64 \\ & GDANet & 87.93 & 38.05 & 81.65 & 75.45 & 72.37 & 80.92 & 85.41 & 85.94 & 85.21 & 87.72 & 86.10 & 86.87 \\ \hline \hline \multirow{8}{*}{Ours} & PointNet & 86.95 & 
**Defense Settings.** For SRS [39], we randomly dropped 500 points from the input point cloud. To perform SOR [39], we first computed, for each point, the average distance to its \(k\)-nearest neighbors and subsequently removed points if the average distance exceeded the threshold of \(\mu+\alpha\cdot\sigma\), where \(\mu\) and \(\sigma\) are the mean and standard deviation, respectively, and \(k\) and \(\alpha\) are hyperparameters. We set the hyperparameters to be \(k=2\) and \(\alpha=1.1\). For IF-Defense [32], we chose ConvOnet [13], which achieved superior performance for most attacks in their experimental results. In adversarial training, all victim models were trained on clean data and adversarial examples generated by PGD with \(l_{\infty}=0.20\). For hybrid training, we combined adding independent points, the saliency map attack, and PGD with adversarial training. In the adding independent points attack, we added 200 points to the point cloud. In the saliency map attack, we removed 300 points from the point cloud based on their saliency map. In PGD, we set the perturbation budget to \(l_{\infty}=0.20\). **Evaluation Metrics.** To evaluate the imperceptibility of generated adversarial examples, we adopt Chamfer Distance (CD) and Hausdorff Distance (HD) as distance metrics for each adversarial example in our study. (1) HD: measures the distance between two point clouds in a metric space by computing the nearest original point for each adversarial point and outputting the maximum square distance among all nearest point pairs, as shown below: \[\mathcal{D}_{H}(X,\hat{X})=\max_{\hat{x}\in\hat{X}}\min_{x\in X}\|x-\hat{x}\|_{2}^{2}, \tag{5}\] (2) CD: CD is similar to HD but takes an average rather than a maximum. It is defined as: \[\mathcal{D}_{C}(X,\hat{X})=\frac{1}{\|\hat{X}\|_{0}}\sum_{\hat{x}\in\hat{X}}\min_{x\in X}\|x-\hat{x}\|_{2}^{2}. \tag{6}\] Moreover, for methods generating adversarial examples, we use the attack success rate to evaluate their effectiveness. (3) Attack Success Rate (ASR): it computes the attack success rate against defense strategies. For defense strategies, we use defense accuracy (ACC) to evaluate their adversarial robustness. (4) ACC: it measures the accuracy of defense strategies against attack methods. ### Experimental Results **Point Cloud Adversarial Robustness Leaderboard.** Following the process of Figure 2, we evaluate the performance of attacks vs. defenses. An illustrative example of the leaderboard for point cloud adversarial robustness is presented in Table 2, where the attacks and defenses are ranked based on their respective average attack success rate and average defense accuracy. **1)** The effectiveness of defense strategies may vary depending on the models and attacks they are applied to. In Figure 4, we examine the adversarial robustness of 5 defense strategies across various attacks. Our results reveal that while hybrid training exhibits a high defense accuracy against SIA, Drop, and AOF attacks, it performs poorly against KNN and L3A attacks.
In addition, we explore the defense accuracy of defense strategies with different victim models under the same attack, as depicted in Figure 5. We find that IF-Defense has a large defense performance gap between PointConv and DGCNN. To obtain more convincing results, we recommend that researchers comprehensively evaluate the adversarial robustness of defense strategies by subjecting them to a wide spectrum of attacks and victim models. Such evaluations are essential for accurately evaluating the generalization capabilities of defenses and promoting their practical viability. **2)** Among current point cloud DNNs, it has been observed that models incorporating advanced grouping operations, such as Curve grouping in Curvenet and frequency grouping in GDANet and RPC, exhibit superior performance against various attacks. This performance superiority can potentially be attributed to the high expressiveness of these models' architectures. **3)** Some defense methods, such as SOR, show worse performance than the no-defense model. There are two reasons for this phenomenon. On the attack side, some early attacks (e.g., Perturb and IFGM) exhibit poor transferability; thus, differences in the training settings of the target model can degrade the ASR. On the defense side, some defense methods that modify the shape of the point cloud (e.g., SOR and ConvONet) also affect the classification performance. In some cases, these defensive modifications may degrade the model performance more significantly than the early adversarial attacks, resulting in worse performance than no defense. Figure 4: Adversarial robustness of 5 defense strategies under the AGF, L3A, Drop, AOF, KNN, and GeoA3 attacks with PointNet. Figure 5: Adversarial robustness of 5 defense strategies under the AGF attack with different victim models. The complete leaderboard is provided in the appendix. The leaderboard is dynamic and subject to modification with the advent of more potent attacks, defenses, or models. We will analyze the effectiveness, transferability, and imperceptibility of adversarial examples in the following. **Attack Effectiveness.** In Table 2, we observe that the effectiveness of the adding points attack is considerably low, indicating that the adding points attack struggles to affect the performance of existing models. Furthermore, the average success rate of most shifting points attacks is below 25%, implying that the majority of existing shifting attacks fail to significantly degrade point cloud DNNs. This indicates that most of the previous works may not be applicable in the real world. Therefore, future research should prioritize designing more practical attack methods that take into account real-world situations. **Attack Transferability and Imperceptibility.** In the benchmark, attackers do not have knowledge about the victim model, which makes the transferability of adversarial examples crucial. To evaluate the transferability of adversarial examples, we selected three widely used point cloud DNNs, PointNet, PointNet++, and DGCNN, as surrogate models. Adversarial examples generated on these surrogate models were tested on all victim models. The transferability results are presented in the appendix. All adversarial examples are tested without any defense strategies, and the transferability was ranked based on the average attack success rate. Furthermore, we evaluate the imperceptibility of adversarial examples by calculating the Hausdorff distance and Chamfer measurement, respectively.
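For reference, the two imperceptibility metrics from Eqs. (5) and (6) can be computed with a few lines of NumPy; this is only an illustrative sketch, where `orig` and `adv` are assumed to be arrays of shape (N, 3) and (M, 3) holding the original and adversarial point clouds.

```python
import numpy as np

def pairwise_sq_dists(adv, orig):
    # (M, N) matrix of squared Euclidean distances between adversarial and original points
    diff = adv[:, None, :] - orig[None, :, :]
    return np.sum(diff ** 2, axis=-1)

def hausdorff_distance(adv, orig):
    # Eq. (5): nearest original point for each adversarial point, then the worst case
    return pairwise_sq_dists(adv, orig).min(axis=1).max()

def chamfer_distance(adv, orig):
    # Eq. (6): the same nearest-neighbour distances, averaged over the adversarial points
    return pairwise_sq_dists(adv, orig).min(axis=1).mean()
```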
The imperceptibility results are also presented in the appendix. We ranked the adversarial examples based on the average of the Hausdorff distance and the Chamfer measurement. After observing the transferability and imperceptibility results, we identified several attacks, such as GeoA3, IFGM, Perturb, and Add, that produce highly imperceptible adversarial examples but show poor transferability, indicating a trade-off between imperceptibility and transferability. Therefore, how to balance the transferability and imperceptibility of adversarial examples is a potential research direction. ### Ablation Study and New Findings In this section, we present an ablation study of our proposed defense framework, as illustrated in Figure 6. Specifically, we conduct experiments by selectively removing individual defense components and evaluating the resulting adversarial robustness against adversarial examples, such as AOF, GeoA3, and SIA. From the results, we demonstrate that all modules within our defense framework significantly contribute to the overall robustness of the system. Meanwhile, each module contributes differently to robustness. For example, hybrid training combined with the SOR defense achieves almost the same performance as all modules combined, whereas SOR plus ConvONet yields the lowest defense performance, which reveals the significance of hybrid training. **Our New Findings.** We present new findings on the transferability of adversarial examples in 3D point cloud DNNs. Table 2 and the transferability results in the appendix show that the transferability of point cloud adversarial examples is limited compared with 2D adversarial examples. This limitation can be attributed to the unique characteristics of 3D point cloud DNNs. To enable practical use of adversarial examples in the real world, it is necessary to design more transferable adversarial examples. Although hybrid training has demonstrated promising accuracy results, it comes with significantly higher training costs. Therefore, investigating novel techniques that can effectively reduce training costs is a potential research direction. ## 6 Conclusion and Future Direction In this paper, we revisit the limitations of previous point cloud adversarial works and establish a comprehensive, rigorous, and unified benchmark for fair comparison of the adversarial robustness of point cloud DNNs. Moreover, we propose a hybrid training method that combines various adversarial examples, including adding, removing, and shifting, to enhance adversarial robustness. Through analysis of the benchmark results, we propose a more robust defense framework by integrating effective defense modules, achieving state-of-the-art adversarial robustness. The remarkable defense accuracy achieved by ConvOnet demonstrates a direct relationship between the performance of the reconstruction network and the adversarial robustness. Thus, we recommend further investigation and implementation of advanced reconstruction networks to improve adversarial robustness. We highly encourage the community to contribute more advanced point cloud DNNs, attacks, and defenses to enrich future point cloud adversarial robustness benchmarks, benefitting real-world applications. **Acknowledgement** This work was partially supported by the National Science Foundation Grants CRII-2246067, CCF2211163, and the National Natural Science Foundation of China under Grant No. NSFC22FYT45. Figure 6: The ablation study of our new defense framework. All attacks are generated on PointNet. HT: hybrid training.
2309.16815
Diquarks and $Λ^0/π^+$, $Ξ^-/π^+$ ratios in the framework of the EPNJL model
The applicability of the effective models to the description of baryons and the behaviour of ratios of strange baryons to pions is discussed. In the framework of the EPNJL model, the Bethe - Salpeter equation is used to find masses of baryons, which are considered as diquark-quark state. Baryon melting is discussed at a finite chemical potential and a flavor dependence of the hadronic deconfinement temperature is pointed. It is shown that the description of the diquark-quark state at finite chemical potential is limited due to the occurrence of the Bose condensate. This effect is strongly manifested in the description of light diquarks and baryons. Both $\Lambda^0/\pi^+$ and $\Xi^-/\pi^+$ ratios show a sharp behaviour as functions of $T/\mu_B$ variable, where T and $\mu_B$ are calculated along the melting lines.
A. V. Friesen, Yu. L. Kalinovsky
2023-09-28T19:45:02Z
http://arxiv.org/abs/2309.16815v1
# Diquarks and \(\Lambda^{0}/\pi^{+}\), \(\Xi^{-}/\pi^{+}\) ratios in the framework of the EPNJL model ###### Abstract The applicability of the effective models to the description of baryons and the behaviour of ratios of strange baryons to pions is discussed. In the framework of the EPNJL model, the Bethe - Salpeter equation is used to find masses of baryons, which are considered as a diquark-quark state. Baryon melting is discussed at a finite chemical potential and a flavor dependence of the hadronic deconfinement temperature is pointed out. It is shown that the description of the diquark-quark state at finite chemical potential is limited due to the occurrence of the Bose condensate. This effect is strongly manifested in the description of light diquarks and baryons. Both \(\Lambda^{0}/\pi^{+}\) and \(\Xi^{-}/\pi^{+}\) ratios show a sharp behaviour as functions of the \(T/\mu_{B}\) variable, where T and \(\mu_{B}\) are calculated along the melting lines. ## 1 Introduction In our previous works [1, 2, 3, 4] the peak-like structure in the \(K^{+}/\pi^{+}\) ratio was discussed in the framework of the Polyakov-loop extended Nambu-Jona-Lasinio model (PNJL) and its modifications including the vector interaction. The interest in this structure is due to the search for signals of a phase transition from the hadron phase to quark-gluon plasma (QGP) formation during the heavy ion collision [5, 6]. The quick rise in the \(K^{+}/\pi^{+}\) ratio is associated with the phase transition in the medium, while the jump from the maximum value to the constant valley is explained as the QGP formation during the collision. This is a consequence of the fact that after the deconfinement transition occurs in the system the strangeness yield becomes independent of the collision energy [7, 8, 9, 10]. Recent investigations showed that the \(K^{+}/\pi^{+}\) peak strongly depends on the volume of the system and tends to be less pronounced in small-size systems [7, 11]. The meson-to-meson ratios are widely considered both in theoretical and experimental works, in contrast to the baryon-to-meson ratios, although the latter also have a peak-like structure. In the work [12], in the framework of the thermal model, it was shown that, unlike the \(K^{+}/\pi^{+}\) ratio, the peak for \(\Lambda^{0}/\pi^{+}\) does not disappear with decreasing system size. The choice of the (E)PNJL model for such investigations is conditioned by the possibility to describe within the model both the chiral phase transition and the deconfinement transition, which can give a hint for understanding the nature of the peaks at least quantitatively. As the next step, it is interesting to consider baryon-to-meson ratios in the framework of the model. The controversy over the applicability and the complexity of this task are related to the problem of describing baryons in the frame of NJL-like models. The most detailed and exact description of baryons requires solving the three-body Faddeev equation, which leads to considering baryons as a bound state of a quark and a diquark [13, 14]. The so-called "static approximation" of the Faddeev equation leads to the Bethe-Salpeter equation, which is based on the polarisation loop in the diquark-quark scattering channel [15]. But the diquark-quark structure of baryons makes the description of baryons in a dense medium, where the formation of a Bose condensate occurs and the diquark states melt, far from obvious.
The results of our calculations and a discussion of aspects of the applicability of the model are presented in the last section of the article. ## 2 SU(3) PNJL Lagrangian The complete Lagrangian of the SU(3) PNJL model with the vector interaction and anomaly has the form [16, 15]: \[{\cal L} = \bar{q}\,(\,i\,\gamma^{\mu}\,D_{\mu}\,-\,\hat{m}-\gamma_{0}\mu)\,q \tag{1}\] \[+ \frac{1}{2}\,g_{S}\,\sum_{a=0}^{8}\,[\,(\,\bar{q}\,\lambda^{a}\,q\,)^{2}\,\,+\,\,(\,\bar{q}\,i\,\gamma_{5}\,\lambda^{a}\,q\,)^{2}\,]\] \[- \frac{1}{2}g_{\rm V}\sum_{a=0}^{8}\,[(\bar{q}\gamma_{\mu}\lambda^{a}q)^{2}+(\bar{q}\gamma_{\mu}i\gamma_{5}\lambda^{a}q)^{2}]\] \[- \sum_{\alpha}g_{\rm diq}^{\alpha}\sum_{i,j}\,\left(\bar{q}_{a}\Gamma^{i}_{\alpha}q^{C}_{b}\right)\left(\bar{q}^{C}_{d}\Gamma^{j}_{\alpha}q_{e}\right)\varepsilon^{abc}\varepsilon^{de}_{c}\] \[+ {\cal L}_{\rm det}-{\cal U}(\Phi,\bar{\Phi};T),\] where \(q=(u,d,s)\) is the quark field with three flavours, \(q^{C}\) is the charge conjugated quark field, \(\hat{m}={\rm diag}(m_{u},m_{d},m_{s})\) is the current quark mass matrix, and \(g_{\rm S}\), \(g_{\rm V}\), \(g_{\rm diq}\) are the coupling constants. The entanglement PNJL model (EPNJL) includes the constants \(g_{\rm S},g_{\rm V}\) introduced as functions of T to enhance the coupling between quarks and the gauge field [17, 2]. \(\Gamma^{j}_{\alpha}\) is a product of Dirac matrices \(\gamma^{\mu}\) and Gell-Mann matrices \(\lambda^{\alpha}\), where the index \(\alpha\) describes the type of diquarks. The covariant derivative is \(D^{\mu}=\partial^{\mu}-iA^{\mu}\), where \(A^{\mu}\) is the gauge field with \(A^{0}=-iA_{4}\) and \(A^{\mu}(x)=g_{S}A^{\mu}_{a}\frac{\lambda_{a}}{2}\) absorbs the strong interaction coupling. The Kobayashi - Maskawa - 't Hooft (KMT) interaction is described by the term \[{\cal L}_{\rm det}=g_{D}\,\left\{{\rm det}\,[\bar{q}\,(\,1\,+\,\gamma_{5}\,)\,q\,]+{\rm det}\,[\bar{q}\,(\,1\,-\,\gamma_{5}\,)\,q\,]\,\right\}\] The last term is the effective potential \({\cal U}(\Phi,\bar{\Phi};T)\), expressed in terms of the traced Polyakov loop \(\Phi=N_{c}^{-1}{\rm tr}_{c}\langle L(\vec{x})\rangle\)[18], where \[L(\vec{x})={\cal P}{\rm exp}\left[\int_{0}^{\beta}d\tau A_{4}(\vec{x},\tau)\right],\quad\beta=1/T. \tag{2}\] The effective potential describes the confinement properties (Z\({}_{3}\)-symmetry) and is constructed on the basis of Lattice inputs in the pure gauge sector. In this work, we use the standard polynomial form of the effective potential [15, 4]. The effect of the vector interaction on the position of the critical end point in the phase diagram and on the behaviour of the peak in the \(K^{+}/\pi^{+}\) ratio was discussed in previous works [1, 2, 3, 4]. The grand potential density \(\Omega(T,\mu_{i})\) in the mean-field approximation with \(g_{\rm V}=0\) can be obtained from the Lagrangian density (1) and leads to a set of self-consistent equations: \[\frac{\partial\Omega}{\partial\langle\bar{q}_{i}q_{i}\rangle}=0,\,\,\,\frac{\partial\Omega}{\partial\Phi}=0,\,\,\,\,\frac{\partial\Omega}{\partial\bar{\Phi}}=0, \tag{3}\] where \(\Phi,\bar{\Phi}\) are the Polyakov fields.
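For orientation, the polynomial form of the effective potential can be evaluated with a few lines of Python; note that the coefficient values below are the ones commonly used in the PNJL literature and are an assumption of this sketch rather than parameters quoted in the text.

```python
import numpy as np

# Commonly used coefficients of the polynomial Polyakov-loop potential
# (assumed standard literature values, not taken from this work).
A0, A1, A2, A3 = 6.75, -1.95, 2.625, -7.44
B3, B4 = 0.75, 7.5
T0 = 0.27  # GeV

def b2(T):
    x = T0 / T
    return A0 + A1 * x + A2 * x**2 + A3 * x**3

def polyakov_potential(phi, phibar, T):
    """Polynomial ansatz for U(Phi, Phibar; T), returned in GeV^4."""
    u_over_T4 = (-0.5 * b2(T) * phibar * phi
                 - (B3 / 6.0) * (phi**3 + phibar**3)
                 + (B4 / 4.0) * (phibar * phi)**2)
    return u_over_T4 * T**4
```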
The gap equations for quark masses are: \[m_{i} = m_{0i}-2g_{S}\langle\bar{q}_{i}q_{i}\rangle-2g_{D}\langle\bar{q}_{j}q_{j}\rangle\langle\bar{q}_{k}q_{k}\rangle, \tag{4}\] where \(i,j,k=u,d,s\) are chosen in cyclic order, \(m_{i}\) are the constituent quark masses, and the quark condensates are: \[\langle\bar{q}_{i}q_{i}\rangle = -2N_{c}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{m_{i}}{E_{i}}(1-f^{+}_{\Phi}(E_{i})-f^{-}_{\Phi}(E_{i})) \tag{5}\] with modified Fermi functions \(f^{\pm}_{\Phi}(E_{i})\): \[f^{+}_{\Phi}(E_{i}) = \frac{(\bar{\Phi}+2\Phi Y)Y+Y^{3}}{1+3(\bar{\Phi}+\Phi Y)Y+Y^{3}}\, \tag{6}\] \[f^{-}_{\Phi}(E_{i}) = \frac{(\Phi+2\bar{\Phi}\bar{Y})\bar{Y}+\bar{Y}^{3}}{1+3(\Phi+\bar{\Phi}\bar{Y})\bar{Y}+\bar{Y}^{3}}\, \tag{7}\] where \(Y={\rm e}^{-(E_{i}-\mu_{i})/T}\) and \(\bar{Y}={\rm e}^{-(E_{i}+\mu_{i})/T}\). The polarization loop for mesons is defined as \[\Pi_{ij}=\int\frac{dp}{(2\pi)^{4}}\mathrm{tr}\{S^{i}(\hat{q}_{i},m_{i})\Gamma_{j}S^{j}(\hat{q}_{j},m_{j})\Gamma_{i}\}, \tag{8}\] where \(\Gamma_{i,j}\) are the vertex matrices (Fig.2) and \(S^{i}(\hat{q}_{i},m_{i})=(\hat{q}_{i}+\gamma_{0}(\mu_{i}-iA_{4})-m_{i})^{-1}\) is the \(i\)-flavour quark propagator. The meson mass is obtained from the Bethe-Salpeter equation in the meson rest frame (\(\vec{P}=0\)) \[1-P_{ij}\Pi_{ij}(P_{0}=M,\vec{P}=0)=0\, \tag{9}\] where the function \(P_{ij}\) depends on the type of meson (see details in [19]). For example, for the pion, which is a pseudo-scalar meson, \(P_{ud}=g_{S}+g_{D}\left\langle\bar{q}_{s}q_{s}\right\rangle\). Diquarks are considered as a two-quark system and, to describe the polarization loop in the same way, the "antiquark" is replaced by its charge conjugate propagator. Then two diagrams should be taken into account; however, it can be shown that they give the same result. Polarization loops for diquarks are shown in Fig. 2, where \(\mathcal{C}=i\gamma_{0}\gamma_{2}\) is the charge conjugation operator and \(\Gamma_{i,j}\) are the vertex functions. According to group theory, diquarks can be represented by symmetric and antisymmetric wave functions both in colour and flavour spaces. Since diquarks are used to construct baryons, which are "white objects", only diquarks with a colour antisymmetric wave function are considered. According to the interaction type, diquarks can be of scalar, pseudo-scalar, axial-vector and vector types, following the rule that the diquark wave function is totally antisymmetric (see Table 1). The Bethe-Salpeter equation for diquarks in the rest frame is \[1-Z_{\mathrm{diq}}\Pi_{ij}(P_{0}=M_{\mathrm{diq}},\vec{P}=0)=0, \tag{10}\] \begin{table} \begin{tabular}{c c c c c} \hline \hline \(\Gamma\) & Meson type & Possible mesons & Diq. type & Possible diq. \\ \hline \(\mathrm{i}\gamma_{5}\) & pseudoscalar & \(\pi,\,\mathrm{K}\) & scalar & \\ \(1\) & scalar & \(\sigma,\,K_{0}^{*}\) & pseudoscalar & \((ud)\), \((us)\), \((ds)\) \\ \(\gamma^{\mu}i\gamma_{5}\) & axial-vector & \(a_{1}^{*},\,K_{1}^{*}\) & vector & \\ \hline \(\mathrm{i}\gamma^{\mu}\) & vector & \(\rho,\,K^{*}\) & axial-vector & \([ud]\),\([us]\),\([ds]\),\([uu]\),\([dd]\),\([ss]\) \\ \hline \hline \end{tabular} \end{table} Table 1: List of mesons and diquarks. Figure 2: Polarization loops for diquarks, "\(\mathcal{C}\)" is the charge conjugation operator.
with polarization operators corresponding to Fig.2 \[\Pi^{(1)}_{ij} = \int\frac{dp}{(2\pi)^{4}}tr\{S^{i}(\hat{q_{i}},m_{i})\Gamma_{j}S^{jC}(\hat{q_{j}},m_{j})\Gamma_{i}\}, \tag{11}\] \[\Pi^{(2)}_{ij} = \int\frac{dp}{(2\pi)^{4}}tr\{S^{iC}(\hat{q_{i}},m_{i})\Gamma_{j}S^{j}(\hat{q_{j}},m_{j})\Gamma_{i}\}, \tag{12}\] which give the same result; \(S^{iC}(\hat{q_{i}})=(\hat{q_{i}}-\gamma_{0}(\mu_{i}+iA_{4})-m_{i})^{-1}\) is the propagator of the charge conjugated quark and \(Z_{\rm diq}\) is the coupling constant for diquarks: \(Z_{\rm diq}=g^{s}_{\rm diq}\) for scalar and pseudoscalar diquarks and \(Z_{\rm diq}=g^{s}_{\rm diq}/4\) for vector and axial-vector diquarks. According to the Lagrangian Eq. (1) and the Fierz transformation, the coupling constant \(g^{s}_{\rm diq}\) is related to \(g_{\rm S}\) as \(g^{s}_{\rm diq}=3g_{\rm S}/4\) and is usually chosen as \(g^{s}_{\rm diq}\sim(0.705-0.75)g_{\rm S}\)[16, 15]. The description of baryons is a more complicated task, since they are complex structures of three quarks coupled through the exchange of gluons. Thus, the modelling of a three-body system is required and the Faddeev equation has to be considered. However, some simplifications of the Faddeev equation allow one to consider the baryon as a diquark-quark bound state [16, 13, 14, 15]. Considering the static approximation for the four-point interaction leads to the loop structure of the transition matrix and the matrix Bethe-Salpeter-like equation for the baryon mass: \[1-\Pi_{i(D)}(k_{0},\vec{k})\cdot Z_{ij}=0, \tag{13}\] where the constant \(Z_{ij}\) is defined as \[Z_{ij}=\frac{g_{ik}g_{jk}}{m_{k}}, \tag{14}\] where \(g_{ij}\) is the diquark-quark coupling \(D_{ij}\to q_{i}q_{j}\) and \(g_{ud}\) includes the factor (-2) [15]. In the same way as for diquarks, two diquark-quark loops should be taken into account; as shown in [15], they give the same result: \[\Pi^{(1)}_{i(D)} = \int\frac{dp}{(2\pi)^{4}}tr\{S^{i}(\hat{q_{i}},m_{i})\Gamma_{j}S^{jC}_{D}(\hat{q_{j}},m_{j})\Gamma_{i}\}, \tag{15}\] \[\Pi^{(2)}_{i(D)} = \int\frac{dp}{(2\pi)^{4}}tr\{S^{iC}(\hat{q_{i}},m_{i})\Gamma_{j}S^{j}_{D}(\hat{q_{j}},m_{j})\Gamma_{i}\}. \tag{16}\] It should be noted here that the axial-diquark contribution to the members of the baryon octet is neglected [15, 20]. ## 4 Numerical results In previous works [1, 2, 3] a detailed study of \(K/\pi\) ratios was carried out in the framework of PNJL-like models. As the collision energy \(\sqrt{s_{\rm NN}}\) never appears in effective models, a trick with fitting \(\sqrt{s_{\rm NN}}\) by the pair \((T,\mu_{B})\) from the statistical model was used. In the statistical model, the temperature and the baryon chemical potential of freeze-out are assigned to each collision energy (e.g., as suggested by Cleymans et al. [12]). Supposing that the chiral phase transition line in the EPNJL model corresponds to the freeze-out, the K/\(\pi\) ratio can be considered as a function of a new variable \(T/\mu_{B}\) instead of \(\sqrt{s_{\rm NN}}\), where \((T,\mu_{B})\) are taken along the phase transition line. The phase diagram has a classic structure with a smooth crossover at low chemical potentials and the first order chiral phase transition at high chemical potential. The PNJL model has a crossover temperature \(T_{c}=0.27\) GeV, higher than the Lattice prediction \(T_{c}\sim 0.17\) GeV.
An extended version of the PNJL model (EPNJL) with \(g_{\rm V}(T)\), \(g_{S}(T)\) was introduced to reduce the critical temperature of the crossover to a lower value, \(T_{c}^{\rm EPNJL}=0.18\) GeV, due to the enhanced interaction between quarks and the gauge sector [4]. For a more detailed study, the meson masses were calculated both in the Bethe-Salpeter and the Beth-Uhlenbeck approaches; the latter is preferable for considering mesons in hot and dense matter, since it takes into account their spectral functions and correlations. Figure 3: The baryon loop function. For the effective models, the ratio of the particle numbers can be calculated in terms of the ratio of the number densities: \[n=d\int_{0}^{\infty}p^{2}dp\frac{1}{e^{\beta(\sqrt{p^{2}+m^{2}}\mp\mu)}\pm 1}, \tag{17}\] where \(d\) is the corresponding degeneracy factor, the upper sign in the denominator refers to fermions, the lower one to bosons, and \(\beta=T^{-1}\). The pion chemical potential is a phenomenological parameter and was chosen as a constant; the baryon chemical potential is calculated as the sum of the chemical potentials of the constituent quarks. The degeneracy factors are calculated as \((2s+1)(2I+1)\): for \(\Lambda^{0}\) it is 2 and for \(\Xi^{-}\) it is 4. Fig. 4 shows contour graphs for the \(K^{+}/\pi^{+}\) and \(K^{-}/\pi^{-}\) ratios obtained in the Beth-Uhlenbeck approach of the EPNJL model with \(g_{V}=0.6g_{S}\)[3]. The black lines show the phase transition (crossover) lines. It can be seen that, when shifting along the phase transition line from low to high temperature, the trajectory shows a quick enhancement and then a fall for \(K^{+}/\pi^{+}\) and a smooth increase for the \(K^{-}/\pi^{-}\). The results in Fig. 4 are presented for the case with fixed pion chemical potential \(\mu_{\pi}=0.147\) GeV. In order to reproduce the experimental data, the dependence of the pion and the strange quark chemical potentials on the variable \(x=T/\mu_{B}\) should be introduced. The expressions should describe the increase in the pion chemical potential with \(x\) and the decrease in the strange quark chemical potential, respectively. For their \(x\)-dependence, functions of the Woods-Saxon form are suggested [2]: \[\mu_{\pi}(x) = \mu_{\pi}^{\min}+\frac{\mu_{\pi}^{\max}-\mu_{\pi}^{\min}}{1+\exp(-(x-x_{\pi}^{\rm th})/\Delta x_{\pi})}, \tag{18}\] \[\mu_{s}(x) = \frac{\mu_{s}^{\max}}{1+\exp(-(x-x_{s}^{\rm th})/\Delta x_{s})}. \tag{19}\] Parameters in Eqs.(18), (19) were obtained from fitting the experimental data (see for details [3, 2]). The best parameter values for the EPNJL model are \(\mu_{\pi}^{\max}=107\pm 10\) MeV, \(\mu_{\pi}^{\min}=92\) MeV, \(x_{\pi}^{\rm th}=0.409\), \(\Delta x_{\pi}=0.00685\). For \(\mu_{s}\), the parameter values are \(\mu_{s}^{\max}/\mu_{u}^{\rm crit}=0.205\), \(x_{s}^{\rm th}=0.223\), \(\Delta x_{s}=0.06\). In the left panel of Fig. 5, \(K^{+}/\pi^{+}\) (black lines) and \(K^{-}/\pi^{-}\) (red lines) are shown as functions of \(T/\mu_{B}\) obtained in the Beth-Uhlenbeck approach for the EPNJL model with \(g_{\rm V}=0\) and \(\mu_{s}\) and \(\mu_{\pi}\) calculated according to Eqs.(18, 19). Thin lines correspond to the case when \(\mu_{s}=0\) and fixed \(\mu_{\pi}\). The shaded region corresponds to the error band due to normalization to high-\(x\) RHIC and LHC data. The behaviour of the potentials is shown in the right panel of Fig. 5. Figures 4 and 5 demonstrate that the "horn" structure in the \(K^{+}/\pi^{+}\) ratio is less sensitive to the structure of the phase diagram and more sensitive to the properties of the medium.
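To make explicit how such ratios follow from Eq. (17), the short Python sketch below evaluates the thermal number density numerically and forms a baryon-to-meson ratio; the temperature, chemical potentials and the example call are illustrative placeholders rather than values extracted from the model, and constant prefactors cancel in the ratio.

```python
import numpy as np
from scipy.integrate import quad

def number_density(m, T, mu, d, fermion=True):
    """Thermal number density in the spirit of Eq. (17): d * int p^2 dp / (exp((E - mu)/T) +- 1)."""
    sign = 1.0 if fermion else -1.0
    def integrand(p):
        E = np.sqrt(p * p + m * m)
        return p * p / (np.exp((E - mu) / T) + sign)
    n, _ = quad(integrand, 0.0, 50.0 * T)  # the integrand is negligible far above the thermal scale
    return d * n

# Illustrative call (GeV units): Lambda^0 (m = 1.116, d = 2, fermion) over pi^+ (m = 0.140, d = 1, boson)
T, mu_B, mu_pi = 0.150, 0.300, 0.100
ratio = (number_density(1.116, T, mu_B, d=2, fermion=True)
         / number_density(0.140, T, mu_pi, d=1, fermion=False))
print(ratio)
```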
At \(g_{\rm V}=0.6g_{\rm S}\) the phase diagram has a smooth crossover transition at high density instead of the first order transition obtained when \(g_{\rm V}=0\). Nevertheless, the ratio keeps the "horn" structure. Changing the matter properties by modelling the chemical potentials for pions and the \(s\)-quark leads to the possibility of reproducing the experimental data. The present work is devoted to the description of baryons and the \(\Xi^{-}/\pi\), \(\Lambda^{0}/\pi\) ratios within this kind of model. According to Eq. (13), the Bethe-Salpeter equation for baryons has the matrix form \(\det(1-Z\Pi)=0\), where for \(\Lambda\): \[\Pi^{\Lambda}=\begin{bmatrix}\Pi_{(ds)u}&0&0\\ 0&\Pi_{(us)d}&0\\ 0&0&\Pi_{(ud)s}\end{bmatrix},\quad Z^{\Lambda}=\begin{bmatrix}0&Z_{ud}&Z_{us}\\ Z_{du}&0&Z_{ds}\\ Z_{su}&Z_{sd}&0\end{bmatrix}\] and for \(\Xi\) \[\Pi^{\Xi}=\Pi_{(ds)s},\ \ \ \ \ Z^{\Xi}=Z_{ds}, \tag{20}\] where the functions \(\Pi\), \(Z\) are presented in Eqs. (14-16). The calculations were performed with the parameter set \(m_{u0}=m_{d0}=4.75\) MeV, \(m_{s0}=0.147\) GeV, \(\Lambda=0.708\) GeV, \(g_{S}\Lambda^{2}=1.922\), \(g_{D}\Lambda^{5}=10.0\), \(g_{\rm V}=0\), \(g_{\rm diq}=0.725g_{\rm S}\). The choice of the parameter set was driven by the requirement to have the proton and \(\Lambda\) masses below the threshold \(M_{D}+m_{q}\). The dissociation temperature for baryons is postulated from their diquark-quark structure. The Mott temperature (\(T_{\rm Mott}^{\rm bar}\)) is the temperature at which the mass of the baryon is equal to the sum of the quark and diquark masses [15, 20, 21]. To avoid the situation when the diquark melts at a lower temperature and the baryon still exists, the baryon deconfinement temperature is chosen as \[T_{\rm dec}^{\rm bar}=\min\{T_{\rm Mott}^{\rm bar},T_{\rm Mott}^{\rm diq}\}.\] Nevertheless, even if the diquark is already unbound due to the Mott effect, the baryon can still be bound as a three-particle state (a so-called "borromean state") [22, 21]. In this case the "dissociation" temperature for baryons should be considered as the temperature at which the baryon melts into three quarks (\(T_{\rm diss}\)). The dissociation boundaries of baryons corresponding to \(\{T_{\rm dec}^{\rm bar},T_{\rm diss}\}\) are shown in Fig. 6 (right panel) with the light-blue shaded area for \(\Lambda\) and the light-green one for \(\Xi\). The red line corresponds to the phase diagram of the EPNJL model with \(g_{\rm V}=0\): the dashed line corresponds to the crossover and the solid line to the first order transition. Figure 5: Left panel: The \(K^{+}/\pi^{+}\) (black lines) and \(K^{-}/\pi^{-}\) (red lines) ratios are shown as functions of \(T/\mu_{B}\). Thin lines correspond to the case when \(\mu_{s}=0\) and fixed \(\mu_{\pi}=0.147\) GeV. Right panel: chemical potentials and pion mass as functions of \(T/\mu_{B}\). As can be seen in Fig. 6, the \(\Xi^{-}\) baryon is described up to \(\mu_{q}\sim 0.37\) GeV, which is higher than \(\mu_{q}\sim 0.27\) GeV for \(\Lambda\). This follows from the fact that \(\Xi^{-}\) is considered as a combination of the scalar \((ds)\) diquark and the s-quark, unlike \(\Lambda\), which is a superposition of \((ud)+s\) and \((ds)+u\) (\((us)+d\)) states. A diquark containing a heavy quark survives at higher values of the chemical potential than light diquarks. The left panel of Fig. 6 shows masses of diquarks (dashed), baryons (solid) and their thresholds \(M_{D}+m_{q}\) (short-dashed) as functions of the chemical potential \(\mu_{q}\).
Light diquarks melt at lower densities (or chemical potentials) due to the formation of the Bose-Einstein condensate. The results for the baryon-to-meson ratios are presented in Fig. 7. Data for \(\Xi^{-}/\pi^{+}\) and \(\Lambda^{0}/\pi^{+}\) were calculated along the lower green and blue curves of the phase diagram (right panel in Fig. 6), which correspond to \(T_{\rm dec}^{\rm bar}\) up to \(\mu_{q}=0.373\) GeV for \(\Xi^{-}\) and 0.27 GeV for \(\Lambda\), and then along the dash-dotted vertical lines, which are taken here as "freeze-out lines" at high chemical potential. Both ratios demonstrate the peak-like behaviour. ## 5 Conclusions The article summarizes our calculations of the ratios of mesons and baryons with strangeness to nonstrange mesons within the framework of the PNJL-like models. The interest in these ratios arises because they show a "horn" structure in their energy dependence, which is supposed to be a signal of deconfinement and may be sensitive to the structure of the phase diagram, including the position of the CEP and TCP [23]. Our works show that the \(K^{+}/\pi^{+}\) ratio is more sensitive to the matter properties than to the phase diagram structure. This work demonstrates that the EPNJL model reproduces the peak-like structure for the \(\Lambda^{0}/\pi^{+}\) and \(\Xi^{-}/\pi^{+}\) ratios, but the validity of this estimation is limited by some features of the description of baryons in the model. Therefore, most of our analysis must primarily be taken as qualitative hints, e.g., about the role of the strange quark chemical potential and the pion chemical potential or the effect of the vector interaction. This work raises several aspects related to the description of baryons as diquark-quark bound states. The first one is associated with the selection of correct model parameters, which would make it possible to obtain the proton and other baryon masses below the threshold value \(M_{D}+m_{q}\). For example, our parameters and the choice of the model variation affect the deconfinement temperatures of baryons. The statistical model and the experiment predict a lower chemical freeze-out temperature for the proton in comparison with that for \(\Xi\). This difference is about 30 MeV [20, 24]. The PNJL model with our parameters shows 20 MeV, while the EPNJL model shows 10 MeV. The second aspect is related to the description of the baryon as a diquark-quark state in the framework of the (E)PNJL model. As noted above, this model usually takes into account only the scalar part in the mass equations, Eqs. (13)-(16), skipping the axial-vector part. Nevertheless, in works [25, 26] it is shown that accounting for the axial-vector part in the mass equations plays an important role in the correct description of the baryon properties [26]. The third aspect concerns the description of baryons as a quark-diquark state at high chemical potential. At low density, quark pairs form tightly bound, localized diquark states which can pick up another quark with the right colour to form a colour-singlet baryon. The rise of the chemical potential (or density) weakens the interaction strength between quarks, and they form weakly bound Cooper pairs in an attractive colour antitriplet channel, leading to the phenomenon of colour superconductivity. However, in dense matter the diquark does not have to be stable in order to form a stable baryon, since the latter can be a bound state of three quarks, the so-called Borromean state [15, 21].
In this situation, further improvements on the more fundamental side, allowing one to include the axial-vector part and to describe the baryon above critical densities, are highly desirable. ## 6 Acknowledgments We acknowledge a discussion with D. Blaschke about the baryon description in the frame of the model. We thank A. Radzhabov for his comments on the quasi-chemical potential for pions and the strange chemical potential in nonequilibrium systems.
2309.12483
PrivAgE: A Toolchain for Privacy-Preserving Distributed Aggregation on Edge-Devices
Valuable insights, such as frequently visited environments in the wake of the COVID-19 pandemic, can oftentimes only be gained by analyzing sensitive data spread across edge-devices like smartphones. To facilitate such an analysis, we present a toolchain called PrivAgE for a distributed, privacy-preserving aggregation of local data by taking the limited resources of edge-devices into account. The distributed aggregation is based on secure summation and simultaneously satisfies the notion of differential privacy. In this way, other parties can neither learn the sensitive data of single clients nor a single client's influence on the final result. We perform an evaluation of the power consumption, the running time and the bandwidth overhead on real as well as simulated devices and demonstrate the flexibility of our toolchain by presenting an extension of the summation of histograms to distributed clustering.
Johannes Liebenow, Timothy Imort, Yannick Fuchs, Marcel Heisel, Nadja Käding, Jan Rupp, Esfandiar Mohammadi
2023-09-21T20:55:29Z
http://arxiv.org/abs/2309.12483v2
# A Toolchain for Privacy-Preserving Distributed Aggregation on Edge-Devices ###### Abstract Valuable insights, such as frequently visited environments in the wake of the COVID-19 pandemic, can oftentimes only be gained by analyzing sensitive data spread across edge-devices like smartphones. To facilitate such an analysis, we present a toolchain for a distributed, privacy-preserving aggregation of local data by taking the limited resources of edge-devices into account. The distributed aggregation is based on secure summation and simultaneously satisfies the notion of differential privacy. In this way, other parties can neither learn the sensitive data of single clients nor a single client's influence on the final result. We perform an evaluation of the power consumption, the running time and the bandwidth overhead on real as well as simulated devices and demonstrate the flexibility of our toolchain by presenting an extension of the summation of histograms to distributed clustering. Keywords: Distributed and Secure Aggregation · Edge-Devices · Differential Privacy · Acoustic Scene Classification (ASC) · Clustering ## 1 Introduction Analyzing huge amounts of data can bring valuable insights in various scenarios. In most cases, such an amount of data is distributed among multiple clients, specifically edge-devices. Edge-devices are devices like smartphones or smartwatches that are limited in their resources, but provide a multitude of sensors, which can be used to collect all sorts of data. However, an increasing awareness of privacy concerns, e.g. supported by the general data protection regulation in the EU [6], requires that the privacy of individuals is protected. To still be able to maximize the value of sensory data while simultaneously providing the necessary degree of privacy protection, we see two approaches, namely privacy-preserving federated learning or specialized aggregation schemes. To the best of our knowledge, frameworks for privacy-preserving federated learning oftentimes do not provide any code [10; 11], or the provided code can only be used to reproduce experimental results and set up a local implementation [3; 7]. The same holds for privacy-preserving aggregation schemes specialized for specific use-cases like heavy hitters [12] or distributed clustering [4]. Even if an implementation is provided, it can only be used to create a local setup. Although frameworks for federated learning and specialized aggregation schemes are an important building block, we argue that there is also the need for a toolchain which can be used to implement the basic setup of distributed learning. This includes an app which is compatible with edge-devices and also a server which coordinates the learning process. Such a setup should lay the foundation for providing the necessary degree of privacy protection and for taking into account the limited resources of edge-devices. In this light, we propose a toolchain that provides the basic setup for distributed learning on edge-devices and enables its rapid realization. This setup includes an app, a server and a communication protocol. First, a user installs the app on their edge-device and the app starts a data collection phase. At a certain point, the data collection stops and the collected data gets pre-processed into a summable format. Then, the server coordinates a secure aggregation to obtain a global result from all the local, pre-processed data of clients.
After the aggregation, the server publishes the global result on a website to facilitate further analyses. To enable secure aggregation, we provide an implementation of a state-of-the-art secure summation protocol [2] such that the server and other parties cannot learn individual inputs. Clients introduce random noise to their local data before the aggregation. By making use of the popular privacy notion _differential privacy_ [5], the local noise addition in combination with the security guarantees of the secure summation protocol suffices to hide the influence of clients in the aggregated sum, i.e. to protect the privacy of clients. The implementation of the data collection, the exact format of the aggregated data and the safeguarding of differential privacy are tailored to the specific use-case. This work is a proof of concept of our toolchain for a given use-case. The use-case is about collecting information about frequently visited environments in a pandemic like COVID-19 [1]. Using our toolchain, clients locally record audio files, determine the surrounding environment via a machine learning technique called _acoustic scene classification_ and aggregate the resulting environment labels into a local histogram. To further enrich the local data, clients also include the number of Bluetooth devices in their surroundings into the local histogram. Then, random noise is added to their local statistic and a secure aggregation takes place based on the provided secure summation protocol. Afterward, the server publishes the final histogram to enable a further analysis by experts. The summed-up noise in combination with the guarantees of the secure summation suffices to protect the privacy of single clients' data. Finally, we evaluate the power consumption, the running time and the bandwidth overhead of the most resource-intensive parts of our toolchain applied to the use-case of environment labels and thereby demonstrate its compatibility with edge-devices. Additionally, we discuss and demonstrate an extension of our toolchain to distributed, differentially private clustering. This naturally extends our use-case in the scenario of the COVID-19 pandemic to extracting not only the most frequent environment labels but sequences of labels. We present how we generated synthetic data, demonstrate that such sequences can indeed be clustered and discuss how this could be implemented. **Contributions.** * We present a toolchain explicitly designed for edge-devices with limited resources to enable the distributed and secure aggregation of sensory data. * We use a state-of-the-art secure summation protocol to prevent any external party and even the central server from learning individual inputs and apply the notion of differential privacy to hide the presence of single clients' data in the final result. * We demonstrate the flexibility of our toolchain by directly incorporating a use-case for improving the information flow in scenarios like a pandemic and discuss a potential extension to distributed clustering. ## 2 Parties We consider the following parties in the context of our toolchain: **Users.** A user collects and stores sensitive data on their device. Thus, users have a vested interest in protecting their own data from unauthorized individuals. Additionally, a device is limited in its resources due to natural constraints imposed by, e.g., smartphones. **Server.** The server acts as an intermediary in our toolchain by providing necessary hyper-parameters and coordinating the aggregation.
In contrast to clients, the server is equipped with high computational resources to efficiently perform the aggregation process. **Third Parties.** Other parties, such as public health officials, researchers, or users, are permitted to analyze the published results. However, they should not be able to learn single clients' data (security) or whether a specific client was part of the aggregation (privacy). ## 3 Edge-Device Toolchain In this section, we give an overview of the individual steps of our aggregation toolchain and present design choices as well as details on the implementation. The specific realization of the individual steps depends on our use-case of extracting frequently visited environments in a pandemic. ### Overview **Local Data Collection & Pre-Processing.** The app of our toolchain automatically collects environment labels over a specific period of time by using the microphone of the device and a machine learning model trained on acoustic scene classification. The environment labels are represented in the form of a histogram. To preserve privacy when dealing with histograms, clients inject random noise into each count of the histograms [5]. **Secure Aggregation.** The locally noised histograms are securely aggregated using a specific secure multiparty computation (SMPC) protocol. This protocol allows clients to collaboratively compute an aggregated sum without revealing individual inputs to the server. After aggregating all local histograms, the noise introduced by each user separately suffices to hide the influence of a single user on the aggregated result. Thus, third parties remain oblivious to individual contributions, safeguarding the privacy of the users. **Publishing.** After the server has aggregated the local histograms in a secure and privacy-preserving way, the resulting global histogram is ready to be published. Besides coordinating the secure aggregation, the server also hosts a website where the final results are made public. Interested third parties such as public health officials or researchers can now gain insights into the overall patterns of visits to different environments. ### Technical Building Blocks This section presents details about the building blocks of our toolchain, namely acoustic scene classification (ASC) for collecting environment labels, a state-of-the-art secure summation protocol and differential privacy to guarantee the necessary degree of privacy. **Acoustic Scene Classification.** The data collection step mainly consists of a machine learning task called acoustic scene classification. Our app regularly records audio files on a user's device. The files are used as input for a neural network trained to predict the environment in which the recording took place. We call this prediction the environment label. To enhance this information, we incorporate the number of Bluetooth devices present at the time of recording, which can be obtained using Bluetooth LE technology. By aggregating local histograms over environment labels and Bluetooth device counts, we generate statistics that are highly valuable to experts. Especially in scenarios like the COVID-19 pandemic, these statistics can contribute to a better understanding of infection waves. **Secure Summation.** We aim to ensure security for individual inputs in the aggregation. Therefore, we employ a secure summation protocol to aggregate the local histograms of individual users into a single global histogram.
For this purpose, we utilize the protocol introduced in [2] which can handle scenarios where some clients drop out during the execution, which is a realistic consideration when dealing with edge-devices. We have implemented the standard version of the protocol in which the server acts as honest-but-curious, which means that the server follows the protocol but tries to infer as much information as possible. The protocol can also be extended in a way such that it is able to tolerate a malicious server cooperating with other malicious clients. **Privacy Protection.** To protect the privacy of clients who take part in the aggregation, we require the aggregation to satisfy the notion of differential privacy (DP). Definition 1 (Differential Privacy (DP)). A randomized algorithm \(M:\mathcal{D}\to A\) is \(\varepsilon\)-differentially private if for any pair of databases \(D_{0},D_{1}\in\mathcal{D}\) that only differ in a single element and all tests \(S\subseteq A\) on the result of \(M\) the following holds: \[\Pr[M(D_{0})\in S]\leq\exp(\varepsilon)\cdot\Pr[M(D_{1})\in S]\] We consider a scenario in which a data set is distributed among clients. The data set contains data points whose attributes can be correlated. Thus, when protecting the privacy of individuals, we aim to hide the influence of entire data points and not single attributes. We consider two neighboring versions of the distributed data set, one data set \(D_{0}\) with all the data points and one data set \(D_{1}\) where the data points of a single client are missing. One of the data sets is used as input for our aggregation algorithm \(M\). An attacker then receives the output of \(M\) and has to decide which data set was used as input. Formally (see Definition 1), we say that if the output distributions of the aggregation \(M(D_{0}),M(D_{1})\) regarding neighboring data sets are similar s.t. their ratio is bounded by \(\exp(\varepsilon)\), then the algorithm satisfies DP. In this way, the degree of privacy protection can be controlled via the parameter \(\varepsilon\), called the privacy budget. Generally, it can be assumed that with \(\varepsilon\leq 1\) a reasonable privacy protection is ensured. To satisfy DP, an algorithm has to be randomized; for further details we refer to [5]. Intuitively, this means that even if an attacker has knowledge of the entire data set except for the data points of a single user, the influence of this user stays hidden and thus the privacy of this user can be protected. To hide the influence of a single user's data, their worst-case influence on the final result has to be bounded from above by a finite value. We call this value the sensitivity. When working with environment labels, we restrict the local count per environment label and the count of Bluetooth devices to a maximum of \(c\) and \(b\), respectively. Now, the influence of a single client is bounded and, by adding Laplacian noise scaled by \(\frac{c}{\varepsilon}\) and \(\frac{b}{\varepsilon}\), the summation of environment labels and the number of Bluetooth devices preserves \(\varepsilon\)-DP [8]. In our distributed scenario, the server first publishes the required privacy budget \(\varepsilon\) and then performs an aggregation of all clients' individual data. Although clients locally add the required amount of noise to their counts, directly aggregating all noised counts does not preserve privacy since other parties and the server have direct access to the individual noised inputs.
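The client-side noising just described can be illustrated with a short NumPy sketch; this is only an illustration of the idea, not the toolchain's actual code, the dropout compensation factor anticipates the scaling discussed in the next paragraph, and all names are placeholders.

```python
import numpy as np

def noisy_local_histogram(label_counts, bt_count, c, b, epsilon, theta, rng=None):
    """Clip local counts to c (labels) and b (Bluetooth) and add Laplace noise scaled for dropouts."""
    rng = rng or np.random.default_rng()
    scale = 1.0 / (1.0 - theta)  # compensate for the fraction theta of clients that may drop out
    clipped = np.minimum(np.asarray(label_counts, dtype=float), c)
    noisy_labels = clipped + rng.laplace(0.0, scale * c / epsilon, size=clipped.shape)
    noisy_bt = min(float(bt_count), b) + rng.laplace(0.0, scale * b / epsilon)
    return noisy_labels, noisy_bt
```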
Therefore, we use a secure summation protocol such that the server only obtains the sum of all individual inputs. In this way, we effectively simulate a trusted aggregator, which enables the aggregation to satisfy DP. To account for potential dropouts, where a proportion \(\theta\) of users may drop out, users are required to scale their noise by \(\frac{1}{1-\theta}\). This precaution ensures that even in the worst-case scenario the aggregated noise remains sufficient to achieve differential privacy. ## 4 Evaluation ### Implementation Details The implementation of our toolchain\({}^{1}\) involves several components. The app was developed using Android Studio in Java and Kotlin, while the SMPC functionality was implemented using the Java Native Interface in C++. On the server side, we implemented the secure summation part in C++. Both the app and server components of the protocol rely on the libraries "boost" and "cryptolib" to facilitate their functionality. The web services and the underlying database aspects are implemented using Python3 and the framework Django. Footnote 1: https://anonymous.depen.science/r/DPSeckgPipeline-4169/ ### Experiments To demonstrate the efficiency of our aggregation toolchain, we conduct experiments on specific parts of the toolchain. The results show that being part of the aggregation only introduces a moderate overhead to a client's device. We perform all measurements on Google Pixel 5 smartphones and, to perform secure aggregation, we use simulated clients. **Running Time.** As secure aggregation contributes the most to the running time, we measure the running time of our implementation of the protocol. We let the secure summation run 50 times for three different numbers of simulated clients. The results can be seen in Table 1. Our implementation is practical for a large number of users: The running time for secret sharing and the pseudo-random generator (PRG) evaluation roughly doubles for every order of magnitude. This means our implementation succeeds in maintaining a sub-linear running time (in the number of total clients), which is the main reason for the efficiency of the protocol. **Power Consumption.** Acoustic scene classification (ASC) and secure summation are the central components in our use-case, which is why we measure the respective power consumption. We first measure the baseline power consumption on a Google Pixel 5 smartphone without any active apps. To simulate the application of collecting environment labels in everyday life, we activate the ASC every 5 minutes and once a minute to underline the high consumption of performing inference with a neural network. We also include power measurements for plain secure aggregation of 100 users, which consists of invoking the secure summation protocol once per hour. The plot in Figure 1 displays the power consumption of the four scenarios. It shows that performing ASC once a minute drastically increases the power consumption. However, when comparing the daily routine to the baseline, the power consumption only increases by 5%. This is further reduced if ASC is omitted and the secure summation protocol gets invoked once per hour. In summary, the results show that secure aggregation and ASC without too many activations only increase the power consumption by a tolerable amount.
\begin{table} \begin{tabular}{||c c c c c||} \hline Users & Neighbours & Sharing & PRG Eval. & Total \\ \hline \hline \(10^{3}\) & 83 & 0.017 & 0.033 & 24.5 \\ \hline \(10^{4}\) & 103 & 0.033 & 0.078 & 27.43 \\ \hline \(10^{5}\) & 109 & 0.061 & 0.112 & 28.41 \\ \hline \end{tabular} \end{table} Table 1: Running time on simulated clients (in seconds) for Shamir secret sharing, PRG expansion (AES in counter mode) and the entire protocol. We set the hyper-parameters \(\gamma=1/20,\delta=1/3,\sigma=40,\eta=30\) accordingly. For further details, we refer to [2]. **Bandwidth.** To measure the amount of data a smartphone receives and sends when being part of our toolchain, we make use of a profiler, an in-built tool of Android Studio, for measuring different system resources. A client first collects a few environment labels, creates a histogram and takes part in a secure aggregation of local histograms with around \(10^{4}\) participants in total, which requires a single invocation of the secure summation protocol. In the process, the client is a real smartphone and the other participants are simulated clients. The results show that a client receives and sends around 3 MB of data. This means our implementation requires around 6 MB of data. For comparison: browsing Instagram for 5 minutes required on average 35 MB of data, which is more than \(5\times\) our traffic. The experiments on the bandwidth show that the collection of environment labels in combination with secure aggregation only leads to a small amount of additional traffic. ## 5 Clustering Short Context Traces To demonstrate that our toolchain is not restricted to the aggregation of histograms, we propose an interesting extension to distributed clustering as future work. This is also an extension of our use-case, because with clustering, we are interested not only in frequently visited environments but in frequently occurring sequences of environments, which we call traces. We first demonstrate how to generate a synthetic data set of traces and present results on how differentially private clustering performs on these traces. Finally, we discuss the implementation of the distributed analysis of traces. ### Synthetic Trace Generation Since we are not aware of any real-world data set containing traces, we generate synthetic data first. Specifically, we use random walks to generate synthetic traces. The resulting data set contains two types of traces: The first type represents repetitive daily routines, characterized by common patterns that occur frequently. These patterns have a high occurrence rate and mirror societal behavior. The second type of traces captures individual movement patterns that occur from time to time but are not representative of the population. ### Evaluation To demonstrate that traces can be aggregated by using a differentially private clustering algorithm, we apply such an algorithm in the local scenario. Specifically, we use the diffprivlib [9] Python library, which provides a differentially private clustering algorithm based on KMeans. Our objective is to identify traces that represent frequently occurring daily routines. In our evaluation, we vary the privacy budget \(\varepsilon\) and the number of random walks, both affecting the accuracy of the clustering. Each random walk can be considered as a single individual generating environment labels through their daily activities, resulting in multiple traces. The results of our evaluation are depicted in Figure 2.
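As a small illustration of this step, the sketch below runs differentially private KMeans from diffprivlib on placeholder trace data; the number of clusters, the privacy budget, and the data bounds are assumptions for the example, not settings reported in the evaluation.

```python
import numpy as np
from diffprivlib.models import KMeans

# Placeholder traces: one row per fixed-length sequence of environment-label indices
rng = np.random.default_rng(0)
traces = rng.integers(0, 10, size=(500, 6)).astype(float)

# Differentially private KMeans with an explicit privacy budget and known data bounds
dp_kmeans = KMeans(n_clusters=5, epsilon=1.0, bounds=(0.0, 9.0))
dp_kmeans.fit(traces)
labels = dp_kmeans.predict(traces)
centers = dp_kmeans.cluster_centers_
```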
For benchmarking purposes, we also compare the differentially private KMeans clustering implementation to a non-differentially private version. The privacy-preserving clustering algorithm is able to detect traces representing daily routines. This demonstrates that frequently occurring traces can be obtained via clustering while simultaneously enforcing privacy protection. ### Integration To integrate the analysis of traces into our toolchain, one can utilize the differentially private distributed clustering algorithm LSH-Splits [4]. It can be implemented in a distributed manner solely based on secure summation, which makes it compatible with our toolchain.

Figure 1: Battery level on a Google Pixel 5 over 12 hours for idle-mode (Baseline), ASC every 5 minutes (daily routine), ASC every 60 seconds and plain secure summation (SecSum) executed once per hour.

Figure 2: Accuracy of clustering sequences of environment labels without (blue) and with privacy protection for various privacy budgets.
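As a concrete illustration of Section 5, the sketch below generates random-walk traces of the two kinds described above and clusters them with diffprivlib's differentially private KMeans [9] in the local scenario. The label-space size, trace length, privacy budget, and number of clusters are illustrative assumptions, not the exact settings of our evaluation.

```python
import numpy as np
from diffprivlib.models import KMeans  # differentially private KMeans [9]

rng = np.random.default_rng(0)
NUM_LABELS = 8   # distinct environment labels (illustrative)
TRACE_LEN = 5    # length of a short context trace (illustrative)

def routine_trace(routine):
    """Frequently occurring pattern: a shared daily routine plus small jitter."""
    return np.clip(routine + rng.integers(-1, 2, TRACE_LEN), 0, NUM_LABELS - 1)

def individual_trace():
    """Individual random walk over the label space, not representative of the population."""
    trace = [int(rng.integers(0, NUM_LABELS))]
    for _ in range(TRACE_LEN - 1):
        trace.append(int((trace[-1] + rng.integers(-2, 3)) % NUM_LABELS))
    return np.array(trace)

routine = rng.integers(0, NUM_LABELS, TRACE_LEN)        # e.g. home -> commute -> work ...
data = np.array([routine_trace(routine) for _ in range(900)] +
                [individual_trace() for _ in range(100)])

# Local-scenario DP clustering; epsilon and bounds are illustrative choices.
model = KMeans(n_clusters=3, epsilon=1.0, bounds=(0, NUM_LABELS - 1))
model.fit(data)
print(model.cluster_centers_)  # one centre should sit close to the shared routine
```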
2310.20234
HEDNet: A Hierarchical Encoder-Decoder Network for 3D Object Detection in Point Clouds
3D object detection in point clouds is important for autonomous driving systems. A primary challenge in 3D object detection stems from the sparse distribution of points within the 3D scene. Existing high-performance methods typically employ 3D sparse convolutional neural networks with small kernels to extract features. To reduce computational costs, these methods resort to submanifold sparse convolutions, which prevent the information exchange among spatially disconnected features. Some recent approaches have attempted to address this problem by introducing large-kernel convolutions or self-attention mechanisms, but they either achieve limited accuracy improvements or incur excessive computational costs. We propose HEDNet, a hierarchical encoder-decoder network for 3D object detection, which leverages encoder-decoder blocks to capture long-range dependencies among features in the spatial space, particularly for large and distant objects. We conducted extensive experiments on the Waymo Open and nuScenes datasets. HEDNet achieved superior detection accuracy on both datasets than previous state-of-the-art methods with competitive efficiency. The code is available at https://github.com/zhanggang001/HEDNet.
Gang Zhang, Junnan Chen, Guohuan Gao, Jianmin Li, Xiaolin Hu
2023-10-31T07:32:08Z
http://arxiv.org/abs/2310.20234v1
# HEDNet: A Hierarchical Encoder-Decoder Network for 3D Object Detection in Point Clouds ###### Abstract 3D object detection in point clouds is important for autonomous driving systems. A primary challenge in 3D object detection stems from the sparse distribution of points within the 3D scene. Existing high-performance methods typically employ 3D sparse convolutional neural networks with small kernels to extract features. To reduce computational costs, these methods resort to submanifold sparse convolutions, which prevent the information exchange among spatially disconnected features. Some recent approaches have attempted to address this problem by introducing large-kernel convolutions or self-attention mechanisms, but they either achieve limited accuracy improvements or incur excessive computational costs. We propose HEDNet, a hierarchical encoder-decoder network for 3D object detection, which leverages encoder-decoder blocks to capture long-range dependencies among features in the spatial space, particularly for large and distant objects. We conducted extensive experiments on the Waymo Open and nuScenes datasets. HEDNet achieved higher detection accuracy on both datasets than previous state-of-the-art methods, with competitive efficiency. The code has been released. ## 1 Introduction Learning effective representations from sparse input data is a key challenge for 3D object detection in point clouds. Existing point-based methods [1; 2; 3; 4; 5] and range-based methods [6; 7; 8; 9; 10] either suffer from high computational costs or exhibit inferior detection accuracy. Currently, voxel-based methods [11; 12; 13; 14; 15] dominate high-performance 3D object detection. The voxel-based methods partition the unstructured point clouds into regular voxels and utilize sparse convolutional neural networks (CNNs) [11; 12; 16; 17; 18; 19] or transformers [13; 14; 15] as backbones for feature extraction. Most existing sparse CNNs are primarily built by stacking submanifold sparse residual (SSR) blocks, each consisting of two submanifold sparse convolutions [20] with small kernels. However, submanifold sparse convolutions maintain the same sparsity between input and output features, and therefore hinder the exchange of information among spatially disconnected features. Consequently, models employing SSR blocks face challenges in effectively capturing long-range dependencies among features. One potential solution is to replace the submanifold sparse convolutions in the SSR block with regular sparse convolutions [21]. However, this leads to a significant decrease in feature sparsity as the network deepens, resulting in substantial computational costs. Recent research has investigated the utilization of large-kernel sparse CNNs [12; 16] and transformers [14; 15] to capture long-range dependencies among features. However, these approaches have either demonstrated limited improvements in detection accuracy or come with significant computational costs. Thus, the question remains: _is there an efficient method that enables sparse CNNs to effectively capture long-range dependencies among features?_ Revisiting backbone designs in various dense prediction tasks [13; 22; 23; 24; 25; 26], we observe that the encoder-decoder structure has proven effective in capturing long-range dependencies among features. These methods typically use a high-to-low resolution backbone as an encoder to extract multi-scale features and design different decoders to recover high-resolution features that can model long-range relationships.
For instance, PSPNet [24] incorporates a pyramid pooling module to capture both local and global contextual information by pooling features at multiple scales. SWFormer [13] integrates a top-down pathway into its transformer backbone to capture cross-window correlations. However, the utilization of the encoder-decoder structure in designing sparse convolutional backbones for 3D object detection has not yet been explored, to the best of our knowledge. In this work, we propose a sparse encoder-decoder (SED) block to overcome the limitations of the SSR block. The encoder extracts multi-scale features through feature down-sampling, facilitating information exchange among spatially disconnected regions. Meanwhile, the decoder incorporates multi-scale feature fusion to recover the lost details. A hallmark of the SED block is its ability to capture long-range dependencies while preserving the same sparsity between input and output features. Since current leading 3D detectors typically rely on object centers for detection [27; 28], we further adapt the 3D SED block into a 2D dense encoder-decoder (DED) block, which expands the extracted sparse features towards object centers. Leveraging the SED block and DED block, we introduce a hierarchical encoder-decoder network named HEDNet for 3D object detection in point clouds. HEDNet can learn powerful representations for the detection of large and distant objects. Extensive experiments were conducted on the challenging Waymo Open [29] and nuScenes [30] datasets to demonstrate the effectiveness of the proposed HEDNet on 3D object detection. HEDNet achieved impressive performance, with a 75.0% L2 mAPH on the Waymo Open _test_ set and a 72.0% NDS on the nuScenes _test_ set, outperforming prior methods that utilize large-kernel CNNs or transformers as backbones while exhibiting higher efficiency. For instance, HEDNet was 50% faster than DSVT, the previous state-of-the-art transformer-based method, with 1.3% L2 mAPH gains. ## 2 Related work ### 3D object detection in point clouds For 3D object detection in point clouds, methods can be categorized into three groups: point-based, range-based, and voxel-based. Point-based methods [1; 2; 3; 4; 5] utilize the PointNet series [31; 32] to directly extract geometric features from raw point clouds and make predictions. However, these methods require computationally intensive point sampling and neighbor search procedures. Range-based methods [6; 7; 8; 9; 10] convert point clouds into pseudo images, thus benefiting from the well-established designs of 2D object detectors. While computationally efficient, these methods often exhibit lower accuracy. Voxel-based approaches [11; 17; 18; 19] are currently the leading methods for high-performance 3D object detection. Most voxel-based methods employ sparse CNNs that consist of submanifold and regular sparse convolutions with small kernels to extract features. Regular sparse convolutions can capture distant contextual information but are computationally expensive. On the other hand, submanifold sparse convolutions prioritize efficiency but sacrifice the model's ability to capture long-range dependencies. ### Capturing long-range dependencies for 3D object detection To capture long-range dependencies for 3D object detection, recent research has explored solutions such as large-kernel sparse convolutions [33; 16] and self-attention mechanisms [13; 14; 15]. 
However, directly applying plain large-kernel CNNs for 3D representation learning can lead to problems such as overfitting and reduced efficiency. Weight-sharing strategies have been proposed to mitigate overfitting, like LargeKernel3D [12] and Link [16], however, they still suffer from low efficiency. Other methods, such as SST [14] and DSVT [15], utilize transformers as replacements for sparse CNNs. SST employs a single-stride sparse transformer to preserve high-resolution features without using down-sampling operators that may cause a loss of detail. Similarly, DSVT employs a single-stride sparse transformer and performs window-based self-attention sequentially along the X-axis and Y-axis. While both large-kernel CNNs and transformers aim to capture long-range dependencies, they either achieve comparable performance or exhibit lower efficiency compared with sparse CNNs. In contrast, our proposed HEDNet effectively captures long-range dependencies with the help of encoder-decoder blocks while achieving competitive inference speed compared with existing methods. ### Encoder-decoder networks for dense prediction The encoder-decoder structure has been extensively investigated in various dense prediction tasks. For example, the FPN-series [22; 34; 35; 36] incorporates lightweight fusion modules as decoders to integrate multi-scale features extracted from image classification backbones. DeeplabV3+ [25] employs an atrous spatial pyramid pooling module to combine low-level features with semantically rich high-level features. SWFormer [13] introduces a top-down pathway into its transformer backbone to capture cross-window correlations. However, there is limited exploration of the encoder-decoder structure in the design of sparse CNNs. Most voxel-based approaches [17; 18; 27; 28] rely on high-to-low resolution sparse CNNs to extract single-scale high-level features. Part-A2-Net [37] adopts the UNet [38] to extract features, but it performs detection using the encoder of UNet while utilizing the decoder for the auxiliary segmentation and part prediction tasks. In this study, we propose HEDNet, which primarily consists of encoder-decoder blocks to effectively capture long-range dependencies. ## 3 Method ### Background The sparse CNNs adopted by most voxel-based methods [18; 27; 28] are primarily built by stacking SSR blocks, each consisting of two submanifold sparse (SS) convolutions [20]. In addition, they usually insert regular sparse (RS) convolutions [21] into the stacked SSR blocks to reduce the resolution of feature maps progressively (similar to ResNet [39]). SS convolution and SSR block.We present the structure of a single SSR block in Figure 1 (a). Two SS convolutions are sequentially applied to the input feature map, with skip connections incorporated between the input and output feature maps of the SSR block. The _sparsity of feature map_ is defined as the ratio of the regions that are _not_ occupied by valid (nonzero) features to the total area of the feature map. SS convolution only operates on valid features, allowing the output feature map of the SSR block to maintain the same sparsity as the input feature map. However, this design hinders the Figure 1: Comparison among SSR block (a), RSR block (b), and our SED block (c). The ‘Skip conn.’ denotes the skip connection, and the orange dashed lines represent the convolution kernel space. Valid features have non-zero values. Expanded and empty features have zero values. 
In (b), convolution is applied to both valid and expanded features, _i.e._,the convolution kernel center traverses the regions covered by these features. The red dashed square highlights the regions from which the output feature marked by a star can receive information. In (c), we adopt a 3\(\times\)3 RS convolution _with a stride of 3_ for feature down-sampling (Down) as an example. UP denotes feature up-sampling. exchange of information among spatially disconnected features. For instance, in the top feature map, the output feature marked by a star cannot receive information from the other three feature points outside the red dashed square in the bottom feature map (marked by the red triangles). This poses a challenge for the model in capturing long-range dependencies. RS convolution and RSR block.One possible solution to problem is to replace the SS convolutions in the SSR block with RS convolutions. We call this modified structure regular sparse residual (RSR) block and illustrate its structure in Figure 1 (b). RS convolution operates on both valid and expanded features [21]. Expanded features correspond to the features that fall within the neighborhood of the valid features. Taking a 2D RS convolution with a kernel size of 3\(\times\)3 as an example, the neighborhood of a certain valid feature consists of the eight positions around it. This design leads to an output feature map with a lower sparsity compared with the input feature map. Stacking RS convolutions reduces the feature sparsity dramatically, which in turn leads to a notable decrease in model efficiency compared with using SS convolutions. This is why existing methods [18; 27; 28] typically limit the usage of RS convolution to feature down-sampling layers. ### SED and DED blocks SED block.SED block is designed to overcome the limitations of SSR block. The fundamental idea behind this design is to reduce the spatial distance between distant features through feature down-sampling and recover the lost details through multi-scale feature fusion. We illustrate a two-scale SED block in Figure 1 (c). After feature down-sampling, the spatially disconnected valid features in the bottom feature map are integrated into the adjacent valid features in the middle feature map. An SSR block is subsequently applied to the middle feature map to promote interaction among valid features. Finally, the middle feature map is up-sampled to match the resolution of the input feature map. _Note that the feature up-sampling layer (UP) only up-samples features to the regions covered by the valid features in the input feature map._ As a result, the proposed SED block can maintain the same sparsity between input and output feature maps. This characteristic prevents the introduction of excessive computational costs when stacking multiple SED blocks. The architecture of a three-scale SED block is presented in Figure 2 (a). The SED block adopts an asymmetric encoder-decoder structure similar to UNet [38], with the encoder responsible for extracting multi-scale features and the decoder sequentially fusing the extracted multi-scale features with the help of skip connections. 
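Before formalizing the SED block, the sketch below shows the SSR building block it reuses. It assumes the spconv v2 PyTorch interface (SubMConv3d, SparseSequential, replace_feature) and illustrative layer settings; it is a sketch of the structure in Figure 1 (a), not the released HEDNet code.

```python
import torch.nn as nn
import spconv.pytorch as spconv  # assumes the spconv v2 API

class SSRBlock(spconv.SparseModule):
    """Submanifold sparse residual block: two 3x3x3 submanifold convolutions with
    a skip connection; the set of valid (non-empty) sites is left unchanged."""

    def __init__(self, channels, indice_key):
        super().__init__()
        self.conv1 = spconv.SparseSequential(
            spconv.SubMConv3d(channels, channels, 3, padding=1, bias=False,
                              indice_key=indice_key),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
        )
        self.conv2 = spconv.SparseSequential(
            spconv.SubMConv3d(channels, channels, 3, padding=1, bias=False,
                              indice_key=indice_key),
            nn.BatchNorm1d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv2(self.conv1(x))
        # residual addition acts on features only, so sparsity is preserved
        return out.replace_feature(self.relu(out.features + x.features))
```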
Figure 2: Architecture of the SED block (a) and DED block (b). As an example, we illustrate blocks of three scales. Both designs share the same structure. \(\mathrm{F}_{1}\)/\(\mathrm{F}_{2}\)/\(\mathrm{F}_{3}\)/\(\mathrm{F}_{4}\)/\(\mathrm{F}_{5}\) are the names of the corresponding feature maps. The number in parentheses indicates the resolution ratio of the corresponding feature map relative to the block input. The SED block is capable of processing both 2D and 3D features, depending on whether 2D or 3D sparse convolutions are used.

Given the input feature map X, the function of the SED block can be formulated as follows: \[\mathrm{F}_{1}=\mathrm{SSR}^{m}(\mathrm{X}) \tag{1}\] \[\mathrm{F}_{2}=\mathrm{SSR}^{m}(\mathrm{Down}_{1}(\mathrm{F}_{1})) \tag{2}\] \[\mathrm{F}_{3}=\mathrm{SSR}^{m}(\mathrm{Down}_{2}(\mathrm{F}_{2})) \tag{3}\] \[\mathrm{F}_{4}=\mathrm{UP}_{2}(\mathrm{F}_{3})+\mathrm{F}_{2} \tag{4}\] \[\mathrm{F}_{5}=\mathrm{UP}_{1}(\mathrm{F}_{4})+\mathrm{F}_{1} \tag{5}\] where \(\mathrm{F}_{5}\) denotes the output feature map with the same resolution as the input \(\mathrm{X}\). The resolution ratios of the intermediate feature maps \(\mathrm{F}_{1}\), \(\mathrm{F}_{2}\), \(\mathrm{F}_{3}\), and \(\mathrm{F}_{4}\) relative to the input \(\mathrm{X}\) are 1, 1/2, 1/4, and 1/2, respectively. SSR\({}^{m}\) indicates \(m\) consecutive SSR blocks. We adopt RS convolution as the feature down-sampling layer (Down) and sparse inverse convolution [40] as the feature up-sampling layer (UP). With an encoder-decoder structure, the SED block facilitates information exchange among spatially disconnected features, thereby enabling the model to capture long-range dependencies. DED block.Existing high-performance 3D object detectors [15; 27; 28] usually rely on object centers for detection. However, the feature maps extracted by purely sparse CNNs may have empty holes around object centers, especially for large objects. To overcome this issue, we introduce a DED block that expands sparse features towards object centers, as shown in Figure 2 (b). The DED block shares a similar structure with the SED block but utilizes the widely used dense convolutions instead. Specifically, we replace the SSR block in the SED block with a dense residual (DR) block, which is similar to the SSR block but consists of two dense convolutions. Furthermore, the RS convolution employed for feature down-sampling is replaced with a DR block that has a stride of 2. For feature up-sampling, we replace the sparse inverse convolution with a dense deconvolution. These modifications enable the DED block to effectively expand sparse features towards object centers. ### HEDNet Based on the proposed SED block and DED block, we introduce HEDNet, a hierarchical encoder-decoder network designed for 3D object detection. The architecture of HEDNet is illustrated in Figure 3. Given the raw point clouds, a dynamic VFE module [41] is used to perform voxelization to generate a grid of voxels denoted as \(\mathrm{F}_{0}\). Subsequently, a sparse backbone including two SSR blocks and several SED blocks is employed to extract 3D sparse features. Before being fed into the 2D dense backbone, the sparse features are compressed into dense BEV features like in [18]. The 2D dense backbone, composed of \(n\) DED blocks, is responsible for expanding the sparse features towards object centers. Finally, the output features are fed into the detection head for final predictions.
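As a companion to equations (1)-(5), the following rough three-scale SED forward pass builds on the SSRBlock sketch above, again assuming the spconv v2 interface (SparseConv3d for Down, SparseInverseConv3d for UP); the hyper-parameters are illustrative and this is not the released implementation. The DED block would use the same wiring with dense convolutions, stride-2 DR blocks for down-sampling, and deconvolutions for up-sampling.

```python
class SEDBlock(spconv.SparseModule):
    """Three-scale SED block: encoder with two down-samplings, decoder with sparse
    inverse convolutions and skip connections, as in equations (1)-(5)."""

    def __init__(self, channels, m=2):
        super().__init__()
        self.stage1 = spconv.SparseSequential(*[SSRBlock(channels, "s1") for _ in range(m)])
        self.down1 = spconv.SparseSequential(
            spconv.SparseConv3d(channels, channels, 3, stride=2, padding=1,
                                bias=False, indice_key="d1"),
            nn.BatchNorm1d(channels), nn.ReLU())
        self.stage2 = spconv.SparseSequential(*[SSRBlock(channels, "s2") for _ in range(m)])
        self.down2 = spconv.SparseSequential(
            spconv.SparseConv3d(channels, channels, 3, stride=2, padding=1,
                                bias=False, indice_key="d2"),
            nn.BatchNorm1d(channels), nn.ReLU())
        self.stage3 = spconv.SparseSequential(*[SSRBlock(channels, "s3") for _ in range(m)])
        # inverse convolutions restore exactly the active sites of the matching
        # down-sampling layer, so the skip additions below are index-aligned
        self.up2 = spconv.SparseInverseConv3d(channels, channels, 3, indice_key="d2")
        self.up1 = spconv.SparseInverseConv3d(channels, channels, 3, indice_key="d1")

    def forward(self, x):
        f1 = self.stage1(x)                       # eq. (1)
        f2 = self.stage2(self.down1(f1))          # eq. (2)
        f3 = self.stage3(self.down2(f2))          # eq. (3)
        f4 = self.up2(f3)                         # eq. (4): UP_2(F3) + F2
        f4 = f4.replace_feature(f4.features + f2.features)
        f5 = self.up1(f4)                         # eq. (5): UP_1(F4) + F1
        return f5.replace_feature(f5.features + f1.features)
```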
At a macro level, HEDNet follows a hierarchical structure similar to SECOND [18], where the resolution of feature maps progressively decreases. At a micro level, the SED and DED blocks, key components of HEDNet, employ encoder-decoder structures. This is where the name HEDNet comes from. We adopt SED and DED blocks of three scales for HEDNet by default. ## 4 Experiments ### Datasets and metrics Waymo Opencontains 160k, 40k, and 30k annotated samples for training, validation, and testing, respectively. The metrics for 3D object detection include mean average precision (mAP) and mAP Figure 3: Architecture of the proposed HEDNet. Given the raw point clouds, we first perform voxelization to generate voxels by the VFE module, then employ the 3D sparse backbone and the 2D dense backbone to extract features for the detection head. The number in the bracket denotes the resolution ratio of the corresponding feature map relative to the input. _The RS convolutions for feature down-sampling that follow the feature maps \(F_{1}\), \(F_{2}\), and \(F_{3}\) are omitted for simplicity._ weighted by the heading accuracy (mAPH). Both are further broken down into two difficulty levels: L1 for objects with more than five LiDAR points and L2 for objects with at least one LiDAR point. nuScenesconsists of 28k, 6k, and 6k annotated samples for training, validation, and testing, respectively. Mean average precision (mAP) and nuScenes detection score (NDS) are used as the evaluation metrics. mAP is computed by averaging over the distance thresholds of 0.5m, 1m, 2m, 4m across all categories. NDS is a weighted average of mAP and the other five true positive metrics measuring the translation, scaling, orientation, velocity, and attribute errors. ### Implementation details We implemented our method using the open-source OpenPCDet [49]. To build HEDNet, we set the hyperparameter \(m\) to 2 for all SED and DED blocks and stacked 4 DED blocks for the 2D dense backbone by default. For 3D object detection on the Waymo Open dataset, we adopted the detection head of CenterPoint and set the voxel size to (0.08m, 0.08m, 0.15m). We trained HEDNet for 24 epochs on the full training set (_single-frame_) to compare with prior methods. For ablation experiments in Section 4.4, we trained the models for 30 epochs on a 20% training subset. All models were trained with a batch size of 16 on 8 RTX 3090 GPUs. The other training settings strictly followed DSVT [15]. For 3D object detection on the nuScenes dataset, we adopted the detection head of TransFusion-L and set the voxel size to (0.075m, 0.075m, 0.2m). We trained HEDNet for 20 epochs with a batch size of 16 on 8 RTX 3090 GPUs. The other training settings strictly followed TransFusion-L [28]. ### Comparison with state-of-the-art methods Results on the Waymo Open dataset.We compared the proposed HEDNet with previous methods on the Waymo Open dataset (Table 1). 
On the validation set, HEDNet yielded 1.3% L2 mAP and 1.3% \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & mAP/mAPH & \multicolumn{3}{c|}{Vehicle AP/APH} & \multicolumn{3}{c|}{Pedestrian AP/APH} & \multicolumn{3}{c}{Cyclist AP/APH} \\ & L2 & L1 & L2 & L1 & L2 & L1 & L2 \\ \hline SECOND [18] & 61.0/57.2 & 72.3/71.7 & 63.9/63.3 & 68.7/58.2 & 60.7/51.3 & 60.6/59.3 & 58.3/57.0 \\ PointPillar [19] & 62.8/57.8 & 72.1/71.5 & 63.6/63.1 & 70.6/56.7 & 62.8/50.3 & 64.4/62.3 & 61.9/59.9 \\ Lidar-RCNN [42]\({}^{\dagger}\) & 65.8/61.3 & 76.0/75.5 & 68.3/67.9 & 71.2/58.7 & 63.1/51.7 & 68.6/66.9 & 66.1/64.4 \\ Part-A2-Net [37]\({}^{\dagger}\) & 66.9/63.8 & 77.1/76.5 & 68.5/68.0 & 75.2/66.9 & 66.2/58.6 & 68.6/67.4 & 66.1/64.9 \\ SST [14] & 67.8/64.6 & 74.2/73.6 & 65.5/65.1 & 78.9/69.6 & 70.0/61.7 & 70.6/69.6 & 68.0/66.9 \\ CenterPoint [27] & 68.2/65.8 & 74.2/73.6 & 66.2/65.7 & 76.6/70.5 & 68.8/63.2 & 72.7/71.1 & 69.7/68.5 \\ PV-RCNN [43]\({}^{\dagger}\) & 69.6/67.2 & 78.0/77.5 & 69.4/69.0 & 79.2/73.0 & 70.4/64.7 & 71.5/70.3 & 69.0/67.8 \\ CenterPoint [27]\({}^{\dagger}\) & 69.8/67.6 & 76.6/76.0 & 68.9/68.4 & 79.0/73.4 & 71.0/65.8 & 72.1/71.0 & 69.5/68.5 \\ SWFormer [13] & -/- & 77.8/77.3 & 69.2/68.8 & 80.9/72.7 & 72.5/64.9 & -/- & -/- \\ OcTr [44] & 70.7/68.2 & 78.1/77.6 & 69.8/69.3 & 80.8/74.4 & 72.5/66.5 & 72.7/71.5 & 69.9/68.9 \\ PillarNet-34 [11] & 71.0/68.5 & 79.1/78.6 & 79.0/77.0 & 80.6/74.0 & 72.3/71.2 & 69.7/68.7 \\ AFDetV2 [45] & 71.0/68.8 & 77.6/77.1 & 69.7/69.2 & 80.7/46.6 & 72.2/67.0 & 73.7/72.7 & 71.0/70.1 \\ CenterFormer [46] & 71.1/68.9 & 75.0/74.4 & 69.9/69.4 & 78.6/73.0 & 73.6/68.3 & 72.3/71.3 & 69.8/68.8 \\ LargeKernel3D[33] & -/- & 78.1/77.6 & 69.8/69.4 & -/- & -/- & -/- & -/- \\ PV-RCNN++ [47]\({}^{\dagger}\) & 71.7/69.5 & 79.3/78.8 & 70.6/70.2 & 81.3/76.3 & 73.2/68.0 & 73.7/72.7 & 71.2/70.2 \\ FSD [48]\({}^{\dagger}\) & 72.7/70.5 & 79.5/79.0 & 70.3/69.9 & 83.6/78.2 & 74.4/69.4 & 75.3/74.1 & 73.3/72.1 \\ DSVT-Voxel [15] & 74.0/72.1 & 79.7/79.3 & 71.4/71.0 & 83.7/78.9 & 76.1/71.5 & 77.5/76.5 & 74.6/73.7 \\ HEDNet (ours) & **75.3/73.4** & 81.1/80.6 & **73.2/72.7** & 84.4/80.0 & 76.8/72.6 & 78.7/77.7 & 75.8/74.9 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with prior methods on the Waymo Open dataset (single-frame setting). Metrics: mAP/mAPH (%)\(\uparrow\) for the overall results, and AP/APH (%)\(\uparrow\) for each category. \({}^{\dagger}\): two-stage method. L2 mAPH improvements over the prior best method DSVT-Voxel [15]. HEDNet also outperformed the two-stage models PV-RCNN++ [47] and FSD [48]. More importantly, our method significantly outperformed the transformer-based DSVT-Voxel by 1.7% L2 mAPH on the vehicle category, where the average scale of vehicles is 10 times bigger than pedestrians and cyclists. Results on the nuScenes dataset.We compared HEDNet with previous top-performing methods on the nuScenes dataset (Table 2). On the nuScenes test set, HEDNet achieved impressive results with 72.0% NDS and 67.7% mAP. Compared with TransFusion-L (which adopts the same head as HEDNet), HEDNet showcased significant improvements, with a gain of 1.8% NDS and 2.2% mAP. In addition, on the three categories with large objects, namely bus, trailer (T.L.), and construction vehicle (C.V.), HEDNet outperformed TransFusion-L by 4.1%, 4.7%, and 5.4% mAP, respectively. These results further demonstrate the effectiveness of our method. 
Inference speed.We further compared HEDNet with previous leading methods in terms of detection accuracy and inference speed, as depicted in Figure 4. Remarkably, HEDNet achieved superior detection accuracy compared with LargeKernel3D [12] and DSVT-Voxel [15] with faster inference speed. Note that LargeKernel3D and DSVT-Voxel were developed based on large-kernel CNN and transformer, respectively. All models were evaluated on the same NVIDIA RTX 3090 GPU. ### Ablation studies To better investigate the effectiveness of HEDNet, we constructed two network variants: HEDNet-single and HEDNet-2D. For the HEDNet-single, we replaced all the SED and DED blocks in HEDNet with single-scale blocks, _i.e._,only keeping the first \(m\) SSR/DR blocks in each SED/DED block. For the HEDNet-2D, we replaced all 3D sparse convolutions in HEDNet with 2D sparse convolutions and removed the three feature down-sampling layers that follow the feature maps \(\mathrm{F}_{1}\), \(\mathrm{F}_{2}\), and \(\mathrm{F}_{3}\), following the settings in DSVT [15]. The two SSR blocks after \(\mathrm{F}_{0}\) were also removed. In HEDNet-2D, the resolution of the output feature map of the 2D dense backbone is same as that of the network input \(F_{0}\). We conducted experiments on the Waymo Open dataset to analyze various design choices of HEDNet. All models were trained on a 20% training subset and evaluated on the validation set. #### 4.4.1 Model designs Effectiveness of the SED block.We compared the models built with RSR block, SSR block, and our proposed SED block in Table 3 (a). For the models with RSR/SSR blocks, we replaced the SED blocks in HEDNet with RSR/SSR blocks. The 2D dense backbones in the first three models were removed to fully explore the potential of the three structures. The first model with RSR blocks achieved slightly better results than the second model with SSR blocks but with much higher runtime latency. The third model with SED blocks significantly outperformed the second model with SSR blocks by 1.96% L2 mAPH. Similar gains can be observed in the last two models with DED blocks. Effectiveness of the DED block.The DED block is designed to expand sparse features towards object centers. We compared the models that include different numbers of DED blocks in Table 3(b). The models with DED blocks achieved large improvements over the model without DED blocks. The model with five blocks performed worse than the model with four blocks. The former may be overfitted to the training data. We adopted four DED blocks for HEDNet by default. HEDNet with 2D sparse backbone.We conducted experiments on HEDNet-2D to evaluate the effectiveness of our method with 2D inputs. For the construction of 2D inputs, we set the voxel size to (0.32m, 0.32m, 6m), where the size of 6m in the Z axis corresponds to the full size of the input point clouds. To compare our SED blocks with SSR blocks, we replaced each SED block in HEDNet-2D with 2 SSR blocks or 4 SSR blocks, resulting in two models of different sizes (the first two models in \begin{table} \end{table} Table 3: Ablations on the Waymo Open. \({}^{\dagger}\): with 1 DED block. In (c), ‘back.’ denotes backbone. In (d), the gray line denotes the HEDNet-single, and the blue line denotes the default HEDNet. Table 3 (c)). From Table 3 (c), we can make the following observations. Firstly, the model with 16 SSR blocks achieved similar performance to the model with 8 SSR blocks, indicating that _stacking more SSR blocks could not further boost performance_. 
Secondly, the models incorporating SED blocks showed significant improvements over the models using SSR blocks (at least 1.6% gains on L2 mAPH). This observation demonstrates the effectiveness of our SED block. Thirdly, stacking two DED blocks achieved better performance than using a single one. These results clearly demonstrate the generality and effectiveness of our proposed SED block and DED block. #### 4.4.2 HEDNet versus HEDNet-single We conducted a thorough comparison between the proposed HEDNet and its single-scale variant, HEDNet-single, to explore the effectiveness of the encoder-decoder structure and investigate which objects benefit from HEDNet the most. Please note that the HEDNet is designed to capture long-range dependencies among features in the spatial space, which is the core of this work. **Firstly,** we compared the models built with blocks of different numbers of scales to explore the effectiveness of the encoder-decoder structure. As shown in Table 3 (d), the models with multi-scale blocks significantly outperformed the single-scale variant HEDNet-single (the line in gray color). Using more scales achieved better performance, but introduced higher runtime latency. To strike a balance between accuracy and efficiency, we adopted three-scale blocks for HEDNet by default. **Secondly,** we evaluated the three-scale HEDNet and the HEDNet-single in Table 3 (d) separately for each category and analyzed the results based on the distance range of objects to the LiDAR sensor. We illustrate the accuracy improvements of HEDNet over HEDNet-single at various distance ranges in Figure 5. Firstly, HEDNet showed significant improvements over HEDNet-single on the vehicle category, where the size of vehicles is 10 times larger than that of pedestrians and cyclists. This highlights the importance of capturing long-range dependencies for accurately detecting large objects. Furthermore, HEDNet achieved larger performance gains on distant objects compared with objects closer to the LiDAR sensor across all three categories. We believe this is because distant objects with fewer point clouds require more contextual information for accurate detection. Overall, these results demonstrate the effectiveness of our proposed method in detecting large and distant objects. **Thirdly,** we further present some visualization results of the two models in Figure 6. HEDNet-single exhibited limitations in accurately predicting boxes for large objects and the predicted boxes often Figure 6: Qualitative results on the Waymo Open. The red boxes are annotated by humans. The blue boxes and green boxes are predicted by HEDNet and the HEDNet-single, respectively. Red points correspond to the points that fall inside the human-annotated boxes. HEDNet predicted more precise bounding boxes for the objects marked by red arrows than the single-scale variant HEDNet-single. only covered parts of the objects (see the top row). In addition, when dealing with objects containing a few points, HEDNet-single struggled to accurately estimate their orientations (see the bottom row). In contrast, HEDNet predicted more precise bounding boxes for both scenarios, which we believe is owed to the ability of HEDNet to capture long-range dependencies. ## 5 Conclusion We propose a sparse encoder-decoder structure named SED block to capture long-range dependencies among features in the spatial space. Further, we propose a dense encoder-decoder structure named DED block to expand sparse features towards object centers. 
With the SED and DED blocks, we introduce a hierarchical encoder-decoder network named HEDNet for 3D object detection in point clouds. HEDNet achieved a new state-of-the-art performance on both the Waymo Open and nuScenes datasets, which demonstrates the effectiveness of our method. We hope that our work can provide some inspiration for the backbone design in 3D object detection. LimitationsHEDNet mainly focuses on 3D object detection in outdoor autonomous driving scenarios. However, the application of HEDNet in other indoor applications is still an open problem. AcknowledgementsThis work was supported in part by the National Key Research and Development Program of China (No. 2021ZD0200301) and the National Natural Science Foundation of China (Nos. U19B2034, 61836014) and THU-Bosch JCML center.
2310.20552
Privacy-preserving design of graph neural networks with applications to vertical federated learning
The paradigm of vertical federated learning (VFL), where institutions collaboratively train machine learning models via combining each other's local feature or label information, has achieved great success in applications to financial risk management (FRM). The surging developments of graph representation learning (GRL) have opened up new opportunities for FRM applications under FL via efficiently utilizing the graph-structured data generated from underlying transaction networks. Meanwhile, transaction information is often considered highly sensitive. To prevent data leakage during training, it is critical to develop FL protocols with formal privacy guarantees. In this paper, we present an end-to-end GRL framework in the VFL setting called VESPER, which is built upon a general privatization scheme termed perturbed message passing (PMP) that allows the privatization of many popular graph neural architectures.Based on PMP, we discuss the strengths and weaknesses of specific design choices of concrete graph neural architectures and provide solutions and improvements for both dense and sparse graphs. Extensive empirical evaluations over both public datasets and an industry dataset demonstrate that VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
Ruofan Wu, Mingyang Zhang, Lingjuan Lyu, Xiaolong Xu, Xiuquan Hao, Xinyi Fu, Tengfei Liu, Tianyi Zhang, Weiqiang Wang
2023-10-31T15:34:59Z
http://arxiv.org/abs/2310.20552v1
# Privacy-preserving design of graph neural networks with applications to vertical federated learning ###### Abstract The paradigm of vertical federated learning (VFL), where institutions collaboratively train machine learning models via combining each other's local feature or label information, has achieved great success in applications to financial risk management (FRM). The surging developments of graph representation learning (GRL) have opened up new opportunities for FRM applications under FL via efficiently utilizing the graph-structured data generated from underlying transaction networks. Meanwhile, transaction information is often considered highly sensitive. To prevent data leakage during training, it is critical to develop FL protocols with _formal privacy guarantees_. In this paper, we present an end-to-end GRL framework in the VFL setting called VESPER, which is built upon a general privatization scheme termed _perturbed message passing (PMP)_ that allows the privatization of many popular graph neural architectures. Based on PMP, we discuss the strengths and weaknesses of specific design choices of concrete graph neural architectures and provide solutions and improvements for both dense and sparse graphs. Extensive empirical evaluations over both public datasets and an industry dataset demonstrate that VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets. ## 1 Introduction In recent years, there has been an increasing interest in adopting modern machine learning paradigms to the area of financial risk management (FRM) [31]. The most crucial task in operational risk scenarios like fraud detection is identifying risky identities based on the behavioral data collected from the operating financial platform [4; 24]. For institutions like commercial banks and online payment platforms, the most important source of behavior information is the _transaction records_ between users, making _transaction networks_ (with users as nodes and transactions as edges) a direct and appropriate data model. To exploit the potential of transaction networks in a machine learning context, recent approaches [26; 47] have been exploring the adoption of graph representation learning (GRL) [16] as a principled way of incorporating structural information contained in transaction networks into the learning process. The family of graph neural networks in the message passing form [13; 48] offers a powerful yet scalable solution to GRL, and has become the prevailing practice in industry-scale graph learning [52]. Despite its convincing performance, high-quality network data are not always available for financial institutions. F It is, therefore, of great interest for institutions to learn GRL models _collaboratively_ while being coherent to regulatory structures at the same time. The technique of federated learning (FL) [20, 49] provides a recipe for such scenarios, with participating institutions (hereafter abbreviated as _parties_) exchanging intermediate results instead of raw data. Depending on the specific form of collaboration, FL protocols are generally divided into horizontal federated learning (HFL), where participants aggregate their locally trained models to obtain a strong global model, and vertical federated learning (VFL) where participants are able to align the identifiers of modeling entities and train a model that efficiently combines feature or label information that are distributed among different parties. 
VFL is particularly useful when training a (supervised) model is not possible based on information of a single party, i.e., each party holds only feature or label data, and has attracted significant attention in applications to FRM [28]. While ordinary FL paradigms avoid the transmission of local raw data, they typically lack a formal guarantee of privacy [20, Chapter 4]. Moreover, recent studies have reported successful attacks targeting individual privacy against FL protocols [54, 50, 19, 9, 8]. As transaction records are widely considered extremely sensitive personal information, it is thus critical to establish FL applications in FRM with rigorous privacy guarantees. Differential privacy (DP) [11] is the state-of-the-art approach to address information disclosure that injects algorithm-specific random noise to fuse the participation of any individual. The adoption of DP as the privacy model for FL is now under active development, with most of the applications appearing in HFL over independently identically distributed (i.i.d.) data through the lens of optimization [20]. However, discussions on applying DP over VFL remain nascent [3, 53, 39]. The situation becomes even more complicated in VFL over graph-structured data, since the right notions of (differential) privacy on graphs are semantically different from that in the i.i.d. case [35, 22]. So far, as we have noticed, the only work that provides meaningful DP guarantee under VFL over graphs is the GAP model [39], which requires three stages of training. Meanwhile, a notable aspect of GRL is that the structure of the underlying graph, i.e., whether the graph is dense or sparse, might have a significant influence on the performance of the graph neural model especially when the aggregation process involves noisy perturbations. This phenomenon was overlooked in previous studies. In this paper, we discuss private FL over graph-structured data under the task of node classification in the vertical setup with edge DP [35] chosen as the privacy model. We first develop a general privatization scheme termed _perturbed message passing (PMP)_ that produces message-passing procedures over graphs that are guaranteed to satisfy edge DP constraints. Next, we discuss the influence of the underlying graph's degree profiles on the utility of specific design choices of PMP, using two representative graph aggregation schemes, namely GIN [48] and GCN [23], and develop further improvements of PMP that better handles sparse graphs under the GCN aggregation scheme. Finally, we integrate the developments of PMP and its variants into a VFL architecture called VESPER based on the SplitNN framework [14], and conducted extensive empirical evaluations over both public and industrial datasets covering dense and sparse graphs. We summarize our contributions as follows: * We propose PMP, a general framework for designing differentially private message-passing procedures. PMP enables the privatization of many popular graph neural network architectures. The privacy guarantee of PMP is formally analyzed with new privacy amplification results under uniform neighborhood sampling. * We discuss two representative design choices under the PMP framework, GIN and GCN, and discover the fact that the utility of the privatized GNN model may be affected by the _degree profile_ of the input graph. 
To better accommodate varying graph structures, we develop the truncated message passing framework under the base model of GCN through properly tuning the hyper-parameter that reduces noise scale at the cost of learning less structural information, which is beneficial when the input graph is _sparse_. * We derive an end-to-end VFL learning framework operating over graph-structured data called VESPER, which is efficient in computation and communication. A thorough experimental study demonstrates that VESPER achieves better privacy-utility trade-off over previously proposed models and is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets. ## 2 Methodology ### Preliminaries We focus on the node classification task over a static, undirected graph \(G=(V,E)\) with node size \(N=|V|\), node feature \(X=\{x_{v}\}_{v\in V}\) and node labels \(Y=\{y_{v}\}_{v\in V_{T}}\) where \(V_{T}\subseteq V\) is the set of training nodes with \(N_{T}=|V_{T}|\). Throughout this article, we will assume the graph of interest to be degree bounded, i.e., \[\max_{G}\max_{v\in G}d_{v}\leq D \tag{1}\] for some \(D>1\). In this paper, we will be interested in the setup where the graph data \(G\) and label information are distributed over two distinct parties. Specifically, suppose there are two parties, A (Alice) and B (Bob), where A holds the graph data \(G\) as well as the node feature \(X\) and B holds the label collection \(Y\), both indexed by node identifiers that are known to both sides (i.e., \(V_{T}\) is known to both party A and party B). We consider a representative federated learning paradigm that A and B collaboratively train a graph representation learning model via utilizing the panoply of graph neural networks [13], which could be regarded as a special case of vertical federated learning (VFL) [49]. Under VFL protocols, party A and party B iteratively exchange intermediate outputs depending on the specific training algorithm chosen. A main concern in VFL [20, Chapter 4] is, therefore, whether the exchanging process satisfies formal _privacy_ guarantees. Before elaborating on privacy protection issues, we first state the threat model in our context. **Threat model** We adopt the following threat model in this paper: In the training stage, label party B is curious about the adjacency information (i.e., the existence of some edges) in the data party A. The data party A is assumed to be benign, with both parties strictly obeying the chosen VFL protocol. 1 In other words, the goal of privacy protection is to prevent the _semi-honest_ adversary (party B) from inferring the edge membership that is only known to party A. Footnote 1: The assumption of a harmless party A might be relaxed to a curious onlooker that tries to infer party B’s label information. We discuss related extensions in section D. Differential privacy [11] is now the _de facto_ choice of privacy protection paradigm against membership inference adversaries. As an appropriate solution concept in the current setup, we introduce the edge-level differential privacy model (hereafter abbreviated as Edge DP). 
**Definition 1** (Edge-level differential privacy (Edge DP)).: For a (randomized) graph-input mechanism \(\mathcal{M}\) that maps graphs to some output space \(\mathcal{S}\) and two non-negative numbers \(\epsilon\) and \(\delta\), the mechanism is \((\epsilon,\delta)\)-Edge DP if for any subset \(S\) (or more rigorously defined as Borel measurable sets) of the output space, the following holds uniformly for any two possible adjacent graphs \((G,G^{\prime})\): \[\mathbb{P}[\mathcal{M}(G)\in S]\leq e^{\epsilon}\mathbb{P}[\mathcal{M}(G^{\prime})\in S]+\delta, \tag{2}\] where we define two graphs \(G\) and \(G^{\prime}\) as being adjacent if \(G\) could be edited into \(G^{\prime}\) via adding or removing a single edge. Regarding the capability of the adversary adopted in this paper, a VFL protocol satisfying Edge DP with a reasonable \(\epsilon\) level implies that based on all the exchanged intermediate outputs between party A and party B, any membership inference algorithm may not be able to make any sophisticated guess about the existence of some specific edge in a probabilistic sense, thereby offering strong privacy protection. Most contemporary differentially private machine learning algorithms involve sequentially applying DP procedures to intermediate learning steps [1], with the privacy level of the entire training procedure obtained via composition theorems [11; 21].

Figure 1: A concise pictorial description of the VESPER framework. We use solid arrows to depict the dataflow of forward computations and use dashed arrows to depict the dataflow of backward computations.

In this paper, we choose the composition framework of analytical moment accountant (AMA) [44] that exploits the idea of Renyi DP [33], which we introduce below in our graph learning context: **Definition 2** (Edge-level Renyi differential privacy (Edge RDP)).: Sharing notations with definition 1, the mechanism \(\mathcal{M}\) is \((\alpha,\epsilon(\alpha))\)-Renyi differentially private with some \(\alpha>1\) and \(\epsilon(\alpha)\geq 0\), if for any two possible adjacent graphs \((G,G^{\prime})\), the \(\alpha\)-Renyi divergence of the induced probability distribution of random variables \(\mathcal{M}(G)\) and \(\mathcal{M}(G^{\prime})\) is bounded by \(\epsilon(\alpha)\): \[D_{\alpha}\left(\mathcal{M}(G)||\mathcal{M}(G^{\prime})\right)\leq\epsilon(\alpha), \tag{3}\] with the definition of \(\alpha\)-Renyi divergence \(D_{\alpha}\left(\cdot||\cdot\right)\) presented in appendix A. To develop privacy-preserving learning algorithms under the AMA framework, we first design mechanisms that satisfy RDP guarantee in each step, then use standard composition results of RDP [33] to obtain the privacy level of the learning procedure. Finally, we apply the conversion rule in [2] to convert it back to \((\epsilon,\delta)\)-DP for reporting. **Message passing GNNs with stochastic training** The backbone of our privacy-preserving training framework is the graph neural network model in the message passing form [13]. We define the GNN of interest to be a map from the space of graphs to a node embedding matrix with embedding dimension \(d\): \(f:\mathcal{G}\mapsto\mathbb{R}^{N\times d}\), or \(H:=\{h_{v}\}_{v\in V}=f(G)\). For an \(L\)-layer GNN, let \(h_{v}^{(0)}=g(x_{v})\) be the input encoding of node \(v\), which could be either \(x_{v}\) or some encoding based on \(x_{v}\).
We assume the following recursive update rule for \(1\leq l\leq L\) and \(v\in V\): \[h_{v}^{(l)}=\sigma\left(\widetilde{h}_{v}^{(l)}\right),\quad\widetilde{h}_{v}^ {(l)}=\omega_{v}W_{1}^{(l)}h_{v}^{(l-1)}+\sum_{u\in N(v)}\beta_{uv}W_{2}^{(l)}h _{u}^{(l-1)}, \tag{4}\] with \(\mathbf{\omega}:=\{\omega_{v}\}_{v\in V}\in\mathbb{R}^{N}\) and \(\mathbf{\beta}:=\{\beta_{uv}\}_{u,v\in V\times V}\in\mathbb{R}^{N\times N}\) be model-dependent coefficients, \(\sigma\) a parameter-free nonlinear function, and \(\mathbf{W}=(W_{1}^{(1)},\dots,W_{1}^{(L)},W_{2}^{(1)},\dots,W_{2}^{(L)})\) be the collection of learnable parameters. For any matrix \(W\), we denote \(\|W\|_{\text{op}}\) as the operator norm of the matrix (i.e., its largest singular value). In this paper, we assess two representative instantiations of the protocol (4) which are the GIN model [48] with with \(\omega_{v}\equiv\beta_{uv}\equiv 1,\forall u,v\in V\) and the GCN model [23] with \(\omega_{v}=\frac{1}{d_{v}+1}\) and \(\beta_{uv}=\frac{1}{\sqrt{d_{u}+1}\sqrt{d_{v}+1}}\). For simplicity we additionally let the nonlinearity be the ReLU function and set \(W_{1}^{(l)}=W_{2}^{(l)}=W^{(l)},1\leq l\leq L\). Applying message passing updates (4) may become computationally prohibitive for large input graphs, which are frequently encountered in industrial scenarios. To enable scalable GRL, the prevailing practice is to use graph sampling methods [15] and adopt **stochastic training of graph neural networks**. In this paper, we investigate the simple and effective sampling scheme of uniform neighborhood sampling [15; 7], with the maximum number of neighbors sampled in each layer to be the maximum degree \(D\). Asides from their computational benefits, it has been observed [1; 32] that stochastic training with a low sampling ratio over large datasets is crucial to training high-utility differentially private machine learning models with reasonably small privacy budgets, which has also been recently verified in the case of differentially private graph learning [7; 39]. ### Perturbed message passing A notable fact about the message-passing protocol (4) is that it uses the aggregation strategy of _weighted summation_, thereby allowing standard additive perturbation mechanisms like the Laplace mechanism or Gaussian mechanism that are prevailing in the design of differentially private algorithms [11]. Motivated by this fact, we propose a straightforward solution to privatize message-passing GNNs in a _layer-wise_ fashion named _perturbed message passing (PMP)_, which adds layer-wide Gaussian noise with an additional normalization step that controls sensitivity. We present the pseudo-code of PMP with neighborhood sampling in algorithm 1. Next we discuss the privacy guarantee of algorithm 1. To state our main result, we first define the right notion of sensitivity in our context: **Definition 3** (Edge sensitivity).: Denote \(G^{\prime}\) as the adjacent graph via removing the edge \((u^{*},v^{*})\) from \(G\), and let \(\widetilde{h}_{v}\) and \(\widetilde{h}^{\prime}_{v}\) be the outputs of node \(v\) generated via some \(1\)-layer GNN protocol under graph \(G\) and \(G^{\prime}\) without nonlinearity, then we define the (\(\ell_{2}\)-) _edge sensitivity_ as: \[\mathcal{S}=\max_{G,G^{\prime}}\sqrt{\sum_{v\in V}\|\widetilde{h}_{v}- \widetilde{h}^{\prime}_{v}\|_{2}^{2}}. 
\tag{5}\] The following theorem quantifies the privacy guarantee of algorithm 1: **Theorem 2.1** (RDP guarantee).: _Let \(\mathbf{H}_{L}\) be the released outputs with input a minibatch of \(B\) subgraphs produced by uniform neighborhood sampling for \(L\) layers with a maximum number of \(D\) neighbors sampled in each layer. Define \(\epsilon(\alpha):=\frac{\alpha\sum_{l=1}^{L}\mathcal{S}_{l}^{2}}{2\theta^{2}}\), then \(\mathbf{H}_{L}\) is \((\alpha,\epsilon_{\gamma}(\alpha))\)-RDP for any \(\alpha>1\), where \(\gamma=1-\frac{\binom{N_{T}-\frac{2(D^{L}-1)}{D-1}}{B}}{\binom{N_{T}}{B}}\) and_ \[\epsilon_{\gamma}(\alpha)\leq\frac{1}{\alpha-1}\log\left(1+\gamma^{2}\binom{\alpha}{2}\min\left(4\left(e^{\epsilon(2)}-1\right),\epsilon(2)\min\left(2,\left(e^{\epsilon(\infty)-1}\right)^{2}\right)\right)\right. \tag{6}\] \[+\left.\sum_{j=3}^{\infty}\gamma^{j}\binom{\alpha}{j}e^{(j-1)\epsilon(j)}\min\left(2,\left(e^{\epsilon(\infty)-1}\right)^{j}\right)\right)\] Theorem 2.1 provides a principled way of analyzing the privacy of privatized GNN models using algorithm 1, which boils down to computing the edge sensitivity of the underlying message passing protocol. However, sensitivity computations are usually conducted in a _worst-case_ manner, resulting in unnecessarily large noise levels and significant utility loss. Therefore, it is valuable to explore the utility of concrete PMP models and their relationships with the underlying input graph. To begin our expositions, we analyze the GIN model in the following section. ``` 0: Graph \(G=(V,E)\), input encodings \(\{h_{v}^{(0)}\}_{v\in V}\), number of message passing rounds \(L\), GNN spec \((\omega,\mathbf{\beta},\sigma)\), noise scale \(\theta\), GNN parameter \(\mathbf{W}\), batch size \(B\), maximum degree \(D\). 1: Sample a random batch of root nodes \(v_{1},\ldots,v_{B}\). 2: Apply an \(L\)-layer neighborhood sampler with each layer sampling at most \(D\) nodes with roots \(v_{1},\ldots,v_{B}\), obtaining a batch of \(B\) subgraphs \((G_{v_{1}}^{(L)},\ldots,G_{v_{B}}^{(L)})\). 3: Combine \((G_{v_{1}}^{(L)},\ldots,G_{v_{B}}^{(L)})\) into a subgraph \(G_{B}^{(L)}\). Additionally, overload the notation \(N(v)\) for the neighborhood of node \(v\) with respect to \(G_{v_{B}}^{(L)}\). 4: Set \(h_{v}^{(0)}=\frac{h_{v}^{(0)}}{\left\|h_{v}^{(0)}\right\|_{2}}\) for all \(v\in G_{v_{B}}^{(L)}\). 5: for \(l\in\{1,\ldots,L\}\) do 6: for \(v\in G_{v_{B}}^{(L)}\) do 7: Compute the linear update \(\widetilde{h}_{v}^{(l)}=\omega_{v}W_{1}^{(l)}h_{v}^{(l-1)}+\sum_{u\in N(v)}\beta_{uv}W_{2}^{(l)}h_{u}^{(l-1)}\). 8: Do additive perturbation, \(h_{v}^{(l)}=\sigma(\widetilde{h}_{v}^{(l)}+N(0,\theta^{2}))\). 9: Normalize \(h_{v}^{(l)}=\frac{h_{v}^{(l)}}{\left\|h_{v}^{(l)}\right\|_{2}}\). return A list of all layers' embedding matrices \(\mathbf{H}_{L}=(H^{(1)},\ldots,H^{(L)})\), with \(H^{(l)}=\{h_{v}^{(l)}\}_{v\in G_{v_{B}}^{(L)}},1\leq l\leq L\).
``` **Algorithm 1** PMP with neighborhood sampling ### Analysis of GIN and the challenge of sparse graphs We start with the following proposition: **Proposition 1**.: _Under the GIN model, the edge sensitivity is bounded from above by \(\mathcal{S}_{l}^{\text{GIN}}\leq\sqrt{2}\|W^{(l)}\|_{\text{op}}\) for each \(1\leq l\leq L\)._ **Advantage of layer-wise perturbations** According to proposition 1, the edge sensitivity of GIN is independent of the input graph's maximum degree upper bound \(D\), which is essentially a direct consequence of the fact that for a \(1\) layer message passing procedure, adding or removing one edge would affect up to two nodes' output embeddings. As a consequence, the privacy cost scales linearly with the number of message-passing layers in the Renyi DP framework, thereby offering a better privacy-utility trade-off than algorithms that do the do the perturbation only in the final layer [53], whose privacy cost may scale exponentially with \(D\). **Effectiveness and challenges of summation pooling** It has been observed in previous works [39] that aggregation perturbation with sum pooling works well on graphs with a large average degree. Intuitively, this phenomenon could be understood as keeping a high "signal-to-noise ratio (SNR)" during the aggregation process: For nodes with large degrees, the noise scale becomes relatively small with respect to the summation of incoming messages. Therefore if high-degree nodes are prevalent in the underlying graph, the utility loss during aggregation is reasonably controlled for most nodes. However, realistic graph data might not have large average degrees. For example, transaction networks in FRM scenarios are usually sparse, including many nodes with degrees smaller than \(5\) or even being singular (i.e., of degree \(0\)). Consequently, the SNR of sparse networks makes it harder for summation pooling to maintain decent utility, which will be further verified in section 3. ### Improvements of PMP in the GCN model As discussed in the previous section, the degree profile of the input graph may affect the utility of PMP-privatized GNNs when the underlying aggregation follows the summation pooling scheme. It is therefore of interest to explore aggregation schemes that are more appropriate when the input graph is sparse. On first thought, we may expect aggregation schemes like mean pooling or GCN pooling to have smaller sensitivities. However, such sensitivity reduction does NOT hold in a worst-case analysis: Just think of nodes with degree \(1\), then it is not hard to check that mean pooling or GCN pooling behaves similarly to summation pooling. The primary issue with worst-case analysis is that the resulting sensitivity is determined by extremely _low-degree_ nodes. Inspired by this phenomenon, we seek improvements by first deriving lower sensitivity with an extra requirement on a _degree lower bound_, and then relax the requirement via introducing a modified protocol. We start with the following observation: **Proposition 2**.: _Assume all the possible input graphs have a minimum degree larger or equal to \(D_{\text{min}}\), or_ \[\min_{G}\min_{v\in G}d_{v}\geq D_{\text{min}}>1. 
_Then for the GCN model, the edge sensitivity of the \(l\)-th layer \(\mathcal{S}_{l}^{\text{GCN}}\) is bounded from above by a function \(\eta_{l}(D_{\text{min}})\), defined as:_

\[\eta_{l}(D_{\text{min}})=\sqrt{2}\left(\frac{1-1/D_{\text{min}}}{2D_{\text{min}}}+\frac{1}{D_{\text{min}}(D_{\text{min}}+1)}+\frac{1}{D_{\text{min}}+1}\right)\|W^{(l)}\|_{\text{op}}. \tag{8}\]

Proposition 2 implies that the edge sensitivity of the GCN model shrinks significantly if the underlying graph has a reasonably large minimum degree, which results in a significantly reduced noise scale and hence improved utility. However, the minimum degree assumption (7) is impractical since most realistic graph data have a large number of nodes with small degrees. To circumvent the impracticality of assumption (7) while still being able to reduce the noise scale in the GCN model, we propose a modification to the basic message passing algorithm 1 called _truncated message passing_. The idea of truncated message passing is to block all the incoming messages unless the receiver node's neighborhood is larger than or equal to \(D_{\text{min}}\), which is treated as a hyperparameter. For nodes with degrees lower than \(D_{\text{min}}\), the output embedding is instead produced by an MLP with perturbation that does not involve any edge information. A detailed version is provided in algorithm 2 in appendix F, and a simplified sketch is given below. Consequently, it is straightforward to show that the differential privacy guarantee of the resulting algorithm operating on any graph matches the privacy level of the perturbed GCN (produced by algorithm 1) operating only on graphs satisfying the minimum degree assumption.

**How to choose \(D_{\text{min}}\)?** To maintain the same privacy level under the truncated message passing algorithm, one may reduce the noise scale \(\theta\) at the cost of raising the minimum degree hyperparameter \(D_{\text{min}}\). On the one hand, reducing the noise scale significantly improves the utility of the message-passing procedure. On the other hand, raising \(D_{\text{min}}\) might prevent a non-negligible proportion of nodes from learning structural information. Therefore, properly adjusting \(D_{\text{min}}\) may help achieve a better privacy-utility trade-off in the GCN model. In practice, one may choose \(D_{\text{min}}\) based on prior knowledge about the degree distribution of the underlying graph or via inspecting a private release of its degree distribution, which could be done efficiently using the Laplace mechanism [11].
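The snippet below is a minimal Python sketch of the truncated message passing idea for a GCN-style PMP layer: nodes whose sampled neighborhood is smaller than \(D_{\text{min}}\) ignore edges entirely and fall back to a perturbed self-update, while higher-degree nodes aggregate with GCN normalization; both branches receive the same Gaussian perturbation and re-normalization. The function name, the use of a single weight matrix, and the simple linear fallback are illustrative assumptions; the full protocol is algorithm 2 in appendix F.

```python
import numpy as np

def truncated_pmp_gcn_layer(h, neighbors, W, theta, d_min, rng,
                            activation=np.tanh):
    """Sketch of one truncated PMP-GCN layer.

    Nodes with fewer than `d_min` sampled neighbors do not aggregate over
    edges at all (MLP-style fallback), so the worst-case edge sensitivity is
    governed by nodes of degree >= d_min, which permits a smaller noise scale.
    """
    n, d_out = h.shape[0], W.shape[1]
    h_new = np.zeros((n, d_out))
    for v in range(n):
        deg_v = len(neighbors[v])
        if deg_v >= d_min:
            # GCN-style normalized aggregation over self + neighbors.
            msg = (h[v] @ W) / (deg_v + 1)
            for u in neighbors[v]:
                deg_u = len(neighbors[u])
                norm = np.sqrt((deg_v + 1) * (deg_u + 1))
                msg = msg + (h[u] @ W) / norm
        else:
            # Truncated branch: no edge information is used for this node.
            msg = h[v] @ W
        # Both branches are perturbed with the same Gaussian noise scale.
        msg = msg + rng.normal(0.0, theta, size=d_out)
        out = activation(msg)
        h_new[v] = out / (np.linalg.norm(out) + 1e-12)
    return h_new
```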
### VESPER: an end-to-end learning framework

In previous sections, we have established the PMP framework for differentially private graph representation learning. Now, under the vertically federated learning setup described in section 2.1, we propose an end-to-end architecture inspired by the SplitNN paradigm [14] based on the PMP framework, named **VE**rtically private **S**plit GNN with **PER**turbed message passing (**VESPER**). The VESPER architecture contains three main components: Encoder, Private representation extractor (PRE), and Decoder.

**Encoder** The encoder module maps input node features into a \(d\)-dimensional representation vector, with an ad-hoc choice being an MLP. Note that for node features with additional structural patterns (i.e., sequence data), we may use a more tailored encoder architecture as long as it does not involve edge information. The encoder model is physically stored in party A.

**Private representation extractor** The PRE module takes as input the node embeddings produced by the encoder and a batch of \(B\) subgraphs produced by a neighborhood sampler. The output representation of PRE is computed using some specific type of PMP mechanism such as PMP-GIN or PMP-GCN. The PRE module is physically stored in party A. The output of PRE is a tensor of shape \(B\times d\times L\), with \(d\) and \(L\) being the dimension of graph representation and the number of message passing layers respectively. The outputs will be transmitted from party A to party B.

**Decoder** The decoder module is physically stored in party B, which decodes the received node embeddings produced by PRE into the final prediction of VESPER, with its structure depending on the downstream task (i.e., classification, regression, ranking, etc.). We test two types of decoder architectures in our implementation of VESPER. The first one proceeds via concatenating the node embeddings of all layers followed by an MLP, which we call the CONCAT decoder. The second one treats the node embeddings as a sequence of \(L\) node embeddings and uses a GRU network to combine them, similar to the idea used in GNN architectures like GaAN [25] and GeniePath [30], which we term the GRU decoder.

The VFL training protocol closely resembles the SplitNN protocol [14], where in each step, forward computation results (i.e., the outputs of the PRE module) are transmitted from party A to party B. After party B finishes the forward computation using the decoded outputs and label information, party B first updates its local decoder module via back-propagation, and then sends (partial) gradients that are intermediate results of the backward computation to party A for updating party A's local parameters (i.e., parameters of the encoder module and PRE module). A pictorial illustration of the VESPER architecture is presented in figure 1. We will discuss some practical issues in implementing VESPER in appendix E.1.

## 3 Experiments

In this section we present empirical evaluations of the VESPER framework by investigating its privacy-utility trade-off and resistance to empirical membership inference attacks. Due to limited space, a complete report will be postponed to appendix C.

### Datasets

We use three large-scale graph datasets, with their summary statistics listed in table 2. Specifically, we use two public datasets, ogbn-products and Reddit, with their detailed descriptions postponed to appendix C.1. We additionally use an industrial dataset called **the Finance dataset**, which is generated from transaction records collected from one of the world's leading online payment systems. The underlying graph is generated by treating users as nodes, and two nodes are connected if at least one transaction occurred between the corresponding users within a predefined time period. The business goal is to identify risky users, which is cast into an algorithmic problem of node classification with a binary label. The node features are obtained via statistical summaries of the corresponding users' behavior on the platform during a specific time period. The training and testing datasets are constructed under two distinct time windows with no overlap.

**A differentially private analysis of degree profiles** While all three datasets are large in scale (i.e., with the number of nodes exceeding \(100,000\)), they differ significantly in their degree distributions.
For a better illustration, we conduct a differentially private analysis of degree distribution (with \((0.1,0)\)-differential privacy) detailed in appendix C.2. According to the analysis, we find that both the ogbn-products and the Reddit datasets contain a large portion of high-degree nodes (as illustrated by the spiking bar at the \(\geq 50\) category), while the Finance dataset exhibits a concentration on the lower-degree nodes. As discussed in section 2.2, it is expected that the Finance dataset is more challenging for (private) message passing under sum pooling.

### Baselines

We compare the proposed VESPER framework with three types of baselines, each of which can be implemented in the vertically federated setting.

**MLP without edge information** Using an MLP over the node features directly is the most trivial solution to the learning task, as it totally ignores edge information.

**Non-private GNN counterparts** We compare with ordinary GCN and GIN models without privacy guarantees, or equivalently set the \(\epsilon\) parameter in the VESPER framework to be infinity.

**GNN models with privacy guarantees** We consider two alternative approaches to private GRL, namely the VFGNN model [53] and the GAP model [39]. We found the privacy analysis in the corresponding papers to be somewhat incoherent with the privacy model in our paper, and we conducted a new analysis of their privacy properties, detailed in appendix C.3.

### Experimental setup

Due to limited space, we postpone the description of our training configurations to appendix C.4 and elaborate more on the **privacy configurations**: all the privacy reports are based on the \((\epsilon,\delta)\)-differential privacy model, with \(\delta\) being the reciprocal of the number of edges. To adequately inspect the privacy-utility trade-off, we aim to evaluate all the models with differential privacy guarantees under the total privacy costs (privacy budgets) \(\epsilon\in\{1,2,4,8,16,32\}\), with the privacy costs accounted during the entire training period. We treat the setting where \(\epsilon\in\{1,2\}\) as of _high privacy_, \(\epsilon\in\{4,8\}\) as of _moderate privacy_, and the rest as of _low privacy_. For VESPER and VFGNN, we add spectral normalization to each GNN layer. For the privacy accountant, we base our implementation upon the AMA implementation available in the dp-accounting library and use an adjusted sampling probability according to theorem 2.1. For each required privacy level, we compute the minimum scale of Gaussian noise by conducting a binary search over the adjusted AMA, with the associated spectral norms of the weight matrices fixed at one in all layers.

**Evaluation metrics** We adopt classification accuracy (ACC) as the evaluation metric for the ogbn-products and Reddit datasets, and ROC-AUC score (AUC) as the evaluation metric for the Finance dataset.
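As a sketch of this calibration step, the snippet below binary-searches the smallest Gaussian noise scale \(\theta\) whose accounted privacy cost stays within a target budget. The function `epsilon_for_theta` is a stand-in for the sampling-adjusted accountant (e.g., a subsampled-RDP computation in the spirit of theorem 2.1); it is not an actual dp-accounting API, and the toy accountant at the end is purely illustrative.

```python
def calibrate_noise(target_epsilon, epsilon_for_theta,
                    theta_lo=1e-3, theta_hi=1e3, iters=50):
    """Binary-search the smallest noise scale theta whose accounted privacy
    cost does not exceed `target_epsilon`.

    `epsilon_for_theta(theta)` is a placeholder for the privacy accountant:
    it should return the total epsilon cost of the whole training run
    (all layers and steps, including subsampling amplification) at noise
    scale theta, and must be non-increasing in theta.
    """
    for _ in range(iters):
        mid = 0.5 * (theta_lo + theta_hi)
        if epsilon_for_theta(mid) > target_epsilon:
            theta_lo = mid   # too little noise: privacy cost too high
        else:
            theta_hi = mid   # budget satisfied: try smaller noise
    return theta_hi

# Toy stand-in accountant: epsilon proportional to 1/theta^2 (Gaussian-like).
toy_accountant = lambda theta: 500.0 / theta**2
theta_star = calibrate_noise(target_epsilon=8.0, epsilon_for_theta=toy_accountant)
```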
\begin{table}
\begin{tabular}{l c c c}
\hline \hline
 & \multicolumn{3}{c}{Non-private approaches} \\
\hline
Model & ogbn-products & Reddit & Finance \\
\hline
MLP & \(61.66\) & \(71.67\) & \(71.39\) \\
GIN & \(78.54\) & \(94.85\) & \(79.75\) \\
GCN & \(78.25\) & \(94.56\) & \(80.13\) \\
\hline \hline
\end{tabular}
\end{table}

### Performance and privacy-utility trade-off

According to our empirical experience, obtaining reasonable performance in the _high privacy_ regime is difficult, especially for baseline algorithms. Therefore, we report two sets of results: firstly, we thoroughly investigate the privacy-utility trade-off regarding the proposed VESPER framework under both GIN and GCN aggregation schemes and plot the results in figure 2. Secondly, we report comparisons of VESPER against private and non-private baselines with only moderate to large privacy budgets and summarize the results in table 1. The results demonstrate that the proposed VESPER framework exhibits a competitive privacy-utility trade-off under both GIN and GCN aggregators. Moreover, a comparison of the GIN and GCN aggregators suggests that summation pooling excels when the underlying graph is dense (i.e., ogbn-products and Reddit), while introducing the truncated message passing mechanism helps achieve better results over sparse graphs (i.e., Finance). Finally, VESPER demonstrates a better privacy-utility trade-off compared to other private GNN baselines.

### Protection against membership inference attacks

We launch a membership inference attack (MIA) [37] to empirically investigate the resilience of VESPER against practical privacy risks that target the membership of nodes instead of edges, which is regarded as a stronger attack than edge MIA. We provide a detailed description of the attack setup in appendix C.7. The attack is conducted over trained models under privacy budgets \(\epsilon\in\{1,2,4,8,16,32,\infty\}\), where \(\epsilon=\infty\) indicates that no privacy protection is adopted. We use ROC-AUC (AUC) to evaluate the attack performance. We report the attack performances in Figure 4. From the results, we observe that when privacy protection is disabled (\(\epsilon=\infty\)), the attacks show non-negligible effectiveness, especially on the ogbn-products and Reddit datasets. Generally, as the privacy budget gets smaller (privacy gets stronger), the attack performances sharply decline. With an appropriate privacy budget, the attacks on all three datasets are successfully defended, with AUC reduced to around 0.5 (the random guess baseline).

**Additional experiments** We will report a series of ablation studies that assess the effect of the maximum degree \(D\), the minimum degree \(D_{\text{min}}\) for PMP-GCN, and the batch size in appendix C.8.

## 4 Related Works

### Graph representation learning in the federated setting

The majority of GRL research in the federated setting is based on the horizontal setup, with each party holding its own local graph data [45; 17; 38].
The adoption of the VFL paradigm in GRL is comparatively rare: VFGNN [53] uses additive secret sharing to combine feature information held by different parties, followed by a straightforward adaptation of the SplitNN framework [14] with the underlying neural model being graph neural networks. In [5; 46], the authors discussed VFL setups where node features and graph topology belong to different parties. We refer to the recent survey [27] for a more detailed overview.

### Graph representation learning with differential privacy guarantees

The most straightforward way to integrate DP techniques into GRL is via adopting private optimization algorithms like DP-SGD [1]. However, meaningful notions of differential privacy over graph data (i.e., the edge model [35] and the node model [22]) are semantically different from those over i.i.d. data, and require a refined privacy analysis that is sometimes overlooked in previous works [53; 45; 36]. In [7], the authors analyzed the DP-SGD algorithm in the node DP model. The GAP model [39] proposed a three-stage training procedure and analyzed its privacy guarantee in both the edge DP and node DP models. However, we noticed that the privacy analysis in [39] did not properly address the effect of sampling, resulting in an overly optimistic performance. Considering only edge DP, randomized response (RR) [46], which flips each entry of the underlying graph's adjacency matrix, guarantees privacy (in a stronger _local_ sense), but makes a reasonable privacy-utility trade-off extremely hard to obtain in practice.

## 5 Conclusion and discussions

We present the VESPER framework as a differentially private solution to node classification in the VFL setup using graph representation learning techniques. The core algorithmic component of VESPER is the PMP scheme, which allows efficient learning over both dense and sparse graph data. We demonstrate the practicality and effectiveness of the proposed framework by establishing theoretical DP guarantees as well as investigating its privacy protection and privacy-utility trade-off empirically. We will discuss possible extensions and future directions of the VESPER framework in appendix D.
2309.05841
WALLABY Pilot Survey: the Potential Polar Ring Galaxies NGC~4632 and NGC~6156
We report on the discovery of two potential polar ring galaxies (PRGs) in the WALLABY Pilot Data Release 1 (PDR1). These untargetted detections, cross-matched to NGC 4632 and NGC 6156, are some of the first galaxies where the Hi observations show two distinct components. We used the iDaVIE virtual reality software to separate the anomalous gas from the galactic gas and find that the anomalous gas comprises ~ 50% of the total H i content of both systems. We have generated plausible 3D kinematic models for each galaxy assuming that the rings are circular and inclined at 90 degrees to the galaxy bodies. These models show that the data are consistent with PRGs, but do not definitively prove that the galaxies are PRGs. By projecting these models at different combinations of main disk inclinations, ring orientations, and angular resolutions in mock datacubes, we have further investigated the detectability of similar PRGs in WALLABY. Assuming that these galaxies are indeed PRGs, the detectability fraction, combined with the size distribution of WALLABY PDR1 galaxies, implies an incidence rate of ~ 1% - 3%. If this rate holds true, the WALLABY survey will detect hundreds of new polar ring galaxies.
N. Deg, R. Palleske, K. Spekkens, J. Wang, T. Jarrett, J. English, X. Lin, J. Yeung, J. R. Mould, B. Catinella, H. Dénes, A. Elagali, B. ~-Q. For, P. Kamphuis, B. S. Koribalski, K. Lee-Waddell, C. Murugeshan, S. Oh, J. Rhee, P. Serra, T. Westmeier, O. I. Wong, K. Bekki, A. Bosma, C. Carignan, B. W. Holwerda, N. Yu
2023-09-11T21:54:46Z
http://arxiv.org/abs/2309.05841v2
# WALLABY Pilot Survey: the Potential Polar Ring Galaxies NGC 4632 and NGC 6156 ###### Abstract We report on the discovery of two potential polar ring galaxies (PRGs) in the WALLABY Pilot Data Release 1 (PDR1). These untargetted detections, cross-matched to NGC 4632 and NGC 6156, are some of the first galaxies where the H i observations show two distinct components. We used the iDaVIE virtual reality software to separate the anomalous gas from the galactic gas and find that the anomalous gas comprises \(\sim 50\%\) of the total H i content of both systems. We have generated plausible 3D kinematic models for each galaxy assuming that the rings are circular and inclined at \(90^{\circ}\) to the galaxy bodies. These models show that the data are consistent with PRGs, but do not definitively prove that the galaxies are PRGs. By projecting these models at different combinations of main disk inclinations, ring orientations, and angular resolutions in mock datacubes, we have further investigated the detectability of similar PRGs in WALLABY. Assuming that these galaxies are indeed PRGs, the detectability fraction, combined with the size distribution of WALLABY PDR1 galaxies, implies an incidence rate of \(\sim 1\%-3\%\). If this rate holds true, the WALLABY survey will detect hundreds of new polar ring galaxies. keywords: galaxies: peculiar - radio lines: galaxies ## 1 Introduction Polar ring galaxies (PRGs) - systems which exhibit a ring or disk of material oriented perpendicular to the main disk - are some of the most visually striking objects in the Universe. An extreme among the variety of kinematically-misaligned structures detected in galaxies that includes warps and counter-rotating disks (Serra et al., 2014), PRGs hold important clues for galaxy structure and evolution that range from constraining how galaxies interact (e.g. Bekki, 1997; Bournaud and Combes, 2003; Reshetnikov and Sotnikova, 1997) and accrete their gas (e.g. Maccio et al., 2006; Brook et al., 2008; Khoperskov et al., 2021), to probing the shapes and distributions of the dark matter halos in which they reside (e.g. Sackett and Sparke, 1990; Combes and Arnaboldi, 1996; Khoperskov et al., 2014). Since the discovery of the first putative stellar polar structures around nearby galaxies (Sandage, 1961; Schechter and Gunn, 1978), there have been a number of attempts to measure their incidence in large optical surveys. Whitmore et al. (1990) searched for polar structures around S0 galaxies and generated a catalogue of 157 PRG candidates. They confirmed that PRGs require kinematic followups to determine that the polar material is indeed rotating about the inner galaxy with the same center and with a _large_ inclination with respect to galaxy's plane. PRG candidates are objects where these sorts of kinematic followups have not yet been completed. Based on the fraction of galaxies searched that exhibit potential polar rings (0.5%) and the geometric detectability of these structures depending on their sky projection, Whitmore et al. (1990) estimated 5% of all S0 galaxies may be PRGs. Moiseev et al. (2011) used Galaxy Zoo (Lintott et al., 2008) classifications of Sloan Digital Sky Survey (SDSS) images to find 275 PRG candidates, while Reshetnikov and Mosenkov (2019) also mine SDSS to identify 31 new PRG candidates. These searches suggest that, in contrast to milder kinematic misalignments (e.g. 
Garcia-Ruiz et al., 2002; Ann and Park, 2006; Ristea et al., 2022), stellar PRGs are rare within the galaxy population as a whole: their incidence is on the order of \(\sim 10^{-3}\)(Reshetnikov et al., 2011), and polar structures around red systems are about twice as common as those around blue systems (Smirnov and Reshetnikov, 2022). It is worth noting that relatively few PRG candidates have been modelled in detail to confirm that they are likely to host polar structures. Moreover such modelling may indicate that the stellar structure is more consistent with being an extreme warp than a truly distinct ring (e.g. J\(\acute{o}\)zsa et al., 2009). Such a distinction is largely a matter of semantics given that both extreme symmetric warps, polar rings/disks, and inclined rings/disks are part of the larger continuum of symmetric kinematically misaligned objects. These likely have similar origins and may form an evolutionary sequence (i.e. PRGs and inclined rings may transform into warps; Brook et al., 2008). Atomic hydrogen (H i) structures exhibiting extreme kinematic misalignments relative to their host galaxies have also been discovered. Many early studies targetted stellar PRGs to look for H i counterparts, finding co-located gas rings (e.g. van Gorkom et al., 1987) or disk (Brook et al., 2008; Dzudzar et al., 2021). Similarly, De Rijcke et al. (2013) found a possible H i ring about the FCC046 dwarf elliptical galaxy in the Fornax cluster, while Bait et al. (2020) found an offset H i ring around the massive quiescent galaxy AGC 203001. Another interesting example is the irregular galaxy NGC 6822, where the gas is coincident with a young stellar disk that is inclined at \(~{}\sim 60^{\circ}\) to an extended spheroid with an older population (Demers et al., 2006; Zhang et al., 2021). While gas may be found around stellar PRGs during followup observations, there are also systems in which polar H i components are seen without a corresponding stellar counterparts down to SDSS depths. For example, Stanonik et al. (2009) found a single H i disk perpendicular to the stellar disk of the void galaxy SDSS J102819.24+623502.6, while the H i ring of the SB0 galaxy NGC 4262 coincides with a series of UV-bright knots (Buson et al., 2011). While these are individual galaxies, larger H i surveys are also uncovering more objects with single H i components that are misaligned from the stellar structures. Serra et al. (2012) examined the H i content of 166 early-type galaxies from the ATLAS\({}^{\rm 3D}\) survey (Cappellari et al., 2011). They found three galaxies with single-component H i disks that have position angles at \(\sim 90^{\circ}\) to the stellar components. While no stellar counterpart is seen with the H i it is worth noting that the optical data may not yet be deep enough to see such low surface brightness features. In addition to these single component H i polar structures, there are a number of galaxies where the H i shows multiple components. Schiminovich et al. (2013) observed the elliptical'shell' galaxies Arp 230 and MCG -5-7-1, identifying and kinematically modelling both inner and outer H i components. Translating their inclination and position angle fits for the inner and outer disks suggests that the Arp 230 outer structure is inclined at \(\sim 60^{\circ}\) to the inner disk. For MCG -5-7-1, the inner and outer structures are less distinct kinematically, which may be due to the observations of that galaxy having a lower signal-to-noise ratio (\(S/N\)). More recently, Cao et al. 
(2022) found two galaxies in the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA; Bundy et al., 2015) survey harbouring inner and outer gaseous disks, that are both misaligned from the stellar component, in a sample of 84 galaxies with large dispersions in their gaseous kinematic position angles but with well-measured stellar position angles. In this particular case they use H\(\alpha\) observations of the ionized gas rather than H i. In this paper, we report on two galaxies from the WALLABY Pilot Data Release 1 (PDR1, Westmeier et al., 2022), NGC 4632 and NGC 6156, which we identify to have multiple H i components with symmetric kinematic misalignments. The morphology and kinematics of their H i distributions suggest that these structures plausibly could be polar, a possibility that we explore with perfectly polar kinematic models that we developed to apply to the relatively low resolution and low signal-to-noise observations presented here. Although these models confirm that the anomalous gas in NGC 4632 and NGC 6156 could plausibly be polar, we cannot rule out the possibility of a strong warp (see Sec. 4 for details); we therefore refer to these systems as _potential H i PRGs_ throughout. These systems differ from the majority of other systems with gas-rich rings (e.g. van Gorkom et al., 1987; Serra et al., 2012; De Rijcke et al., 2013; Bait et al., 2020), in that the hosts are gas rich late-type galaxies. This expands the study of polar H i structures beyond the S0 class that typically defines PRGs, but presents the additional challenge of separating the anomalous gas in the ring from that of the underlying H i disk. We carry out this task in an immersive 3D virtual reality environment (Jarrett et al., 2021). Recent discoveries of polar H i structures are particularly interesting in the context of understanding cosmological galaxy assembly, as simulations suggest that they may result from gas accretion and infall (Khoperskov et al., 2021). This raises the possibility that the incidence of H i-rich rings around nearby galaxies in particular - "H i PRGs" - could constrain this evolutionary process. While a first estimate of the incidence of H i PRGs may be inferred from studies like Serra et al. (2012) and Cao et al. (2022) (although in the case of Cao et al. (2022), the misalignment is seen H\(\alpha\) emission, so perhaps gaseous PRGs is more appropriate), the targetted nature of the existing surveys and the lack of consideration of detectability therein implies that the corresponding census is incomplete. Obtaining a true measure of the incidence of H i PRGs requires untargetted H i surveys with sufficient depth, angular resolution and sky coverage to identify them, as well as a framework for assessing H i PRG detectability. The observational piece of this picture will soon be in place with a new generation of widefield surveys that are underway now, such as the Widefield ASKAP L-band Legacy All-sky Blind surveY (WALLABY, Koribalski et al., 2020) on the Australian SKA Pathfinder (Johnston et al., 2008; Hotan et al., 2021). In this paper, we leverage the detections of potential H i PRGs NGC 4632 and NGC 6156 in WALLABY PDR1 to develop the first framework for estimating H i PRG detectability from untargetted H i surveys, using both morphological and kinematic information and accounting for geometric and resolution effects. The paper is laid out as follows. Sec. 2 examines the WALLABY H i observations of NGC 4632 and NGC 6156, and Sec. 
3 discusses ancillary optical and mid-infrared (MIR) observations. Section 4 presents an exploration of whether these galaxies are consistent with simple, perfectly polar models using a new formalism that we have developed. Section 5 leverages the untargetted nature of WALLABY to estimate the incidence of H i PRGs given the detectability of the NGC 4632 and NGC 6156 model polar rings determined from mocks at different ring projections and angular resolutions. Finally, Sec. 6 gives our discussion and conclusions. Additionally Appendix A describes attempts at modelling warps and inclined rings, while Appendix B describes the mathematical derivation of the perfectly polar models. ## 2 WALLABY observations WALLABY (Koribalski et al., 2020) is an untargetted Southern Hemisphere survey that will detect the H i gas content of \(\sim 210000\) galaxies (Westmeier et al., 2022) out to \(z\sim 0.1\). The survey will cover all of \(-60^{\circ}\leq\mathrm{DEC}\leq-15^{\circ}\) with some extensions reaching to \(\mathrm{DEC}=10^{\circ}\). Westmeier et al. (2022) released the PDR1 observations consisting of three fields observed at the full WALLABY sensitivity centered on the Hydra and Norma clusters, and the NGC 4636 group. Each field covers 60 square degrees of the sky constructed from two adjacent tiles, which themselves are composed from two interleaved 36 beam ASKAP footprints, providing a fairly uniform noise of \(\sim 1.6\) mJy/beam with a beam full-width at half-maximum (FWHM) of \(30^{\prime\prime}\). Once the mosaiced cubes are constructed, the SoFiA source finder (Serra et al., 2015; Westmeier et al., 2021) is used to detect individual H i sources in all fields. The PDR1 catalogue consists of 301 detections in Hydra, 144 detections in Norma, and 147 detections in NGC 4636. The majority of them are marginally resolved, with only 190 having an estimated H i distribution size along the major axis that is wider than 4 beams. In what follows, we define the H i diameter \(D_{\mathrm{H\,i}}\) as the major axis extent within which the axisymmetric surface density distribution exceeds \(1~{}M_{\odot}~{}\mathrm{pc}^{-2}\), measured in 30" WALLABY beams. A visual inspection of PDR1 source datacubes revealed two detections with ring-like H i components that appear to have a very different geometry from that of the inner H i disk: NGC 4632 (WALLABY J124232-000459), and NGC 6156 (WALLABY J163452-603705). H i and optical imaging of these systems are presented in Figures 1-4, and their basic properties are listed in Table 1. The anomalous H i components in both systems are separable from that of the main disks in position-velocity space. We do so for both galaxies using the virtual reality software package iDaVIE1(Comrie et al., 2021), designed to explore and interact with spectral imaging and other 3D data sets (Jarrett et al., 2021). iDaVIE's _'paint'_ tool allows direct editing of the cube masks in an immersive 3D environment, which we have used to define masks that separate the anomalous from main disk gas. This is a particularly powerful tool for low resolution, low \(S/N\) observations as it allows for adjustments at the individual pixel level while still allowing the user to view the entire cube. However, as the separation is done via visual inspection, some quantities calculated by comparing the two components, such as the mass ratio, are approximations. 
Footnote 1: [https://idavie.readthedocs.io/](https://idavie.readthedocs.io/) In this section, we describe the H i morphologies of NGC 4632 and NGC 6156, both as a whole and separated into main disk and anomalous components. Both here and throughout, we adopt a moment 1 colormap generated using a modified version of the CosmosCanvas package2. The colormap is designed to evoke the gas kinematics themselves, where the blueshifted gas appears to move 'out of the page' and the redshifted gas appears to move 'into the page'. The colormap also emphasizes differences between gas at velocities slightly above and below the systemic velocity, which is particularly useful in this study. Footnote 2: [https://github.com/mlarichardson/CosmosCanvas](https://github.com/mlarichardson/CosmosCanvas) ### Ngc 4632 The anomalous H i gas detected by WALLABY in NGC 4632 was first reported in Lee et al. (2022). NGC 4632 is a member of a triplet system with NGC 4666 and NGC 4668 (Sandage and Bedke, 1994) and is classified as an SAc galaxy (Ann et al., 2015). Recently Morales et al. (2018) included NGC 4632 in their search for tidal features in a subsample of the S\({}^{4}\)G survey (Sheth et al., 2010; Querejeta et al., 2015) from the _Spitzer_ Space Telescope imaging (Werner et al., 2004), and did not find a noticeable tidal feature in the IR imaging. We adopt the Cosmic Flows-III distance to NGC 4632 of 15 Mpc (Tully et al., 2016), placing it considerably more nearby than estimates from pure Hubble flow (D\(\sim\)24.6 Mpc, Westmeier et al., 2022). The WALLABY detection of NGC 4632 is \(\sim 10\) beams in diameter, making it one of the better-resolved PDR1 sources. Figure 1 shows an overlay of the anomalous component onto imaging taken from the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP, Aihara et al., 2018), highlighting the mis-alignment of this gas relative to the underlying stellar disk. Figure 2 shows the 3D projections, the moment 0 and moment 1 maps, and velocity profiles of the H i in NGC 4632 divided into the total, main body, and anomalous gas (left, middle, and right columns). In both the 3D projection and the moment 0 map, the main body of NGC 4632 appears as a cigar-shaped structure in position-velocity space, while the anomalous gas appears as a tilted ring with a significantly different orientation than that of the main body. The ring is less distinct from the main body in the moment 1 map and velocity profile given its sky projection, an issue to which we return in Section 5. The mask about the main body gas is constructed using iDaVIE to be relatively tight in order to facilitate the kinematic modelling described in Section 4. We find a ratio of main body to anomalous gas mass of \(M_{\rm anomalous}/M_{\rm body}\sim 0.9\), which may be overestimated due to the tightness of the mask adopted, but the anomalous gas is clearly a significant contribution to the total H i content of NGC 4632. ### Ngc 6156 The second potential H i PRG, NGC 6156, has been classified as an SAB(rs)c pec (de Vaucouleurs et al., 1991), but it is important to note that it is located in the Norma cluster near the zone-of-avoidance. Despite the high extinction towards this galaxy, NGC 6156 has been detected in IRAS (Sanders et al., 2003) with a bright IR flux and is therefore a Luminous Infra-Red Galaxy (LIRG). Figure 3 shows an overview of the WALLABY observations of NGC 6156. 
In contrast to NGC 4632 in Figure 2, the moment 0 map of the full H i emission for this galaxy shows little sign of anomalous gas, whereas the 3D projection and moment 1 map show an anomalous component roughly perpendicular to the main body in position-velocity space with a bipolar velocity structure. This highlights the importance of examining the full kinematics of the gas distribution to identify anomalous components, which we discuss in the context of polar ring detectability in Section 5. As with NGC 4632, we use iDAVIE to separate out the main body from the anomalous gas in NGC 6156, and these components are shown in the second and third columns of Figure 3 respectively. However, the resolution and lower \(S/N\) of this detection compared to NGC 4632 makes this separation more uncertain. There is a projected spatial overlap between the outer main body and inner anomalous H i in NGC 6156, but the anomalous gas is separated from the main body in velocity space. Given the collisional nature of gas, it is likely separated in Cartesian space as well. It appears that the anomalous component is roughly face-on, given the low line-of-sight velocities in the moment 1 map and single-peaked anomalous H i velocity profile. Figure 4 shows an overlay of the anomalous H i on a composite \(g\)+\(r\)-band CTIO DECam image of NGC 6156, created using the same approach as described for Figure 1. The flux ratio of anomalous to main body H i is \(M_{\rm anomalous}/M_{\rm body}\sim 1.3\). As with NGC 4632, uncertainties in the mask construction render this mass ratio uncertain, but the anomalous gas clearly contributes a great deal to the total H i mass of NGC 6156. ## 3 Supplemental data ### Ngc 4632 The depth of the HSC imaging for NGC 4632 presented in Figure 1 affords a search for low-surface brightness optical features beyond the main disk. Figure 5 shows an exposure time-weighted composite \(g\), \(r\), and \(z\)-band HSC image with a logarithmic stretch to highlight the low surface brightness features. A clear ring-like stellar extension is seen in the image stack that is coincident with the southern portion of the anomalous H i shown by the contours in Figure 5, suggesting a common origin. This structure will be studied in detail in future work (Wang et al., in prep). In addition to this faint feature, Figure 1 shows that the stellar disk is more extended to the Southwest than to the Northeast relative to the centre of light. In addition to the WALLABY and Subaru HSC observations for NGC 4632, we obtained archival custom-built mosaics from the WISE Extended Source Catalogue (WXSC; Jarrett et al., 2013, 2019). The left-hand panel of Figure 6 shows the WISE W1 3.4 \(\mu\)m WXSC observations overlayed with the H i contours of the WALLABY moment 0 data. The W1 flux peak in NGC 4632 is mildly offset to the Southwest, ie. on the side of the disk minor axis that is more extended in the HSC imaging (Figure 1). We use the WISE colors and W1 luminosity (Jarrett et al., 2023) to estimate the stellar mass \(M_{\star}\) of NGC 4632 and find \(\log(M_{\star}/M_{\odot})=9.69\pm 0.08\), making it somewhat less massive than the Milky Way. This stellar mass coupled with the total H i mass gives a gas fraction \(\sim 0.3\) dex greater than the trend seen in the xGASS galaxies observed in Catinella et al. (2018). 
We also estimate the star formation rates using the W3 and W4 luminosities (Cluver et al., 2017, see also Cluver et al., 2022, in prep) as \(0.73\pm 0.07\) M\({}_{\odot}\)yr\({}^{-1}\), which is well within the star formation main sequence (Figure 7). ### Ngc 6156 The high extinction in the direction of NGC 6156 limits the quality of ancillary optical imaging. In the DECam imaging shown in Figure 4, the spiral structure to the South appears more flocculent than that to the North. The WISE W1 WSXC image in Figure 6 shows hints of the structure seen in the optical but with a more extended outer disk, the difference likely arising from the high optical extinction in the region. The WALLABY main body emission appears well-aligned with the W1 emission, with a similar ellipticity. Following the same procedure as for NGC 4632, we use the different WISE images to estimate \(\log(M_{\star}/M_{\odot})=10.75\pm 0.08\) and SFR=\(17.60\pm 1.83\) M\({}_{\odot}\) yr\({}^{-1}\) for NGC 6156. Again, the gas fraction, \(M_{\rm H{\textsc{i}}}/M_{\star}\), is \(\sim 0.3\) dex above the general trend of Catinella et al. (2018), indicating that the galaxy is indeed gas-rich. Moreover, the high star formation Figure 1: Anomalous H i component of the potential H i PRG NGC 4632, overlayed on a composite \(grz\) image from the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP, Aihara et al., 2018), with the white bar in the bottom-left showing the image size scale. The anomalous gas clearly has a different orientation from that of the stellar disk. The H i intensity map in this figure is created by binning velocity channels together into several individual intensity maps, assigning a colour to each individual map, and blending them into one image. This retains some velocity information while rendering the H i intensity distribution, which we emphasize by using only subtle colours for velocity. Moment 0 and moment 1 maps of the total emission, main body and anomalous H i components of NGC 4632 are shown with linear intensity and velocity scales as well as with coordinate axes in Figure 2. Figure 2: An overview of the WALLABY PDR1 H i detection of NGC 4632. From left to right, the columns show the full detection, the gas associated with the galaxy main body (found using a custom 1DAVIE mask), and the anomalous gas. For illustrative purposes, panels A–C in the top row show 3D projections of each component using [22]. In these panels, the \(E,N,Z\) axes correspond to RA, DEC, and \(V_{\rm los}\). Panels D–F (second row) and panels G–I (third row) show moment 0 (panels D–F) and moment 1 (panels G–I) maps, respectively, with the magenta circles denoting the beam FWHM. Finally, panels J–L (bottom row) show the H i profiles for each component. Figure 3: An overview of the WALLABY PDR1 H i detection of NGC 6156. The panels are the same as in Figure 2. rate is above the star forming main sequence (see Figure 7) indicating that the galaxy is currently star bursting, consistent with its bright IRAS emission (Sanders et al., 2003). ## 4 Consistency with polar rings It is clear from Sec. 2 that NGC 4632 and NGC 6156 contain a significant quantity of anomalous gas. It is possible that this anomalous gas is some sort of polar structure (ring or disk), an inclined ring, or a warp. Observationally and in some theoretical models, warps and polar rings may be considered as part of the same continuum of kinematically misaligned galaxies. 
The distinction between an extreme warp and a polar structure is ill-defined, but in general warps exhibit a smooth transition between the main body and the anomalous gas while rings or disks exhibit a sharp discontinuity. Kinematic modelling can distinguish between these different classes of objects, provided that the observations have sufficiently high angular resolution and \(S/N\) (Jozsa et al., 2009; Arnaboldi et al., 1997). Unfortunately, our data are of low resolution with a correspondingly low \(S/N\) (c.f. Figs. 2 and 3), making the detailed modelling required to distinguish warps from polar rings difficult. We attempt to fit warped models in Appendix A for completeness; only those applied to NGC 4632 converge to solutions, which themselves are inconclusive. In this section, we therefore adopt the more restricted approach as described below. There are reasons to posit that NGC 4632 and NGC 6156 are H i PRGs. Firstly, the anomalous gas seen in NGC 4632 appears to be well separated from the main body, strongly suggesting a ring rather than a warped disk. This is reinforced by the faint stellar ring in the HSC imaging being coincident with the southern portion of the anomalous gas (Fig. 1). Moving to NGC 6156, the anomalous gas appears to be mostly face-on, but the hint of rotation implies a position angle that is \(\sim 180^{\circ}\) to the main body. Beyond these morphological clues, another reason to suspect that the anomalous gas is polar is that PRGs are far more stable and long-lived than inclined rings or extreme warps, on which the torques from the main disk are strong (Bournaud & Combes, 2003; Brook et al., 2008). As such, if the anomalous gas has an extreme morphology, it is more likely to be a polar ring than a ring or disk at an intermediate angle. We therefore consider whether or not perfectly polar models of the anomalous gas can reproduce the key features observed in the H i content of NGC 4632 and NGC 6156. We adopt a tilted-ring (TR) approach, which is widely used to kinematically model galaxies. Section 4.1 describes TR modelling in general, while Sec. 4.2 describes our method of perfectly polar ring modelling. Sections 4.3 and 4.4 present our models for NGC 4632 and NGC 6156 respectively.

### General Tilted Ring Modelling

The basic ideas of TR modelling were proposed in Warner et al. (1973) and Rogstad et al. (1974), and since that time it has become one of the most widely used methods of generating kinematic models of H i galaxies. There are a variety of 2D TR modelling codes like ROTCUR (Begeman, 1989; van der Hulst et al., 1992) and 2DBAT (Oh et al., 2018) that fit velocity maps (moment 1 maps). More recently, TR modelling methods have been developed to fit 3D data cubes directly. Some of the more common codes are TiRiFiC (Jozsa et al., 2007), 3DBAROLO (Di Teodoro & Fraternali, 2015), and FAT (Kamphuis et al., 2015), but there are a number of other codes available as well (e.g. KinMS, Davis et al. 2013 and GBKFIT, Bekiaris et al. 2016). A TR model is described by a set of rings, which in turn are characterized by a set of geometric and kinematic parameters. In 3D methods, mock observations are generated and compared to observations. These mock observations are made by first filling the rings with tracer particles and placing them in a data cube. Then the mock cube is convolved with an appropriate beam and frequency response function so that it may be directly compared to the data.
The best fitting model is typically found by repeating this process and either minimizing some goodness of fit statistic or exploring the parameter space using Bayesian statistics. Below, we describe a TR model in which the anomalous gas is a perfectly polar ring, which has many fewer free parameters than a generalized TR model in which the disk is allowed to warp (Jozsa et al., 2009; Arnaboldi et al., 1997).

Figure 6: WISE W1 3.4 \(\mu\)m WSXC images of NGC 4632 (left panel) and NGC 6156 (right panel) with the H i gas overlayed. In both panels the greyscale WISE images are plotted in logarithmic units, while the main body and anomalous H i are shown as red and blue contours, respectively. For NGC 4632 in the left panel, the red contours are at \((5,10,20)\) M\({}_{\odot}\) pc\({}^{-2}\) and the dashed and solid blue contours are at \((2,6)\) M\({}_{\odot}\) pc\({}^{-2}\). For NGC 6156 in the right panel, the red and blue contour levels are \((5,10,15)\) M\({}_{\odot}\) pc\({}^{-2}\) and \((0.5,1.5)\) M\({}_{\odot}\) pc\({}^{-2}\) respectively. The cyan circles in the bottom-left corner of both panels show the WALLABY beam FWHM, and coordinates in both panels are given relative to the centre points listed in Table 1.

Figure 7: The star formation rate as a function of stellar mass, both derived from WISE mid-infrared measurements. The two potential H i PRGs are indicated with points and estimated uncertainties, where NGC 4632 is at lower mass and with an SFR that is consistent with normal field galaxies, and NGC 6156 has a higher mass and excess SFR indicating a starburst phase. For comparison purposes, the contours and greyscale represent a large sample of nearby bright galaxies from the WISE Extended Source Catalogue (Jarrett et al., 2013, 2019).

### Perfectly Polar Ring Modelling

In a perfectly polar model, the ring has a sky-plane inclination, \(i_{r}\), and position angle, \(PA_{r}\), such that, when rotated back to the galaxy plane using the main body inclination, \(i_{g}\), and position angle, \(PA_{g}\), the ring's inclination with respect to the plane of the galaxy, \(i_{r,g}\), will be \(90^{\circ}\). To ensure this orientation, we have developed a formalism relating the galaxy-ring plane geometry to the projected geometry. The full derivation of the polar ring geometry is found in Appendix B. In this formalism, we have defined the angle that the ring makes with the approaching side of the galaxy as \(\beta\). This can be seen in the left-hand panel of Figure 8, which shows a sketch of a polar ring system in galaxy plane coordinates (left) and sky plane coordinates (right). Given \(\beta\) and a galaxy's observed inclination \(i_{g}\) and position angle \(PA_{g}\), the ring's inclination \(i_{r}\) and position angle \(PA_{r}\) can be calculated. Defining \(\theta=PA+90^{\circ}\) for simplicity, and setting \(\theta_{g}=0^{\circ}\), the inclination and position angle of a circular polar ring are

\[\cos(i_{r})=-\sin(i_{g})\cos(\beta)\,, \tag{1}\]

\[\tan(\theta_{r})=\frac{\sin(\beta)}{\cos(\beta)\cos(i_{g})}. \tag{2}\]

Then, for any given galaxy position angle, the ring's position angle is

\[PA_{r}=\theta_{r}-90^{\circ}+PA_{g}. \tag{3}\]

One thing to note is that in Eq. 2, the \(\beta\) terms have not been simplified to \(\tan(\beta)\). This is done to allow the range of \(\theta_{r}\) to span \(0^{\circ}\leq\theta_{r}\leq 360^{\circ}\).
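As a concrete illustration of Eqs. 1-3, the short Python sketch below maps the main body geometry \((i_{g},PA_{g})\) and the ring angle \(\beta\) to the ring's projected inclination \(i_{r}\) and position angle \(PA_{r}\), using a two-argument arctangent so that \(\theta_{r}\) covers the full \(0^{\circ}\)-\(360^{\circ}\) range. The function name and the example values (loosely based on the NGC 4632 model) are illustrative only; this is a sketch of the geometry, not the modelling pipeline used in this work.

```python
import numpy as np

def polar_ring_projection(i_g_deg, pa_g_deg, beta_deg):
    """Sky-plane inclination and position angle of a perfectly polar ring
    (Eqs. 1-3): cos(i_r) = -sin(i_g) cos(beta),
    tan(theta_r) = sin(beta) / (cos(beta) cos(i_g)),
    PA_r = theta_r - 90 + PA_g.  All angles are in degrees.
    """
    i_g, beta = np.radians(i_g_deg), np.radians(beta_deg)
    # Eq. 1: ring inclination.
    i_r = np.degrees(np.arccos(-np.sin(i_g) * np.cos(beta)))
    # Eq. 2, evaluated with arctan2 so theta_r spans the full circle rather
    # than collapsing to tan(beta).
    theta_r = np.degrees(np.arctan2(np.sin(beta), np.cos(beta) * np.cos(i_g)))
    # Eq. 3: ring position angle for the given galaxy position angle.
    pa_r = (theta_r - 90.0 + pa_g_deg) % 360.0
    return i_r, pa_r

# Example with values loosely based on the NGC 4632 model
# (i_g ~ 62.5 deg, beta ~ 335 deg); the PA_g value here is arbitrary.
i_r, pa_r = polar_ring_projection(i_g_deg=62.5, pa_g_deg=0.0, beta_deg=335.0)
print(f"ring inclination ~ {i_r:.1f} deg, ring position angle ~ {pa_r:.1f} deg")
```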
The relation between the observed galaxy geometry and \(\beta\) to the ring's observed geometry allows for a relatively straightforward approach to kinematically model PRGs. First, it is necessary to construct a 'tight' mask that isolates the gas belonging to the host galaxy from the anomalous gas, and we use the iDAVIE masks described in Sec. 2 for this purpose. The next step is to kinematically model the galaxy body to determine its geometry and approximate outer rotation of the galaxy. The geometry is necessary for the polar ring modelling, and the rotation velocity can be used as a consistency check for the modelled ring. We used the 3DBAROLO 3 code (Di Teodoro and Fraternali, 2015) to generate the galaxy model as it is fast and can be run with a user supplied mask. This ensures that the model is only applied to the gas contained within the 'tight' mask. Given the low resolution and \(S/N\), during this step the galaxy body is modelled as a 'flat' disk with a constant inclination and position angle. Following Deg et al. (2022), we fit 2 rings per beam. Finally, we model the anomalous gas as a perfectly polar ring. For simplicity we treat the gas as single ring with three free parameters, \(\beta\), \(v_{rot}\), and \(\Sigma\) (the ring surface density). This is a limited enough number of free parameters that they can be explored using a basic grid search. To be clear, this step involves comparing the combined ring plus galaxy model to the SoFiA-masked datacube. Footnote 3: [https://editeodoro.github.io/Bbarolo/](https://editeodoro.github.io/Bbarolo/) ### Ngc 4632 Armed with the 'tight' mask for NGC 4632, we are able to proceed with the modelling. An additional difficulty in the TR modelling of the galaxy is the importance of the initial inclination estimate. In 3DBAROLO the code will either estimate the TR geometric parameters (position angle and inclination) from the observed moment maps or the user can supply estimates. In low resolution, low \(S/N\) observations, 3DBAROLO tends to find final models with very similar parameters to those initial estimates due to the flatness of the likelihood surface. In fact, we find that 3DBAROLO is able to produce acceptable fits for initial estimates of \(i_{g}\geq 55^{\circ}\). The obvious choice is to select the optical inclination as the initial estimate (\(\sim 65^{\circ}\) based on the Subaru HSC images). But, a secondary problem arrives at high inclinations. As the galaxy becomes more edge-on, Eq. 2 shows that the difference between the galactic position angle and the ring position angle increases. Supplying 3DBAROLO with an initial estimate of \(i_{g}=65^{\circ}\) yields a final model with \(i_{g}=67^{\circ}\). Such an inclination for the galaxy body causes ring models to have too large of a difference in position angles between the galaxy and the ring to match the observations. In order to obtain a position angle difference similar to that seen in the upper right panel of Figure 2, we chose an initial estimate of \(i_{g}=60^{\circ}\) for 3DBAROLO, which yields a final galaxy body model with \(i_{g}=62.5^{\circ}\). With the galaxy body modelled, it is then possible to model the polar ring using a different set of software than Figure 8: A sketch of a PRG orientation. The yellow circle/oval represents the host galaxy in galaxy plane coordinates (left) and sky plane coordinates (right) while the red dashed line/oval represents the polar structure. 
In the galaxy plane coordinates, \((x_{g},y_{g})\), the positive \(x_{g}\) axis is aligned with the approaching side of the galaxy, represented by the blue arrow in both panels. The angle of the polar ring with respect to the \(x_{g}\) axis is \(\beta\), which, when coupled with the galaxy's inclination, \(i_{g}\), and position angle, PA\({}_{g}\), uniquely determines the ring's inclination, \(i_{r}\), and position angle, PA\({}_{r}\), in the sky plane. 3DBAROLO. MCGSuite4 (Mock Cube Generator Suite; Lewis 2019, Spekkens et al. in prep) is a code that generates standalone TR models from either scaling relations or directly from a set of ring parameters. We model the polar ring by searching the relevant TR parameters and using MCGSuite to produce mock data cubes that can be compared to the observed data. To get the orientation, rotation speed, and surface density, a grid search of \(\beta\), \(v_{rot}\), and \(\Sigma\) is performed. The MCGSuite realization of each full model (body + ring) is then compared to the SoFiA-masked data. Figure 9 shows the final best fitting model, as determined by having the lowest \(\chi^{2}\) value during the mock cube - observation comparison. This model has \(\beta=335^{\circ}\pm 5^{\circ}\).

Footnote 4: [https://github.com/CIRADA-Tools/MCGSuite](https://github.com/CIRADA-Tools/MCGSuite)

Figure 9 illustrates that the best-fitting model reproduces some of the key features in the NGC 4632 H i detection: the velocity map shows the dual peaks seen in the data. In addition, the difference between the main body position angle and the ring position angle broadly agrees with the observed structure. Nonetheless, there are also differences between the model and the data, such as the sharpness of the transition between the galaxy body and ring region and the inclinations of the main body and ring. It is possible that these differences imply a ring that is not perfectly polar, but our attempt at modelling such a structure is not a better fit to the data on the whole (see Appendix A). Given that the perfectly polar model reproduces key features of the observations (a ring-like structure with a similar velocity map), it is reasonable to conclude that NGC 4632 is indeed plausibly an H i PRG.

### Ngc 6156

NGC 6156 is analyzed using the same procedure as NGC 4632. The 'tight' body mask is input into 3DBAROLO and the galaxy body is modelled. As with NGC 4632, supplying an initial estimate for the inclination is critical. The optical observations seen in Fig. 4, as well as the images of NGC 6156 obtained in the Carnegie-Irvine Galaxy Survey (CGS, Ho et al., 2011), suggest a relatively low inclination for the galaxy (\(\sim 40^{\circ}\)). This is consistent with the aspect ratio of the H i moment 0 map in Fig. 3E, but inconsistent with the degree of rotation seen in the moment 1 map of Fig. 3H.
Given the question of the inclination, we did attempt to model the galaxy using a \(40^{\circ}\) initial estimate for the main body. However, this model yielded a ring that is even more inclined, with a lower rotation velocity, in poorer agreement with the data than the model adopted below. With the main body model fit from 3DBAROLO, the same grid-based search of \(\beta\), \(v_{rot}\), and \(\Sigma\) can be done for the ring. As with NGC 4632, 3D realizations of each body+ring model are made using MCGSuite and the SoFiA-masked observed data cube is compared with these realizations to find the best fit. Figure 10 shows the best fitting model for NGC 6156, which has \(\beta=153^{\circ}\pm 5^{\circ}\). Figure 10 shows that the best-fitting perfectly polar model for NGC 6156 produces a moment 1 map with a similar velocity structure to the data. However, the model moment 0 map has a ring that is more distinct, more inclined, and at a somewhat different position angle than the observed gas. Moreover, the low ring velocity relative to the outermost point of the galaxy's rotation curve raises the question of whether or not it is dynamically stable. These may be indications that the anomalous gas is better described by a warped disk or a ring with \(i_{r,g}\neq 90^{\circ}\) than a polar ring. However, the warped disk models that we attempted to apply to these data did not converge to a solution (see Appendix A), precluding further investigations in this regard until higher-resolution and higher-\(S/N\) data are available. Thus, while the perfectly polar models of NGC 6156 do exhibit differences from the data, it remains possible that the anomalous gas is a polar structure and that an alternative model simply could not be found. We therefore consider NGC 6156 to be a plausible H i PRG given the quality of the available data.

## 5 Polar ring detectability in WALLABY

The detection of two potential H i PRGs in the WALLABY pilot fields is exciting and challenging as they were unexpected. Reshetnikov et al. (2011) estimated the incidence rate of PRGs, \(f_{i}\), in the local Universe to be \(\sim 1/1000\). If the two are indeed H i PRGs, then finding them among the \(\sim 200\) PDR1 detections that are resolved by more than 4 beams is certainly inconsistent with the canonical rate. However, hints of a higher incidence rate for H i PRGs are seen in Serra et al. (2012), as well as in the ionized gas study of Cao et al. (2022). These both show incidence rates closer to 1%, but they are also both targetted surveys, which makes an inference of the true incidence rate much more difficult due to their selection functions. As an untargetted survey, future WALLABY data releases will certainly contain enough detections to determine if there is indeed a higher incidence rate of H i PRGs. Given the untargetted observing approach for WALLABY, the incidence rate of H i PRGs in PDR1 can be estimated from the two potential H i PRGs that were identified using a relatively simple process. Firstly, detecting a PRG must depend on the geometry of the system and the resolution of the galaxies. That is, there is a geometric detection rate \(f_{g}(b)\), where \(b\) is the resolution of the galaxy in beams. Then the number of detected PRGs should be

\[N_{\rm detected}=f_{i}\int f_{g}(b)\,N(b)\,db\,, \tag{4}\]

where \(N_{\rm detected}\) is the number of detected H i PRGs, \(b\) is the resolution of the galaxies, and \(N(b)\) is the number of galaxies at a given resolution.
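As a concrete illustration of how Eq. 4 is used in practice, the following Python sketch discretizes the integral over the survey's resolution distribution (this is the form written out in Eq. 5 below) and solves for the incidence rate \(f_{i}\). The detectability curve and the resolution histogram here are placeholder values, not the measured WALLABY PDR1 quantities.

```python
import numpy as np

def incidence_rate(n_detected, resolutions_beams, f_g):
    """Invert the discretized form of Eq. 4:
        N_detected = f_i * sum_b f_g(b) N(b),
    where N(b) is the number of survey detections with resolution b (in beams)
    and f_g(b) is the geometric detectability fraction at that resolution.
    """
    beams, counts = np.unique(np.asarray(resolutions_beams), return_counts=True)
    effective_n = np.sum(f_g(beams) * counts)   # sum_b f_g(b) N(b)
    return n_detected / effective_n

# Placeholder detectability curve: ~0 below 3 beams, rising to ~1 by 10 beams.
f_g = lambda b: np.clip((np.asarray(b, dtype=float) - 3.0) / 7.0, 0.0, 1.0)

# Placeholder resolution distribution (in beams) standing in for N(b).
rng = np.random.default_rng(1)
mock_resolutions = rng.integers(low=3, high=12, size=200)

f_i = incidence_rate(n_detected=2, resolutions_beams=mock_resolutions, f_g=f_g)
print(f"implied incidence rate ~ {100 * f_i:.1f}%")
```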
We note that if NGC 4632 and NGC 6156 harbour strong warps instead of true H i PRGs, the incidence rate we estimate nonetheless applies to these extreme kinematically distinct components. In order to estimate \(f_{i}\), it is first necessary to quantify \(f_{g}(b)\). To understand the dependence of H i PRG detectability on geometry and resolution, we generated a suite of mock PRG cubes based on our best fitting NGC 4632 and NGC 6156 models using MCGSuite. The suite consists of 49 cubes for each model at resolutions of 3, 4, 6, 8, and 10 beams across for a total of 490 mock observations. These cubes span a range of main body inclinations between 0\({}^{\circ}\) and 90\({}^{\circ}\) and \(\beta\) values between 0\({}^{\circ}\) and 180\({}^{\circ}\). Each cube is generated with a noise of 1.6 mJy/beam, matching them to WALLABY. In order to mimic a WALLABY analysis, SoFIA-2 is run on the cubelets to generate a mask. We then visually examined each moment 0 and moment 1 map individually to determine if a signature of a polar ring is clearly apparent. If the feature can possibly be mistaken for noise, we do not count it in the determination of \(f_{g}(b)\). This determination is subjective, so the calculation of \(f_{g}(b)\) from these maps should be considered as an approximation. Figures 11 and 12 show the moment 0 and moment 1 maps for the 10 beam resolution NGC 4632 models. The moment 0 panels are shown using the same brightness scale for all panels to show the difference that the inclination makes in the detectability. Similarly, a linear stretch is used for the brightness as this is the typical stretch used when examining many galaxies in a large survey to find interesting outliers, like PRGs. At this resolution, which is roughly the observed resolution of NGC 4632, all orientations in the moment 0 maps show a clear ring structure except perhaps for the \(\beta=0^{\circ}\), \(i_{g}=45^{\circ}\) panel.

Figure 9: The best fitting perfectly polar ring model of NGC 4632. The moment 0 maps of the galaxy and the model are shown in panels A and B respectively, while the moment 1 maps are shown in panels C and D. The velocity scale in panels C and D is centered at the systemic velocity of the warped 3DBAROLO model. The rotation curve, deprojected surface density, inclination, and position angle profiles are shown in panels E-H respectively. The polar ring parameters are indicated by stars and are the last radial point in each of the right-hand panels. As with Figs. 2 and 3, the moment 0 maps use a linear stretch which explains the differing spatial extents of the moment 0 and moment 1 maps.

Figure 10: The best fitting perfectly polar ring model of NGC 6156. The panels are the same as in Figure 9.

For the moment 1 maps in Figure 12, the situation is similar, with most orientations showing a clear sign of a polar ring. However, in the \(\beta=0^{\circ}\), \(i_{g}=30^{\circ}\) and \(45^{\circ}\) panels, the velocity structure of the polar ring and main disk are orientated such that the polar ring signature is hidden. A more complicated picture emerges when looking at the NGC 6156 models at 6 beams across shown in Figs. 13 and 14. While the ring is clear in the low inclination panels of Fig. 13, as well as the \(\beta=90^{\circ}\) panels, at intermediate inclinations the low and high \(\beta\) orientations do not indicate the presence of a polar ring. It is worth noting here that the PRG candidate shown in Figure 1 of Nishimura et al.
(2022) bears a striking resemblance to the \(i_{g}=0^{\circ}\) moment 0 maps. Moving to the moment 1 maps, a clear polar ring signature is seen in most of the high inclination models. The presence of a rotating disk surrounded by low rotation gas seen in many of the panels is a typical polar ring signature that may not always be appreciated when examining velocity maps. It is in fact this signature that suggested that NGC 6156 is a PRG initially. An interesting point to note about the 6 beam resolution models is that the moment 0 and moment 1 maps are complementary. At intermediate resolutions, the orientations where the ring is most easily detected in the moment 0 maps are where the velocity maps provide little indication of a ring structure. Similarly, where the moment 1 maps easily see polar rings are the orientations where the moment 0 maps provide less information. This complementarity may allow WALLABY and other interferometric studies to detect many more PRGs than previous works (provided the observations are of sufficient resolution). While the NGC 4632 and 6156 models are significantly different, a key reason for the lower detectability in Figs. 13-14 compared to Figs. 11-12 is the resolution. To highlight this effect, Figure 15 shows the moment maps for a single geometry of the NGC 4632 model (top rows) and NGC 6156 model (bottom rows) at different resolutions. In both cases, at 3 beams across no polar features can be detected and at 10 beams across the polar ring is clearly detected. For the purposes of quantifying \(f_{g}(b)\) using the full suite of models, if a polar ring feature is identifiable in either the moment 0 or moment 1 map of a particular model, then we consider that PRG model detectable. The bottom row of Figure 16 shows \(f_{g}(b)\) for the NGC 4632 model (red lines) and NGC 6156 model (blue lines). An interesting trend seen is that, at low resolutions, polar ring features are more easily detected in the moment 1 maps, but at high resolutions they are more easily seen in the moment 0 maps. In both models, at 10 beams across \(f_{g}\sim 1\), while at 3 beams across \(f_{g}\sim 0\). To obtain a total detectability fraction at a given resolution, the NGC 4632 and NGC 6156 detectability fractions are averaged together. And to apply this at any resolution, we applied a spline fit to the points. The full curve for \(f_{g}(b)\) is shown as the black line in the bottom row of Figure 16. Armed with a \(f_{g}(b)\), it is possible to estimate the incidence of PRGs in WALLABY PDR1. Rearranging and discretizing Eq. 4 gives \[f_{i}=\frac{N_{\rm detected}}{\sum_{b}f_{g}(b)N(b)}. \tag{5}\] The upper row of Figure 16 shows \(N(b)\) of the WALLABY PDR1 detections, while the bottom row shows \(f_{g}(b)\), allowing an incidence rate to be calculated relatively easily. Using \(N_{\rm detected}=2\), \(f_{i}\approx 2\%\), which implies that there are approximately 8 PRGs among the 592 galaxies in WALLABY PDR1, despite only 2 being detected. In order to estimate the uncertainty on \(f_{i}\), we repeated the calculation using \(N_{\rm detected}=1\) and \(N_{\rm detected}=3\), yielding \(f_{i,\rm low}\approx 1\%\) and \(f_{i,\rm high}\approx 3\%\) respectively. This detection rate is a simple approximation based on a number of assumptions that may prove incorrect. The biggest source of error is whether our potential H i PRGs are in fact extreme warps. 
This is perhaps unlikely for NGC 4632, which would require a fairly extreme warp to reproduce the morphology of the H i, but it is certainly possible for NGC 6156. Additionally, our geometric detection analysis is based on visual classifications that are certainly subjective. Moreover, we have not explored the effects of confusion due to the warps that are present in nature. Given this, our high detection rate should be taken as an upper limit on the occurrence of PRGs in WALLABY PDR1. ## 6 Conclusions Using H i alone, we have detected 2 potential H i PRGs in the WALLABY pilot fields, NGC 4632 and NGC 6156. NGC 4632 shows a clear ring structure in the moment 0 map, while NGC 6156 shows a characteristic ring pattern in the velocity map. Once the existence of the anomalous gas was recognized in the moment maps, we investigated the plausibility of this gas being consistent with a polar ring. To that end, we first separated the anomalous gas from the host galaxy's gas using the iDAVIE VR interactive tool. This allowed for the construction of kinematic models for the host galaxies using 3DBAROLO. We then developed a formalism relating the projected inclination and position angle of a polar ring to the host galaxy's inclination, position angle, and the angle the ring makes with respect to the approaching side of the galaxy. This allowed the exploration of perfectly polar models to find reasonable fits to the data using MCGSuite. The kinematic model fit for NGC 4632 is significantly better than the fit for NGC 6156, but it is not perfect. The model moment 0 map contains more of the features seen in the observations compared to the NGC 6156 model and observations. However, our perfectly polar models are very strict and do not encompass the full range of PRGs seen in nature. Polar structures may be slightly elliptical and have inclinations from the galaxy body in the range \(75^{\circ}\leq i_{r,g}\leq 90^{\circ}\). Exploring these parameters will likely yield much better fits to the data, and comparing those models to those of extreme warps will help to confirm whether NGC 4632 is indeed a PRG. At this point, we argue that NGC 4632 is a potential H i PRG until such sophisticated modelling and comparisons to deeper observations are completed. For the second galaxy in our study, the model fit for NGC 6156 represents a bigger challenge. The lower resolution and \(S/N\) of NGC 6156 cause the modelling and interpretation of the anomalous gas to be more ambiguous. The likely orientation of the ring structure means that the polar ring signature appears in the moment 1 map as a rotating disk surrounded by low-rotation gas. While the rotating disk can be clearly separated from the anomalous gas, the kinematic model is difficult to construct. The low \(S/N\) of the observations means that the goodness-of-fit surface is quite flat, which makes distinguishing good fits from poor fits fairly difficult. Nonetheless, we are able to generate a model with a similar polar ring signature in the moment 1 map as observed, although this model does have an abnormally low rotation velocity for the ring. Like NGC 4632, we expect that a great improvement in the fitting can be made in future work by relaxing the circular and polarity assumptions. In particular, slight deviations from polarity will allow the model ring to be more face-on, which will in turn lead to a higher model rotation velocity.
Given the starbursting nature of NGC 6156 as well as its status as a LIRG, it will be important to explore alternate explanations for the anomalous gas seen in WALLABY PDR1. It may have been generated via (supernova) outflows or some accretion event that would produce a warp rather than a ring. Given the importance of the galaxy inclination for the PRG interpretation coupled with the more face-on appearance of the stars, an investigation into the possibility of the gas being a warp is critical. However, given the low \(S/N\) and resolution of this data, untangling the full morphology of the gas will require both greater sensitivity and resolution. Nonetheless, the regular structure of the anomalous gas as well as our ability to reproduce the salient features of the moment 1 map using our rudimentary model (namely the rotating disk signature surrounded by apparently face-on gas) suggest that it is possible that NGC 6156 is a PRG, and that it should be considered a potential H i PRG until deeper observations are available.

Figure 11: Moment 0 maps computed from different projections of the best fitting H i PRG model for NGC 4632, where the observed geometric parameters \(\beta\) (y-axis) and \(i_{g}\) (x-axis) are varied. These models are built to have resolutions of 10 beams across, which is consistent with observations of NGC 4632. The small magenta circle represents the beam size. The noise level in each model cube is 1.6 mJy/beam, which is consistent with WALLABY. The green checkmarks show models that are recognizable as PRGs from the moment 0 map, whereas the red x denotes the model that would not be recognizable.

Figure 12: Same as Figure 11, but for the moment 1 maps of different projections of the best fitting H i PRG model of NGC 4632 at a 10 beam resolution.

Finally, we explored the detectability of PRGs in WALLABY by generating identical models with different observed geometries. At low resolutions, it is nearly impossible to detect polar rings, while at high resolutions, PRGs are easily detected in either the moment 0 or moment 1 maps. Combining this with the distribution of WALLABY PDR1 galaxy resolutions allows the estimation of the incidence rate of PRGs. We find \(\sim 1\%\leq f_{i}\leq 3\%\), which is significantly higher than the canonical rate of 0.1% as measured in Reshetnikov et al. (2011). Our rate for multi-component H i PRGs is in line with the incidence rate seen by Serra et al. (2012), who found 3 PRGs around early type galaxies out of 166 observations. It is worth noting here that we have not yet investigated whether there are any WALLABY detections where single component H i detections are polar to stellar observations. This rate is also similar to the rate of misaligned multi-component gas seen in Cao et al. (2022). There are reasons to suspect that PRGs in general, and H i PRGs in particular, are more common than previous stellar studies have suggested. Firstly, the detection of stellar polar rings can be difficult due to their low surface brightness. For instance, the faint overdensity detected around NGC 4632 in Figure 5 is near the noise limit. H i gas has a much smaller dynamical range of surface brightnesses, making it easier to detect with new sensitive arrays. Additionally, the interferometric nature of the WALLABY H i provides the velocity structure, which is where the key indications of NGC 6156 being a potential H i PRG are found.
Moreover, given that these two potential H i PRGs are found in H i, it is possible that H i rings have different formation mechanisms than stellar rings. It is worth noting here that most optical PRGs have been found around S0/E galaxies, while these structures are around spirals. This might be due to a selection bias in detecting PRGs, or it may be a signature of a different origin. Uncovering a larger sample of PRGs will help to determine whether there is a real dependence on the host morphology for PRGs. One other potential reason for the increased incidence of potential H i PRGs in WALLABY PDR1 is the environment. The PDR1 fields were targeted to relatively nearby clusters and groups. Moreover, NGC 4632 and NGC 6156 are at the outskirts of a massive cluster and group respectively. These environments have frequent tidal interactions and gas-rich mergers (relative to field galaxies), and both of these have the potential to generate PRGs. Moving to the future, we will be able to investigate the incidence rate of PRGs in different environments as WALLABY will cover the majority of the southern sky. This will remove a source of cosmic variance that may cause our large estimated incidence rate. Yet another possible reason for the higher incidence rate is that H i rings may be an evolutionary stage in PRGs. In Bekki (1997), simulations of PRG formation showed that star formation converted some of the gas in PRGs to stars. It is possible that our potential H i PRGs may eventually evolve into stellar PRGs (assuming that the morphology is correct). However, simulations also show that polar structures are not permanent (Khoperskov et al., 2021). If this evolution picture is correct, some PRGs would first form in H i, then convert to optical structures, and finally fall back to the galaxy disk. In such a situation, it may be that the H i rings last longer than optical rings, leading to a higher incidence rate. To investigate this idea, detailed observations of the optical counterparts to the anomalous gas will be necessary to determine the local star formation rates.

Figure 13: Same as Figure 11, but for the moment 0 maps of different projections of the best fitting H i PRG model of NGC 6156 at a 6 beam resolution.

Figure 14: Same as Figure 11, but for the moment 1 maps of different projections of the best fitting H i PRG model of NGC 6156 at a 6 beam resolution.

If PRGs are indeed more common than previously estimated, WALLABY will find an abundance of new PRGs. Predictions for the full WALLABY survey contained in Koribalski et al. (2020) indicate that \(\geq 10^{4}\) galaxies will be resolved by \(\geq 5\) beams. If the detection rate found in PDR1 holds true, this implies that WALLABY will detect hundreds of PRGs. Moreover, WALLABY plans to observe a subset of galaxies at a higher angular resolution of \(\sim 10^{\prime\prime}\). This increased resolution means that most of those "postage stamps" will be resolved enough to see polar rings or other large scale kinematic misalignments. However, it is important to recognize that our incidence rate estimate is based on only two galaxies, which is well within the realm of Poisson statistics. As WALLABY moves into the future, this possibility will rapidly be resolved. Accurately measuring the rate of PRGs and the more general rate of extreme kinematic misalignments in gas is an interesting probe of galaxy formation. Large scale cosmological simulations are producing these types of objects (e.g.
Maccio et al., 2006; Roskar et al., 2010), which will allow a comparison between the observed and simulated formation rates similar to the work of Elagali et al. (2018) on the formation of ring galaxies (where there is a stellar ring in the disk plane that is generally formed by drop-through interactions). Regardless of the precise PRG rate, the two potential H i PRGs found in WALLABY PDR1 are exciting and unique objects. Unlike most PRGs, they have been detected in H i rather than in stellar structures, they are gas rich, and we have been able to construct plausible kinematic models for the galaxies. As better observations and more sophisticated models are obtained for both galaxies, it will be possible to constrain the parameters of the ring progenitor (if they are indeed formed via mergers or flybys). Understanding whether these are from interactions or gas accretion will provide constraints on galaxy formation and evolution.

Figure 15: A demonstration of how angular resolution affects the detectability of H i PRGs from both moment 0 and moment 1 maps. This figure shows both galaxies at a single orientation (\(i=45^{\circ}\), \(\beta=30^{\circ}\)) at the five resolutions that were examined. The circles at the top left corner of each frame represent the beam size for that figure. The green checkmarks and red x's indicate panels where we were able or unable to determine that the galaxy is a PRG based on the specific moment map respectively.

## Acknowledgement We would like to thank the anonymous referee for their helpful comments and suggestions. Thanks to G. Meurer for useful comments. This scientific work uses data obtained from Inyarrimanha Ilgari Bundara / the Murchison Radio-astronomy Observatory. We acknowledge the Wajarri Yamaji People as the Traditional Owners and native title holders of the Observatory site. The Australian SKA Pathfinder is part of the Australia Telescope National Facility ([https://ror.org/05qajvd42](https://ror.org/05qajvd42)) which is managed by CSIRO. Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Centre. Establishment of ASKAP, the Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. WALLABY acknowledges technical support from the Australian SKA Regional Centre (AusSRC) and Astronomy Data And Computing Services (ADACS). This research uses services or data provided by the Astro Data Lab at NSF's National Optical-Infrared Astronomy Research Laboratory. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the National Science Foundation. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This work made use of the Inter-University Institute for Data Intensive Astronomy (IDIA) visualization lab. IDIA is a partnership of the University of Cape Town, the University of Pretoria and the University of the Western Cape.
This work made use of the iDaVIE-v (immersive Data Visualisation Interactive Explorer for volumetric rendering) software (DOI - 10.5281/zenodo.4614116 - [https://idavie.readthedocs.io/](https://idavie.readthedocs.io/)). Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. T.H.J. acknowledges support from the National Research Foundation (South Africa). AB acknowledges support from the Centre National d'Etudes Spatiales (CNES), France. P.K. is supported by the BMBF project 05A20PC4 for D-MeerKAT. K.S. acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC).

Figure 16: The number of observed PDR1 galaxies as a function of the number of beams across the major axis of that detection (top panel), and the estimated fraction of H i PRGs that would be detectable at that resolution (bottom panel). In the top panel, the arrows indicate the resolution of each of the detected potential H i PRGs. In the bottom panel, the dotted, dashed and solid lines represent the detectability fraction from moment 0, moment 1 and the combination of moment 0 and moment 1 information respectively for the NGC 4632 model (magenta) and the NGC 6156 model (blue).

## Data availability The WALLABY data are available through the WALLABY data access portal at both the CSIRO ASKAP Science Data Archive (CASDA) and the Canadian Astronomy Data Center (CADC). The galaxy models and modelling scripts will be shared upon request to the corresponding author.
2309.11834
Gaussian beam quantum radar protocol
We present an entangled quantum radar protocol. It consists in scanning the sky with a thin Gaussian beam and measuring the travel time of the radiation reflected from the target, as in conventional radars. Here the Gaussian beam is composed of $N$ photons entangled in the frequency degrees of freedom. We show that this provides a $\sqrt{N}$ quantum enhancement over the unentangled case, as is usual in quantum metrology.
Lorenzo Maccone, Yi Zheng, Changliang Ren
2023-09-21T07:18:13Z
http://arxiv.org/abs/2309.11834v1
# Gaussian beam quantum radar protocol ###### Abstract We present an entangled quantum radar protocol. It consists in scanning the sky with a thin Gaussian beam and measuring the travel time of the radiation reflected from the target, as in conventional radars. Here the Gaussian beam is composed of \(N\) photons entangled in the frequency degrees of freedom. We show that this provides a \(\sqrt{N}\) quantum enhancement over the unentangled case, as is usual in quantum metrology. In this paper we introduce a radar protocol that can achieve quantum enhanced ranging and target detection. Other quantum radar protocols [1; 2; 3; 4] are typically based on the quantum illumination primitive [5; 6]: they can only discriminate the target presence or absence _at a pre-determined specific point_, which implies that one must scan the whole 3d space in search of a target, which is impractical and time-consuming. Recently, a protocol that claimed to achieve quantum enhanced ranging capabilities was proposed in [7]. Unfortunately, its theoretical analysis was flawed as it was based on an incorrect optical transfer function [8]. This paper then, to our knowledge, presents the only known three dimensional quantum enhanced radar protocol that can give quantum-enhanced ranging to the target: we remind that radar stands for RAdio Detection And Ranging. A protocol for enhanced ranging in the idealized one-dimensional case was presented in [9], and this protocol is a sort of 3d extension of it. The analysis presented here is agnostic to the wavelength used, so the same protocol can be used also in the optical regime (lidar). It is also more practical than the previous protocol [7] since it does not require wideband radiation entangled in the wave vector \(\vec{k}\) which would require large antennas or antenna arrays: the protocol presented here only employs Gaussian thin beams that can be produced with small antennas (or lasers). The beams are entangled only in the frequency degrees of freedom, which is more practical. There is no quantum enhancement in the transversal direction, although this feature can be added to our protocol, for example using the techniques presented in [10], namely by injecting squeezed vacuum in the modes orthogonal to the Gaussian mode used by the protocol, or with similar techniques. As in the case of most other quantum metrology protocols [11; 12; 13; 14; 15], we show an enhancement in the precision of the order of \(\sqrt{N}\), where \(N\) is the number of photons employed in the ranging procedure. Namely, we show that this protocol can achieve the Heisenberg bound in precision in the ideal noiseless situation. As usual, the situation becomes extremely more complicated in the presence of noise, such as loss of photons, but the usual general procedures and techniques to deal with noise can be applied also in this case [16; 17; 18; 19], e.g. one can increase the robustness by reducing the entanglement (and hence the precision gain) [20]. The outline follows: we start in Sec. I by showing how one can consistently derive the correct transfer functions in quantum optics, based on the quantization of the electromagnetic (EM) field which is reviewed (to set the notation) in App. A. We then show in Sec. II how these techniques can be used to give a quantum description of the usual classical radar protocols. This is useful to show what are the ultimate bounds (due to quantum mechanics) that can be achieved by these protocols in the absence of entanglement. Finally, in Sec. 
III we introduce and analyze our proposed protocol, and show its \(\sqrt{N}\) enhancement. ## I Quantization of optics through transfer functions In this section we review how optical transfer functions can be consistently quantized. The notation and the framework we employ is given in App. A. The linearity and shift-independence form of the Helmholtz equation (11) implies that its solutions \(U_{\omega}(\vec{r})\) can be shifted in space, e.g. along the \(z\) axis [21]: \[U_{\omega}(\vec{r}\,^{\prime})=\int d^{2}\vec{r}_{t}\;U_{\omega}(\vec{r}_{t},z )\;h_{\omega}(\vec{r}\,_{t}\,^{\prime},\vec{r}_{t},z^{\prime}-z)\;, \tag{1}\] where the \(t\) index represents the two-dimensional transverse vector \(\vec{r}_{t}=(x,y)\), and where \(\vec{r}=(\vec{r}_{t},z)\) and \(h_{\omega}\) is the transfer function that takes the solution \(U_{\omega}\) at the \(xy\) plane at position \(z\) and, with a convolution, moves it to the \(xy\) plane at position \(z^{\prime}\) (so the left-hand-side is independent of \(z\)). This allows us to obtain the field at all positions starting from the boundary values of the field in the plane \(xy\) at position \(z\). Of course, the general solution of the field is given by (10), where one must sum \(U_{\omega}\) over all components \(\omega\). Indeed, replacing Eq. (1) into (11), we obtain the whole field \(A(t,\vec{r}\,^{\prime})\) at position \(\vec{r}\,^{\prime}=(\vec{r}\,^{\prime}_{t},z^{\prime})\) from a field on the plane \(xy\) at position \(z\) at time \(t=0\) (boundary conditions): \[A^{+}(t,\vec{r}\,^{\prime})=\int d\omega\,d^{2}\vec{r}_{t}U_{\omega}(\vec{r}_ {t},z)\:h_{\omega}(\vec{r}\,^{\prime},\vec{r}_{t},d)e^{-i\omega t}, \tag{2}\] where, for simplicity of notation, we consider only the positive-energy component of the field \(A^{+}\), namely only the first term in the integral of (11). This transfer function formalism is developed in the classical case but it can be transferred to the quantum case by first expressing the solutions \(U_{\omega}(\vec{r})\) in terms of plane waves \(e^{i\vec{k}\cdot\vec{r}}\), and then associating to each plane wave an amplitude \(a(\vec{\kappa})\) as done in the customary EM quantization (see App. A). Namely, \[A^{+}(t,\vec{r}\,^{\prime})=\!\int\!d^{2}\vec{r}_{t}\;d^{3}\vec{\kappa}\:h_{ \omega_{\kappa}}(\vec{r}\,_{t}\,^{\prime},\vec{r}_{t},d)a(\vec{\kappa})e^{-i( \omega_{k}t-\vec{\kappa}\cdot\vec{r})}, \tag{3}\] where \(\vec{r}=(\vec{r}_{t},z)\), \(\vec{r}\,^{\prime}=(\vec{r}\,^{\prime}_{t},z^{\prime})\), and the integral over \(\omega\) is contained in the integral over \(\vec{\kappa}\), since \(\omega_{\kappa}=c\kappa\). [More rigorously, the integral over \(\omega\) comes from (2), whereas in the input field we are considering only the \(\omega\) component \(U_{\omega}(\vec{r})\), so we need to integrate only on the directions \(\vec{\kappa}/\kappa\) as discussed below Eq. (12).] For example, \(\vec{r}\) may represent the object plane and \(\vec{r}\,^{\prime}\) the image plane in an imaging apparatus, whose transfer function is given by \(h_{\omega}\)[22]. Eq. (3) is the main result of this section. This is the field _operator_, so by itself it says nothing about the physics: operators in quantum mechanics only acquire values when applied to states, e.g. the probability \(p(t,\vec{r})\propto|\langle 0|A^{+}|\psi\rangle|^{2}\). Alternatively, we may be interested in other expectation values of the field in state \(|\psi\rangle\). 
The field degrees of freedom (including its boundary conditions) are encoded into \(|\psi\rangle\). E.g., for a single photon with \(\psi(\vec{\kappa})\propto\alpha(\vec{\kappa})\) [see Eq. (13)], we have \[\langle 0|A^{+}(t,\vec{r}\,^{\prime})|\psi\rangle\] \[=\int d^{3}\vec{\kappa}\:d^{3}\vec{\kappa}\,^{\prime}\:d^{2}\vec{r}_{t}\:h_{\omega^{\prime}}\:e^{-i(\omega^{\prime}t-\vec{\kappa}\cdot\vec{r})}\alpha(\vec{\kappa})\langle 0|a(\vec{\kappa}\,^{\prime})a^{\dagger}(\vec{\kappa})|0\rangle\] \[=\int d\omega\:d^{2}\vec{r}_{t}\:h_{\omega}(\vec{r}\,^{\prime}_{t},\vec{r}_{t},d)\:U_{\omega}(\vec{r})\, \tag{4}\] where we used the fact that integrating \(\alpha(\vec{\kappa})e^{i\vec{k}\cdot\vec{r}}\) over the directions of \(\vec{\kappa}\), one obtains \(U_{\omega}(\vec{r})\) with \(\omega=c\kappa\), as is clear by the comparison between Eqs. (11) and (12). This result is what one would expect from (1) by integrating over \(\omega\) both members. ### Free field transfer function The specific form of the function \(h_{\omega}\) depends on what is present between the two \(xy\) planes at \(z\) and \(z^{\prime}\), and on the approximations used. In the case of vacuum propagation with the Fresnel approximation, we get ([21], Eq. 4.1-14) \[h_{\omega}(\vec{r}\,_{t}\,^{\prime},\vec{r}_{t},d)=\frac{i\kappa}{2\pi d}e^{-i\kappa(\vec{r}_{t}-\vec{r}\,_{t}\,^{\prime})^{2}/d}\:e^{-i\kappa d}\:, \tag{5}\] with \(d=z^{\prime}-z\) the distance between the two planes. While the Rayleigh-Sommerfeld diffraction can give better results in some cases, in the regimes we are interested in, the Fresnel approximation that gives rise to (5) is sufficient for our aims. We will be using Eq. (5) in the following. ## II Quantum treatment of a classical radar/lidar protocol A radar/lidar works by scanning the sky with a directional beam and measuring the time it takes for it to be bounced back. The direction of the beam and the time of flight suffice to do a full 3d localization of the target. In this section we analyze a classical radar/lidar protocol using quantized light to show the ultimate bounds imposed by quantum mechanics on such classical (unentangled) protocols. As the directional beam, we consider a Gaussian beam. For simplicity we will consider the target as a perfectly (or partially) reflecting mirror orthogonal to the beam direction, of size larger than the beam waist at the target location. In this way we are guaranteed that the beam that returns to the antenna is still in a (possibly attenuated) Gaussian beam, see Fig. 1. The case in which the target is smaller than the beam waist should also not be too difficult: the returning beam will be a spherical wave originating at the target.

Figure 1: Sketch of the quantum radar protocol. A Gaussian beam composed of frequency-entangled photons bounces off the target and returns to the sender’s location. By measuring the average photon round-trip time, the sender can recover the target’s position with quantum enhanced accuracy.

The Gaussian beam for every frequency \(\omega\) has an amplitude \(U_{\omega}(\vec{r})=\varphi(\omega)G_{\omega}(\vec{r})\), where \(\varphi(\omega)\) is the spectral amplitude (the amplitude for each frequency \(\omega\) in the light) and \(G_{\omega}\) is ([21], Eq.
3.1-7) \[G_{\omega}(\vec{r})\propto\frac{1}{W(z)}e^{-\frac{\kappa\vec{r}_{t}^{2}}{2z_{0}W ^{2}(z)}}e^{-i[\kappa z+\frac{\kappa\vec{r}^{2}}{2z_{0}(z)}-\arctan(z/z_{0})]} \tag{6}\] where \(W(z)\equiv\sqrt{1+z^{2}/\bar{z}_{0}^{2}}\), \(R(z)\equiv 1+z_{0}^{2}/z^{2}\), \(z_{0}\) is a (length) constant that, together with the direction of the \(z\) axis, fully specifies \(G_{\omega}(\vec{r})\). It is possible to check that this solution for fixed \(z\) is propagated to an arbitrary \(z^{\prime}\) through the transfer function (5). The field \(A\) is obtained by integrating \(U_{\omega}\) over \(\omega\), as in Eq. (A1): \[A(t,\vec{r})=\int d\omega\:e^{-i\omega t}\:\varphi(\omega)\:G_{\omega}(\vec{r} )\;. \tag{7}\] We now consider a single photon in a Gaussian beam1. Since the light intensity \(|A(\vec{r})|^{2}\) at each point is directly proportional to the probability of finding the photon there (as discussed above) [21; 23], we can choose \(\tilde{\psi}(\vec{r})\propto A=\int d\omega\:\varphi(\omega)\:G_{\omega}( \vec{r})\), using (7) with \(t=0\) (because of the Heisenberg picture) and the proportionality constant chosen by the normalization condition. Namely, the photon wavepacket has probability amplitude proportional to \(G_{\omega}(\vec{r})\) for each frequency \(\omega\), and the probability amplitude of having frequency \(\omega\) is given by \(\varphi(\omega)\). So the state is Footnote 1: It would be more appropriate to use a coherent state (or a thermal state) to model a classical beam, but since the photons in coherent states are completely uncorrelated (Poissonian statistics), one can easily obtain the same arrival statistics as a coherent state \(|\alpha\rangle\) for its thermal mixtures by considering what happens to \(N=|\alpha|^{2}\) uncorrelated single photons. (Of course the photon number statistics will be different!) \[|\psi\rangle=\int d^{3}\vec{\rho}\:\tilde{\psi}(\vec{\rho})\:a^{\dagger}(\vec {\rho})|0\rangle=\int d^{3}\vec{\kappa}^{\prime}\:\tilde{G}(\vec{\kappa}^{ \prime})\:a^{\dagger}(\vec{\kappa}^{\prime})|0\rangle\;, \tag{8}\] where \(\tilde{G}(\vec{\kappa})\) is the Fourier transform of \(\varphi(\omega)G_{\omega}(\vec{\rho})\), which clearly only contains the frequency \(\omega\) (the amplitude \(\varphi\) is included in \(\tilde{G}\)). We can write the field at the image plane, i.e. the detector position \(\vec{r}^{\,\prime}\) in terms of the field at the target position \(\vec{r}\) using (3), with the transfer function (5). 
Then, the probability amplitude of finding the photon in \(t,\vec{r}^{\,\prime}\) is \[\langle 0|A^{+}(t,\vec{r}^{\,\prime})|\psi\rangle= \tag{9}\] \[\int d^{2}\vec{r}_{t}\:d^{3}\vec{\kappa}\:h_{\omega}(\vec{r}_{t} \:^{\prime},\vec{r}_{t},d)\:e^{-i(\omega_{k}t-\vec{\kappa}\cdot\vec{r})}(0|a( \vec{\kappa})\times\] \[\int d^{3}\vec{\kappa}^{\prime}\tilde{G}_{\omega^{\prime}}(\vec{ \kappa}^{\prime})\:a^{\dagger}(\vec{\kappa}^{\prime})|0\rangle=\] \[\int d^{2}\vec{r}_{t}\:d^{3}\vec{\kappa}\:h_{\omega}(\vec{r}_{t} \:^{\prime},\vec{r}_{t},d)\:e^{-i(\omega t-\vec{\kappa}\cdot\vec{r})}\:\tilde{ G}(\vec{\kappa})\propto\] \[\int d^{2}\vec{r}_{t}\:d\omega\:h_{\omega}(\vec{r}_{t}\:^{\prime},\vec{r}_{t},d)\:e^{-i\omega t}\:\varphi(\omega)\:G_{\omega}(\vec{r}_{t},z),\] where we used the commutator (A6) in the second equality, and in the third we used the far field condition \(\kappa_{z}\sim\kappa\gg\kappa_{x},\kappa_{y}\) to separate the integral over \(\vec{\kappa}\) into a frequency and a transverse part \(\vec{\kappa}_{t}\): \(d^{3}\vec{\kappa}\propto d\omega d^{2}\vec{\kappa}_{t}\), so that the integral of \(e^{i\vec{\kappa}\cdot\vec{r}}\psi(\vec{\kappa})\) over the transverse part of \(\vec{\kappa}\) gives the spatial field at frequency \(\omega\), namely \(U_{\omega}(\vec{r})=\varphi(\omega)G_{\omega}(\vec{r}_{t},z)\) with \(\vec{r}=(\vec{r}_{t},z)\) [compare Eqs. (A2) and (A4)]. Eq. (9) is compatible with what one would expect from the transfer function of the classical amplitudes: see Eq. (1) when the time evolution of the output field is added and both members are integrated over \(\omega\). We now use the fact that the free space transfer function (5) applied to a Gaussian beam (6) translates it forward by a factor \(d\), the distance between target and receiver (namely, a Gaussian beam is transformed in a Gaussian beam thanks to the hypothesis that the target is a partially reflecting mirror larger than the beam waist). Then, (9) becomes \[\langle 0|A^{+}(t,\vec{r}^{\,\prime})|\psi\rangle=\int d\omega\:e^{-i\omega t} \varphi(\omega)G_{\omega}(\vec{r}_{t}\:^{\prime},z+d)\;. \tag{10}\] As expected, at the image plane at position \(z^{\prime}\), it gives a pulse that is delayed by the transit time to the target. To see this, consider the expression (10) at the center of the image plane \(\vec{r}_{t}\:^{\prime}=0\) where, from (6) we see that \(G_{\omega}(\vec{r}_{t}\:^{\prime}=0,z)\propto e^{-i[\kappa z-\arctan(z/z_{0})]}\), so that (10) becomes \[\int d\omega\:\:e^{-i\omega[t+(z+d)/c]-i\arctan((z+d)/z_{0})} \varphi(\omega)\] \[=\tilde{\varphi}(t+(z+d)/c)\:e^{-i\arctan((z+d)/z_{0})}\;, \tag{11}\] where \(\tilde{\varphi}\) is the Fourier transform of \(\varphi\). Eq. (11) describes a pulse of spectral amplitude \(\varphi(\omega)\) and temporal amplitude \(\tilde{\varphi}(t)\) that is delayed by an amount \((z+d)/c\), where \(d\) is the distance between target and receiver and \(z=d\) is the position of the target. By measuring the time of arrival of the photon, one can obtain twice the distance \(2d\) to the target, as expected for a radar. The statistical error in this measurement is given by the width \(\Delta\tau\) of \(\tilde{\varphi}(\tau)\), proportional to the inverse of the bandwidth \(\Delta\omega\) of \(\varphi(\omega)\). 
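The statistical content of this classical estimate can be illustrated with a small Monte Carlo: each detected photon yields a round-trip time that fluctuates by roughly \(\Delta\tau\) (the inverse bandwidth) around \(2d/c\), so a single photon localizes the target to about \(c\Delta\tau/2\), and averaging \(N\) independent, unentangled photons improves this by \(\sqrt{N}\). The sketch below only illustrates this scaling with arbitrary placeholder numbers; it is not part of the field-theoretic derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

c      = 3.0e8    # speed of light [m/s]
d_true = 15.0e3   # target distance [m] (placeholder)
dtau   = 1.0e-9   # temporal width of the pulse ~ 1/bandwidth [s] (placeholder)
N      = 100      # unentangled photons per distance estimate
trials = 20000

# Each photon's measured round-trip time fluctuates by ~dtau around 2d/c.
t = rng.normal(loc=2.0 * d_true / c, scale=dtau, size=(trials, N))

d_single = c * t[:, 0] / 2.0          # estimate from one photon
d_mean   = c * t.mean(axis=1) / 2.0   # estimate from the mean of N photons

print(f"one photon : std = {d_single.std():.3f} m  (~ c*dtau/2 = {c * dtau / 2:.3f} m)")
print(f"N photons  : std = {d_mean.std():.3f} m  (~ c*dtau/(2*sqrt(N)) = {c * dtau / (2 * np.sqrt(N)):.3f} m)")
```

For comparison, the frequency-entangled state analyzed in Sec. III makes the sum of the arrival times the quantity with spread \(\Delta\tau\), so the same average acquires an error \(\simeq\Delta\tau/N\) rather than \(\Delta\tau/\sqrt{N}\), which is the \(\sqrt{N}\) quantum enhancement discussed there.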
Now we could do the same calculation with a coherent state \(|\alpha\rangle\) instead of a single photon state (8), with \(|\alpha\rangle=\bigotimes_{\vec{\kappa}}|\alpha(\vec{\kappa})\rangle\) with \(|\alpha(\vec{\kappa})\rangle\) eigenstates of \(a(\vec{\kappa})\): \(a(\vec{\kappa})|\alpha(\vec{\kappa})\rangle=\alpha(\vec{\kappa})|\alpha(\vec{ \kappa})\rangle\). This calculation should give exactly the same outcome as a classical field amplitude \(\alpha(\vec{\kappa})\), see Eq. (A4). ## III Quantum Radar/Lidar Protocol We now show how one can obtain an increased localization precision by using frequency-entangled light. For simplicity of notation, we will consider only the case of \(N=2\) entangled photons. This can then be extended to arbitrary \(N\). For the \(N\)-photon state \(|\psi_{N}\rangle\) of (A8), the probability of detecting them at \(t_{1},\vec{r}_{1},\cdots,t_{N},\vec{r}_{N}\) is [23] \[p\propto|\langle 0|A^{+}(t_{1},\vec{r}_{1})\cdots A^{+}(t_{N},\vec{r}_{N})|\psi_{N} \rangle|^{2}\;. \tag{12}\] Consider the biphoton entangled state with wavefunction \[\tilde{\psi}_{2}(\vec{r},\vec{\rho})\propto\int d\omega\:\varphi(\omega)\:G_{ \omega}(\vec{r},\vec{\rho})\;, \tag{13}\] which gives the probability amplitude of finding the two photons at positions \(\vec{r}=(\vec{r}_{t},z)\) and \(\vec{\rho}=(\vec{\rho}_{t},\rho_{z})\) (in the Heisenberg picture there is no time evolution), and where \[G_{\omega}(\vec{r},\vec{\rho})\equiv \tag{14}\] \[\frac{1}{W(z+\rho_{z})}e^{-\frac{\kappa(\vec{r}_{t}^{2}+\vec{\rho }_{t}^{2})}{2z_{0}W^{2}}}e^{-\tilde{[}\kappa(z+\rho_{z})+\frac{\kappa(\vec{r}_ {t}^{2}+\vec{\rho}_{t}^{2})}{2zR}-\arctan\frac{z+\rho_{z}}{z_{0}}]},\] which represents two photons of identical frequency \(\omega\) in a Gaussian beam, see Eq. (6). Except for the multiplicative term \(1/W\) and the arctan term, \(G_{\omega}\) is basically a product of two Gaussian beam single-photon amplitudes. So we can reuse the calculations above for the single-photon amplitude to find that the temporal amplitude at the center of the image plane \(\vec{r}_{t}=\vec{\rho}_{t}=0\) at the image plane position \(z=\rho_{z}=z^{\prime}\) is given by the analogous of (11): \[\langle 0|A^{+}(t_{1},\vec{r})A^{+}(t_{2},\vec{\rho})|\psi\rangle= \tilde{\varphi}(t_{1}+t_{2}+2(z+d)/c)\:e^{i\theta}\;, \tag{15}\] with \(\theta\) some irrelevant phase factor. From this, it is clear that the time of arrival sum \(t_{1}+t_{2}\) has an uncertainty \(\Delta\tau\),the width of \(\tilde{\varphi}\). Which means that the average time of arrival \((t_{1}+t_{2})/2\) is estimated to be the correct value \(d+z=2d\) with a statistical error \(\Delta\tau/2\). Instead, from (11) we saw that, using a single photon state, one estimates the time of arrival with an uncertainty \(\Delta\tau\), so the average time of arrival of two photons will be estimated with an uncertainty \(\simeq\Delta\tau/\sqrt{2}\). The \(\sqrt{2}\) enhancement in precision is the \(\sqrt{N}\) gain that one expects from entanglement in quantum metrology. The biphoton analysis done here can be straightforwardly extended to the case of \(N\) entangled photons in a Gaussian beam. Namely, a state with wavefunction \[\psi(\vec{r}_{1},\cdots,\vec{r}_{N})\propto\int d\omega\varphi(\omega)G_{ \omega}(\vec{r}_{1},\cdots,\vec{r}_{N})\;, \tag{16}\] where \(G_{\omega}\) is a trivial generalization of (14). 
It gives a \(\sqrt{N}\) enhancement in the average photon time of arrival, which translates into a \(\sqrt{N}\) precision enhancement in the longitudinal localization for each point in the sky scanned by the \(N\)-photon Gaussian beam (16), when one measures the average arrival time \(\sum_{i}t_{i}/N\). ## IV Conclusions In conclusion we have presented a quantum radar protocol that uses entanglement in the frequency/wavelength degrees of freedom to provide an quantum enhancement equal to the square root \(\sqrt{N}\) of the number \(N\) of entangled photons employed. We have shown in detail how the optical transfer function formalism can be employed in the fully quantum regime we analyze here. ## Appendix A Quantization of the EM field In this appendix we review the usual theory for the quantization of the electromagnetic field. This is useful to set the notation we use in the paper, and also to keep track of the specific roles that all the radiation degrees of freedom play in our protocol. Specifically, it is useful to understand the peculiar role of the frequency degree of freedom of the radiation that our protocol hinges on. ### Classical EM in the Coulomb gauge Start from the Maxwell equations in vacuum in the Coulomb gauge for the scalar and vector potentials \(\Phi\) and \(\vec{A}\): \(\nabla^{2}\Phi(t,\vec{r})=0\), \(\Box\vec{A}(t,\vec{r})=\frac{\partial}{\partial t}\vec{\nabla}\Phi\), where \(\vec{r}=(x,y,z)\) is the spatial position, and \(\Box=\nabla^{2}-\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\) is the d'Alamabertian with \(\vec{\nabla}=(\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{ \partial}{\partial z})\) in Cartesian coordinates. Conventionally, since we are interested only in the quantization of the electromagnetic waves, one chooses the specific solution \(\Phi=0\), which implies \(\Box\vec{A}(t,\vec{r})=0\). For simplicity of notation we will consider scalar fields \(A(t,\vec{r})\) from now on: the vectorial part can be added by introducing two independent components, connected to the two polarizations of the em field (there are two polarizations because, in the Coulomb gauge, the potential \(\vec{A}\) is transverse: \(\vec{\nabla}\cdot\vec{A}=0\)). We separate the temporal and spatial degrees of freedom by taking a Fourier transform over time: \[A(t,\vec{r})=\int_{-\infty}^{+\infty}d\omega\:e^{-i\omega t}\:U_{\omega}(\vec{r })\;, \tag{17}\] where \(U_{\omega}\) is the component at frequency \(\omega\). Since \(A\) is real, we must have \(U_{-\omega}=U_{\omega}^{*}\), this condition can be enforced automatically if we separate the integral into a sum of two and change variable in the second: \[A(t,\vec{r})=\int_{0}^{\infty}d\omega[e^{-i\omega t}U_{\omega}(\vec{r})+e^{+i \omega t}U_{\omega}^{*}(\vec{r})]\;. \tag{18}\] For each component at frequency \(\omega\), it is clear from (17) that \(\Box A=0\) becomes the Helmholtz equation \[\nabla^{2}U_{\omega}(\vec{r})=\frac{\omega^{2}}{c^{2}}U_{\omega}(\vec{r}). \tag{19}\] A convenient2 solution is in terms of plane waves \(U_{\omega}(\vec{r})=\alpha(\vec{\kappa})e^{\pm i\vec{\kappa}\cdot\vec{r}}\). The real and imaginary part (or the modulus and phase) of \(\alpha\) are the two integration constants, and the wave direction \(\vec{\kappa}/\kappa\) parametrizes all the solutions, while we must choose \(|\vec{\kappa}|\) such that \(\omega=\vec{c}\cdot\vec{\kappa}=\pm|\vec{\kappa}|c\). 
The sign parametrizes two classes of solutions: we must choose \(+\) if the wave vector \(\vec{\kappa}\) is parallel to the wave velocity \(\vec{c}\), or \(-\) if it is antiparallel. The first case refers to the retarded waves, the second to the advanced waves ([24], sec. 6.4). We usually choose past boundary conditions, so we only use retarded waves3. Summarizing, the vacuum solution of the Maxwell equations is of the form Footnote 3: If we were to choose future boundary conditions, we would need to consider only advanced waves, and if we were to choose mixed future-and-past boundary conditions [25], we would have to keep both solutions, which, incidentally, is a real problem in quantum field theory, as this leads to a non-Hamiltonian evolution of the electromagnetic field? Advanced waves can be seen as propagating negative energy in the forward time direction or positive energy in the negative time direction. \[A(t,\vec{r})=\int_{\mathbb{R}^{3}}d^{3}\vec{\kappa}[\alpha(\vec{k})\,e^{-i( \omega_{\kappa}t-\vec{\kappa}\cdot\vec{r})}+\alpha^{*}(\vec{k})\,e^{i(\omega_{ \kappa}t-\vec{\kappa}\cdot\vec{r})}]\;, \tag{10}\] where \(\alpha(\vec{\kappa})\) is the positive-frequency amplitude4 and \(\omega_{\kappa}=|\vec{\kappa}|c\), so that the integral over \(\vec{\kappa}\) takes care of the integral over \(\omega\) in (10) (its modulus) and of the integral over the directions \(\vec{\kappa}/\kappa\) that enumerate all plane waves. Footnote 4: Note that the Maxwell equations are solved also by negative frequency plane waves of the type \(\alpha_{-}(\vec{\kappa})\,e^{-i(\omega_{\kappa}t+\vec{\kappa}\cdot\vec{r})}\) but, as discussed above, we will ignore these solutions, by choosing past boundary conditions as is done usually. ### Quantum em: observables The energy of the electromagnetic field is \(H=\frac{\epsilon_{0}}{2}\int d^{3}\vec{r}[E^{2}(t,\vec{r})+c^{2}B^{2}(t,\vec{r })]\), where \(\vec{E}\) and \(\vec{B}\) are the electric and magnetic fields. In terms of the amplitudes \(\alpha(\vec{\kappa})\), one can show that, in the Coulomb gauge, the energy is \[H=\frac{1}{2}\int d^{3}\vec{\kappa}\,(P_{\vec{\kappa}}^{2}+\omega_{\kappa}^{2 }X_{\vec{\kappa}}^{2})\;, \tag{11}\] with \(P_{\vec{\kappa}}\propto i(\alpha_{\vec{\kappa}}^{*}-\alpha_{\vec{\kappa}})\) and \(X_{\vec{\kappa}}\propto(\alpha_{\vec{\kappa}}^{*}+\alpha_{\vec{\kappa}})\). Eq. (11) is the energy of a collection of independent (noninteracting) harmonic oscillators (one for each value of \(\vec{\kappa}\)), so we can quantize by considering \(X\) and \(P\) as "position" and "momentum" operators, promoting the amplitudes \(\alpha\) to operators \(a\). Namely, we impose \([X_{\vec{\kappa}},P_{\vec{\kappa}}]=i\delta(\vec{\kappa}-\vec{\kappa}^{\prime})\), where the delta shows that they are independent oscillators for each \(\vec{\kappa}\). From the definitions of \(X\) and \(P\), this implies the commutators \[[a(\vec{\kappa}),a^{\dagger}(\vec{\kappa}^{\prime})]=\delta(\vec{\kappa}-\vec {\kappa}^{\prime})\;,[a(\vec{\kappa}),a(\vec{\kappa}^{\prime})]=0\;. \tag{12}\] The quantization of the general solution of the Maxwell equations in the Coulomb gauge \(\Box A=0\), is then \[A(t,\vec{r})=\int d^{3}\vec{\kappa}\,[a(\vec{k})\,e^{-i(\omega_{\kappa}t-\vec {\kappa}\cdot\vec{r})}+a^{\dagger}(\vec{k})\,e^{i(\omega_{\kappa}t-\vec{ \kappa}\cdot\vec{r})}]\;, \tag{13}\] namely Eq. (10) quantized. 
Importantly, since we are introducing the time evolution in the operators, we are working in the Heisenberg picture (or in the interaction picture with the free-field Hamiltonian to evolve the operators). We are working with the vector potential field \(A\), but the electric field and magnetic fields are trivially obtained from it: \(\vec{E}=-\frac{\partial}{\partial t}\vec{A}-\vec{\nabla}\Phi\), \(\vec{B}=\vec{\nabla}\times\vec{A}\), which give expressions very similar to (13), except for the fact that the derivatives introduce a minus sign between the two terms of the right-hand-side (as \(\vec{A}\), also \(\vec{E}\) and \(\vec{B}\) have two independent components since they are transverse: \(\vec{\nabla}\cdot\vec{B}=0\) and, in vacuum, \(\vec{\nabla}\cdot\vec{E}=0\)). The intensity of the field is proportional to the time averaged square \(\langle E^{2}\rangle_{t}\). This is basically equal to the average photon number in the field. Indeed, from (10) and the fact that \(\vec{E}=-\frac{\partial}{\partial t}\vec{A}\) we have that, classically, \(E^{2}=-\omega(\alpha^{2}e^{i\phi}+(\alpha^{*})^{2}e^{-i\phi}-2|\alpha|^{2})\) for a classical field with only a single \(\vec{\kappa}\), with \(\Phi=\omega t-\vec{\kappa}\cdot\vec{r}\). This almost matches the quantum result one would get for a coherent state \(|\alpha\rangle\) for which \(a|\alpha\rangle=\alpha|\alpha\rangle\) (which represents a classical field). A coherent state has \(E^{2}=-\omega(\alpha^{2}e^{i\phi}+(\alpha^{*})^{2}e^{-i\phi}-2|\alpha|^{2}-1)\), where the -1 term comes from the commutator \([a,a^{\dagger}]=1\) (valid for the quantization of a field with single \(\vec{\kappa}\) vector). The time average removes the terms with the phase, leaving only the average photon number for a coherent state, i.e. \(\langle\alpha|a^{\dagger}a|\alpha\rangle=|\alpha|^{2}\). So while \(E^{2}\) does not coincide with the average photon number, the time-averaged \(E^{2}\) essentially does for classical fields. Similar considerations apply also to states with fixed photon number we consider below, where \(\langle a^{2}\rangle=0\). ### Quantum em: states In the classical case, we can choose a specific form of the \(\alpha(\vec{\kappa})\) to obtain a specific solution of the Maxwell equations (which can be done by choosing appropriate boundary conditions for the field). In the quantum case, the \(\alpha\to a(\vec{\kappa})\) are operators. There are two ways to assign a value to them: (i) have them act on eigenstates of the field (which implies that the field is in a state where there are no quantum fluctuations of the field). From the form of Eq. (13), it is clear that the eigenstates of the field are quadrature eigenstates for each \(\vec{\kappa}\), where the quadrature is \(Q_{\varphi}\equiv(a\,e^{-i\varphi}+a^{\dagger}e^{i\varphi})/\sqrt{2}\). These eigenstates are unphysical as they are infinitely squeezed states with infinite average energy \(\hbar\omega_{\kappa}\langle a^{\dagger}(\kappa)a(\kappa)\rangle\). (ii) we can calculate the field expectation value on an arbitrary state \(|\psi\rangle\) of the field (which implies that we can calculate the average field, because there are quantum fluctuations: measuring the field multiple times, we would get different results). How do we choose \(|\psi\rangle\)? It is the state of the em degrees of freedom. 
Since these are given by a harmonic oscillator for each \(\vec{\kappa}\), the Hilbert space is a Fock space for each \(\vec{\kappa}\), so the most general state is given by \[|\psi\rangle=\sum_{n}\gamma_{n}|\psi_{n}\rangle,\text{ with } \tag{10}\] \[|\psi_{n}\rangle=\!\int\!\!d^{3}\vec{\kappa}_{1}\cdots d^{3}\vec {\kappa}_{n}\;\psi_{n}(\vec{\kappa}_{1},\cdots,\vec{\kappa}_{n})a^{\dagger}( \vec{\kappa}_{1})\cdots a^{\dagger}(\vec{\kappa}_{n})|0\rangle,\] where \(\gamma_{n}\) is the probability amplitude to have \(n\) photons, \(\psi_{n}\) is the joint probability amplitude that they are in modes \(\vec{\kappa}_{1},\cdots,\vec{\kappa}_{n}\), and \(|0\rangle\) is the vacuum. In this paper we only consider states with a single component \(\gamma_{N}=1\). In the Heisenberg picture, the state does not evolve in time. The wavefunction normalization condition is \[\int d^{3}\vec{\kappa}_{1}\cdots d^{3}\vec{\kappa}_{n}\;|\psi_{n}(\vec{\kappa} _{1},\cdots,\vec{\kappa}_{n})|^{2}=1\;\forall n\;. \tag{11}\] More rigorously, the state (10) refers only to the situation in which all the photons are in different modes (namely, the \(\psi_{n}\) does not contain Dirac \(\delta\)s over the \(\vec{\kappa}_{i}\)). The most general situation is to have the \(n\) photons distributed into \(m\leqslant n\) modes (\(m=1\) if all the \(n\) photons are in one mode, \(m=n\) if there is one photon per mode, as above). The indistinguishable nature of the photons implies that only \(m\) of the \(\begin{pmatrix}m\\ n\end{pmatrix}\) possibilities can be tracked, namely we can only know that \(n_{i}\) photons are in mode \(\vec{\kappa}_{i}\) for \(i=1,\cdots,m\). In this case, the wavefunction is \(\psi_{n}(n_{1},\vec{\kappa}_{1},\cdots,n_{m},\vec{\kappa}_{m})/\sqrt{n_{1}! \cdots n_{m}!}\) with \(\sum_{i}n_{i}=n\), where the factorials appear because the \(n_{i}\) photon Fock state in a mode is given by \(|n_{i}\rangle=(a^{\dagger})^{n_{i}}|0\rangle/\sqrt{n_{i}!}\) and where \(\psi_{n}\) is the joint probability amplitude that the \(n\) photons are partitioned as \(\{n_{i}\}\)_and_ that their wave vectors are \(\{\vec{\kappa}_{i}\}\). For single photon states, only the term \(n=1\) of (10) survives. The spatial dependence of the wavefunction can be obtained by taking the Fourier transform \(\tilde{\psi}\) of \(\psi\equiv\psi_{1}\): \[\int d^{3}\vec{\kappa}\;\psi(\vec{\kappa})\;a^{\dagger}(\vec{\kappa})|0 \rangle=\int d^{3}\vec{r}\;\tilde{\psi}(\vec{r})\;a^{\dagger}(\vec{r})|0 \rangle\;, \tag{12}\] where \(a(\vec{r})\propto\int d^{3}\vec{\kappa}a(\vec{\kappa})e^{i\vec{\kappa}\cdot \vec{r}}\) is the annihilator of a photon at position \(\vec{r}\), so that \(\tilde{\psi}(\vec{r})\) is the probability that the photon is in \(\vec{r}\). (Note that, except in the limit discussed in the next subsection, this is _not_ in general equal to the probability amplitude of measuring the photon at position \(\vec{r}\), since there is a difference between the position of the photon and of its energy, a well known problem in quantum field theory, e.g. [23; 26; 27]. Indeed, as is clear from the above analysis, the photon is obtained from the quantization of the vector potential \(A\), which is a gauge-dependent quantity, whereas its energy is, clearly, a gauge-independent quantity.) We can choose \(\psi(\vec{\kappa})={\cal N}\alpha(\vec{\kappa})\), where \(\alpha(\vec{\kappa})\) is the Fourier transform of the classical solution \(A(t,\vec{r})\) of (11) and \({\cal N}\) is a normalization for Eq. (11). 
Indeed, as discussed below, \(|A(t,\vec{r})|^{2}\) is the light intensity at position \(t,\vec{r}\), so it is proportional to the probability of finding the photon at such position, so \(A\) is the probability amplitude, and its Fourier transform \(\alpha(\vec{\kappa})\) is the probability amplitude in the \(\vec{\kappa}\) space. ### Quantum em: photodetection It can be shown that for photodetectors with efficiency \(\eta\), sufficiently small temporal resolution \(\tau\), and spatial resolution \(\sigma\), the probability of a photodetection at spacetime position \((t,\vec{r})\) is [23]\(p(t,\vec{r})\propto\eta\tau\sigma\langle\psi|[A^{+}(t,\vec{r})]^{\dagger}A^{+}( t,\vec{r})|\psi\rangle\). In the case in which the system state \(|\psi\rangle\) contains a single photon, we can use the fact that \(a\) is the photon annihilator to simplify it to \(p\propto|\langle 0|A^{+}|\psi\rangle|^{2}\). To show this, consider \(|\psi\rangle=\int d^{3}\vec{\kappa}^{\prime}\psi(\vec{\kappa}^{\prime})a^{ \dagger}(\vec{\kappa}^{\prime})|0\rangle\), with \(\psi(\vec{\kappa}^{\prime})\) the probability amplitude that the photon has wave vector \(\vec{\kappa}^{\prime}\) (so that its Fourier transform can be interpreted as the probability amplitude that the photon is in position \(\vec{r}\)). Then \[A^{+}|\psi\rangle=\int d^{3}\vec{\kappa}\;d^{3}\vec{\kappa}^{ \prime}\psi(\vec{\kappa}^{\prime})a(\vec{\kappa})e^{-i(\omega_{n}t-\vec{\kappa} \cdot\vec{r})}a^{\dagger}(\vec{\kappa}^{\prime})|0\rangle=\] \[\tilde{\psi}(\vec{r}-\vec{c})|0\rangle\;, \tag{13}\] where \(\vec{c}\) is the speed of light with the direction \(\vec{\kappa}/\kappa\) of the beam. Eq. (13) follows from the commutator (11) and the fact that \(a|0\rangle=0\), and where \(\tilde{\psi}\) is the Fourier transform of \(\psi\). [Note the use of the Heisenberg picture: the time evolution is only in the operator \(A^{+}\), not in the state.] ###### Acknowledgements. This work received support from EU H2020 QuantERA ERA-NET Cofund in Quantum Technologies, Quantum Information and Communication with High-dimensional Encoding (QuICHE) under Grant Agreement 731473 and 101017733, from the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under Contract No. DE-AC02-07CH11359. L.M. acknowledges support from the PNRR MUR Project PE000023-NQSTI and from the National Research Centre for HPC, Big Data and Quantum Computing, PNRR MUR Project CN0000013-ICSC. Y.Z. thanks Jin-Shi Xu for the discussions. C.R. was supported by the National Natural Science Foundation of China (Grant No.12075245, 12247105), the Natural Science Foundation of Hunan Province (2021JJ10033).
2306.00164
Hyperfine Spectroscopy of Isotopically Engineered Group-IV Color Centers in Diamond
A quantum register coupled to a spin-photon interface is a key component in quantum communication and information processing. Group-IV color centers in diamond (SiV, GeV, and SnV) are promising candidates for this application, comprising an electronic spin with optical transitions coupled to a nuclear spin as the quantum register. However, the creation of a quantum register for these color centers with deterministic and strong coupling to the spin-photon interface remains challenging. Here, we make first-principles predictions of the hyperfine parameters of the group-IV color centers, which we verify experimentally with a comprehensive comparison between the spectra of spin active and spin neutral intrinsic dopant nuclei in single GeV and SnV emitters. In line with the theoretical predictions, detailed spectroscopy on large sample sizes reveals that hyperfine coupling causes a splitting of the optical transition of SnV an order of magnitude larger than the optical linewidth and provides a magnetic-field insensitive transition. This strong coupling provides access to a new regime for quantum registers in diamond color centers, opening avenues for novel spin-photon entanglement and quantum sensing schemes for these well-studied emitters.
Isaac B. W. Harris, Cathryn P. Michaels, Kevin C. Chen, Ryan A. Parker, Michael Titze, Jesus Arjona Martinez, Madison Sutula, Ian R. Christen, Alexander M. Stramma, William Roth, Carola M. Purser, Martin Hayhurst Appel, Chao Li, Matthew E. Trusheim, Nicola L. Palmer, Matthew L. Markham, Edward S. Bielejec, Mete Atature, Dirk Englund
2023-05-31T20:19:01Z
http://arxiv.org/abs/2306.00164v2
# Hyperfine Spectroscopy of Isotopically Engineered ###### Abstract A quantum register coupled to a spin-photon interface is a key component in quantum communication and information processing. Group-IV color centers in diamond (SiV-, GeV-, and SnV-) are promising candidates for this application, comprising an electronic spin with optical transitions coupled to a nuclear spin as the quantum register. However, the creation of a quantum register for these color centers with deterministic and strong coupling to the spin-photon interface remains challenging. Here, we make first-principles predictions of the hyperfine parameters of the group-IV color centers, which we verify experimentally with a comprehensive comparison between the spectra of spin active and spin neutral intrinsic dopant nuclei in single GeV- and SnV- emitters. In line with the theoretical predictions, detailed spectroscopy on large sample sizes reveals that hyperfine coupling causes a splitting of the optical transition of SnV- an order of magnitude larger than the optical linewidth and provides a magnetic-field insensitive transition. This strong coupling provides access to a new regime for quantum registers in diamond color centers, opening avenues for novel spin-photon entanglement and quantum sensing schemes for these well-studied emitters. + Footnote †: These two authors contributed equally + Footnote †: These two authors contributed equally + Footnote †: These two authors contributed equally ## I Introduction Spin-photon interfaces are the backbone for many quantum communication [1; 2; 3; 4], transduction [5; 6], sensing [7; 8; 9], and computing schemes [10; 11], with one of the most promising physical implementations being color centers in the solid-state [12]. Group-IV color centers in diamond are particularly appealing for spin-photon interfaces, offering excellent optical properties when integrated into nanostructures [13; 14; 15; 16; 17], insensitivity to charge noise [18], and high-fidelity spin control [19; 20]. In these systems, localized electronic states around a lattice defect provide a local spin qubit, while the transitions between ground and excited electronic orbital energy levels provide optical access for readout, initialisation, and spin-photon entanglement generation [21; 20; 22]. Central to many color-center-based quantum protocols is the presence of a local quantum register coupled strongly to the spin-photon interface, which allows for the storage of quantum information while spin-photon entanglement operations are being performed [23; 24; 25]. The most common implementation of this local register is with a proximal spin-active nucleus of the host lattice, coupled to the electron in the spin-photon interface by hyperfine coupling, such as \({}^{13}\)C in diamond [26; 27; 28]. \({}^{13}\)C memories have been used in demonstrations of quantum networks [3; 22; 29; 30], and even allow for tens of nuclear spin memories coupled to a single spin-photon interface [31]. However, the inclusion of the \({}^{13}\)C register is inherently non-deterministic, reducing the yield of good quantum registers. Intrinsic dopant nuclear spins, on the other hand, allow for the deterministic inclusion of a quantum register [4; 32]. For many applications where a large number of quantum registers is not required, these properties make dopant nuclei easier to integrate into quantum systems that require high-yield, high-fidelity control. 
While the electronic spin properties of group-IV color centers have been investigated [33; 34; 35; 36], the development of a thorough understanding of the coupling to the intrinsic nuclear spin register is needed. Each group-IV element has at least one nuclear isotope with spin \(I>0\), but only the spin-1/2 \({}^{29}\)Si in the SiV- has been used as a memory qubit [4], with a reported ground-state hyperfine coupling of approximately 70 MHz [19; 37]. Hyperfine coupling in other group-IV color centers remains under-studied, with only a few recent reports of hyperfine coupling, including to the intrinsic nuclear spin of GeV- [38], to weakly coupled [39] and strongly coupled [20] \({}^{13}\)C nuclei using the SnV- and to the intrinsic nuclear spin of \({}^{117}\)SnV-[40]. In this paper, we develop a detailed model of the intrinsic nuclear memory of group-IV color centers and predict the hyperfine parameters for each group-IV isotope using density functional theory. We then compare these predictions with our experimental observations of optical hyperfine signatures for \({}^{73}\)GeV\({}^{-}\), \({}^{117}\)SnV\({}^{-}\), and \({}^{119}\)SnV\({}^{-}\), allowing the assignment of isotopic signatures directly to the optical spectrum. We demonstrate that SiV\({}^{-}\), GeV\({}^{-}\), and SnV\({}^{-}\) have substantially higher hyperfine coupling strengths than NV\({}^{-}\), and we capture a proportionality between hyperfine strength with atomic mass of the color center. This leads to an optical signature of SnV\({}^{-}\) spin-active isotope hyperfine splitting that is 13 times larger than the homogeneous linewidth of the optical transitions, allowing for direct optical access of the nuclear spin. ## II First-principles model of group-IV color center hyperfine interaction Group-IV color centers consist of a single dopant atom from column 4 of the periodic table sitting at a \(D_{\rm 3d}\)-symmetric interstitial site between two neighboring carbon vacancies in the diamond lattice, as shown in Fig. 1(a). The electronic structure is a single hole orbiting the defect [34], which exists in a ground \(E_{\rm g}\) or excited \(E_{\rm u}\) state, as illustrated in Fig. 1(b). We model the state of the hole in either manifold as a tensor product of the orbital and spin degrees of freedom, resulting in a basis of the four states \[\left|\psi_{\rm gnd(exc)}\right>=\left|e_{\rm g(u)\pm}\right>\otimes\left| \uparrow/\downarrow\right> \tag{1}\] where the quantisation axis of the orbital and spin degrees of freedom is along the \(D_{\rm 3d}\) symmetry axis. The Hamiltonian for the hole-spin system, \(\hat{H}_{\rm S}\), is a sum of contributions from spin-orbit (\(\hat{H}_{\rm SOC}\)), strain (\(\hat{H}_{\rm Egx/y}\)), and magnetic-field (\(\hat{H}_{\rm B/L}\)) coupling. Hence, \[\hat{H}_{\rm S}=\hat{H}_{\rm SOC}+\hat{H}_{\rm Egx}+\hat{H}_{\rm Egy}+\hat{H}_ {\rm B}+\hat{H}_{\rm L} \tag{2}\] where, in the basis defined in Eq. 1, \(\hat{H}_{\rm SOC}=\frac{1}{2}\lambda\,\sigma_{z}^{\rm orb}\sigma_{z}^{\rm S}\), \(\hat{H}_{\rm Egx/y}=-\alpha\sigma_{z/y}^{\rm orb}\), \(\hat{H}_{\rm B}=g\mu_{\rm B}\mathbf{B}\cdot\hat{\bf S}\), \(\hat{H}_{\rm L}=q\mu_{\rm B}B_{z}\sigma_{z}^{\rm orb}\), \(g\) is the electron g-factor, \(\mu_{\rm B}\) is the Bohr magneton, \(\hat{\bf S}=\frac{1}{2}(\sigma_{x}^{\rm S},\sigma_{y}^{\rm S},\sigma_{z}^{\rm S})\) is the standard electron spin operator, and \(\sigma_{i}^{\rm orb/S}\) are the Pauli matrices applied to the orbital/spin degree of freedom. 
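For concreteness, the hole-spin Hamiltonian of Eq. (2) can be written down numerically as a \(4\times 4\) matrix in the \(\left|e_{\pm}\right>\otimes\left|\uparrow/\downarrow\right>\) basis. The sketch below is illustrative only and is not code from this work: it assumes that the \(E_{\rm gx}\) and \(E_{\rm gy}\) strain terms enter as \(-\alpha\,\sigma_{x}^{\rm orb}\) and \(-\beta\,\sigma_{y}^{\rm orb}\), takes the magnetic field along the symmetry axis, and uses placeholder parameter values (an SnV\({}^{-}\)-like ground-state spin-orbit splitting of roughly 850 GHz and an orbital quenching factor \(q\) of order 0.1).

```python
import numpy as np

# Pauli matrices and 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def hole_spin_hamiltonian(lam, alpha, beta, Bz, g=2.0, q=0.1, muB=13.996):
    """4x4 hole-spin Hamiltonian in the |e_pm> (x) |up/down> basis (energies in GHz, B in T).

    lam          : spin-orbit splitting
    alpha, beta  : Egx/Egy strain couplings (assumed to act as -alpha*sx_orb, -beta*sy_orb)
    Bz           : magnetic field along the D3d symmetry axis
    q            : quenched orbital g-factor (illustrative value)
    muB          : Bohr magneton in GHz/T
    """
    h_soc = 0.5 * lam * np.kron(sz, sz)                    # H_SOC = (lambda/2) sz_orb sz_spin
    h_strain = -alpha * np.kron(sx, id2) - beta * np.kron(sy, id2)
    h_spin_zeeman = g * muB * Bz * np.kron(id2, 0.5 * sz)  # H_B = g muB B.S with S_z = sz/2
    h_orb_zeeman = q * muB * Bz * np.kron(sz, id2)         # H_L = q muB Bz sz_orb
    return h_soc + h_strain + h_spin_zeeman + h_orb_zeeman

# Example: zero strain, 0.1 T field; two spin-orbit branches split by ~lam, each Zeeman-split
H = hole_spin_hamiltonian(lam=850.0, alpha=0.0, beta=0.0, Bz=0.1)
print(np.round(np.linalg.eigvalsh(H), 2))
```

Adding the nuclear terms of Eqs. (3)-(5) to such a matrix (see the sketch further below) yields the full electro-nuclear Hamiltonian of Eq. (6).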
The forms of the four-level Hamiltonian for these perturbations are inferred from group theory [34; 41; 42], generally up to a constant factor that must be calculated from first-principles [36] or Figure 1: First-principles hyperfine structure of group-IV color centers. (a) Split-vacancy structure of a representative group-IV color center, \({}^{117}\)SnV\({}^{-}\), highlighting the interaction between the electron and nuclear spins. The group-IV dopant is shown in red, the nearest-neighbor carbon atoms in black and the lattice vacancies in gray. (b) Level structure showing \(E_{\rm g}\) and \(E_{\rm u}\) manifolds for a spin-1/2 group-IV color center. Spin-orbit coupling (SOC), \(\lambda\), splits each manifold into two branches (only shown for \(E_{\rm g}\)). The addition of the hyperfine interaction with a spin-1/2 nucleus at zero strain splits each branch into two degenerate levels with aligned or anti-aligned nuclear and electron spins. (c, d, e) Cross-section of the spin density along the plane shown in panel a for \({}^{29}\)SiV\({}^{-}\), \({}^{73}\)GeV\({}^{-}\), and \({}^{117}\)SnV\({}^{-}\). The presence of a heavier group-IV ion results in an increased spin density at the inversion symmetry center. (f, g, h) Energies of the lower branch ground state hyperfine levels as a function of strain, \(\alpha\), shown in panel a and expressed as a fraction of spin-orbit coupling strength, for \({}^{29}\)SiV\({}^{-}\), \({}^{73}\)GeV\({}^{-}\), and \({}^{117}\)SnV\({}^{-}\). Red and blue indicate the total angular momentum squared of the electro-nuclear system, \(\langle J^{2}\rangle\), and correspond to the cases when electron and nucleus are aligned or anti-aligned respectively. measured experimentally [34; 43]. We do not model separately the symmetry-breaking Jahn-Teller distortion, as its effects can be absorbed into an effective reduction of the spin-orbit interaction [36]. ### Hyperfine interaction model The nuclear spin interacts with magnetic field via the nuclear Zeeman interaction \[\hat{H}_{\rm I}=g_{\rm I}\mu_{\rm N}\mathbf{B}\cdot\hat{\mathbf{I}} \tag{3}\] where \(g_{\rm I}\) is the nuclear \(g\)-factor, \(\mu_{\rm N}\) is the nuclear magneton, and \(\hat{\mathbf{I}}\) is the standard nuclear spin operator for a spin \(I\) nucleus. To model the hyperfine interaction, we expand the basis in Eq. 1 to include the nuclear spin degree of freedom quantized along the defect axis of symmetry, \(|m_{\rm I}\rangle\). The nucleus interacts with the electronic spin-orbit system via the hyperfine interaction \[\hat{H}_{\rm HF}=\hat{\bf S}\cdot{\bf A}\cdot\hat{\bf I} \tag{4}\] where \({\bf A}={\bf A}_{\rm FC}+{\bf A}_{\rm DD}\) is the hyperfine tensor [27; 28]. As discussed in Appendix C, the terms \({\bf A}_{\rm FC}\) and \({\bf A}_{\rm DD}\) correspond to the Fermi contact interaction and the dipole-dipole interaction respectively. The Fermi contact is isotropic, and so can be expressed as a single scalar parameter \(A_{\rm FC}\) multiplied by the identity matrix. For a group-IV nucleus located at the inversion-symmetric point in the color center, the \(D_{\rm 3d}\) symmetry restricts \({\bf A}_{\rm DD}\) to be a diagonal matrix with elements \(A_{\rm DD}=-2A_{\rm DD}^{xx}=-2A_{\rm DD}^{yy}=A_{\rm DD}^{zz}\) in the frame defined in Fig. 1(a) [44]. Eq. 4 then becomes \[\hat{H}_{\rm HF}=A_{\perp}\left(\hat{S}_{x}\hat{I}_{x}+\hat{S}_{y}\hat{I}_{y} \right)+A_{\parallel}\hat{S}_{z}\hat{I}_{z}. 
\tag{5}\] The \(A_{\parallel}=A_{\rm FC}+A_{\rm DD}\) term is a shift of the energy levels that depends on the alignment of the electronic and nuclear spins. The \(A_{\perp}=A_{\rm FC}-2A_{\rm DD}\) terms mix states with identical orbital angular momentum but different spin and nuclear quantum numbers. In the absence of other perturbations, the hyperfine levels form two hyperfine manifolds with total angular momentum \(J=I\pm S\) separated in energy by \(A_{\parallel}\), with further splitting \(\frac{3}{2}A_{\rm DD}\) between the \(m_{\rm J}\) sublevels within each manifold. Nuclear spin-orbit and quadrupole coupling, discussed in Appendix D, also result in additional terms in the hyperfine interaction Hamiltonian. These terms contribute less than 5% of the total hyperfine interaction strength, so we do not include these in our model. The Jahn-Teller distortion mentioned previously also affects the hyperfine interaction, however the change in interaction strength is less than 10% of \(A_{\rm FC}\) for SiV\({}^{-}\) as discussed in Appendix D. We also exclude this effect from our model on the grounds that it is not strong enough to cause qualitative change in the system. This exclusion also allows us to avoid modeling the vibronic electron-photon modes for computational simplicity. The final equation for the electro-nuclear system is the sum of the terms in Eq. 2, Eq. 3, and Eq. 5: \[\hat{H}=\hat{H}_{\rm S}+\hat{H}_{\rm I}+\hat{H}_{\rm HF}. \tag{6}\] ### Hyperfine parameters from DFT To estimate the values of \(A_{\rm FC/DD}\), we perform density functional theory (DFT) calculations with Quantum Espresso [45], as detailed in Appendix C. The resulting spin density in the ground state is plotted in Fig. 1(c-e) for SiV\({}^{-}\), GeV\({}^{-}\), and SnV\({}^{-}\) respectively, and the calculated values for the hyperfine parameters are shown in Tab. 1. Normalizing for the nuclear gyromagnetic ratio, the hyperfine interaction increases with the mass of the group-IV dopant nucleus. This is explained by the increasing contribution of the dopant orbitals compared to the carbon dangling bonds for the heavier elements. The increased dopant orbital contribution to the spin density results in a large spin density near the dopant nucleus, and therefore an increased hyperfine interaction. This general trend is also present in other systems, such as group-V donors in silicon [46; 47], and molecular systems [48]. ### Effect of strain on hyperfine structure The electronic spin-orbit interaction, \(\hat{H}_{\rm SOC}\), contained within the \(\hat{H}_{\rm S}\) term, separates states in which the hole-spin is aligned/anti-aligned with the orbital angular momentum by an amount \(\lambda\), as shown in Fig. 1(b). The hyperfine shift due to the \(A_{\parallel}\) term remains unaffected by this spin-orbit splitting. However, the splitting also means that the upper and lower branches contain states with opposite \(e_{\rm g(u)\pm}\) orbital character. As the \(A_{\perp}\) terms can only mix states with the same orbital character, in the limit of \(\lambda\gg A_{\perp}\) these terms are perturbatively suppressed. The eigenstates cease to have well-defined total angular momentum, and are simply of the form \(\left|e_{\rm g(u)\pm}\right\rangle\left|\uparrow/\downarrow\right\rangle\left| m_{\rm I}\right\rangle\), with eigenvalues separated by \(\frac{1}{2}A_{\parallel}\), as can be seen in Fig. 1(b). 
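This perturbative suppression of \(A_{\perp}\) can be checked numerically. The following sketch (illustrative, not code from this work) diagonalizes \(\hat{H}_{\rm SOC}+\hat{H}_{\rm HF}\) for a spin-1/2 nucleus using the \({}^{117}\)SnV\({}^{-}\) ground-state DFT values from Tab. 1 and an assumed ground-state spin-orbit splitting of roughly 850 GHz; within the lower spin-orbit branch the four electro-nuclear levels come out in two near-degenerate pairs separated by \(\approx\frac{1}{2}A_{\parallel}\).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# 117SnV- ground-state DFT values from Tab. 1 (MHz); lambda is an assumed ~850 GHz splitting
lam = 850e3
A_fc, A_dd = 1389.09, -26.65
A_par, A_perp = A_fc + A_dd, A_fc - 2 * A_dd

# Ordering: orbital (x) electron spin (x) nuclear spin, with S = sigma/2 and I = sigma/2
H_soc = 0.5 * lam * kron3(sz, sz, id2)
H_hf = (A_perp * (kron3(id2, 0.5 * sx, 0.5 * sx) + kron3(id2, 0.5 * sy, 0.5 * sy))
        + A_par * kron3(id2, 0.5 * sz, 0.5 * sz))

levels = np.sort(np.linalg.eigvalsh(H_soc + H_hf))
gaps = np.diff(levels[:4])          # the four electro-nuclear levels of the lower branch
print(gaps)                         # two tiny gaps and a middle gap close to A_par/2 (~681 MHz)
```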
When transverse \(E_{\rm gx(y)}\) strain is introduced into the system, as parameterized by the amount of strain \(\alpha\left(\beta\right)\), the upper and lower orbital states are further mixed together. The hyperfine levels are affected by the orbital mixing, as shown in Fig. 1(f-h) for \(E_{\rm gx}\) strain using the DFT ground state values for \({}^{29}\)SiV\({}^{-}\), \({}^{73}\)GeV\({}^{-}\), and \({}^{117}\)SnV\({}^{-}\). \(E_{\rm gy}\) strain only changes the orbital character of the orbitals in the upper/lower branch, and has an identical effect on the hyperfine energy levels. In the limit of large strain \(\alpha,\beta\gg\lambda\), the upper and lower branches have well-defined orbital character, and the \(A_{\perp}\) hyperfine terms are again able to mix the states \(\left|\uparrow/\downarrow\right\rangle\left|m_{\rm I}\right\rangle\) within each branch. Each orbital branch then separates into two hyperfine manifolds of well-defined total spin angular momentum \(J\), separated by \(A_{\rm FC}\), and with a spacing of \(\frac{3}{2}A_{\rm DD}\) between the \(m_{\rm J}\) sublevels. The isotopes \({}^{29}\)Si and \({}^{117}\)Sn are spin-1/2 nuclei, and as such they exhibit splitting into \(J=0\) singlet and \(J=1\) triplet states, whereas the spin-9/2 \({}^{73}\)Ge nucleus has a more complicated splitting into \(J=4\) and \(J=5\) states. ### Hyperfine structure in optical spectra The strength of the optical transitions is proportional to the matrix element \(\langle\psi_{\rm exc}|\hat{d}|\psi_{\rm gnd}\rangle\), where \(\hat{d}\) is the electric dipole operator. Since \(\hat{d}\) only acts on the orbital degree of freedom [34], optical transitions cannot flip the nuclear or electronic spins, and only the spin-conserving transitions contribute significantly (see Appendix E). We are in the limit of large spin-orbit coupling, where the electro-nuclear ground state Hamiltonian in Eq. 6 produces a series of equally spaced hyperfine levels in both the upper and lower branches of the ground state separated by \(\frac{1}{2}A_{\parallel}^{\rm gnd}=\frac{1}{2}(A_{\rm FC}^{\rm gnd}+A_{\rm DD}^{\rm gnd})\). Similarly, the excited state has hyperfine levels equally spaced by \(\frac{1}{2}A_{\parallel}^{\rm exc}\). The net result is that the electronic spin C-transition splits into a series of hyperfine transitions, with an optical hyperfine spacing \(A_{\rm PLE}=\frac{1}{2}(A_{\parallel}^{\rm exc}-A_{\parallel}^{\rm gnd})\). For the spin-1/2 isotopes discussed in this paper, the hyperfine interaction results in four total transitions. At zero strain, these occur in two degenerate pairs separated by \(A_{\rm PLE}\): at lower frequency between the \(m_{\rm J}=\pm 1\) hyperfine states in the ground/excited level (\(C_{\rm H1}\)), and between the two \(m_{\rm J}=0\) states at a higher frequency (\(C_{\rm H0}\)). The two peaks and their splitting \(A_{\rm PLE}\) are labeled for \({}^{117}\)SnV\({}^{-}\) in Fig. 2(a). The \(m_{\rm J}=\pm 1\) states are unaffected by strain, while the \(m_{\rm J}=0\) states mix and separate in both ground and excited manifolds. The strain-induced mixing splits the \(C_{\rm H0}\) peak in the spectrum into two peaks, labeled \(C_{\rm H0}^{\prime}\) and \(C_{\rm H0}^{\prime\prime}\), each having half the intensity of the \(C_{\rm H1}\) peak, with a splitting \(\delta\) labeled in Fig. 2(a).
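As a quick consistency check (a worked example, not part of the original analysis), the \(A_{\rm PLE}\) values quoted in Tab. 1 follow directly from the tabulated ground- and excited-state DFT parameters via \(A_{\rm PLE}=\frac{1}{2}(A_{\parallel}^{\rm exc}-A_{\parallel}^{\rm gnd})\) with \(A_{\parallel}=A_{\rm FC}+A_{\rm DD}\):

```python
# (A_FC_gnd, A_DD_gnd, A_FC_exc, A_DD_exc) in MHz, taken from the DFT columns of Tab. 1
dft = {
    "73Ge":  (48.23, -1.35, 5.03, 14.30),
    "117Sn": (1389.09, -26.65, 421.34, 251.05),
    "119Sn": (1453.27, -27.89, 440.80, 262.65),
}
for iso, (fc_g, dd_g, fc_e, dd_e) in dft.items():
    a_ple = 0.5 * ((fc_e + dd_e) - (fc_g + dd_g))
    print(f"{iso}: A_PLE = {a_ple:.2f} MHz")
# reproduces the A_PLE (DFT) column of Tab. 1: about -13.8, -345.0 and -361.0 MHz
```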
The predicted hyperfine interaction for SnV- is 10 times larger than the expected 35 MHz lifetime-limited linewidth of the transition [39], making the hyperfine transitions directly resolvable. Similarly for the spin-9/2 \({}^{73}\)Ge isotope, we expect 20 hyperfine transitions, in 10 degenerate pairs which are equally spaced by \(A_{\rm PLE}\). The hyperfine parameters predicted by DFT for \({}^{73}\)GeV- are small because of \({}^{73}\)Ge's small gyromagnetic ratio compared to the other group-IV elements, with the Fermi contact parameter predicted to be 48 MHz in the ground state. The resulting \(A_{\rm PLE}=-13.78\) MHz is smaller than the expected lifetime-limited linewidth for GeV- of 26 MHz [49], and the hyperfine level is therefore not predicted to be optically resolvable. Nevertheless, the overlapping transitions result in a spectral line broadened by approximately \(9|A_{\rm PLE}|=124\) MHz with a very non-Lorentzian flat-topped lineshape. Strain splits the flat-topped emission peak into two peaks corresponding to the transitions between the \(J=5\) levels at lower energy, and \(J=4\) levels at higher energy, see Fig. 2(b). The complete hyperfine model of the group-IV color centers discussed in this section gives a unique isotopic spectral signature for each emitter which we expect to be able to measure experimentally. Some further discussion on the effects of strain on the hyperfine spectrum is found in Appendix E. ## III Isotope-selective spectroscopy To identify the spectroscopic signature of the group-IV color centers, we prepared diamonds plates with isotope-selective implantation of Si, Ge, and Sn, with isotope regions identified by the QR codes etched into the sample [50] (see Appendix A for more details). A high-temperature annealing process combines the group-IV dopant with a vacancy to form the group-IV color center. Fig. 3(a), show the four possible transitions between the spin-orbit split levels of group-IV color centers, labeled A through D, which are optically addressable, with the exact frequency of these transitions depending on which group-IV dopant is present in the color center and the residual strain in the sample. In addition, the exact isotope of the dopant also affects the transition energy, with a \(\sqrt{M}\) dependence on isotope mass due to differences in the vibrational ground state energy [51]. To demonstrate this, we took photoluminescence (PL) measurements and fit a Gaussian lineshape to the C ZPL peak of several ensembles of emitters in order to extract the central frequency. Fig. 3(b) shows the distribution of this extracted cen Figure 2: Predicted spectra using DFT parameter and lifetime-limited linewidths for (a) \({}^{117}\)SnV– and (b) \({}^{73}\)GeV– at zero strain in the left panel, and a strain \(\alpha/\lambda=0.15\) in the right panel. tral PL frequency for both \({}^{28}\)SiV-and \({}^{29}\)SiV-. A clear shift of 83(8) GHz in the central frequency can be seen, which is in good agreement with previous observation of 87 GHz [51]. This shift allows for the differentiation between the two isotopes without isotope selective implant and therefore the selection of a \({}^{29}\)SiV center to make use of the spin-1/2 nucleus [4]. Fig. 3(c) shows the same measurement for \({}^{73}\)GeV-and \({}^{74}\)GeV-. A shift of 13(7) GHz can be seen, in line with previous experimental results of 15 GHz [52; 53] and prediction based on the expected \(\sqrt{M}\) dependency [51]. 
This means distinguishing single GeV centers using non-resonant excitation is unlikely, given the GeV centers typically have an inhomogeneous distribution of emission of several 10s of GHz [54]. Similar measurements across 7 isotopes of SnV are shown in Fig. 3(d). The shift of the zero phonon line (ZPL) is hidden within the inhomogeneous distributions of each isotope. The model and previous experimental results would suggest a shift between neighboring isotopes of \(\approx\)10 GHz [55]. This is below the resolution of the spectrometer used to measure PL in this experiment, which combined with inhomogeneous distribution and non-resonant power broadening [56], masks the isotope shift. The shift between isotopes further apart in mass is also hidden, likely due to the imperfect mass selectivity which brings these ensemble PL distributions towards a central value. ## IV Hyperfine photoluminescence excitation spectra To quantify the hyperfine parameters experimentally, we performed photoluminescence excitation (PLE) on the isotope-selectively implanted samples (see Appendix B). We collected statistics on color centers by performing wide-field PLE (WFPLE) experiments [50] on regions of the sample with \({}^{73}\)GeV-, \({}^{74}\)GeV-, \({}^{117}\)SnV-, \({}^{118}\)SnV-, \({}^{119}\)SnV-and \({}^{120}\)SnV- with an implant dose of approximately 100 ions per site [57]. This dose is sufficiently low that each site should only contain on the order of 1-5 emitters given color center creation yields of approximately 1-5% [15]. This may be further improved with recently developed in-situ photoluminescence spectroscopy for the formation of deterministic yield emitter arrays [58]. Given the large inhomogeneous linewidth compared to the typical homogeneous linewidth, we expect fewer than 0.1% of emitters to have spectrally overlapping emission peaks within the same site. ### PLE Spectroscopy of GeV For the I=9/2 nucleus of \({}^{73}\)GeV-, we expect ten overlapping spin conserving transitions at zero field, as shown in Fig. 4(a) and discussed in Section II. When comparing the average WFPLE spectra of 242 \({}^{73}\)GeV- emitters and 195 \({}^{74}\)GeV- emitters in Fig. 4(b) we see that the \({}^{73}\)GeV- spectrum is substantially broader. In Fig. 4(c), we plot a histogram of the Lorentzian linewidth fits of the \({}^{73/74}\)GeV- emitters, and see that the average linewidth for \({}^{73}\)GeV- is 262(7) MHz, roughly 70 MHz broader than the 190(5) MHz average linewidth for \({}^{74}\)GeV- under the same laser power. 
To confirm that this broadening is a result of overlapping hyperfine transitions, we performed confocal PLE using chirped optical resonance excitation (CORE) [17] (see Appendix B) on a \({}^{73}\)GeV-emitter at varying mag \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} Isotope & Spin & \(g_{\rm I}\) & \(A_{\rm FC}^{and}\) & \(A_{\rm DD}^{and}\) & \(A_{\rm FC}^{exc}\) & \(A_{\rm DD}^{exc}\) & \(A_{\rm PLE}\) & \(A_{\rm PLE}\) & DFT error (\%) \\ & & & (DFT, MHz) & (DFT, MHz) & (DFT, MHz) & (DFT, MHz) & (DFT, MHz) & (exp., MHz) & DFT error (\%) \\ \hline \({}^{29}\)Si & 1/2 & -1.110 & 64.20 & -2.34 & -30.68 & 32.57 & -29.98 & – & – \\ \({}^{73}\)Ge & 9/2 & -0.195 & 48.23 & -1.35 & 5.03 & 14.30 & -13.78 & -12.5(5) & 9.2 \\ \({}^{115}\)Sn & 1/2 & -1.836 & 1275.04 & -24.47 & 386.74 & 230.43 & -316.70 & – & – \\ \({}^{117}\)Sn & 1/2 & -2.000 & 1389.09 & -26.65 & 421.34 & 251.05 & -345.02 & -445(9) & 22 \\ \({}^{119}\)Sn & 1/2 & -2.092 & 1453.27 & -27.89 & 440.80 & 262.65 & -360.96 & -484(8) & 26 \\ \end{tabular} \end{table} Table 1: Summary of hyperfine parameters predicted by DFT and measured experimentally Figure 3: Characterization of isotope dependent photoluminescence. (a) Level structure of the group-IV color centers and photoluminescence transitions. (b-d) Distribution of the frequency shift of the C-transition for ensembles of SiV–, GeV–, and SnV–. Solid lines are the Gaussian kernel density estimates. netic field, shown in Fig. 4(d). We fit this data using the optical model of the isotope developed in Section II, varying only the linewidth and \(A_{\rm{PLE}}\). These are found to be 72(3) MHz and 12.5(5) MHz respectively, within 10% of the DFT prediction. At zero field, the PLE spectrum with near-lifetime limited linewidth exhibits a flat-topped line shape due to the overlapping transitions (see Appendix E). As the magnetic field is increased, the electro-nuclear levels separate into spin-aligned and spin-anti-aligned groups of energy levels. At around 0.1 T only the transitions from the two groups near zero detuning still overlap, and a characteristic central hump surrounded by two broad shoulders appears, further highlighting that this lineshape comes from multiple overlapping hyperfine transitions. ### PLE Spectroscopy of SnV As discussed in Section II, for SnV\({}^{-}\) spin-1/2 isotopes we expect multiple spectrally distinct hyperfine transitions directly observable in the PLE spectra, as illustrated in Fig. 5(a). This is shown experimentally in the average WFPLE spectra averaged over approximately 100 emitters for each of the isotopes \({}^{117}\)SnV\({}^{-}\), \({}^{118}\)SnV\({}^{-}\), and \({}^{120}\)SnV\({}^{-}\) in Fig. 5(b). It is clear that additional spectral features appear for the two spin-1/2 isotopes \({}^{117}\)Sn and \({}^{119}\)Sn that are not present for the spin-0 isotopes \({}^{118}\)Sn and \({}^{120}\)Sn. We fit the PLE spectrum for each SnV\({}^{-}\) individually with either a single Lorentzian peak (representing a spin-0 isotope), or three Lorentzian peaks with a 2:1:1 intensity ratio (corresponding to a spin-1/2 isotope). For each spin-1/2 isotope fit, we extract the parameters \(A_{\rm{PLE}}\) and \(\delta\), as illustrated for a representative emitter in Fig. 5(c). The 3-peak hyperfine feature is observed for more than 80% of the emitters in the two spin-1/2 isotope-implanted regions, whereas it is present in less than 20% of the emitters in the spin-0 isotope-implanted regions (see Tab. 2). 
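The three-peak fitting procedure described above can be prototyped with a standard least-squares routine. The sketch below is illustrative only (the fitting form, the placeholder data, and the convention used to read off \(A_{\rm PLE}\) and \(\delta\) from the peak centers are assumptions, not the analysis code of this work): it constrains the \(C_{\rm H1}\):\(C_{\rm H0}^{\prime}\):\(C_{\rm H0}^{\prime\prime}\) amplitudes to 2:1:1 and extracts the two splittings from the fitted centers.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma):
    return (gamma / 2) ** 2 / ((x - x0) ** 2 + (gamma / 2) ** 2)

def three_peak_model(x, c_h1, c_h0a, c_h0b, gamma, amp):
    # C_H1 : C_H0' : C_H0'' amplitude ratio fixed to 2:1:1, common linewidth gamma
    return amp * (2 * lorentzian(x, c_h1, gamma)
                  + lorentzian(x, c_h0a, gamma)
                  + lorentzian(x, c_h0b, gamma))

# Placeholder data standing in for a measured PLE scan (detuning in MHz vs. counts)
rng = np.random.default_rng(0)
freq = np.linspace(-800, 800, 400)
counts = three_peak_model(freq, -250, 150, 350, 80, 1000) + rng.normal(0, 10, freq.size)

popt, _ = curve_fit(three_peak_model, freq, counts, p0=[-250, 100, 400, 60, 800])
c_h1, c_h0a, c_h0b = popt[:3]
a_ple_mag = abs(0.5 * (c_h0a + c_h0b) - c_h1)   # assumed convention: C_H1 to mean of C_H0 pair
delta = abs(c_h0b - c_h0a)                      # splitting of the two C_H0 peaks
print(a_ple_mag, delta)
```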
Performing a \(\chi^{2}\) test on the number of emitters with the multi-peak PLE spectra in the spin-0 vs spin-1/2 regions, we conclude with a high degree of certainty that the multi-peak PLE is associated with the spin-1/2 isotopes (\(p<10^{-5}\)). The bulk of the emitters of the wrong type likely come from imperfect isotope separation during the implantation (see Appendix A). We therefore assign the multi-peak feature to the spin-1/2 isotopes of tin. This result is also in line with measurements performed on \({}^{117}\)SnV\({}^{-}\) in nanophotonic structures [40], and clarifies a previous report of SnV\({}^{-}\) hyperfine interaction strength of 40 MHz [20]. We now assess this later report as being due to the hyperfine coupling to a nearest-neighbor \({}^{13}\)C nucleus, since it is closer in magnitude to previous predictions for \({}^{13}\)C hyperfine coupling [28]. The distribution of the hyperfine parameter \(A_{\rm{PLE}}\) for \({}^{117}\)SnV\({}^{-}\) and \({}^{119}\)SnV\({}^{-}\) are shown in Fig. 5(d), and the mean value is summarized for all isotopes in Tab. 2. We note that \(A_{\rm{PLE}}\) is larger for \({}^{119}\)Sn than it is for \({}^{117}\)Sn. This is to be expected since the hyperfine coupling parameters are directly proportional to the \begin{table} \begin{tabular}{c|c|c|c|c} Isotope & Spin & Number of & Fraction w/Hyper- & \(A_{\rm{PLE}}\) (S.E.) \\ & & emitters. & fine Peaks (\%) & (MHz) \\ \hline \({}^{117}\)Sn & 1/2 & **136** & **87.5** & **445(9)** \\ \({}^{118}\)Sn & 0 & 119 & 16.0 & – \\ \({}^{119}\)Sn & 1/2 & **109** & **84.4** & **484(8)** \\ \({}^{120}\)Sn & 0 & 93 & 6.4 & – \\ \end{tabular} \end{table} Table 2: Hyperfine statistics for the different SnV\({}^{-}\) isotopes. Figure 4: Hyperfine characterisation of GeV\({}^{-}\) through PLE. (a) Hyperfine level structure of the spin-9/2 \({}^{73}\)GeV\({}^{-}\) electro-nuclear system. (b) Average WFPLE spectrum for \({}^{73}\)GeV\({}^{-}\) and \({}^{74}\)GeV\({}^{-}\). (c) Distribution of emitter linewidths for \({}^{73/74}\)GeV\({}^{-}\) showing the larger linewidth of \({}^{73}\)GeV\({}^{-}\). (d) PLE spectrum of a \({}^{73}\)GeV\({}^{-}\) emitter in the low power regime, as a function of magnetic field at 33\({}^{\circ}\) to the high symmetry axis, showing a total lineshape. Fit in black. nuclear gyromagnetic ratios [27]. We find the ratio \(A_{\rm{PLE},^{119}\rm{Sn}}/A_{\rm{PLE},^{117}\rm{Sn}}=1.09(0.04)\) to be in agreement with \(g_{\rm{119}\rm{Sn}}/g_{\rm{117}\rm{Sn}}=1.05\). We next performed PLE measurements at varying magnetic field on a strained \({}^{117}\)SnV-within a different device, detailed in Appendix B.3, with results shown in Fig. 5(e). We fit this data using the hyperfine model from Section II, finding \(A_{\rm{PLE}}=-459(3)\) MHz, \(\alpha=55(3)\) GHz, and a power broadened linewidth of \(336(3)\) MHz. The simpler level structure compared to \({}^{73}\)GeV- allows us to directly track the trajectory of the transitions as a function of field strength. The magnetic field breaks the degeneracy of the \(C_{H1}\) peak's two constituent transitions, producing two peaks labeled \(C_{H1}^{\pm}\), corresponding to transitions between the ground and excited \(m_{\rm{J}}=\pm 1\) total angular momentum states. Due to the combined effect of an anti-crossing of the \(m_{\rm{J}}=0\) ground states at zero magnetic field and the much weaker anti-crossing of the \(m_{\rm{J}}=0\) excited states (see Fig. 5(e) inset), the \(C_{H0}\) transitions exhibit an anti-crossing near zero-field. 
For a sufficiently strained emitter, such as the \({}^{119}\)SnV- in Fig. 5(c), the strong coupling maintains optical access to the hyperfine levels at this ground level anti-crossing point. Operating at this anti-crossing makes the levels magnetically insensitive to first order, suppressing the effect of magnetic noise [59, 60, 61]. Nuclear spin bath magnetic noise has been shown to have a large effect on the coherence of group-IV color centers in previous Figure 5: Hyperfine characterization of SnV– via PLE. (a) Hyperfine level structure of a spin-1/2 SnV– electro-nuclear system. (b) Averaged WFPLE spectrum for \({}^{117}\)SnV–, \({}^{118}\)SnV–, \({}^{119}\)SnV–, and \({}^{120}\)SnV–. (c) Typical WFPLE spectrum (purple) of a strained \({}^{119}\)SnV–, showing the hyperfine peaks with relative heights 2:1:1 (gray), split by the hyperfine parameter \(A_{\rm{PLE}}\) and strain parameter \(\delta\), with the combined fit with all transitions in pink. Peaks are labeled \(C_{H0}^{\prime}\), \(C_{\rm{H0}}^{\prime\prime}\) and \(C_{\rm{H1}}\) as explained in the text. (d) Histogram of the optical hyperfine splitting \(A_{\rm{PLE}}\) for the spin-1/2 nuclei \({}^{117}\)/\({}^{119}\)SnV–. (e) PLE spectrum of a \({}^{117}\)SnV– emitter as a function of magnetic field, with fit in black, showing the three peaks at zero field split into four peaks labeled \(C_{\rm{H1}}^{+}\), \(C_{\rm{H1}}^{-}\), \(C_{\rm{H0}}^{\prime}\), and \(C_{\rm{H0}}^{\prime\prime}\). Transition frequencies from the fit are overlaid in red, highlighting the avoided crossing of the \(C_{H0}\) transitions near zero-field. Inset shows an illustration of the ground/excited level structure with avoided crossings at zero magnetic field. work [20; 22; 62], so operating in such a regime may improve coherence. ## V Conclusions and discussion Our microscopic model of the electro-nuclear system and the first-principles calculations present a full theoretical framework for understanding the hyperfine coupling of the negatively charged group-IV color centers in diamond. We predict a large increase in the hyperfine parameters moving down the group-IV column of the periodic table due to the increasing contribution of the dopant orbitals to the spin density. We further show that spin-orbit coupling and strain must both be accounted for when modeling the resulting hyperfine levels. Using isotope-selective ion implantation of group-IV elements in diamond, we are able to identify the optical hyperfine signatures of \({}^{73}\)GeV\({}^{-}\), \({}^{117}\)SnV\({}^{-}\), and \({}^{119}\)SnV\({}^{-}\). In particular, the spin-active SnV\({}^{-}\) color centers show a clearly resolvable optical multi-peak feature compared to the spin-neutral isotopes due to the large hyperfine coupling to the intrinsic tin nucleus. The hyperfine parameters and resulting PLE features predicted from DFT, as well as the measured PLE features are shown in Table 1. The first-principles predictions are in sufficiently good agreement (within \(\sim\)20%) to prove instructive in identifying the hyperfine signatures. The remaining discrepancy between theory and experiment is likely due to the simplifications made in the modeling, such as the exclusion of Jahn-Teller distortion, and the use a local functional instead of a more sophisticated hybrid functional. Both the GeV\({}^{-}\) and SnV\({}^{-}\) nuclear registers bring interesting challenges and opportunities for quantum information applications. 
The spin-9/2 memory in \({}^{73}\)GeV\({}^{-}\) has a 10-level nuclear memory, allowing for the possibility of the generation of large cluster states by making use of the local memory [23]. Its nuclear quadrupole moment also means that the nuclear spin can potentially be driven directly by an electric field [63]. While the SnV\({}^{-}\) isotopes have a more conventional spin-1/2 intrinsic memory, the strong hyperfine coupling means that the hyperfine levels can be accessed optically at zero field. With non-zero strain applied to the defect at this zero-field operating point, our model predicts a magnetic-field insensitive transition between the \(|J=0/1,m_{J}=0\rangle\) ground states. These states could thus combine the key attributes of a spin-photon interfaces: strong, direct, and stable optical transitions; environmentally insensitive hyperfine 'clock states'; and additional hyperfine states for quantum memory. The direct optical access of the hyperfine levels also allows for the possibility of direct transfer of photon states to the nuclear memory without using the electron spin as an intermediary, which may enhance spin-photon entanglement fidelity compared to existing schemes [4]. Optical initialization and readout of the nuclear spin via this hyperfine optical transition has been demonstrated in a separate work [40]. The presence of the strongly coupled memory in the well-established group-IV color center platform will allow future experiments to leverage their bright, high-quality optical emission in a new regime of quantum experiments. The clear identification of the hyperfine parameters of GeV\({}^{-}\) and SnV\({}^{-}\) in this paper therefore sets the implementation roadmap for future work to use these nuclear spins as local memories for quantum information applications. More generally, the theory and experimental methods developed here also present a new route to tailor other spin-photon interfaces where the interplay between spin-orbit coupling, crystal strain, and hyperfine interaction may also be important. These include color centers in other materials systems, such as silicon [64; 65], silicon carbide [66; 67; 68], rare-earth elements implanted in solids [60], and other emerging material platforms [69]. This paper demonstrates that the selection of dopant isotopes can have a large effect on the resulting color center properties through the hyperfine structure. Given the wide availability of isotopically-selective implantation, we see this as a useful tool and future standardized step in the fabrication of spin-photon interfaces. ###### Acknowledgements. This work was supported in part by the STC Center for Integrated Quantum Materials (CIQM) NSF Grant No. DMR-1231319, the National Science Foundation (NSF) Engineering Research Center for Quantum Networks (CQN) awarded under cooperative agreement number 1941583, and the MITRE Moonshot Program. We acknowledge support from the ERC Advanced Grant PEDESTAL (884745), the EU Quantum Flagship 2D-SIPC. C.P.M. acknowledges support from the EPSRC DTP, R.A.P from the General Sir John Monash Foundation and a G-research Grant, J.A.M. from the Winton Programme and EPSRC DTP and A.M.S. from EPSRC/NQIT. This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy (DOE) Office of Science. 
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. DOE's National Nuclear Security Administration under contract DE-NA-0003525. The views expressed in the article do not necessarily represent the views of the U.S. DOE or the United States Government. M.S. acknowledges support from the NASA Space Technology Graduate Research Fellowship Program. We would additionally like to thank Hamza Raniwala and Hyeongrak Choi for helpful discussion. I.B.W.H and C.P.M. contributed equally to this work.
2306.17418
ReLU Neural Networks, Polyhedral Decompositions, and Persistent Homology
A ReLU neural network leads to a finite polyhedral decomposition of input space and a corresponding finite dual graph. We show that while this dual graph is a coarse quantization of input space, it is sufficiently robust that it can be combined with persistent homology to detect homological signals of manifolds in the input space from samples. This property holds for a variety of networks trained for a wide range of purposes that have nothing to do with this topological application. We found this feature to be surprising and interesting; we hope it will also be useful.
Yajing Liu, Christina M Cole, Chris Peterson, Michael Kirby
2023-06-30T06:20:21Z
http://arxiv.org/abs/2306.17418v1
# ReLU Neural Networks, Polyhedral Decompositions, and Persistent Homology ###### Abstract A ReLU neural network leads to a finite polyhedral decomposition of input space and a corresponding finite dual graph. We show that while this dual graph is a coarse quantization of input space, it is sufficiently robust that it can be combined with persistent homology to detect homological signals of manifolds in the input space from samples. This property holds for a variety of networks trained for a wide range of purposes that have nothing to do with this topological application. We found this feature to be surprising and interesting; we hope it will also be useful. Machine Learning, Polyhedral Decompositions, Persistent Homology, Persistent Homology, Persistent Homology ## 1 Introduction The rectified linear unit (ReLU) function is the default activation function for many well known, deep, feed forward neural networks (AlexNet, ResNet, Inception, SqueezeNet, etc). These networks frequently use some convolutional layers to decrease the number of parameters being trained and often utilize skips to decrease issues from overfitting. The importance of ReLU feed forward neural networks (FFNNs) has motivated an extensive investigation of their properties from a variety of aspects to demystify their "black-box nature". In this paper, we start from the observation that, regardless of their use of skips and/or convolutions, a ReLU FFNN \(F:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) decomposes the input space (\(\mathbb{R}^{m}\)) into convex polyhedra and assigns to each polyhedron a unique binary vector that encodes the ReLU activation pattern of all nodes in the ReLU layers of the network. More precisely, the neural network assigns a binary vector to each input point, assigning identical binary vectors to input points lying in the same polyhedra. Despite this granularity imposed on input space, we found that the homology of manifolds embedded in the input space of the network can still be detected via persistent homology applied solely to distance measures built from the binary vectors of the points in the sampling. After explaining the connection between polyhedra and binary vectors in a ReLU FFNN, we describe methods for finding the binary vectors associated with the nearby neighbors of a given polyhedron in the decomposition. For small networks, one can (in principle) use these methods to determine all the polyhedra in the decomposition and determine their proximity within the decomposition. Toward this goal, we exploit the following observation: _two polyhedra share a facet if and only if their associated binary vectors differ in exactly one bit_. Due to this property, we propose the Hamming distance between binary vectors as a proxy for proximity between polyhedra in the decomposition. More precisely, if \(\mathcal{X}\) is a collection of data points in the input space, each \(x\in\mathcal{X}\) lies in one of the polyhedra and can be assigned the binary vector, \(s(x)\), that labels the polyhedron. Using the Hamming metric, we build a network driven distance matrix between pairs of binary vectors associated to data points in \(\mathcal{X}\). Using a software package such as Ripser (see [https://live.ripser.org](https://live.ripser.org)), we illustrate how the coarse distance measure corresponding to a polyhedral decomposition of input space can be used to extract homological information about a manifold lying in the input space from sample points on the manifold. 
It is worth noting that the Hamming distance between polyhedral bit vectors is an approximation for the smallest number of polyhedral steps between two polyhedra (i.e. for the shortest path between vertices on the dual graph of the polyhedral decomposition). (Jamil et al., 2023) has demonstrated the efficacy of the Hamming distance in illuminating the mechanisms underlying neural network functionality. The paper is organized as follows: Section 2 introduces the basic background on ReLU FFNNs, the polyhedral decomposition associated to a network, the binary vectors assigned to each polyhedron, the linear model that determines each polyhedron, the linear program for filtering out redundant inequalities in the linear model, and the affine linear function that is attached to each polyhedron. Section 3 outlines algorithms for determining the binary vectors that occur for polyhedra in the entire input space and for polyhedra that occur within a bounded region. Section 4 describes how the polyhedral decomposition, realized through binary vectors, can be combined with persistent homology to uncover topological features in data. Section 5 concludes the work and sketches directions for future research. ## 2 Neural Networks and Polyhedral Decompositions This section gives the basic definitions of a ReLU neural network and describes the connection to polyhedral decompositions and binary vectors. ### Notation for Neural Networks Consider an \((L+1)-\)layer ReLU feedforward neural network: \[\mathbb{R}^{m}\ \xrightarrow[]{(W_{1},b_{1})}\ \mathbb{R}^{h_{1}}\ \xrightarrow[]{(W_{2},b_{2})} \mathbb{R}^{h_{2}}\to\ldots\to\mathbb{R}^{h_{L-1}}\] \[\xrightarrow[]{(W_{L},b_{L})}\ \mathbb{R}^{h_{L}}\xrightarrow[]{(W_{L+1},b_{L+1})} \mathbb{R}^{n}. \tag{1}\] In this model, \(\mathbb{R}^{m}\) is the input space, \(\mathbb{R}^{n}\) is the output space, and \(h_{i}\) corresponds to the number of nodes at layer \(i\). Layer 0 corresponds to the input space and layer \(L+1\) corresponds to the output space (so \(h_{0}=m\) and \(h_{L+1}=n\)). We let \(W_{i}\in\mathbb{R}^{h_{i}\times h_{i-1}}\) and \(b_{i}\in\mathbb{R}^{h_{i}}\) denote the weight matrix and bias vector of layer \(i\), respectively. The activation functions for the hidden layers (layers \(1,\ldots,L\)) are assumed to be ReLU functions (applied coordinate-wise) while the map to the last layer (the output layer) is assumed to be affine linear (without a ReLU function being applied to the image). Recall that the ReLU function is the map \(ReLU:\mathbb{R}\to\mathbb{R}\) given by \[ReLU(a)=\begin{cases}a&\text{if }a>0\\ 0&\text{if }a\leq 0.\end{cases} \tag{2}\] The ReLU map is a map on real numbers that is piecewise linear and continuous. It can be naturally extended to a piecewise linear continuous map on vector spaces (which we also denote as ReLU). More precisely, we define \(ReLU:\mathbb{R}^{h_{i}}\to\mathbb{R}^{h_{i}}\), by applying the ReLU function to each coordinate of \(x\in\mathbb{R}^{h_{i}}\). Let \(w_{i,j}\) denote the \(j^{th}\) row of \(W_{i}\) and let \(b_{i,j}\) denote the \(j^{th}\) entry of \(b_{i}\). Given an input data point \(x\in\mathbb{R}^{m}\), we denote the output of \(x\) in layer \(i\) as \(F_{i}(x)\). 
Thus, with this notation we have \(F_{i}(x)\in\mathbb{R}^{h_{i}}\), \(F_{0}(x)=x\), and \[\begin{split} F_{i}(x)&=\text{ReLU}(W_{i}F_{i-1}(x )+b_{i})\\ &=\begin{bmatrix}\max\{0,w_{i,1}F_{i-1}(x)+b_{i,1}\}\\ \vdots\\ \max\{0,w_{i,h_{i}}F_{i-1}(x)+b_{i,h_{i}}\}\end{bmatrix}.\end{split} \tag{3}\] ### Definitions of Binary Vectors Consider model (1). Given an input data point \(x\in\mathbb{R}^{m}\), for each hidden layer \(i\) (so \(1\leq i\leq L\)), we introduce a binary (bit) vector \[s_{i}(x)=[s_{i,1}(x)\ \ldots\ s_{i,h_{i}}(x)]^{\top}\in\mathbb{R}^{h_{i}},\] where \(s_{i,j}(x)\) (with \(1\leq j\leq h_{i}\)) is defined as follows: \[s_{i,j}(x)=\begin{cases}1&\text{if }w_{i,j}F_{i-1}(x)+b_{i,j}>0\\ 0&\text{if }w_{i,j}F_{i-1}(x)+b_{i,j}\leq 0.\end{cases} \tag{4}\] Thus, for each point \(x\in\mathbb{R}^{m}\), we have a sequence of binary vectors \(s_{1}(x),s_{2}(x),\ldots,s_{L}(x)\). We can stack the binary vectors associated to \(x\) to make a long column vector \[s(x)=[s_{1}^{\top}(x)\ \ldots\ s_{L}^{\top}(x)]^{\top}\in\mathbb{R}^{h}, \tag{5}\] where \(h=\sum_{i=1}^{L}h_{i}\) is the total number of nodes in the hidden layers. We call \(s(x)\) the binary vector of \(x\). Different points from the input space \(\mathbb{R}^{m}\) can have the same binary vector. We next show that the set of points that have the same binary vector, \(\{x^{\prime}:s(x^{\prime})=s(x),x^{\prime}\in\mathbb{R}^{m}\}\), form a convex polyhedron in \(\mathbb{R}^{m}\). ### Linear Model for Binary Vectors Let \(s_{1},s_{2},\ldots,s_{L}\) denote a given sequence of binary vectors for model (1). For each layer \(i\)\((1\leq i\leq L)\), we describe inequality constraints that must be satisfied for an input data point to have the same binary vector \(s_{i}\). To describe the inequalities in a consistent manner, we introduce a sign vector of \(1^{\prime}s\) and \(-1^{\prime}s\) for each hidden layer. For layer \(i\), define \(s^{\prime}_{i}=[s^{\prime}_{i,1}\ \ldots\ s^{\prime}_{i,h_{i}}]^{\top}\) with \[s^{\prime}_{i,j}=\begin{cases}1&\text{if }s_{i,j}=0\\ -1&\text{if }s_{i,j}=1.\end{cases} \tag{6}\] In layer 1, because \(F_{1}(x)=\text{ReLU}(W_{1}x+b_{1})\), any data point \(x\) that has the bit vector \(s_{1}\) satisfies the following linear inequality: \[\text{diag}(s^{\prime}_{1})(W_{1}x+b_{1})\leq 0 \tag{7}\] where \(\text{diag}(v)\) is a square diagonal matrix with the elements of vector \(v\) on the main diagonal. The inequality (7) is established by a fundamental principle: the affine output generated by the first hidden layer, in response to input \(x\), must satisfy a greater-than-zero condition for nodes where the corresponding binary vector is set to 1, and a less-than-or-equal-to-zero condition for nodes where the corresponding binary vector is set to 0. Suppose that \(x\) has the bit vector sequence \(s_{1},s_{2},\ldots,s_{L}\). Let \(\hat{W}_{j}=W_{j}\text{diag}(s_{j-1})\hat{W}_{j-1}\) and \(\hat{b}_{j}=W_{j}\text{diag}(s_{j-1})\hat{b}_{j-1}+b_{j}\) for \(2\leq j\leq L\) with \(\hat{W}_{1}=W_{1},\hat{b}_{1}=b_{1}\). By model (1), the following equation holds for \(1\leq j\leq L\): \[F_{j}(x)=\text{ReLU}(W_{j}F_{j-1}(x)+b_{j})=\text{diag}(s_{j})(\hat{W}_{j}x+ \hat{b}_{j})\] where \(F_{0}(x)=x\). More generally, any data point \(x\) that has \(s_{j}\) as its bit vector for layer \(j\) should satisfy the following linear inequalities: \[\text{diag}(s^{\prime}_{j})\hat{W}_{j}x\leq\text{diag}(s^{\prime}_{j})(-\hat{b}_ {j}). 
\tag{8}\] Let \(A_{j}=\text{diag}(s^{\prime}_{j})\hat{W}_{j}\) and \(c_{j}=\text{diag}(s^{\prime}_{j})(-\hat{b}_{j})\) for \(1\leq j\leq L\). Combining (7) and (8), we have \[Ax\leq c, \tag{9}\] where \[A=[A_{1}^{\top}\ A_{2}^{\top}\ \dots\ A_{L}^{\top}]^{\top}\ \text{and}\ c=[c_{1}^{\top}\ c_{2}^{ \top}\ \dots\ c_{L}^{\top}]^{\top}\] with \(A_{i}\in\mathbb{R}^{h_{i}\times m}\) and \(c_{i}\in\mathbb{R}^{h_{i}}\). Note that the set \(P=\{x\in\mathbb{R}^{m}:Ax\leq c\}\) is a convex polyhedron (potentially empty, potentially unbounded). Considering the totality of points in the input space, the weights of a ReLU neural network lead to a decomposition of the input space into a collection of bounded and unbounded polyhedra and each polyhedron has a corresponding bit vector. Note that a random bit vector may or may not correspond to a non-empty polyhedra. It is not difficult to see that for a polyhedron \(P\) of full dimension (the same dimension as the ambient space), there is a unique smallest subset of inequalities, which we denote by \((A^{\prime},c^{\prime})\), that one can obtain from \((A,c)\) that leaves the polyhedron \(P\) unchanged. Thus, \(A^{\prime}\) is built from a subset of rows of \(A\) and \(c^{\prime}\) is the corresponding subset of rows of \(c\) such that the set of points that satisfy \(Ax\leq c\) is the same as the set of points that satisfy \(A^{\prime}x\leq c^{\prime}\). Write \[A=\begin{bmatrix}a_{1}\ a_{2}\ \dots\ a_{h}\end{bmatrix}^{\top}\ \text{ and }\ \ c= \begin{bmatrix}c^{1}\ c^{2}\ \dots\ c^{h}\end{bmatrix}^{\top}\] with \(a_{i}\in\mathbb{R}^{m}\) and \(c^{i}\in\mathbb{R}\). To determine if \(a_{i}x\leq c^{i}\) is a redundant constraint, first define \[\tilde{A}=\begin{bmatrix}a_{1}\ a_{2}\ \dots\ a_{i-1}\ a_{i+1}\dots\ a_{h} \end{bmatrix}^{\top}\] and \[\tilde{c}=\begin{bmatrix}c^{1}\ c^{2}\ \dots\ c^{i-1}\ c^{i+1}\ \dots\ c^{h} \end{bmatrix}^{\top}.\] Next, consider the following linear program \[\begin{array}{l}\operatorname*{maximize}_{x}\ \ a_{i}^{\top}x\\ \text{s. t.}\ \ \tilde{A}x\leq\tilde{c}.\end{array} \tag{10}\] If the optimal objective value of (10) is less than or equal to \(c^{i}\), then the \(i\)th linear inequality is redundant, and we can remove \(a_{i}^{\top}\) and \(c^{i}\) from \(A\) and \(c\), respectively. We determine \((A^{\prime},c^{\prime})\) by iterating this process to remove all redundant constraints. Given a bit vector \(s\), the \(i\)-th entry of \(s\) is called active if the \(i\)th row of \(A\) is in \(A^{\prime}\), and inactive otherwise. For any input \(x\) in the polyhedron determined by the bit vector \(s\), the output of \(x\) is determined by the single affine map: \(G(x)=W_{L+1}\text{diag}(s_{L})\hat{W}_{L}x+W_{L+1}\text{diag}(s_{L})\hat{b}_{ L}+b_{L+1}\). We summarize some of the points of this section, together with implications (many of which we leave to the reader) in the bullet points that follow. We emphasize that the polyhedral decomposition of input space has an associated dual graph. This graph has vertices corresponding to polyhedra and edges corresponding to polyhedra that share an \(m-1\) dimensional face. It is not hard to show that this dual graph is bipartite (refer to Appendix B). We will be using this graph, implicitly, to detect homological signals via persistent homology. For the interested reader, we point out the following references related to topological features and combinatorial features of the type of neural networks we are considering (Grigsby et al., 2022; Masden, 2022). 
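As a concrete, self-contained illustration of Eqs. (4)-(10) (a minimal sketch for a small random network, not the authors' implementation; it uses SciPy's `linprog` as the LP solver and helper names of our own choosing), the snippet below computes the stacked bit vector \(s(x)\) of an input point, assembles the inequalities \(Ax\leq c\) describing its polyhedron, and flags the active (non-redundant) rows by solving the linear program (10) for each row:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
# Toy ReLU network: input dimension 2, two hidden layers of width 4 (h = 8 hidden nodes)
Ws = [rng.standard_normal((4, 2)), rng.standard_normal((4, 4))]
bs = [rng.standard_normal(4), rng.standard_normal(4)]

def bit_vector_and_inequalities(x, Ws, bs):
    """Return the stacked bit vector s(x) of Eq. (5) and the (A, c) of Eq. (9)."""
    A_blocks, c_blocks, s_all = [], [], []
    W_hat, b_hat = Ws[0], bs[0]                 # \hat{W}_1, \hat{b}_1
    F = np.asarray(x, dtype=float)
    s = None
    for W, b in zip(Ws, bs):
        if s is not None:                       # \hat{W}_j, \hat{b}_j for j >= 2
            W_hat = W @ np.diag(s) @ W_hat
            b_hat = W @ np.diag(s) @ b_hat + b
        pre = W @ F + b
        s = (pre > 0).astype(float)             # layer bit vector, Eq. (4)
        s_prime = 1.0 - 2.0 * s                 # +1 where s = 0, -1 where s = 1, Eq. (6)
        A_blocks.append(np.diag(s_prime) @ W_hat)
        c_blocks.append(np.diag(s_prime) @ (-b_hat))
        s_all.append(s)
        F = np.maximum(pre, 0.0)
    return np.concatenate(s_all), np.vstack(A_blocks), np.concatenate(c_blocks)

def active_rows(A, c):
    """Row i is active iff max a_i.x over the remaining rows exceeds c_i (or is unbounded)."""
    active = []
    for i in range(len(c)):
        keep = [j for j in range(len(c)) if j != i]
        res = linprog(-A[i], A_ub=A[keep], b_ub=c[keep], bounds=(None, None), method="highs")
        if res.status == 3 or (res.status == 0 and -res.fun > c[i] + 1e-9):
            active.append(i)
    return active

s, A, c = bit_vector_and_inequalities([0.3, -0.7], Ws, bs)
print("bit vector:", s.astype(int))
print("active bits:", active_rows(A, c))        # indices of facets of the containing polyhedron
```

Flipping any one of the reported active bits produces the bit vector of the polyhedron sharing the corresponding facet, which is the basis of the traversal described in Section 3.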
We would also like to acknowledge the quickly-growing library of work in the area of polyhedral theory, a survey of which can be found in (Huchette et al., 2023). The following is a summary of the key ideas presented either explicitly or implicitly in this section: * A ReLU FFNN determines a continuous piecewise linear map \(F:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\). * The neural network leads to a decomposition of \(\mathbb{R}^{m}\) into a collection of bounded and unbounded convex polyhedra. * A binary vector, supported on the hidden nodes of the network, can be attached to each point in the domain of the network. The value of the binary vector associated to a given input \(x\in\mathbb{R}^{m}\), at a given node, is \(1\) if ReLU was not applied at the node and \(0\) if ReLU was applied at the node (the ReLU activation pattern). * If \(h\) denotes the number of hidden nodes in the network, then there are, a priori, \(2^{h}\) possible binary vectors and thus \(2^{h}\) possible polyhedra in the decomposition of input space. In reality, the number of realized convex polyhedra in the domain is much smaller. * If two points lie in the interior of the same convex polyhedron then they determine the same binary vector. * If two points are in the interior of distinct convex polyhedra then they determine distinct binary vectors. * From the ReLU activation pattern for points in a convex polyhedron, one can determine an affine linear equation that represents the behavior of the neural network on points in the polyhedron. If the polyhedron is denoted by \(P\), then for points in \(P\), the function \(F:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) can be expressed as \(F(x)=A_{P}x+b_{P}\) for some matrix \(A_{P}\) and some vector \(b_{P}\). * Input values that determine a value of \(0\) on some node, before applying ReLU, are precisely the input values that lie on the boundary of distinct convex polyhedra. This is due to the fact that applying ReLU to \(0\) has the same effect as not applying ReLU to \(0\). * If two convex polyhedra, \(P_{1},P_{2}\), share an \((m-1)\)-dimensional face, then the binary vectors associated to each of the polyhedra differ in one bit. Furthermore, the affine linear functions \(A_{P_{1}}+b_{P_{1}}\) and \(A_{P_{2}}+b_{P_{2}}\) agree on this \((m-1)\)-dimensional face. * Any polyhedral decomposition of \(\mathbb{R}^{m}\) has a natural dual graph with vertices corresponding to \(m\)-dimensional polyhedra and edges corresponding to polyhedra sharing an \(m-1\) dimensional facet. * The Hamming distance between binary vectors (that represent two polyhedra) can be used as an approximation for the smallest number of polyhedral steps between the two polyhedra (i.e. the length of a minimal geodesic on the dual graph). ## 3 Algorithms for Bit Vector Search and Examples The number of polyhedra in the input space or within a bounded region of the input space provides a measure of the network's expressivity and complexity. Upper and lower bounds on the maximal number of polyhedra obtainable from a given ReLU FFNN architecture can be found in (Pascanu et al., 2014), (Montufar et al., 2014), (Raghu et al., 2017), (Arora et al., 2018), and (Serra et al., 2018). Several algorithms ((Xiang et al., 2018), (Yang et al., 2020), and (Xu et al., 2021)) have been developed to compute the exact polyhedra decomposition of the input space through layer-by-layer linear inequality solving. 
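To connect the last two bullet points with the persistent-homology use case described in the introduction, the following sketch (illustrative only; it assumes the `ripser.py` Python package and an arbitrary small random network, and whether a clean \(H_{1}\) signal appears depends on the network at hand) encodes points sampled from a circle by their bit vectors, forms the pairwise Hamming-distance matrix, and passes it to Ripser:

```python
import numpy as np
from ripser import ripser   # assumes the ripser.py package (pip install ripser)

rng = np.random.default_rng(2)
Ws = [rng.standard_normal((16, 2)), rng.standard_normal((16, 16))]
bs = [rng.standard_normal(16), rng.standard_normal(16)]

def bit_vector(x):
    """Stacked ReLU activation pattern s(x) over all hidden layers, as in Eq. (5)."""
    bits, F = [], np.asarray(x, dtype=float)
    for W, b in zip(Ws, bs):
        pre = W @ F + b
        bits.append(pre > 0)
        F = np.maximum(pre, 0.0)
    return np.concatenate(bits)

# Sample a circle in the input space and encode each sample by its polyhedron's bit vector
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
samples = np.stack([np.cos(theta), np.sin(theta)], axis=1)
S = np.array([bit_vector(p) for p in samples])

# Pairwise Hamming distances: the coarse, network-imposed proxy for polyhedral separation
D = (S[:, None, :] != S[None, :, :]).sum(axis=2).astype(float)

# A long-lived class in the 1-dimensional diagram would signal the circle's H_1
dgms = ripser(D, distance_matrix=True, maxdim=1)["dgms"]
print(dgms[1])
```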
Larger decomposition examples can be computed using a method developed by (Vincent and Schwager, 2021), which enumerates all polyhedra in the input space as follows: * Start with a random point \(x\in\mathbb{R}^{m}\) and determine its bit vector \(s(x)\). This bit vector labels a polyhedron \(P\). * Find the active bits in \(s(x)\) for the polyhedron \(P\). * Each active bit corresponds to a neighboring polyhedron. Each neighboring polyhedron has a bit vector that can be obtained by "flipping" one of the active bits for \(P\). Thus, one can find all of the neighboring polyhedra for \(P\), in terms of their binary vectors. * Repeat the process to find the neighbors of each of these newly identified polyhedra. * The number of active bits for a polyhedron is equal to the number of nearest neighbors of the polyhedron. The previous steps continue until we have a list of polyhedra \(\mathcal{P}\) that satisfies the property that for each \(P\in\mathcal{P}\), the set of nearest neighbors of \(P\) is itself a subset of \(\mathcal{P}\). This process leads to a set \(\mathcal{P}\) of convex polyhedra that decompose the input space. Such decompositions have been used to define network-imposed distance metrics for model inputs (Balestriero and Baraniuk, 2018). We coarsen these metrics and show they can still be used to detect homological signals. Equating neural networks of various types (convolutional neural networks, residual networks, skip-connected networks, recurrent neural networks) to max-affine spline operators has been carried out by (Balestriero and Baraniuk, 2018). The paper (Sattelberg et al., 2020) investigated the behavior of ReLU FFNNs based on the structure of the decomposition, together with the affine map attached to each polyhedron. While there are multiple methods that can be used to determine the polyhedral decomposition, and their associated binary vectors, imposed by the weights of a ReLU neural network, all of these methods are woefully inadequate for the large and deep networks that are commonplace today. This is not a fault of the algorithms; many modern networks contain many millions of hidden nodes and have input spaces with dimension well beyond 100,000. There are just too many polyhedra represented by such networks. However, for small networks, any of the methods will work. We present two straightforward methods. The first is a brute-force method formulated as a linear programming problem. The second, which is the traversal-and-pruning method outlined above, computationally improves upon the first. We demonstrate our methods with examples. Pseudocode for these two methods is provided in Appendix A. ### Algorithms A mixed-integer linear program to count the number of polyhedra in a bounded region can be found in (Serra et al., 2018). We present a linear program that can also count polyhedra and determine whether they share a facet, and that is simple to implement. Our proposed method yields not only the count of polyhedra in the entire input space \(\mathbb{R}^{m}\) but also their respective binary vectors. To count the number of polyhedra in a bounded region, the linear inequality constraints in (15) need to be augmented to include bounds for variables. 
For example, when the input is 2-dimensional and the bounded region is determined by \(a_{1}\leq x_{1}\leq b_{1},\ a_{2}\leq x_{2}\leq b_{2}\), the constraints become \(\tilde{A}^{j}x\leq\tilde{c}^{j}\) where \(\tilde{A}^{j}\) is the concatenation of \(A^{j}\) and \(B\) while \(\tilde{c}^{j}\) is the concatenation of \(c^{j}\) and \(d\), with \[B=\begin{bmatrix}1&-1&0&0\\ 0&0&1&-1\end{bmatrix}^{T}\ \operatorname{and}\ d=\begin{bmatrix}b_{1}\ -a_{1}\ b_{2}\ -a_{2}\end{bmatrix}^{T}.\] Algorithm 1 is easy to implement and fast for small neural network structures, but it becomes slow as the network grows. Indeed, (Hanin and Rolnick, 2019) proved that the number of bit vectors that correspond to polyhedra is approximately \(h^{m}\), which is much smaller than \(2^{h}\) for large \(h\), highlighting the computational savings that can be procured by only traversing active bit vectors instead of all possibilities as is done by the brute-force method. These savings are what Algorithm 2 has to offer. The idea for Algorithm 2 is based on the fact that each polyhedron corresponding to \(Ax\leq c\) is determined by \(A^{\prime}x\leq c^{\prime}\), and that two adjacent polyhedra that share one facet differ by one active bit. This active bit corresponds to the hyperplane containing this facet. The key step for Algorithm 2 is to find the active bits for a given bit vector using the method mentioned in Section 2.2. Once identified, these active bits can then be used to find all neighboring polyhedra, starting a scheme that ripples through the desired domain until all polyhedra are found. The idea behind this algorithm also motivated the reachable polyhedral marching algorithm in (Vincent and Schwager, 2021). We make a small modification to enumerate the polyhedra in a bounded region rather than in the entire, unbounded domain. For a bounded region, the bit vector can be expanded by checking whether the corresponding polyhedron hits the boundary or not. For example, when \(m=2\) and the bounded region is defined by \(a_{1}\leq x_{1}\leq b_{1},a_{2}\leq x_{2}\leq b_{2}\), the bit vector \(s\) can be extended by one more bit that records whether the solution set of \(Ax\leq c\) touches any of the domain boundaries defined by \(x_{1}=a_{1}\), \(x_{1}=b_{1}\), \(x_{2}=a_{2}\), or \(x_{2}=b_{2}\) (1 if it does, 0 otherwise). Bit vectors whose last bit is \(1\) are marked as processed as soon as they are added to \(\mathcal{P}\). The reason is that, provided the initial point lies in a polyhedron that does not touch the region boundary, every boundary-touching polyhedron is reached by flipping an active bit of one of its neighbors, so all of its adjacent polyhedra within the bounded region will already have been placed in \(\mathcal{P}\) without expanding it further. 
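The rippling traversal just described can be outlined in a few lines. In the sketch below, `active_bits` stands for a routine returning the indices of the non-redundant constraints of the polyhedron labeled by a bit vector (for instance via the LP test in (10)); the extra boundary bit for bounded regions is omitted. This is an illustrative outline under those assumptions, not the pseudocode of Appendix A.

```python
# Sketch: enumerate polyhedra by flipping active bits of previously found bit vectors.
def enumerate_polyhedra(s0, active_bits):
    """Traverse the dual graph starting from the bit vector s0 of an initial point."""
    start = tuple(s0)
    found = {start}          # the growing set P of polyhedra, identified by bit vectors
    stack = [start]
    while stack:
        s = stack.pop()
        for i in active_bits(s):        # each active bit borders exactly one neighbor
            neighbor = list(s)
            neighbor[i] ^= 1            # flip a single active bit to cross the shared facet
            neighbor = tuple(neighbor)
            if neighbor not in found:
                found.add(neighbor)
                stack.append(neighbor)
    return found                        # closed under taking nearest neighbors
```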
### Examples In this section, we visualize the polyhedral decomposition, determined by a ReLU FFNN, on a bounded region for two basic models: \[\mathbb{R}^{2}\xrightarrow[\text{ReLU}]{(W_{1},b_{1})}\mathbb{R}^{3} \xrightarrow[\text{ReLU}]{(W_{2},b_{2})}\mathbb{R}^{3}\xrightarrow[\text{ReLU}]{(W_{3},b_{3})}\mathbb{R} \tag{11}\] \[\mathbb{R}^{3}\xrightarrow[\text{ReLU}]{(W_{1},b_{1})}\mathbb{R}^{10} \xrightarrow[\text{ReLU}]{(W_{2},b_{2})}\mathbb{R}^{10}\xrightarrow[\text{ReLU }]{(W_{3},b_{3})}\mathbb{R}^{10}\xrightarrow[\text{ReLU}]{(W_{4},b_{4})} \mathbb{R} \tag{12}\] We first applied model (11) to fit \(f_{1}(x_{1},x_{2})=x_{1}^{2}+x_{2}^{2}-2/3\) using 10000 points uniformly sampled in \([-1,1]^{2}\) and model (12) to fit \(f_{2}(x_{1},x_{2},x_{3})=(x_{1}-1)^{2}+2x_{2}^{2}+x_{3}^{2}+1\) using 125000 points uniformly sampled in \([-1,1]^{3}\). We used TensorFlow to train the above models with batch size of 50 and early stopping criterion based on the convergence of validation loss. We used Algorithm 1 to enumerate all binary vectors in \(\mathbb{R}^{2}\) and in \([-1,1]^{2}\). We used Algorithm 2 to enumerate all binary vectors in \(\mathbb{R}^{3}\) and in \([-1,1]^{3}\). We plotted the polyhedra by finding their extremal vertices. Figure 1(a) and Figure 1(b) provide visualizations of the polyhedral decomposition derived from model (11) and model (12), respectively. For Figure 1(a), there are 25 polyhedra in the bounded region \([-1,1]^{2}\) and 27 polyhedra in \(\mathbb{R}^{2}\). Of these polyhedra, 21 are bounded and 6 are unbounded. The binary vectors are superimposed onto the polyhedra. Note that the binary vectors associated to different polyhedra differ in exactly one bit if and only if they share a facet. For Figure 1(b), there are 1858 polyhedra in the bounded region \([-1,1]^{3}\) and 3331 polyhedra in \(\mathbb{R}^{3}\). ## 4 Persistent Homology and Polyhedral Decompositions It has been well documented that the Euclidean distance between sampled points on a manifold in \(\mathbb{R}^{n}\) can be employed to detect the topology of the manifold. In this section, we provide a description of Vietoris-Rips persistent homology and illustrate how it can be effectively combined with the non-Euclidean distance measure, associated to the polyhedral decomposition, to also identify homological features. Persistent homology has been a rapidly developing branch of topology largely due to its usefulness in data analysis and machine learning (Ghrist, 2008; Zomorodian and Carlsson, 2004; Carlsson, 2009; Edelsbrunner and Harer, 2008) (a collection of additional resources and videos can be found at [https://www.aatrn.net](https://www.aatrn.net)). Work linking persistent homology and neural networks has been appearing with increasing frequency, please see the following for a sampling of some recent works in this direction (Rieck et al., 2018; Zhao et al., 2020; Carriere et al., 2020; Birdal et al., 2021). The description of persistent homology given below is extremely condensed. The interested reader is encouraged to read the survey article (Ghrist, 2008) where additional details can be found. Figure 1: Visualizations of low-dimensional polyhedral decompositions. A detailed discussion on (a) can be found in Appendix B along with the exact binary vectors that label all of the regions of the decomposition. ### Persistent Homology Consider a metric space, \(C\), with distance function \(d:C\times C\to\mathbb{R}\). Let \(x^{(1)},x^{(2)},\ldots x^{(N)}\) be points in \(C\). 
We can utilize \(d\) to build an \(N\times N\) matrix \(D\) by setting \(D_{i,j}=d(x^{(i)},x^{(j)})\). \(D\) will be hollow (i.e. \(D_{i,i}=0\)), symmetric, and non-negative. From \(D\) we can build a family, \(A(t)\), of \(N\times N\) \(\{0,1\}\)-matrices parameterized by a real parameter \(t\) using the rule \[A(t)_{i,j}=\begin{cases}0&\text{if $D_{i,j}>t$ or if $i=j$}\\ 1&\text{else}.\end{cases} \tag{13}\] For each \(t\), \(A(t)\) can be viewed as an adjacency matrix of a graph \(G(t)\). Let \(CL(t)\) denote the clique complex of \(G(t)\) (Hausmann, 1995). By construction, \(CL(t)\) is a simplicial complex and there is a natural inclusion map \[CL(t_{1})\hookrightarrow CL(t_{2})\] whenever \(t_{1}<t_{2}\). Let \(\mathbb{F}\) be a field. A simplicial complex \(S\) has an associated chain complex \(S_{\bullet}\) of \(\mathbb{F}\)-vector spaces. The failure of the chain complex to be exact at location \(i\) is measured by the \(i^{th}\) homology \(H_{i}(S_{\bullet},\mathbb{F})\) (which itself is an \(\mathbb{F}\)-vector space). The inclusion map \(CL(t_{1})\hookrightarrow CL(t_{2})\) induces a chain map \[CL(t_{1})_{\bullet}\to CL(t_{2})_{\bullet}.\] Whenever you have a chain map \(F:S_{\bullet}\to T_{\bullet}\) between chain complexes \(S_{\bullet},T_{\bullet}\), you get associated linear maps between \(H_{i}(S_{\bullet},\mathbb{F})\) and \(H_{i}(T_{\bullet},\mathbb{F})\) for each \(i\). Thus, the chain map \(CL(t_{1})_{\bullet}\to CL(t_{2})_{\bullet}\) induces, for each \(i\), a linear map \[H_{i}(CL(t_{1})_{\bullet},\mathbb{F})\to H_{i}(CL(t_{2})_{\bullet},\mathbb{F}).\] If we pick values \(t_{1}<t_{2}<\cdots<t_{k}\), then we can build a nested sequence of simplicial complexes \[CL(t_{1})\subset CL(t_{2})\subset\cdots\subset CL(t_{k})\] which leads to \[H_{i}(CL(t_{1})_{\bullet},\mathbb{F})\to H_{i}(CL(t_{2})_{\bullet},\mathbb{F}) \rightarrow\cdots\to H_{i}(CL(t_{k})_{\bullet},\mathbb{F}), \tag{14}\] where the arrows denote linear maps. Each of the \(H_{i}(CL(t_{j})_{\bullet},\mathbb{F})\) is a finite-dimensional \(\mathbb{F}\)-vector space and we can view (14) as a finitely generated graded \(\mathbb{F}[x]\) module, where \(\mathbb{F}[x]\) is the ring of polynomials in one variable over \(\mathbb{F}\). This module is frequently called the \(i^{th}\) persistence module associated to the nested sequence of simplicial complexes. Since \(\mathbb{F}[x]\) is a principal ideal domain, the \(\mathbb{F}[x]\) module has a decomposition into its invariant factors by the well-known structure theorem for finitely generated graded modules over a principal ideal domain. Each invariant factor can be viewed as a homology class that has a birth time \(t_{b}\) and a death time \(t_{d}\) (possibly infinite) and this invariant factor can be represented as an interval \([t_{b},t_{d}]\). This collection of intervals corresponds to the barcode representation of the invariant factors of a persistence module. ### Combining Persistence with a Polyhedral Decomposition First we provide two examples based on the polyhedral decomposition of model (12). _Example 4.1_.: We consider the circle in \(\mathbb{R}^{3}\) (the input space) with parameterization given by \((0,\cos(t),\sin(t))\). We sampled the circle at \(20\) evenly spaced points \(x^{(1)},x^{(2)},\ldots,x^{(20)}\). Each of these points lies in one of the polyhedra from Figure 1(b). A picture of the polyhedra encountered by the \(20\) sample points is found in Figure 2(a). We recorded the binary vectors for each of the encountered polyhedra. 
This gave a total of \(20\) binary vectors but only \(19\) were distinct. We labeled these distinct binary vectors \(s^{(1)},s^{(2)},\ldots s^{(19)}\). We built a \(19\times 19\) matrix \(E\) by setting \(E_{i,j}\) equal to the Hamming distance between \(s^{(i)}\) and \(s^{(j)}\). The resulting matrix is hollow, symmetric, and has positive integers in entries off the diagonal. We input this matrix into Ripser (see [https://live.ripser.org](https://live.ripser.org)) and asked for the \(H_{0}\) and \(H_{1}\) barcodes. The result of this experiment can be found in Figure 3(a). _Example 4.2_.: We considered the same circle in \(\mathbb{R}^{3}\) as in the previous example but sampled at \(500\) evenly spaced points \(x^{(1)},x^{(2)},\ldots,x^{(500)}\). A picture of the polyhedra encountered by the \(500\) sample points is found in Figure 2(b). We recorded the \(500\) binary vectors of the polyhedra hit by the data points. Only \(41\) of the bit vectors were distinct (corresponding to \(41\) distinct polyhedra) and we labeled these \(s^{(1)},s^{(2)},\ldots s^{(41)}\). We built a \(41\times 41\) matrix \(F\) by setting \(F_{i,j}\) equal to the Hamming distance between \(s^{(i)}\) and \(s^{(j)}\). We input this matrix into Ripser (see [https://live.ripser.org](https://live.ripser.org)) and asked for the \(H_{0}\) and \(H_{1}\) barcodes. The result of this experiment can be found in Figure 3(b). Figure 2: Polyhedra from two sample sizes. The \(H_{0}\) barcode from the first example indicates connectivity at Hamming distance 4. The \(H_{1}\) barcode indicates a spurious (i.e. short-lived) closed loop occurring at Hamming distance 3. The homological signal of the circle appears (and is quite strong) at Hamming distance 4. The \(H_{0}\) barcode from the second example indicates connectivity at Hamming distance 2. The \(H_{1}\) barcode indicates a long-lived loop beginning at Hamming distance 2. In the following two examples, we will consider a similar example but utilizing real images from the ImageNet validation dataset (which contains 50K images). We will calculate the Hamming distance between bit vectors of data points via a much deeper neural network. The network we use is known as ResNet-50; it is a 50-layer convolutional neural network and was pre-trained on ImageNet (Deng et al., 2009). The training images are 224x224x3, thus the input space has dimension 150,528. It uses ReLU as an activation function on the outputs from the convolutional layers and contains more than 6,000,000 nodes in its many layers. As in the previous examples, the activation pattern of each ReLU layer is stored as a bit vector (as defined in Section 2.2). _Example 4.3_.: We started with 3 pictures, denoted by \(A_{1},A_{2},A_{3}\), chosen from the ImageNet validation dataset. They are photos of a miniature poodle, a Persian cat, and a Saluki (see Figure 4). Each photo is represented by a 224 x 224 x 3 array. We generated \(50\) data points using the formulas \(\sin\theta A_{1}+\cos\theta A_{2}\) and \(A_{3}+\sin\theta A_{1}+\cos\theta A_{2}\), respectively, with \(\theta\) consisting of \(50\) points uniformly sampled from \([0,2\pi]\). We calculated the bit vectors via ResNet-\(50\) for the \(50\) data points and labeled them as \(s^{(1)},s^{(2)},\ldots s^{(50)}\). We built a \(50\times 50\) distance matrix \(G\) by setting \(G_{i,j}\) equal to the Hamming distance between \(s^{(i)}\) and \(s^{(j)}\). We input this matrix into Ripser which returned the \(H_{0}\) and \(H_{1}\) barcodes. 
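The pipeline used in these examples (distinct bit vectors, pairwise Hamming distances, Vietoris-Rips barcodes) can also be scripted end to end. The sketch below uses the ripser Python package with a precomputed distance matrix; since the examples above used the Ripser web interface, this particular API call is our own assumption, shown only to make the steps explicit.

```python
# Sketch: Hamming distances between distinct bit vectors, then Vietoris-Rips barcodes.
import numpy as np
from ripser import ripser

def hamming_barcodes(bit_vectors, maxdim=1):
    S = np.unique(np.asarray(bit_vectors), axis=0)     # keep the distinct bit vectors
    D = (S[:, None, :] != S[None, :, :]).sum(axis=2)   # pairwise Hamming distance matrix
    # Feed the hollow, symmetric matrix to ripser as a precomputed metric.
    return ripser(D.astype(float), distance_matrix=True, maxdim=maxdim)["dgms"]
```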
The result of this experiment can be found in Figure 5. We note that there is homological noise (or homological dust) in Figure 5(a) but none in Figure 5(b). In both examples, there is a strong signal representing the circle. The sample points in the second example have all positive entries while the sample points in the first example do not have this property. We were unsure why (or if) this is related to the homological noise but we found it interesting nevertheless. It is a potentially useful feature that the input space to a neural network simultaneously has two kinds of distance measures. The first derives from the standard Euclidean geometry and the second derives from the coarse geometry implied by the Hamming distance. If one applies an isometry to a data set then its pairwise distance matrix will not change. However, if one applies an isometry with respect to one metric but measures distance via the second metric then one can definitely observe a change. It may be useful to combine information from multiple measurements. One way to carry this out is by using a monotone Boolean function (i.e., Boolean functions built using only **and** and **or** operations). The next example illustrates this approach using a small rotation as the isometry in order to produce two distance matrices arising from essentially the same data set. _Example 4.4_.: Using the same \(A_{1},A_{2}\) as in Example 4.3, we uniformly generated 50 points using \(\sin\theta A_{1}+\cos\theta A_{2}\), with \(\theta\) sampled from \([0,2\pi]\) and \([1,2\pi+1]\), respectively. We calculated the bit vectors via ResNet-50 for the \(50\) data points sampled from \([0,2\pi]\) and labelled them as \(s^{(1)},s^{(2)},\ldots s^{(50)}\). Similarly, we labelled the bit vectors of the \(50\) data points sampled from \([1,2\pi+1]\) as \(\bar{s}^{(1)},\bar{s}^{(2)},\ldots\bar{s}^{(50)}\). We built four distance matrices \(G^{1},G^{2},G^{3}\), and \(G^{4}\) with \(G^{1}_{i,j}\) equal to the Hamming distance between \(s^{(i)}\) and \(s^{(j)}\), \(G^{2}_{i,j}\) equal to the Hamming distance between \(\bar{s}^{(i)}\) and \(\bar{s}^{(j)}\), \(G^{3}_{i,j}\) equal to the maximum of the Hamming distance between \(s^{(i)}\) and \(s^{(j)}\) and between \(\bar{s}^{(i)}\) and \(\bar{s}^{(j)}\), and \(G^{4}_{i,j}\) equal to the minimum of the Hamming distance between \(s^{(i)}\) and \(s^{(j)}\) and between \(\bar{s}^{(i)}\) and \(\bar{s}^{(j)}\). The \(H_{1}\) barcodes for the four cases are presented in Figure 6. The distance matrix corresponding to the max function corresponds to requiring that distance matrices \(G^{1}\) and \(G^{2}\) both have their corresponding entry below some threshold. In other words, this corresponds to applying the **and** Boolean function. Similarly, the min function corresponds to applying the **or** Boolean function. Each seems to improve some feature related to the homological noise but it is hard to say which of the two is better. Figure 3: \(H_{0}\) and \(H_{1}\) barcode plots. Figure 4: Images of \(A_{1},A_{2},\) and \(A_{3}\). In the next example, we see an overall strengthening of the length of the homological signal for the max function and we see an earlier start of the homological signal for the min function. Homological noise did not make an appearance in the next example. _Example 4.5_.: Using the same \(A_{1},A_{2},A_{3}\) as in Example 4.3, we uniformly generated 50 points using \(A_{3}+\sin\theta A_{1}+\cos\theta A_{2}\), with \(\theta\) sampled from \([0,2\pi]\) and \([1,2\pi+1]\), respectively. 
As in Example 4.4, we generated 4 different distance matrices \(\hat{G}^{1}\), \(\hat{G}^{2}\), \(\hat{G}^{3}\), and \(\hat{G}^{4}\), respectively. The \(H_{1}\) barcodes for the four cases are presented in Figure 7. ## 5 Conclusion and Future Work A ReLU feedforward neural network induces a finite polyhedral decomposition of the input space. The corresponding dual graph represents this decomposition, where vertices correspond to polyhedra and edges represent shared facets. Data in the input space gets mapped to vertices of this graph. The geometry of the graph can be exploited, via persistent homology, to detect homological signals of manifolds from points sampled from the manifold. Many techniques in data analysis build on the premise that data, sharing a collection of common features, tends to be well approximated by a low dimensional geometric object. It is conjectured that the coarseness of the polyhedral decomposition can be helpful in dealing with the noise present in many data sets (maybe by including monotone Boolean functions to combine multiple distance matrices). In future work, we hope to extend the examples in this paper with the goal of detecting manifolds, known to exist in data, such as \(SO(3)\), higher dimensional tori, and various fiber bundles involving these manifolds. A torus example can be found in Appendix C. Neural networks of the kind considered in this paper have a finite number of "polyhedral resources". The training function, the distribution of training data, the network architecture, and the training method are some of the factors in how the network utilizes its polyhedra. In the networks we considered, we observed the general pattern that there was a dense collection of smaller volume polytopes near the training data, larger volume polytopes in the general vicinity but away from the training data, with a shell of unbounded polyhedra surrounding the polytopes. You can see hints of this tendency in Figure 1(a). In future work, we hope to make more precise characterizations in the direction of this observation. In particular, we are eager to extend this work to depictions that more accurately quantify the number of polytopes that emerge from a neural network and their distributions of volumes, the "polytopes landscape". ## 6 Acknowledgements This work is partially supported by the United States Air Force under Contract No. FA865020C1121 and the DARPA Geometries of Learning Program under Award No. HR00112290074.
2309.11820
Automatic Endoscopic Ultrasound Station Recognition with Limited Data
Pancreatic cancer is a lethal form of cancer that significantly contributes to cancer-related deaths worldwide. Early detection is essential to improve patient prognosis and survival rates. Despite advances in medical imaging techniques, pancreatic cancer remains a challenging disease to detect. Endoscopic ultrasound (EUS) is the most effective diagnostic tool for detecting pancreatic cancer. However, it requires expert interpretation of complex ultrasound images to complete a reliable patient scan. To obtain complete imaging of the pancreas, practitioners must learn to guide the endoscope into multiple "EUS stations" (anatomical locations), which provide different views of the pancreas. This is a difficult skill to learn, involving over 225 proctored procedures with the support of an experienced doctor. We build an AI-assisted tool that utilizes deep learning techniques to identify these stations of the stomach in real time during EUS procedures. This computer-assisted diagnostic (CAD) will help train doctors more efficiently. Historically, the challenge faced in developing such a tool has been the amount of retrospective labeling required by trained clinicians. To solve this, we developed an open-source user-friendly labeling web app that streamlines the process of annotating stations during the EUS procedure with minimal effort from the clinicians. Our research shows that employing only 43 procedures with no hyperparameter fine-tuning obtained a balanced accuracy of 89%, comparable to the current state of the art. In addition, we employ Grad-CAM, a visualization technology that provides clinicians with interpretable and explainable visualizations.
Abhijit Ramesh, Anantha Nandanan, Nikhil Boggavarapu, Priya Nair MD, Gilad Gressel
2023-09-21T06:40:05Z
http://arxiv.org/abs/2309.11820v3
# Automatic Endoscopic Ultrasound Station Recognition with Limited Data ###### Abstract Pancreatic cancer is a lethal form of cancer that significantly contributes to cancer-related deaths worldwide. Early detection is essential to improve patient prognosis and survival rates. Despite advances in medical imaging techniques, pancreatic cancer remains a challenging disease to detect. Endoscopic ultrasound (EUS) is the most effective diagnostic tool for detecting pancreatic cancer. However, it requires expert interpretation of complex ultrasound images to complete a reliable patient scan. To obtain complete imaging of the pancreas, practitioners must learn to guide the endoscope into multiple "EUS stations" (anatomical locations), which provide different views of the pancreas. This is a difficult skill to learn, involving over 225 proctored procedures with the support of an experienced doctor. We build an AI-assisted tool that utilizes deep learning techniques to identify these stations of the stomach in real time during EUS procedures. This computer-assisted diagnostic (CAD) will help train doctors more efficiently. Historically, the challenge faced in developing such a tool has been the amount of retrospective labeling required by trained clinicians. To solve this, we developed an open-source user-friendly labeling web app that streamlines the process of annotating stations _during_ the EUS procedure with minimal effort from the clinicians. Our research shows that employing only 43 procedures with no hyperparameter fine-tuning obtained a balanced accuracy of 90%, comparable to the current state of the art. In addition, we employ Grad-CAM, a visualization technology that provides clinicians with interpretable and explainable visualizations. **Keywords:** station classification, Endoscopic ultrasound (EUS), convolutional neural network, pancreatic cancer ## 1 Introduction Pancreatic cancer is the seventh leading cause of cancer-related deaths worldwide [3]. Early detection is essential to improve the prognosis and survival rate of patients with pancreatic cancer. Despite advances in imaging technology, the survival rate remains low, with a reported 12% survival rate [19]. This is because pancreatic cancer is often asymptomatic until it reaches an advanced stage, making early detection unlikely. Magnetic Resonance Imaging (MRI), Computed Tomography (CT) scans, Endoscopic Ultrasound (EUS), and Positron Emission Tomography (PET) scans are different medical imaging techniques used for diagnosing pancreatic cancer. Among these imaging techniques, EUS uses high-frequency ultrasound waves to produce detailed images of the pancreas and surrounding organs. EUS is considered the most effective method for detecting early pancreatic cancer because it provides the most accurate visualization of the size, location, and extent of the tumour [7]. A significant advantage of EUS is its capacity to detect very small tumors, as small as 2mm - 3mm in size; in comparison, CT and MRI can only detect tumors larger than 1cm. The EUS procedure demands a high level of expertise and experience, involving over 225 proctored procedures before being assessed for competency [5]. In other forms of US imaging, the probe location is fixed and easy to control because it is in the examiner's hand. However, during EUS, clinicians trained in endoscopy need to interpret complex ultrasound images in real-time using a flexible probe in a constantly moving environment. 
To obtain complete imaging of the pancreas, practitioners must learn to guide the endoscope into multiple "EUS stations" (anatomical locations), which provide different views of the pancreas. The recognition of the EUS stations is crucial to the EUS procedure as it enables targeted biopsies, accurate diagnosis, and aids in further surgical planning and monitoring. In order to assist doctors in learning the EUS procedure, previous studies have demonstrated the feasibility of computer-aided diagnostic (CAD) systems that use deep learning techniques in order to identify the pancreatic stations and whether or not the tumor is cancerous [6, 16]. However, these studies required retrospective annotated data from expert clinicians, increasing the clinician's workload. To solve this, we have developed and open-sourced a labeling application that streamlines the process for the Endoscopist to annotate the pancreas station during the EUS procedure, adding nearly no effort to the clinician. This type of "real-time" labeling is successful because we train our CAD system on all the data found in the video. We do not require the Endoscopist to only select gold-standard images and manually remove difficult images. We build an explainable AI-assisted tool to help Endoscopists identify different stations during the procedure. Incorporating this AI-assisted tool can improve the accuracy of diagnoses and decision-making in treating pancreatic cancer. Importantly, we demonstrate that a state-of-the-art system can be built with limited data and little labeling effort from the clinicians. By reducing the data requirements for training models, we aim to democratize these complex CAD systems. Our study utilized only 43 EUS procedures, accounting for approximately 10-15% of the data utilized in other related studies [24, 13]. We leverage preprocessing techniques on the EUS images to enhance image quality, thereby improving the overall performance of the model [10]. We also incorporate Grad-CAM visualizations to provide insight into the decision-making process of deep learning models, thus producing an explainable CAD system [20, 17]. This also enables the CAD system to be used as an offline training program, where doctors can practice identifying stations retrospectively. Our experiments show that using preprocessing techniques, we achieve an accuracy of 90%, and without using the preprocessing techniques, we achieve an accuracy of 87.6%. We make the following novel contributions: * Achieve a balanced accuracy of 90% with only preprocessing the input and zero fine-tuning on a small dataset; this is comparable to state-of-the-art that uses transfer learning, fine-tuning, and larger datasets [23, 24, 13]. * Utilize Grad-CAM to provide explainability to the physicians during procedures. * Develop and open-source an EUS labeling application that allows clinicians to annotate station and FNA timestamps during pancreas ultrasound procedures. [https://github.com/Amrita-Medical-AI/eusml-labeller](https://github.com/Amrita-Medical-AI/eusml-labeller). ## 2 Related Work In recent years, deep learning algorithms have shown great promise in detecting various diseases from medical imaging data, including ultrasound images [23]. Studies have shown that deep learning models perform comparably to or better than human healthcare professionals (HCPs) in most cases, particularly in the detection of diseases such as skin cancer, breast cancer, lung cancer, and diabetic retinopathy. 
These findings suggest that deep learning algorithms may be effective in the task of pancreas station classification using ultrasound images [12]. Yao et al. [23] proposed a framework consisting of a bile duct (BD) segmentation network and a station recognition network to classify biliary tract diseases in EUS images. The framework achieved a classification accuracy of 93.3% for BD stations and an F1 score of 0.77 for segmentation for the internal validation set. For classification, the model attained an accuracy of 82.6% for the external validation set. This highlights the potential for deep learning in the context of pancreatic cancer diagnosis and staging. Several other studies have investigated deep-learning techniques for pancreatic station classification. Zhang et al. [24] proposed a transfer learning approach using a pre-trained ResNet model to classify six stations of the pancreas. The authors achieved an accuracy of 82.4% on the external dataset and an accuracy of 86.2% on the per-frame video test. The analysis involved selecting the best images from 311 videos. In our study with 43 videos, less than 15% of their dataset, we achieved an accuracy of 90% on the test dataset. Lu et al. [13] proposed a two-stage framework where a convolutional neural network (CNN) was first used to detect the pancreas in the EUS images. Then, a region-based CNN was trained to classify the pancreas into different EUS stations. The authors achieved an accuracy of 95.6% in station recognition. To conduct their investigation, the authors utilized a dataset consisting of 269 procedures, roughly six times the size of our dataset. Building on this, Jarmillo et al. [9] proposed a novel approach for automatically detecting pancreatic tumors in endoscopic ultrasound (EUS) videos using transfer learning. The authors used pre-trained CNN models to classify cancerous pancreatic tumors vs. benign pancreatic tumors and achieved an accuracy of 93.2%. The CNN was trained on a dataset of 66,249 EUS images from 55 videos and was evaluated on a test set of 22,097 images from 18 videos. They used pre-processing techniques to remove noise and enhance the tumor region, resulting in improved accuracy and reduced variability in image quality. None of the prior studies systematically experimented with different preprocessing techniques in order to enhance the performance of deep learning models on EUS datasets. All previous work relied on large datasets retrospectively annotated by human clinicians, making scalability difficult. Despite working with a smaller dataset, comprising just approximately 10-15% of the videos used in comparative studies, we achieved a balanced accuracy of 90% for the test set. Our data labeling process did not involve manually annotating "gold standard" images, making it less time-consuming compared to previous approaches. We also introduce the Grad-CAM visualization technique to provide clinicians with transparent and explainable results. This allows for a better understanding of the model's decision-making process and facilitates trust for medical professionals. ### Preprocessing methods As with all machine learning, enhancing data quality will result in improved performance. Due to the nature of video recording, artifacts and additional noise are often captured in a recording, which degrades the quality of the video. We employ image preprocessing techniques to increase contrast, remove noise, and smooth/blur the image. 
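As a concrete illustration of two of the enhancement techniques described below (CLAHE and Gaussian smoothing), the following minimal sketch uses OpenCV; the parameter values are placeholders chosen for illustration and are not the settings used in this study.

```python
# Illustrative sketch: CLAHE and Gaussian smoothing applied to a grayscale EUS frame.
# clipLimit, tile size, and kernel size are placeholder values, not the study's settings.
import cv2

def enhance_frame(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Contrast-limited adaptive histogram equalization: local contrast enhancement.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    contrast_enhanced = clahe.apply(gray)
    # Gaussian smoothing: suppress high-frequency noise while keeping larger structures.
    smoothed = cv2.GaussianBlur(contrast_enhanced, (5, 5), sigmaX=0)
    return smoothed
```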
* **Contrast-Limited Adaptive Histogram Equalization:** Contrast-limited adaptive histogram equalization (CLAHE) is an image enhancement technique that is widely used to improve the contrast and brightness of images [14]. It is a variant of adaptive histogram equalization (AHE), a non-linear contrast enhancement method. The idea behind histogram equalization is to transform an image's intensity histogram such that the distribution of the pixel intensities is spread out more evenly across the intensity range, thereby enhancing the contrast of the image. However, histogram equalization may produce undesirable results, such as noise amplification in regions of low contrast. CLAHE was introduced to address this limitation by dividing the image into smaller regions and equalizing the histogram separately for each region. This results in a more localized contrast enhancement, which can help preserve the details of the image [18]. \begin{table} \begin{tabular}{c c c c c} \hline Paper & N. Patients & N. Images & Performance & Hyperparameter Tuning \\ \hline [13] & 269 & 18,061 & 94.1\% & Not Specified \\ [24] & 311 & 19,486 & 94.2\% & Fine-tuning \\ [9] & 55 & 66,249 & 93.2\% & Grid search \\ [6] & 41 & 179,092 & 66.8\% & Not Specified \\ Our Work & 43 & 16,081 & 90\% & None \\ \hline \end{tabular} \end{table} Table 1: Comparison of EUS Datasets for Station Classification Figure 1: Transformed EUS Image under different Preprocessing techniques * **Gaussian Smoothing:** Image filtering techniques, such as Gaussian smoothing, are commonly used in image denoising to reduce high-frequency noise while preserving image edges and structures. Gaussian smoothing involves convolving an image with a Gaussian kernel, a bell-shaped function that assigns weights to neighbour pixels based on their distance from the center pixel. The smoothing effect of the Gaussian kernel is determined by its standard deviation, with a higher standard deviation resulting in a more significant blur. Xiao et al. [22] presented a study on developing an automatic brain MRI segmentation scheme with Gaussian smoothing. The paper highlighted the significance of Gaussian smoothing in medical imaging, showcasing its ability to preprocess images, effectively reducing noise, and enhancing the clarity of the features in the image. * **Quantile Capping:** Quantile capping is a technique used in image processing to limit the dynamic range of an image by capping the extreme pixel values. This is done by finding the upper and lower quantiles of the pixel intensity distribution and capping the values outside this range. This process can help improve the visual quality of an image and enhance its contrast. This technique converts the features into a normal distribution, spreading out the most common values for a particular feature. It also lowers the impact of marginal outliers. * **Denoising:** In image processing, denoising techniques are used to remove unwanted noise from images. Non-local Means-Based Denoising(NLM) algorithm is one such image denoising algorithm that utilizes the self-similarity of images to remove noises, while still preserving the important image artifacts. it works by comparing each pixel in the image to all other pixels, and then the average of these similar pixels is used to get the effective denoised pixel value. Heo et al. [8] undertook a systematic review to determine the effectiveness on using NLM algorithm for denoising (Magnetic Resonance)MR images. 
The study shows that not only was it effective at removing noises from the MR images while still keeping the important image artifacts, but also outperformed other denoising algorithms in terms of peak signal-to-noise ratio(PSNR) value, which is quality meansurement in which higher the PSNR value, better the image quality. * **Fourier Transform:** The Fourier Transform is a mathematical tool for decomposing a signal into its frequency components. In image processing, the Fourier Transform converts an image from its pixel-based spatial domain to its frequency-based domain. This transformation helps analyze the image in terms of its frequency content. Working in the frequency domain has several advantages, such as filtering or enhancing specific frequencies, which can be performed more efficiently. Fourier Transform is used in image compression and for the removal of noise in an image. To remove noise in an image, a mask is applied to the transformed image to suppress the noise frequency components while preserving the desired image information. Various strategies can be employed to design effective masks for noise removal in images. This includes utilizing masks of different shapes and sizes, targeting specific regions of the transformed image, or applying thresholding techniques to identify and suppress noise-containing frequencies. After applying a filter to remove noise frequency components, the inverse Fourier Transform transforms the modified frequency-based image into the spatial domain. ### Cnn The convolutional neural network (CNN) [1] is widely used and considered the most effective for medical image classification tasks [2]. The CNN is a powerful feature extractor; therefore, it can be used to classify medical images and avoid complex and expensive feature engineering. We used three classical convolutional neural network architectures: ResNet, DenseNet, and EfficientNet. A description of these architectures is presented below. * **ResNet:** This architecture solves a common problem in deep learning called the vanishing gradient problem. This problem occurs when deep neural networks have many layers, which makes it difficult for them to learn effectively. ResNets addresses this problem by using a technique called skip connections. The skip connection connects non-contiguous layers using a direct connection. These connections act like shortcuts, allowing information to flow easily through the network. By adding these shortcuts, the network can train deeper and increase performance on classification tasks. The number of trainable parameters of ResNet18 and ResNet34 is 11M and 63.5M, respectively. * **DenseNet:** This architecture uses dense connections between layers through Dense Blocks, which directly connect all layers (with matching feature-map sizes) to one another. Each layer obtains additional inputs from all preceding layers and passes its feature maps on to all subsequent layers. These dense connections help the model gain collective knowledge with fewer layers (with fewer parameters), which helps the model learn faster. The number of trainable parameters of DenseNet121, DenseNet161 and DenseNet201 is 8M, 29M, and 20M, respectively. * **EfficientNet:** This architecture manages the tradeoff between accuracy and computational cost. It achieves this by using a compound scaling technique that scales the network's depth, width, and resolution in a principled manner. Unlike conventional practice that uses arbitrary values to scale these factors. 
The EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients \(\alpha\), \(\beta\), and \(\gamma\). These scaling coefficients are determined by a small grid search on the original small model [21]. The trainable parameters of EfficientNetB0 and EfficientNetB3 are 5.3M and 12M, respectively. ## 3 Methodology ### Dataset Endoscopists from Amrita Hospitals used our open-source label tool shown in Fig 2 to annotate the timestamps during the EUS procedure. Figure 2: Screenshots of the Label Tool. This eliminates the need for retrospective labelling. The station timestamps from the app correlate with the endoscopy machine's screen capture, allowing us to extract the corresponding frames for a station. We used 43 clinical videos, which were captured at 24 frames per second (FPS), and their timestamps of Station 1, Station 2, and Station 3 of the Endoscopy procedure collected over three months as our dataset for this study. This dataset is cleaned and prepared for training using our data preparation step (Section 3.2). The images were extracted from EUS videos at a rate of 1 FPS. Previously, we experimented with different frame rates ranging from 1 to 24, but we settled on 1 FPS because other options reduced the model's performance. The dataset comprises 15,545 images, divided as 2,242 for testing and 13,303 for training across all three stations, as shown in Table 2. We didn't manually choose or use a special set of 'gold standard' images for training our model. The similarity between nearby interval frames means that testing may involve predicting the class of a patient already in the training set, making accurate assessment difficult. To address this, patient images were not mixed across splits. Instead, the dataset was split based on patients while maintaining balanced proportions of station images in the train, test, and validation sets. This approach evaluates the model's generalization capabilities on unseen images from different patients. ### Dataset Preparation We utilized the EUS videos of patients to generate the dataset as mentioned in Section 3.1. \begin{table} \begin{tabular}{l r r} \hline \hline **Station** & **Train** & **Test** \\ \hline Station 1 & 4,179 & 744 \\ Station 2 & 5,602 & 830 \\ Station 3 & 3,522 & 668 \\ \hline **Total** & 13,303 & 2,242 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of EUS Dataset Figure 3: Images of different stations in EUS We identified four distinct types of noise, represented in Fig 4: GUI images, white light, green pointers (used by doctors to point on the screen in real time), and blackened images. We performed a set of data-cleaning steps on the extracted frames. First, we removed the GUI and pink images using two histogram comparison techniques - histogram comparison intersection [11] and histogram comparison Bhattacharyya [4]. Histogram comparison intersection measures the overlap between two histograms, while histogram comparison Bhattacharyya measures the distance between them. These techniques are chosen because they provide a comprehensive understanding of the similarity between two histograms of two images. It is worth noting that the specific threshold values we have used in our processes were determined through trial and error on our dataset. Thus, they may not be standard and may need to be adjusted for different datasets. We compared the extracted images to a reference image. 
If the histogram comparison intersection value is less than or equal to 1.031 and the histogram comparison Bhattacharyya value is greater than or equal to 0.95, we classified the image as a pink image and removed it. If the histogram comparison intersection value is greater than or equal to 1.42 and the histogram comparison Bhattacharyya value is less than or equal to 0.18, we classified the image as a GUI image and removed it. Next, we removed the blackened images by computing the average intensity values of all pixels in the images. If the average pixel value is less than the threshold of 12, then that image is deemed blackened and removed. Finally, the images containing green pointers were repaired using an image processing technique called inpainting [15]. To further enhance the quality and contrast of the extracted frames, we experimented with several image enhancement algorithms, including contrast-limited adaptive histogram equalization (CLAHE), Gaussian smoothing, denoising, quantile capping, and the Fourier Transform, as discussed in Section 2.1. The images were normalized and standardized using the mean and standard deviation in the training set. This prevents any data leakage and promotes robust generalization to unseen data. The architecture diagram of the proposed framework system is displayed in Fig 5. Figure 5: Diagram of proposed framework system. ### Performance Measures We use balanced accuracy as the primary evaluation metric and weighted precision and weighted recall as the secondary evaluation metrics. We choose balanced accuracy as all mistakes are equally weighted; that is, all mistakes are equally important. Balanced accuracy considers the model's accuracy in each class while also considering the number of images in each class. This ensures that the performance measure is not skewed by the disproportionate number of images in each class. \[\text{balanced accuracy}=\frac{1}{k}\sum_{i=1}^{k}\frac{TP_{i}}{TP_{i}+FN_{i}} \tag{1}\] where, \(TP_{i}\): number of true positives for class \(i\) \(FN_{i}\): number of false negatives for class \(i\) \(k\): total number of classes The use of weighted precision and weighted recall allows us to account for the imbalance in the dataset by giving more weight to the classes with more images. 
Quantile capping and CLAHE, on the other hand, performed poorly on the dataset, with balanced accuracy of 77.39% and 84.87%, respectively. This outcome emphasizes the importance of evaluating the applicability of preprocessing techniques on a case-by-case basis in medical imaging. ### Qualitative Analysis We conducted a qualitative analysis to evaluate the effectiveness of our deep-learning models in differentiating stations in the EUS procedure. We utilized the Grad-CAM technique to visualize the regions of interest (ROIs) that our \begin{table} \begin{tabular}{l c c c c c c c c c} \hline & & Resnet18 & & \multicolumn{3}{c}{Efficientnet\_b0} & \multicolumn{3}{c}{Densenet161} \\ \hline Preprocessing & BA & Precision & Recall & BA & Precision & Recall & BA & Precision & Recall \\ \hline NO-PRE & 85.6 & 85.7 & 85.6 & 82.2 & 82.3 & 82.1 & **87.6** & 88.5 & 87.7 \\ CLAHE & **84.9** & 85.8 & 85.3 & 80.3 & 80.8 & 80.5 & 80.7 & 82.3 & 81.0 \\ DENOISING & 83.9 & 84.2 & 84.1 & 80.8 & 81.4 & 81.0 & **88.7** & 90.0 & 88.9 \\ GAUSSIAN Smoothing & 84.9 & 86.3 & 85.0 & 81.1 & 81.3 & 81.2 & **90.0** & 90.1 & 90.0 \\ QUANTILLE CAP & 75.9 & 76.5 & 76.0 & **77.3** & 78.0 & 77.5 & 59.8 & 70.0 & 61.6 \\ FFT-Normal & 84.4 & 85.8 & 84.6 & 80.2 & 80.4 & 80.2 & **88.5** & 90.0 & 88.8 \\ \hline \end{tabular} \end{table} Table 3: Performance Comparisons, best Balanced Accuracy(BA) is bolded model used to make predictions. Grad-CAM is a technique used in deep learning that produces visual explanations of the decision-making process of a convolutional neural network by highlighting the regions of an input image that are most important for the network's predictions. According to our findings, the features emphasized by our deep learning model to classify the EUS images are consistent with the reference points used by our expert doctor. In essence, the model's attention mechanism appears to be focused on the same visual signals that expert doctors use in EUS recordings to diagnose anomalies. ## 5 Conclusion In this work, we demonstrate that it is possible to build a CAD system to help train doctors in EUS station identification with frugal resources. We created an open-source labeling tool to annotate the timestamps of pancreas stations during the EUS procedure. This form of annotation happens in real-time during the procedure, requiring little extra effort on the behalf of the clinicians. Figure 6: Grad-CAM visualisation on EUS We then demonstrate that it is possible to achieve state-of-the-art results with limited data (15% of other studies). Notably the dataset does not have any hand annotations by doctors for gold-standard images. Instead, we train our models on the raw video footage with only the time stamps for station identification. We tested three models with a number of preprocessing techniques and found the best result was DenseNet161 with an accuracy of 90%. Furthermore, the result also shows that without preprocessing, the model achieved an accuracy of 87.6%. It is desirable to drop preprocessing as it will greatly increase the speed of inference for the real-time application of the CAD. Our results thus provide proof that a simplified approach to obtaining annotated EUS videos and using a small dataset, combined with basic deep learning techniques, can yield competitive performance. We believe in time and collaboration, we can reach much higher capacity datasets and obtain significantly improved models. 
## 6 Declarations ### Data Availability The datasets generated during and/or analyzed during the current study are not publicly available as it is sensitive data for which we do not have consent for public distribution. Those wishing to collaborate formally are requested to email. ### Conflict of Interest The authors have no relevant financial or nonfinancial interests to disclose.
2310.20546
Analytical Study of KOH Wet Etch Surface Passivation for III-Nitride Micropillars
III-Nitride micropillar structures show great promise for applications in micro light-emitting diodes and vertical power transistors due to their excellent scalability and outstanding electrical properties. Typically, III-Nitride micropillars are fabricated through a top-down approach using reactive ion etch which leads to roughened, non-vertical sidewalls that results in significant performance degradation. Thus, it is essential to remove this plasma etch induced surface damage. Here, we show that potassium hydroxide (KOH) acts as a crystallographic etchant for III-Nitride micropillars, preferentially exposing the vertical <1-100> m-plane, and effectively removing dry etch damage and reducing the structure diameter at up to 36.6 nm/min. Both KOH solution temperature and concentration have a dramatic effect on this wet etch progression. We found that a solution of 20% AZ400K (2% KOH) at 90 C is effective at producing smooth, highly vertical sidewalls with RMS surface roughness as low as 2.59 nm, ideal for high-performance electronic and optoelectronic devices.
Matthew Seitz, Jacob Boisvere, Bryan Melanson, John Wyatt Morrell, Nithil Harris Manimaran, Ke Xu, Jing Zhang
2023-10-31T15:27:14Z
http://arxiv.org/abs/2310.20546v1
# Analytical Study of KOH Wet Etch Surface Passivation for III-Nitride Micropillars ###### Abstract III-Nitride micropillar structures show great promise for applications in micro light-emitting diodes and vertical power transistors due to their excellent scalability and outstanding electrical properties. Typically, III-Nitride micropillars are fabricated through a top-down approach using reactive ion etching, which leads to roughened, non-vertical sidewalls that result in significant performance degradation. Thus, it is essential to remove this plasma etch induced surface damage. Here, we show that potassium hydroxide (KOH) acts as a crystallographic etchant for III-Nitride micropillars, preferentially exposing the vertical \(<1\overline{1}00>\) m-plane, and effectively removing dry etch damage and reducing the structure diameter at up to 36.6 nm/min. Both KOH solution temperature and concentration have a dramatic effect on this wet etch progression. We found that a solution of 20% AZ400K (2% KOH) at 90\({}^{\circ}\)C is effective at producing smooth, highly vertical sidewalls with RMS surface roughness as low as 2.59 nm, ideal for high-performance electronic and optoelectronic devices. III-Nitride, microstructures, surface passivation, plasma etch, wet etch. ## 1 Introduction The search for ever smaller electronic and optoelectronic devices has led to numerous innovations. Large planar devices have become micropillars and nanowires[1], allowing aggressive scaling for enhanced functionalities. This paves the way for higher density integrated transistors for more powerful computing[2], and brighter micro light-emitting diodes (\(\mu\)LEDs) for higher resolution displays for augmented reality (AR) and virtual reality (VR)[3, 4], as well as visible light communication[5]. III-Nitrides, including gallium nitride (GaN), offer additional innovations in power electronics and optoelectronics and are a significant research interest due to their wide, direct bandgap, as well as their high electron mobility and excellent thermal properties[6, 7]. III-Nitride micropillar or nanowire based electronic and optoelectronic devices can be fabricated through either bottom-up or top-down strategies. In a bottom-up approach, device structures are grown on a substrate through molecular beam epitaxy (MBE)[8] or metalorganic chemical vapor deposition (MOCVD)[9]. These growth approaches can result in highly dense forests of individual micropillars or nanowires; however, this unconstrained growth can lead to non-uniformity and coalescence of adjacent pillars or wires[10]. Through the use of a patterned mask on the growth surface, micropillar or nanowire nucleation sites can be intentionally defined, forming ordered arrays of micropillars or nanowires rather than disordered forests[11]. However, non-uniformities between nanowires still limit the effectiveness of these bottom-up approaches. Epitaxial growth can instead be performed at the wafer scale, enabling a top-down fabrication strategy. In this case, similar MBE or MOCVD epitaxial growth produces layered thin films that can be fabricated into a range of devices and geometries through photolithographic patterning and etching[12]. The use of well-established lithographic processing also offers an extremely high degree of uniformity and control over the placement, size, and geometry of the resulting structures compared to the probabilistic growth of bottom-up fabrication. 
Etching is typically accomplished using reactive ion etching (RIE) with inductively coupled plasma (ICP) using a Cl\({}_{2}\)/Ar gas mixture[12]. However, this RIE process also causes crystalline damage in the form of dangling bonds and surface roughness, which form current leakage paths that can severely limit device performance as well as cause non-radiative recombination in optoelectronic devices[13, 14]. Several fabrication challenges remain unsolved in this top-down fabrication methodology. GaN is a highly chemically inert material system and the effectiveness of wet etching processes is very limited[7]. Due to this, virtually all top-down GaN fabrication involves one or more dry etch process steps. Dry etching is effective at removing material to form device structures and is a highly mature approach. However, these processes also cause damage to the newly exposed surfaces and typically lead to the formation of non-vertical sidewalls[15, 16]. These defects present challenges for both device fabrication and performance. Slanted sidewalls can complicate subsequent deposition steps, and surface damage contributes to non-radiative recombination, which significantly reduces the optical output and degrades device efficiency. To optimize device performance, the dry-etch-damaged III-Nitride micropillar sidewall surface must be fully removed. Therefore, here we present an analytical study of a KOH-based wet etch that can effectively passivate vertical III-N nanowires and micropillars, expanding upon our previous work[17], which only qualitatively considered etchant temperature. We investigate the effects of etchant concentration, temperature, and the presence of an insoluble metal mask; the effects of the different etch conditions are analyzed comprehensively, and the III-Nitride micropillar sidewall surface roughness is measured and compared.

## 2 Results and Discussion

### Background and theory

Several approaches have been investigated to remove or repair dry etch damage and improve overall device performance and efficiency for III-Nitride nano- and micro-structures. Passivation through atomic layer deposition (ALD) of dielectrics or treatment with sulfide solutions has been shown to minimize surface recombination, improving device performance and efficiency [18, 19]. However, this requires additional time-consuming and expensive processing, and adds additional material to the device surface that can affect the optical properties of the device. Additionally, achieving the necessary fully conformal passivation coating presents further challenges. As an alternative, dilute KOH solutions have also been shown to improve dry etched GaN surfaces for edge-emitting lasers [15]. Through this KOH method, damaged material at the surface is etched away, removing the current leakage paths and non-radiative recombination centers, thus improving overall device performance for devices such as \(\mu\)LEDs [17]. However, no comprehensive study has been reported yet for vertical III-Nitride micropillars or nanowires. Thus, in this work, AZ400K, a common commercially available photoresist developer, is used as the KOH source. This developer contains 2 wt% KOH in solution and is further diluted to 20%, 40%, or 60% AZ400K with deionized water, resulting in an etchant solution with 0.4%, 0.8%, or 1.2% KOH, as illustrated in the short calculation below.
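As a quick aid to the reader (added here, not from the original text), the dilution arithmetic can be reproduced with a one-line calculation; the sketch below assumes simple weight-fraction scaling of the nominal 2 wt% KOH content of AZ400K and ignores solution densities and any non-ideal mixing.

```python
# Illustrative dilution calculation (assumes ideal, weight-fraction scaling).
KOH_WT_FRACTION_AZ400K = 0.02  # AZ400K contains ~2 wt% KOH

def effective_koh_percent(az400k_fraction: float) -> float:
    """Approximate KOH wt% of a DI-water dilution of AZ400K."""
    return 100.0 * az400k_fraction * KOH_WT_FRACTION_AZ400K

for frac in (0.20, 0.40, 0.60):
    print(f"{frac:.0%} AZ400K -> ~{effective_koh_percent(frac):.1f}% KOH")
# Expected: 0.4%, 0.8%, 1.2% KOH, matching the dilutions quoted in the text.
```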
This dilute aqueous KOH solution etches GaN-based structures through a two-step process. KOH first acts as a catalyst for the oxidation of GaN by hydroxide ions (OH\({}^{-}\)), forming gallium oxide and ammonia. Subsequently, the KOH acts as a solvent, dissolving and removing the newly-formed gallium oxide. This etch progresses quickly for N-polar GaN crystals, but significantly slower for Ga-polar crystals due to repulsion between the OH\({}^{-}\) ions and the GaN internal polarization field [20, 7]. Due to these interactions, KOH acts as a crystallographic etchant for GaN, preferentially exposing non-polar planes such as the \(<1\overline{1}00>\) m-plane while leaving the Ga-polar <0001> plane untouched. This makes KOH an extremely effective means to form vertical structures with highly smoothed sidewalls, as well as to reduce the diameter of etched features without affecting the height of the structure [17]. In this work, we examine the effects of KOH concentration, solution temperature, and the presence of a metal mask on the progression of this wet etch. The presence of a metal mask mimics a self-aligned micropillar fabrication process where deposition of electrical contacts after dry etching becomes impractical due to the small size of the devices [21].

### Experimental

Two sets of micropillars were prepared from an AlGaN/GaN wafer composed of a 0.46 \(\upmu\)m layer of Al\({}_{0.19}\)Ga\({}_{0.81}\)N atop a 4.7 \(\upmu\)m layer of GaN. As shown in Figure 1, fabrication of the first set of micropillars begins by coating the sample with 500 nm SiO\({}_{2}\) via plasma enhanced chemical vapor deposition (PECVD) of tetraethyl orthosilicate (TEOS). Samples were coated with lift-off resist and positive photoresist, then patterned via direct-write lithography. A 150 nm layer of nickel (Ni) was deposited by thermal evaporation and lifted off. This Ni was first used to mask a fluorine-based plasma etch of the SiO\({}_{2}\), revealing the underlying AlGaN surface. The remaining Ni/SiO\({}_{2}\) mask was then used to mask a Cl\({}_{2}\)/Ar etch to form the micropillars themselves. Immersion in HF-containing buffered oxide etch solution dissolved the SiO\({}_{2}\), effectively removing the Ni/SiO\({}_{2}\) mask. This culminated in the formation of a sparse array of 2 \(\upmu\)m tall, 2 \(\upmu\)m diameter micropillars with flared bases and sidewalls roughened by the dry etching process. The second set of micropillars followed a similar fabrication process but omitted the SiO\({}_{2}\) interlayer. As shown in Figure 2, after coating the sample with lift-off resist then positive photoresist and identical patterning via direct-write lithography, the same 150 nm layer of Ni was thermally evaporated directly onto the AlGaN surface and lifted off. This Ni was used as an etch mask for an identical Cl\({}_{2}\)/Ar etch, also resulting in similar sparse arrays of flared-base micropillars, with a Ni mask still covering the top surface.

Figure 1: Fabrication process flow for unmasked micropillars. Schematic of the micropillar fabrication process consisting of SiO2 deposition, deposition and lift-off of a Ni etch mask, and its use during dry etching of first the SiO2, then the AlGaN/GaN epitaxial structure. Following dry etching, the remaining SiO2 and metal masks are removed by immersion in BOE and the micropillars undergo a novel KOH-based wet etch to form smooth, vertical sidewalls.
After collecting baseline images of the micropillars through scanning electron microscopy (SEM) before any wet etching, pairs of samples consisting of both masked and unmasked AlGaN/GaN micropillars received a series of wet etches under a range of conditions. Each etching step was performed in 20-minute increments, allowing for observation of the progression of the wet etch over time. After each wet etch cycle, the samples were imaged via SEM. All samples received three rounds of wet etching, for a total of 60 minutes of etch time. Wet etching was performed using MicroChemicals AZ400K, a KOH-containing photoresist developer, at a range of concentrations and solution temperatures. AZ400K was diluted with deionized water to concentrations of 20%, 40%, and 60% to serve as etch solution. These solutions were then heated to 70 \({}^{\circ}\)C, 80 \({}^{\circ}\)C, and 90 \({}^{\circ}\)C under constant stirring to perform the etch. Pairs of samples, one with a Ni mask and one without, were placed in dipper baskets and etched together. After each 20-minute etch cycle, the pair of samples were removed, rinsed in deionized water, blown dry, and imaged via SEM. Following the completion of wet etching, the micropillar sidewall surface roughness was measured via atomic force microscopy (AFM).

### Results

One of the challenges of studying crystallographic etching processes is the need to carefully and precisely align features with the preferentially exposed crystal planes. For these experiments, we selected circular micropillars to avoid the need for this precise alignment and instead rely on the inherent symmetry of the micropillars to allow the structures to effectively self-align crystallographically, as shown in Figure 3. This enables us to analyze and observe the progression and effectiveness of the wet etching process while avoiding the possible unintended introduction of additional sidewall roughness due to crystallographic misalignment. After dry etching in Cl2/Ar, the micropillars were formed with slanted, non-vertical sidewalls that were scored with small vertical grooves, indicative of dry etch damage. This damage is steadily removed during the wet etching process but progresses differently for masked and unmasked structures. For structures without a Ni capping layer, the top-most AlGaN layer etches more quickly than the lower GaN layer. As the wet etch progresses, the diameter of the AlGaN layer steadily reduces, exposing more and more of the c-plane GaN surface at the AlGaN-GaN interface. At this newly exposed GaN surface, the KOH etch begins etching downward, revealing the vertical \(<1\bar{1}00>\) plane. However, as this downward etch occurs, the upper AlGaN layer also continues to etch radially inward and further reduce in diameter, exposing additional GaN at the interface. This newly exposed GaN also begins to etch downward, resulting in the formation of numerous steps and terraces down the side of the micropillar, as shown in Figures 4(a-d).

Figure 3: Micropillar following wet etching showing smooth crystal planes. SEM image of a Ni-capped micropillar after undergoing wet etching in KOH-based solution and a schematic diagram of the crystal planes exposed during the wet etch.

Figure 2: Fabrication process flow for Ni masked micropillars.

Structures that are capped with a layer of Ni show a very different etch progression. The inclusion of a completely insoluble mask atop the structure dramatically reduces the radially-inward etch rate of the top AlGaN layer.
This slower etch rate enables the formation of smooth, vertical sidewalls without the terraces in the lower GaN layer that are seen in samples without the Ni mask. Since additional GaN is exposed at a much slower rate at the AlGaN-GaN interface, the wet etch can expose the same \(<1\bar{1}00>\) plane along the full height of the pillar, producing a uniform surface, shown in Figure 5(a-d).

Figure 4: Wet etch progression for micropillars without a Ni mask (a) prior to wet etch, after a (b) 20 minute, (c) 40 minute, and (d) 60 minute wet etch.

Figure 5: Wet etch progression for micropillars with a Ni mask (a) prior to wet etch, after a (b) 20 minute, (c) 40 minute, and (d) 60 minute wet etch.

In all cases, we observe a reduction in III-Nitride micropillar diameter as the wet etch progresses. This etch rate is highest at the beginning of the etch as the outer surface of the pillar, damaged by the initial dry etch, is quickly removed. The undamaged AlGaN/GaN etches more slowly, leading to an overall reduction in etch rate with time. As shown in Figure 6(a), we observe an etch rate of 32.3 nm/min for the sample with a Ni etch mask during the first 20 minute etch cycle in an 80 \({}^{\circ}\)C solution of 40% AZ400K. This etch rate falls to only 1.9 nm/min during the third and final 20 minute etch cycle. These etch rates are slightly slower than the etch rates observed for the sample without a Ni mask. When exposed to identical etch conditions, we observe an etch rate of 36.6 nm/min for the unmasked sample during the first 20 minute etch cycle. This etch rate falls to 5.2 nm/min during the final 20 minute etch cycle. Both increased solution temperature and increased AZ400K concentration lead to a more aggressive, faster etch. When exposed to an 80 \({}^{\circ}\)C etch solution of 60% AZ400K, we observe an etch rate of 24.7 nm/min for an unmasked sample. When an identical sample is exposed to the same etch solution at 70 \({}^{\circ}\)C, this etch rate falls to just 3.4 nm/min, as shown in Figure 6(b). Solution concentration has a similar, but less dramatic effect on observed etch rate. As shown in Figure 6(c), during the first etch cycle, identical, Ni-capped samples show an etch rate of 32.1 nm/min in a 40% AZ400K solution at 90 \({}^{\circ}\)C compared to an etch rate of 29.3 nm/min in a 20% AZ400K solution. The difference in etch rate becomes larger with time. During the final 20 minute etch cycle, we observe an etch rate of 2.1 nm/min using a 40% solution, but only observe an etch rate of 0.7 nm/min using a 20% solution. These results suggest that the wet etch rate of crystalline III-Nitrides, undamaged by prior dry etching, is much more sensitive to etch conditions. Conversely, III-Nitrides which had been damaged by a prior dry etch show a more consistent wet etch rate in KOH solution across a range of etch conditions. Based on the SEM images collected following etching under different conditions, the etching of Ni-capped micropillars in a solution of 20% AZ400K heated to 90 \({}^{\circ}\)C yielded the smoothest sidewalls. Post-etch SEM images show that the presence of a Ni etch mask prevents the formation of terraces on structure sidewalls and that lower etchant concentrations were sufficient to produce the desired crystallographic etch. Additional AFM measurements were taken using micropillars which were mechanically removed from their substrate before and after wet etching under these optimized conditions, shown in Figure 7.
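For readers who wish to reproduce this kind of rate extraction, a minimal sketch is given below. It is illustrative only and not the authors' analysis code: the diameter values are hypothetical placeholders, and it assumes the quoted etch rate corresponds to the per-sidewall (radial) recession, i.e. half of the diameter loss per cycle divided by the cycle length. A simple RMS-roughness helper for an AFM height map is included as well.

```python
import numpy as np

# Hypothetical SEM diameter readings (nm) after each 20-minute cycle;
# real values would come from SEM image analysis.
times_min = np.array([0, 20, 40, 60])
diam_nm = np.array([2000.0, 1270.0, 1150.0, 1110.0])

# Assumed convention: etch rate = radial (per-sidewall) recession per minute,
# i.e. half of the diameter reduction divided by the cycle duration.
rates = 0.5 * (-np.diff(diam_nm)) / np.diff(times_min)
for (t0, t1), r in zip(zip(times_min[:-1], times_min[1:]), rates):
    print(f"{t0:>2}-{t1:>2} min: {r:.1f} nm/min")

def rms_roughness(z: np.ndarray) -> float:
    """RMS roughness of an AFM height map z (nm), as reported for Figure 7."""
    return float(np.sqrt(np.mean((z - z.mean()) ** 2)))
```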
These AFM measurements indicate that RMS surface roughness reduces from 33.09 nm prior to wet etching to 2.59 nm after 60 minutes of wet etching in KOH solution. This significantly reduced RMS is comparable to state-of-the-art epitaxially grown GaN thin films with a typical RMS of 2.5 nm[22], clear quantitative evidence of the removal of dangling bonds and surface damage, improving device performance and reducing current leakage.

### Conclusions

In summary, we have explored the effects of KOH solution concentration and temperature as well as the effects of a metal hard mask on the wet etching of vertical III-Nitride micropillar structures. Solution temperature plays a major role in the observed etch rate: increasing solution temperature from 70 \({}^{\circ}\)C to 80 \({}^{\circ}\)C causes the observed etch rate to increase from 3.4 nm/min to 24.7 nm/min. Changes in solution concentration have a more minor effect on the initial etch rate: increasing from 20% AZ400K to 40% led to an etch rate increase of only 2.8 nm/min, an increase of approximately 10%. However, as the etch progresses and material damaged by prior dry etching is removed, the etch rate of the underlying undamaged GaN is more sensitive to solution concentration. During the final etch cycle, the same increase from 20% concentration to 40% leads to a dramatic increase in etch rate, rising from 0.8 nm/min to 2.1 nm/min. The presence of a Ni etch mask atop the micropillar has a minor effect on etch rate, but a dramatic effect on post-etch surface morphology. Following the wet etch, samples with a Ni mask showed much smoother, more vertical sidewalls than samples without the Ni mask. Using optimized etch conditions of 20% AZ400K solution at 90 \({}^{\circ}\)C, micropillar sidewall roughness reduced from 33.09 nm to 2.59 nm after immersion in etchant for 60 minutes. This reduction in surface roughness is important for the fabrication of III-Nitride laser diodes, enabling the fabrication of smooth, highly reflective etched mirror facets and avoiding the need for mechanical cleaving. Additionally, this wet etch can lead to improved electrical device performance by reducing or eliminating current leakage pathways, essential for III-Nitride transistors and power electronics.

Figure 6: Etch rate comparisons for micropillars (a) with and without a Ni mask, (b) using different etch solution temperatures, (c) using different etch solution concentrations.

Figure 7: AFM measurement results (a) before wet etching in KOH solution, (b) after 60 minutes wet etching.

## Experimental Procedures

### Lead Contact

Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Matthew Seitz ([email protected]).

### Materials Availability

This study did not generate new unique materials.

### Data and Code Availability

Any data in this study are available from the corresponding author upon reasonable request.

#### Acknowledgments

This work was partially supported by the National Science Foundation, Award No. ECCS 1751675.

## Author Contributions

M.S. and J.Z. designed the experiment; J.B. and M.S. conducted the experiments; W.M., N.M., and K.X. collected AFM measurements; M.S., J.B., and B.M. performed data analysis; M.S. was the main contributing author. All authors revised and commented on the manuscript.

## Declaration of Interests

The authors declare no competing interests.
2309.14958
Minimum trace norm of real symmetric and Hermitian matrices with zero diagonal
We obtain tight lower bounds for the trace norm $\Vert \cdot \Vert_1$ of some matrices with diagonal zero, in terms of the entry-wise $L^1$-norm (denoted by $\Vert \cdot \Vert_{(1)}$). It is shown that on the space of nonzero real symmetric matrices $A$ of order $n$ with diagonal zero, the minimum value of the quantity $\frac{\Vert A\Vert_1}{\Vert A\Vert_{(1)}}$ is equal to $\frac{2}{n}$. The answer to the analogous problem in the space of Hermitian matrices is shown to be equal to $\tan(\frac{\pi}{2n})$. The equivalent "dual" form of these results gives upper bounds for the distance to the nearest diagonal matrix for a given symmetric or Hermitian matrix, when the distance is computed in the spectral norm.
Mostafa Einollahzadeh
2023-09-26T14:26:46Z
http://arxiv.org/abs/2309.14958v2
# Minimum trace norm of real symmetric and Hermitian matrices with zero diagonal ###### Abstract We obtain tight lower bounds for the trace norm \(\|\cdot\|_{1}\) of some matrices with diagonal zero in terms of the entry-wise \(L^{1}\)-norm (denoted by \(\|\cdot\|_{(1)}\)). It is shown that in the space of nonzero real symmetric matrices \(A\) of order \(n\) with diagonal zero, the minimum value of quantity \(\frac{\|A\|_{1}}{\|A\|_{(1)}}\) is equal to \(\frac{2}{n}\). The answer to a similar problem in the space of Hermitian matrices is also obtained to be equal to \(\tan(\frac{\pi}{2n})\). As an equivalent form of these results, it is shown that for every real symmetric matrix \(A\) of order \(n\) with off-diagonal entries of absolute values at most \(1\), there always exists a diagonal matrix \(D\) that is at distance \(\leq\frac{n}{2}\) of \(A\) in the spectral norm. The similar problem for Hermitian matrices, has answer \(\cot(\frac{\pi}{2n})\). _AMS Classification:_ 15A42, 05C50 _Keywords:_ Trace norm, Matrix norm inequality, Graph energy, Nearest diagonal matrix ## 1 Introduction The trace norm of a matrix (denoted by \(\|\cdot\|_{1}\)) is defined as the sum of its singular values and the spectral norm (denoted by \(\|\cdot\|_{\infty}\)) is defined as the maximum singular value. In particular, in the case of Hermitian matrices or real symmetric matrices, which we consider in this study, the singular values are equal to the absolute values of the eigenvalues, which can be used to define the above norms. The trace norm is also known by "energy" in some references, mostly those related to algebraic graph theory or its applications in chemistry (cf. [10] for a survey). The main idea of this paper comes from a conjecture by Haemers in [1] (proved in[1]), on "the minimum energy of Seidel matrix of simple graphs", which can be restated in our terminology as follows: **Conjecture.** Let \(A\) be an arbitrary symmetric matrix of order \(n\), with diagonal entries equal to zero and off-diagonal entries in \(\{\pm 1\}\). Then the minimum value of \(\|A\|_{1}\) is equal to \(\|J_{n}-I_{n}\|_{1}=2n-2\) for the all-ones matrix \(J_{n}\) and the identity matrix \(I_{n}\), both of order \(n\). Numerical computations for small values of \(n\), suggest that the matrix \(J_{n}-I_{n}\) has the minimum trace norm in the set of all diagonal zero matrices of order \(n\) for which the sum of the absolute values of the entries is equal to \(n(n-1)\). Therefore, the above conjecture on the discrete set of matrices with \(\{\pm 1\}\) off-diagonal entries can be generalized to provide a lower bound for the trace norm of every diagonal zero matrix with a prescribed value for the \(L^{1}\)-norm of its entries. This is one of the main results of this paper: **Theorem 1**.: _Let \(A=[a_{ij}]\) be a real symmetric matrix of order \(n\) with zero entries on the main diagonal. Then_ \[\|A\|_{1}\geq\frac{2}{n}\sum_{i,j}|a_{ij}|,\] _and the bound is sharp for all \(n\)._ A similar problem can be imposed on other spaces of matrices. We also obtained the best result for the space of Hermitian matrices: **Theorem 2**.: _Let \(A=[a_{ij}]\) be a Hermitian matrix of order \(n\) with zero entries on the main diagonal. Then_ \[\|A\|_{1}\geq\tan(\frac{\pi}{2n})\sum_{i,j}|a_{ij}|,\] _and the bound is sharp for all \(n\)._ By considering the standard Frobenius inner product on the space of real symmetric or Hermitian matrices, we obtained equivalent "dual" results of the above theorems. 
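As an illustrative numerical sanity check (added here, not part of the original paper), both bounds are easy to verify with numpy on random zero-diagonal matrices and on the equality case \(J_{n}-I_{n}\):

```python
import numpy as np

def trace_norm(A):       # sum of the absolute values of the eigenvalues
    return np.abs(np.linalg.eigvalsh(A)).sum()

def entrywise_l1(A):
    return np.abs(A).sum()

n = 6
rng = np.random.default_rng(0)

# Theorem 1: real symmetric, zero diagonal -> ||A||_1 >= (2/n) ||A||_(1)
S = rng.standard_normal((n, n)); S = S + S.T; np.fill_diagonal(S, 0.0)
assert trace_norm(S) >= (2.0 / n) * entrywise_l1(S) - 1e-9

# Equality case: A = J_n - I_n gives the ratio exactly 2/n.
A = np.ones((n, n)) - np.eye(n)
print(trace_norm(A) / entrywise_l1(A), 2.0 / n)

# Theorem 2: Hermitian, zero diagonal -> ||A||_1 >= tan(pi/(2n)) ||A||_(1)
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = H + H.conj().T; np.fill_diagonal(H, 0.0)
assert trace_norm(H) >= np.tan(np.pi / (2 * n)) * entrywise_l1(H) - 1e-9
```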
The dual results are related to the minimum distance in the spectral norm of a matrix to the space of diagonal matrices. We denote the space of all real diagonal matrices of order \(n\) by \(\mathcal{D}_{n}\). Then, the first "dual" theorem in the case of real symmetric matrices can be expressed as follows: **Theorem 3**.: _Let \(A=[a_{ij}]\) be a real symmetric matrix of order \(n>1\). Then_ \[\min_{D\in\mathcal{D}_{n}}\|A-D\|_{\infty}\;\leq\;\frac{n}{2}\max_{i\neq j}|a_{ij}|,\] _and the bound is sharp for all \(n>1\)._ The equality case in the above theorem is obtained by the matrix \(A=J_{n}\), because \(\|J_{n}-\frac{n}{2}I_{n}\|_{\infty}=\frac{n}{2}\) and \(\frac{n}{2}I_{n}\) is the nearest diagonal matrix to \(J_{n}\) in the spectral norm. It is interesting to note that the worst case in this respect is the simple matrix \(J_{n}\). It is worth noting that there is no effective method for finding a diagonal matrix at spectral distance \(\frac{n}{2}\) from a given matrix with entries in \([-1,1]\), although we know of its existence from the above theorem. This is an interesting problem on its own. The final theorem in this study is the Hermitian version of Theorem 3. **Theorem 4**.: _Let \(A=[a_{ij}]\) be a Hermitian matrix of order \(n>1\). Then_ \[\min_{D\in\mathcal{D}_{n}}\|A-D\|_{\infty}\ \leq\ \cot(\frac{\pi}{2n})\max_{i\neq j}|a_{ij}|,\] _and the bound is sharp for all \(n>1\)._ The remainder of this paper is organized as follows: Section 2 is devoted to the basic definitions and known results that are used throughout. In Sections 3 and 4, we present the proofs and necessary lemmas for Theorems 1 and 2, respectively. Section 5 is devoted to the dual versions, Theorems 3 and 4, and Section 6 contains the proof of Lemma 5.

## 2 Notation and preliminaries

In the first part of this section, we review some of the basic concepts and results of the theory of matrix analysis for the reader's convenience and to fix notation. Detailed discussion and proofs can be found in any standard textbook on this subject, such as [1], [10] and [11]. The conjugate transpose of a general complex matrix \(A\) is denoted by \(A^{*}\). For two complex vectors \(\mathbf{u}\) and \(\mathbf{v}\) of the same dimension, the usual Euclidean inner product is denoted by \(\langle\mathbf{u},\mathbf{v}\rangle:=\mathbf{u}^{*}\mathbf{v}\) and the corresponding norm by \(|\mathbf{u}|:=\sqrt{\langle\mathbf{u},\mathbf{u}\rangle}\). The spaces of real and complex \(m\times n\) matrices are denoted by \(M_{m\times n}(\mathbb{R})\) and \(M_{m\times n}(\mathbb{C})\), respectively. The space of real symmetric matrices of order \(n\) is denoted by \(\mathcal{S}_{n}\) and the space of Hermitian matrices of order \(n\) is denoted by \(\mathcal{H}_{n}\). The space of real diagonal matrices of order \(n\) is also denoted by \(\mathcal{D}_{n}\). Let \(A\in\mathcal{H}_{n}\) have eigenvalues \(\lambda_{1},\ldots,\lambda_{n}\). The _trace norm_ (energy, nuclear norm,...) of \(A\) is defined by \(\|A\|_{1}=\sum_{i=1}^{n}|\lambda_{i}|\) and the _spectral norm_ of \(A\) is defined by \(\|A\|_{\infty}=\max_{i}|\lambda_{i}|\). 1 The two other norms, the _entry-wise \(L^{1}\)-norm_ and the _max norm_ of \(A\), are defined by the formulas \(\|A\|_{(1)}=\sum_{i,j}|A_{ij}|,\|A\|_{(\infty)}=\max_{i,j}|A_{ij}|\). We also denote the vector \((A_{11},A_{22},\ldots,A_{nn})\) of the diagonal entries of \(A\) by \(\mathrm{diag}(A)\). Footnote 1: These are two special cases of the Schatten \(p\)-norms defined by the \(L^{p}\)-norm of the singular values of a general matrix, which is the reason for choosing the present notation.
Now let \[A=\lambda_{1}\mathbf{x}_{1}\mathbf{x}_{1}^{*}+\cdots+\lambda_{n}\mathbf{x}_{n}\mathbf{x}_{n}^{*},\] be the spectral decomposition of \(A\) for an orthonormal basis \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) of the eigenvectors and corresponding eigenvalues \(\lambda_{1},\ldots,\lambda_{n}\). Suppose for some \(k\), \[\lambda_{1},\ldots,\lambda_{k}\geq 0,\quad\lambda_{k+1},\ldots,\lambda_{n}<0.\] The _positive_ and _negative_ parts of \(A\) are defined by \[A^{+}:=\sum_{i\leq k}\lambda_{i}\mathbf{x}_{i}\mathbf{x}_{i}^{*},\quad A^{-}:=-\left(\sum_{i>k}\lambda_{i}\mathbf{x}_{i}\mathbf{x}_{i}^{*}\right).\] It can be easily seen that \(A^{+},A^{-}\) are well-defined positive semi-definite matrices, and \(A=A^{+}-A^{-}\). We also use the Frobenius inner product on the spaces \(M_{m\times n}(\mathbb{C})\). For two matrices \(A,B\in M_{m\times n}(\mathbb{C})\), their (Frobenius) _inner product_ is defined by: \[\langle A,B\rangle:=\operatorname{tr}(A^{*}B)=\sum_{i,j}\overline{A_{ij}}B_{ij}.\] It can be verified that this yields a real inner product on \(\mathcal{H}_{n}\). We require two basic properties of the Frobenius inner product: * For every unitary matrix \(U\) and square matrices \(A\) and \(B\), all of the same order, \[\langle A,B\rangle=\langle U^{*}AU,U^{*}BU\rangle.\] * The inner product of two positive semi-definite matrices of the same order is nonnegative. For two matrices \(A,B\in M_{m\times n}(\mathbb{C})\), the _Hadamard product_ \(A\circ B\) is a matrix of the same dimension \(m\times n\), with elements given by \[(A\circ B)_{ij}=A_{ij}B_{ij}.\] The following lemma is known as the "Schur product theorem" ([1], Theorem 7.5.3): **Lemma 1**.: _The Hadamard product of two positive semi-definite matrices is positive semi-definite._ In the second part of this section, we review the "duality transformation" on convex sets. For more information on this subject, the reader can consult [16] (for the duality transformation in general) and [14] (for dual norms). Let \((V,\langle\cdot,\cdot\rangle)\) be a real inner product space of finite dimension. For every subset \(X\) of \(V\), we define the _dual_ (or polar) of \(X\) by, \[X^{*}=\{y\in V:\langle y,x\rangle\leq 1,\forall x\in X\}.\] We denote the convex hull and closure of the set \(X\) by \(\operatorname{conv}(X)\) and \(\operatorname{cl}(X)\), respectively. In the next lemma, we list some of the basic properties of the duality transform required in this study. **Lemma 2**.: _Let \(V\) be a finite dimensional real inner product space, and \(X,Y\) be subsets of \(V\)._ 1. \(X^{*}\) _is a closed convex set and contains_ \(0\)_._ 2. _If_ \(X\subset Y\)_, then_ \(Y^{*}\subset X^{*}\)_._ 3. \(X^{*}=\operatorname{conv}(X)^{*}=\operatorname{cl}(X)^{*}\)_._ 4. \((X\cup Y)^{*}=X^{*}\cap Y^{*}\)_._ 5. _Let_ \(W\) _be a linear subspace of_ \(V\)_. Then_ \(W^{*}=W^{\perp}\) _for the orthogonal complement_ \(W^{\perp}\) _of_ \(W\)_._ 6. _If_ \(X\) _is closed convex and contains_ \(0\)_, then_ \((X^{*})^{*}=X\)_._ 7. _If_ \(X\) _and_ \(Y\) _are closed convex and contain_ \(0\)_, and_ \(Z=\operatorname{cl}(\operatorname{conv}(X^{*}\cup Y^{*}))\)_, then_ \[(X\cap Y)^{*}=Z.\] Proof.: The statements 1-5 are immediate consequences of the definition, and 6 is standard (cf. [11], Section 5.1).
For 7, first by assumption we have \(X=(X^{*})^{*}\) and \(Y=(Y^{*})^{*}\) and by double use of 3, we have, \[Z^{*}=(X^{*}\cup Y^{*})^{*}=(X^{*})^{*}\cap(Y^{*})^{*}=X\cap Y.\] Now \(Z\) is closed convex and contains \(0\) and so is equal to its second dual, which gives, \[(X\cap Y)^{*}=(Z^{*})^{*}=Z.\] For every norm \(\|\cdot\|\) on \(V\), its _dual norm_\(\|\cdot\|^{\prime}\) is defined by \[\|v\|^{\prime}=\max\{\langle v,w\rangle,\|w\|\leq 1\}.\] Equivalently, the unit balls of \(\|\cdot\|\) and \(\|\cdot\|^{\prime}\) are dual. In the spaces \(\mathcal{H}_{n}\) and \(\mathcal{S}_{n}\) with the Frobenius inner product (which is the only inner product we consider in this study), the dual norms of \(\|\cdot\|_{1}\) and \(\|\cdot\|_{(1)}\) are \(\|\cdot\|_{\infty}\) and \(\|\cdot\|_{(\infty)}\) respectively (cf. [10], Section 4.3). ## 3 Real symmetric matrices In this section, we prove Theorem 1 by using two lemmas. **Lemma 3**.: _Let \(d>0\) and \(a,b\in\mathbb{R}\) with \(|a|,|b|\leq d\). Then_ \[|a-b|\leq\left(d-\frac{ab}{d}\right).\] Proof.: We have \[\left(d-\frac{ab}{d}\right)-(a-b)=\frac{1}{d}(d+b)(d-a)\geq 0,\] Thus, \((d-\frac{ab}{d})\geq(a-b)\) and similarly \((d-\frac{ab}{d})\geq(b-a)\) **Proposition 1**.: _Let \(A,B\) be two real positive semi-definite matrices of order \(n\), with_ \[\operatorname{diag}(A)=\operatorname{diag}(B)=(d_{1},\ldots,d_{n}).\] _Then_ \[\sum_{i,j}|A_{ij}-B_{ij}|\leq\left(\sum_{i=1}^{n}\sqrt{d_{i}}\right)^{2}.\] Proof.: \(A\) and \(B\) are positive semi-definite, so all of their \(2\times 2\) principal minors are nonnegative, i.e. for all \(i,j\) we have \[d_{i}d_{j}-(A_{ij})^{2}=A_{ii}A_{jj}-(A_{ij})^{2}\geq 0\Longrightarrow\ |A_{ij}|\leq\sqrt{d_{i}d_{j}}.\] Similarly \(|B_{ij}|\leq\sqrt{d_{i}d_{j}}\). Let \(C=A\circ B\) and define the vector \(\mathbf{e}=(e_{1},\ldots,e_{n})\) by the equations \[e_{i}=\left\{\begin{array}{ll}\frac{1}{\sqrt{d_{i}}}&d_{i}>0\\ 0&d_{i}=0\end{array}\right..\] For every \(i\neq j\), if \(d_{i}d_{j}=0\), \(A_{ij}=B_{ij}=C_{ij}=0\), and if \(d_{i}d_{j}>0\), \(|A_{ij}|,|B_{ij}|\leq\sqrt{d_{i}d_{j}}\). So by Lemma 3, for all \(i,j\), \[|A_{ij}-B_{ij}|\leq\left(\sqrt{d_{i}d_{j}}-e_{i}e_{j}C_{ij}\right). \tag{1}\] Summing up the above inequalities, we obtain \[\sum_{i,j}|A_{ij}-B_{ij}| \leq\left(\sum_{i}\sqrt{d_{i}}\right)^{2}-\sum_{i,j}e_{i}e_{j}C_{ij}\] \[=\left(\sum_{i}\sqrt{d_{i}}\right)^{2}-\mathbf{e}^{T}C\mathbf{e}. \tag{2}\] By Lemma 1, \(C\) is also positive semi-definite, hence \[\mathbf{e}^{T}C\mathbf{e}\geq 0. \tag{3}\] This completes the proof. Proof of Theorem 1.: We have \(A=A^{+}-A^{-}\) and \(\operatorname{diag}(A)=0\), so for a vector \((d_{1},\ldots,d_{n})\in\mathbb{R}^{n}\), \[\operatorname{diag}(A^{+})=\operatorname{diag}(A^{-})=(d_{1},\ldots,d_{n}).\] Let \(\lambda_{1},\ldots,\lambda_{k}\geq 0\) be all non-negative eigenvalues and \(\lambda_{k+1},\ldots,\lambda_{n}\) all negative eigenvalues of \(A\). Therefore \[\|A\|_{1}=\sum_{i\leq k}\lambda_{i}-\sum_{i>k}\lambda_{i}=\operatorname{tr}(A ^{+})+\operatorname{tr}(A^{-})=2\sum_{i}d_{i}. \tag{4}\] Now by Proposition 1, \[\sum_{i,j}|A_{ij}|=\sum_{i,j}|A_{ij}^{+}-A_{ij}^{-}|\leq\left(\sum_{i}\sqrt{d_{i} }\right)^{2}\leq n\sum_{i}d_{i}=\frac{n}{2}\|A\|_{1}. \tag{5}\] For an equality case, consider the matrix \(A=J_{n}-I_{n}\) for the all-one matrix \(J_{n}\) and the identity matrix \(I_{n}\), both of order \(n>1\). The eigenvalues of \(A\) are \(n-1\) and \(-1\) with multiplicity \(n-1\). Hence \(\frac{\|A\|_{1}}{\|A\|_{(1)}}=\frac{2(n-1)}{n(n-1)}=\frac{2}{n}\). 
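The ingredients of this proof can be checked numerically; the sketch below (an added illustration, not from the paper) forms \(A^{+}\) and \(A^{-}\) from the eigendecomposition and verifies relations (4) and (5) for a random zero-diagonal symmetric matrix.

```python
import numpy as np

def pos_neg_parts(A):
    """Positive/negative parts with A = A+ - A-, built from the spectral decomposition."""
    w, V = np.linalg.eigh(A)
    Ap = (V * np.clip(w, 0, None)) @ V.T.conj()
    Am = (V * np.clip(-w, 0, None)) @ V.T.conj()
    return Ap, Am

n = 5
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)); A = A + A.T; np.fill_diagonal(A, 0.0)

Ap, Am = pos_neg_parts(A)
d = np.diag(Ap)                      # diag(A+) = diag(A-) since diag(A) = 0
assert np.allclose(A, Ap - Am) and np.allclose(d, np.diag(Am))

trace_norm = np.abs(np.linalg.eigvalsh(A)).sum()
assert np.isclose(trace_norm, 2 * d.sum())        # relation (4)
assert np.abs(A).sum() <= n * d.sum() + 1e-9      # inequality (5)
```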
**Remark 1**.: For more general example of matrices \(A\), which give equality in Theorem 1, consider the matrix, \[A=\mathbb{1}\mathbb{1}^{*}-B,\] for \(\mathbb{1}=(1,1,\ldots,1)\in\mathbb{R}^{n}\) and arbitrary positive semi-definite matrix \(B\in\mathcal{S}_{n}\), with condition \[B\mathbb{1}=0,\quad\mathrm{diag}(B)=(1,1,\ldots,1).\] Then \(\langle\mathbb{1}\mathbb{1}^{*},B\rangle=0\), and \(A^{+}=\mathbb{1}\mathbb{1}^{*},A^{-}=B\). So we have, \(\mathrm{diag}(A)=0,\|A\|_{1}=2\mathrm{tr}(A^{+})=2n\) and because \(|B_{ij}|\leq 1\) for all \(i,j\) (by \(B\geq 0\)), \[\|A\|_{(1)}=\sum_{i,j}|1-B_{ij}|=n^{2}-\sum_{i,j}B_{ij}=n^{2}.\] Therefore \(\frac{\|A\|_{1}}{\|A\|_{(1)}}=\frac{2}{n}\). Also it can be easily seen that, for every \(d\neq 0\) and every diagonal matrix \(D\) with diagonal entries in \(\{\pm 1\}\), the matrix \(dDAD\) is also an example of equality in Theorem 1. These matrices constitute the set of all zero diagonal matrices in \(\mathcal{S}_{n}\) which have only one positive or negative eigenvalue with the corresponding eigenvector in the set \(\{\pm 1\}^{n}\). It seems that, these are all of the nonzero equality cases, although we don't have a proof. ## 4 Hermitian matrices In the previous section, we observed that the minimum of \(\frac{\|A\|_{1}}{\|A\|_{(1)}}\) over all nonzero real symmetric matrices \(A\) of order \(n\) with zero entries on the diagonal, is equal to \(\frac{2}{n}\). In this section, we present a proof of Theorem 2, which provides an answer to the similar problem for Hermitian matrices. It turns out that for the constant \(\gamma_{n}\) defined below, the answer is equal to \(\frac{1}{\gamma_{n}}\cdot\frac{2}{n}\). We define: \[\gamma_{n}:=\frac{2}{n}\sum_{k=0}^{n-1}\sin(\frac{k\pi}{n}). \tag{6}\] For \(n\geq 2\), there is a simpler formula for \(\gamma_{n}\): \[\gamma_{n} =\frac{2}{n}\operatorname{Im}\left(\sum_{k=0}^{n-1}\exp(\mathrm{i} \frac{k\pi}{n})\right)=\frac{2}{n}\operatorname{Im}\left(\frac{1-\exp( \mathrm{i}\pi)}{1-\exp(\frac{\mathrm{i}\pi}{n})}\right)\] \[=\frac{4}{n}\cdot\frac{\sin(\frac{\pi}{n})}{2-2\cos(\frac{\pi}{n })}=\frac{2}{n}\cdot\frac{2\sin(\frac{\pi}{2n})\cos(\frac{\pi}{2n})}{2\sin^{2} (\frac{\pi}{2n})}\] \[=\frac{2}{n\tan(\frac{\pi}{2n})}. \tag{7}\] and because \(\frac{\tan(x)}{x}\) is increasing in the interval \([0,\frac{\pi}{2})\) and \(\lim_{x\to 0}\frac{\tan(x)}{x}=1\), \[\gamma_{1}=0\leq\gamma_{2}=1\leq\gamma_{3}\leq\dots,\quad\lim_{n\to\infty} \gamma_{n}=\frac{4}{\pi}. \tag{8}\] **Lemma 4**.: _Let \(d>0\) and \(a,b\in\mathbb{C}\) with \(|a|,|b|\leq d\). Then_ \[|a-b|\leq\left|d-\frac{\overline{a}b}{d}\right|.\] Proof.: We have \[\left|d-\frac{\overline{a}b}{d}\right|^{2}-|a-b|^{2} =\left(d^{2}+\frac{|a|^{2}|b|^{2}}{d^{2}}-\overline{a}b-a\overline {b}\right)-\left(|a|^{2}+|b|^{2}-\overline{a}b-a\overline{b}\right)\] \[=\frac{1}{d^{2}}(d^{2}-|a|^{2})(d^{2}-|b|^{2})\geq 0.\] Hence, \(|a-b|^{2}\leq\left|d-\frac{\overline{a}b}{d}\right|^{2}\) as required. Hereafter, we use notation \(\mathbb{S}^{1}=\{z\in\mathbb{C}:|z|=1\}\). **Lemma 5**.: _Let \(d_{1},\dots,d_{n}\geq 0\) and \(\omega_{1},\dots,\omega_{n}\in\mathbb{S}^{1}\). Then_ \[\sum_{i,j}d_{i}d_{j}|\omega_{i}-\omega_{j}|\,\leq\,\gamma_{n}\left(\sum_{i}d_ {i}\right)^{2},\] _and equality holds if \(\{\omega_{1},\dots,\omega_{n}\}\) is the set of \(n\)-th roots of unity and \(d_{1}=d_{2}=\dots=d_{n}\)._ Because the proof is rather long, we postpone it until Section 6. 
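Although the proof of Lemma 5 is deferred to Section 6, both the closed form (7) for \(\gamma_{n}\) and the stated equality case are easy to confirm numerically; the following snippet is a small illustrative check added here.

```python
import numpy as np

def gamma(n: int) -> float:                      # definition (6)
    return 2.0 / n * np.sin(np.arange(n) * np.pi / n).sum()

for n in range(2, 10):
    # closed form (7): gamma_n = 2 / (n * tan(pi / (2n)))
    assert np.isclose(gamma(n), 2.0 / (n * np.tan(np.pi / (2 * n))))

    # equality case of Lemma 5: d_i = 1/n and omega_i the n-th roots of unity
    omega = np.exp(2j * np.pi * np.arange(n) / n)
    d = np.full(n, 1.0 / n)
    F = np.sum(np.outer(d, d) * np.abs(omega[:, None] - omega[None, :]))
    assert np.isclose(F, gamma(n))
```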
**Corollary 1**.: _For every vector \(\mathbf{x}=(x_{1},\dots,x_{n})\in\mathbb{C}^{n}\),_ \[\sum_{i,j}||x_{i}x_{j}|-x_{i}\overline{x_{j}}|\leq\gamma_{n}\left(\sum_{i}|x_ {i}|\right)^{2}.\] Proof.: Apply the previous lemma to \(d_{i}=|x_{i}|\) and \(x_{i}=d_{i}\omega_{i}\) for \(\omega_{i}\in\mathbb{S}^{1}\). Then we have \[\sum_{i,j}||x_{i}x_{j}|-x_{i}\overline{x_{j}}| =\sum_{i,j}|d_{i}d_{j}-d_{i}d_{j}\omega_{i}\overline{\omega_{j}}|\] \[=\sum_{i,j}d_{i}d_{j}\left|\omega_{i}-\omega_{j}\right|\] \[\leq\gamma_{n}\left(\sum_{i}d_{i}\right)^{2}=\gamma_{n}\left( \sum_{i}|x_{i}|\right)^{2}.\] **Proposition 2**.: _Let \(A,B\) be two Hermitian semi-definite matrices of order \(n\), with_ \[\mathrm{diag}(A)=\mathrm{diag}(B)=(d_{1},\ldots,d_{n}).\] _Then_ \[\sum_{i,j}|A_{ij}-B_{ij}|\leq\gamma_{n}\left(\sum_{i=1}^{n}\sqrt{d_{i}}\right) ^{2}.\] Proof.: Define the vector \(\mathbf{e}\in\mathbb{C}^{n}\) and the Hermitian matrix \(C\) by the equations, \[e_{i}=\left\{\begin{array}{ll}\frac{1}{\sqrt{d_{i}}}&d_{i}>0\\ 0&d_{i}=0\end{array}\right.,\quad C_{ij}=e_{i}e_{j}\overline{A_{ij}}B_{ij}.\] Then \(\mathrm{diag}(C)=(d_{1},\ldots,d_{n})\) and by semi-positivity of \(A\) and \(B\), for all \(i,j\), \[|A_{ij}|,|B_{ij}|\leq\sqrt{d_{i}d_{j}},\] and if \(d_{i}d_{j}=0\), \(A_{ij}=B_{ij}=C_{ij}=0\). Hence by Lemma 4, \[\sum_{i,j}|A_{ij}-B_{ij}|\leq\sum_{i,j}|\sqrt{d_{i}d_{j}}-C_{ij}|. \tag{9}\] Let \(X\) be a diagonal matrix with \(e_{1},\ldots,e_{n}\) along its diagonal. Then \[C=X^{*}\left(\overline{A}\circ B\right)X,\] and by Lemma 1, \(\overline{A}\circ B\) and also \(C\) are positive semi-definite. Hence for some vectors \(\mathbf{x}_{1},\ldots,\mathbf{x}_{k}\in\mathbb{C}^{n}\), we can write, \[C=\mathbf{x}_{1}\mathbf{x}_{1}^{*}+\cdots+\mathbf{x}_{k}\mathbf{x}_{k}^{*}.\] For every \(i\leq k\), define \(\mathbf{y}_{i}\in\mathbb{R}_{\geq 0}^{n}\) by, \[(\mathbf{y}_{i})_{j}=|(\mathbf{x}_{i})_{j}|,\quad 1\leq j\leq n.\] Now if we define \[D=\mathbf{y}_{1}\mathbf{y}_{1}^{*}+\cdots+\mathbf{y}_{k}\mathbf{y}_{k}^{*},\] we have \(\operatorname{diag}(D)=\operatorname{diag}(C)=(d_{1},\ldots,d_{n})\) and \(D\) is also positive semi-definite. So for all \(i,j\), \(|D_{ij}|\leq\sqrt{d_{i}d_{j}}\), and by triangle inequality \[\sum_{i,j}|\sqrt{d_{i}d_{j}}-C_{ij}|\leq\sum_{i,j}\left(\sqrt{d_{i}d_{j}}-D_{ij} +|D_{ij}-C_{ij}|\right). \tag{10}\] By the definition of \(C\) and \(D\) and applying Corollary 1 to vectors \(\mathbf{x}_{1},\ldots,\mathbf{x}_{k}\), we also have \[\sum_{i,j}|D_{ij}-C_{ij}| \leq\sum_{s=1}^{k}\sum_{i,j}|(\mathbf{y}_{s})_{i}(\mathbf{y}_{s} )_{j}-(\mathbf{x}_{s})_{i}\overline{(\mathbf{x}_{s})_{j}}|\] \[=\sum_{s=1}^{k}\sum_{i,j}||(\mathbf{x}_{s})_{i}(\mathbf{x}_{s})_{ j}|-(\mathbf{x}_{s})_{i}\overline{(\mathbf{x}_{s})_{j}}|\] \[\leq\gamma_{n}\sum_{s=1}^{k}\left(\sum_{i}(\mathbf{y}_{s})_{i} \right)^{2}\] \[=\gamma_{n}\sum_{i,j}D_{ij}. \tag{11}\] Finally combining inequalities (9), (10) and (11), and the fact that \(\gamma_{n}\geq 1\) for \(n\geq 2\), we have \[\sum_{i,j}|A_{ij}-B_{ij}| \leq\sum_{i,j}(\sqrt{d_{i}d_{j}}-D_{ij})+\gamma_{n}\sum_{i,j}D_{ij}\] \[\leq\gamma_{n}\sum_{i,j}(\sqrt{d_{i}d_{j}}-D_{ij})+\gamma_{n}\sum _{i,j}D_{ij}\] \[=\gamma_{n}\left(\sum_{i}\sqrt{d_{i}}\right)^{2}.\] Proof of Theorem 2.: This proof is similar to that of Theorem 1. 
If \(\operatorname{diag}(A^{+})=(d_{1},\ldots,d_{n})\), we have \[\|A\|_{1}=\operatorname{tr}(A^{+})+\operatorname{tr}(A^{-})=2\sum_{i}d_{i}, \tag{12}\] and by applying Proposition 2 to \(A^{+}\) and \(A^{-}\), \[\sum_{i,j}|A_{ij}| =\sum_{i,j}|A^{+}_{ij}-A^{-}_{ij}|\,\leq\gamma_{n}\left(\sum_{i=1 }^{n}\sqrt{d_{i}}\right)^{2}\] \[\leq n\gamma_{n}\sum_{i}d_{i}\,=\frac{n\gamma_{n}}{2}\|A\|_{1}\,= \cot(\frac{\pi}{2n})\|A\|_{1}. \tag{13}\] For an equality case, let \(A\) be given by \(\mathbb{1}\,\mathbb{1}^{*}-\boldsymbol{\alpha}\boldsymbol{\alpha}^{*}\) for the same notation defined in (26). It can be easily checked that \(A\) is a Hermitian matrix with diagonal zero, and because \[\langle\mathbb{1},\boldsymbol{\alpha}\rangle=0,\quad|\mathbb{1}\,|^{2}=| \boldsymbol{\alpha}|^{2}=n,\] it has two nonzero eigenvalues \(n,-n\), and \(\|A\|_{1}=2n\). On the other hand, the equality statement of Lemma 5, gives, \[\sum_{i,j}|A_{ij}|=\sum_{i,j}|1-\alpha_{i}\overline{\alpha_{j}}|=\sum_{i,j}| \alpha_{i}-\alpha_{j}|=n^{2}\gamma_{n}.\] Hence, \(\frac{\|A\|_{1}}{\|A\|_{(1)}}=\frac{2n}{n^{2}\gamma_{n}}=\tan(\frac{\pi}{2n})\), which yields the desired equality for (13). **Remark 2**.: From the relations in (8), the sequence \(\gamma_{n}=\frac{2}{n\tan(\frac{\pi}{2n})}\) converges from below to \(\frac{4}{\pi}\). So by Theorem 2, for every nonzero Hermitian matrix \(A\) of order \(n\) with zero diagonal, we have \(\|A\|_{1}\geq\frac{2}{n\gamma_{n}}\|A\|_{(1)}\) and so \[\|A\|_{1}>\ \frac{\pi}{2}\cdot\frac{1}{n}\|A\|_{(1)}, \tag{14}\] and \(\frac{\pi}{2}\) is the best constant (independent of \(n\)) for which the above inequality holds for all \(n\). ## 5 The dual version In this section, we prove Theorems 3 and 4. The main idea is to view the inequality statements in Theorems 1 and 2 as statements regarding the inclusion of certain convex sets. Then, the duality transformation is applied to these sets to obtain reverse inclusion relations between the resulting dual sets, and these inclusions are finally interpreted as a matrix norm inequality. For every subscript \(*\), we denote the closed unit ball of the norm \(\|\cdot\|_{*}\) by \(\mathcal{B}_{*}\). Here, \(\|\cdot\|_{*}\) can be any of the norms \(\|\cdot\|_{1},\|\cdot\|_{\infty},\|\cdot\|_{(1)},\|\cdot\|_{(\infty)}\) on the underlying spaces \(\mathcal{S}_{n}\) or \(\mathcal{H}_{n}\), and the underlying space is known from the context. All these balls are bounded closed sets and are consequently compact. **Lemma 6**.: _Let \(V\) be a finite dimensional real inner product space, and \(X\) be a compact convex subset of \(V\) that contains \(0\). Then for every linear subspace \(L\) of \(V\),_ \[\mathrm{cl}(\mathrm{conv}(X\cup L))=X+L.\] Proof.: Because of the compactness of \(X\), it can be easily seen that \(X+L\) is closed. It is also convex and, therefore, contains \(\mathrm{cl}(\mathrm{conv}(X\cup L))\). Next, we demonstrate that every point in \(X+L\) is a limit point of \(\mathrm{conv}(X\cup L)\), which completes the proof. Let \(x+v\) (for \(x\in X\) and \(v\in L\)) be an arbitrary point in \(X+L\). Then we have \[x+v=\lim_{t\to 1-}tx+(1-t)\left(\frac{v}{1-t}\right).\] Clearly, \(tx+(1-t)\left(\frac{v}{1-t}\right)\in\mathrm{conv}(X\cup L)\) for \(0\leq t<1\); thus, \(x+v\) is a limit point of \(\mathrm{conv}(X\cup L)\). Proof of Theorem 3.: First, note that the orthogonal complement of \(\mathcal{D}_{n}\) is equal to the set of matrices with diagonal zero. 
Now let \(A\) be an arbitrary real symmetric matrix in \(\mathcal{B}_{1}\cap\mathcal{D}_{n}^{\perp}\). Then, \(\|A\|_{1}\leq 1\) and \(A\) has zero entries on the diagonal. So by Theorem 1, \(\|A\|_{(1)}\leq\frac{n}{2}\), or equivalently, \(A\in\frac{n}{2}\mathcal{B}_{(1)}\). Therefore the inequality statement of Theorem 1 implies the inclusion, \[\mathcal{B}_{1}\cap\mathcal{D}_{n}^{\perp}\subset\frac{n}{2}\mathcal{B}_{(1)}. \tag{15}\] By taking dual of both sides, we obtain \[\frac{2}{n}\mathcal{B}_{(1)}^{*}\subset\left(\mathcal{B}_{1}\cap\mathcal{D}_{ n}^{\perp}\right)^{*}. \tag{16}\] Now note that the norms \(\|\cdot\|_{1}\) and \(\|\cdot\|_{\infty}\), are dual to each other, and so by definition, \[\mathcal{B}_{1}^{*}=\mathcal{B}_{\infty}.\] Similarly for the dual norms \(\|\cdot\|_{(1)}\) and \(\|\cdot\|_{(\infty)}\), we have \[\mathcal{B}_{(1)}^{*}=\mathcal{B}_{(\infty)}.\] The ball \(\mathcal{B}_{1}\) is closed convex and contains zero, so by Lemma 2, \[\left(\mathcal{B}_{1}\cap\mathcal{D}_{n}^{\perp}\right)^{*}=\mathrm{cl}( \mathrm{conv}(\mathcal{B}_{1}^{*}\cup(\mathcal{D}_{n}^{\perp})^{*}))=\mathrm{ cl}(\mathrm{conv}(\mathcal{B}_{\infty}\cup\mathcal{D}_{n})).\] Using the previous lemma for \(X=\mathcal{B}_{\infty}\) and \(L=\mathcal{D}_{n}\), we have \[\mathrm{cl}(\mathrm{conv}(\mathcal{B}_{\infty}\cup\mathcal{D}_{n}))=\mathcal{ B}_{\infty}+\mathcal{D}_{n}.\] Substituting the above relations in the inclusion (16),and multiplication of the both sides by \(\frac{n}{2}\) gives, \[\mathcal{B}_{(\infty)}\;\subset\;\frac{n}{2}\mathcal{B}_{\infty}+\mathcal{D}_ {n}. \tag{17}\] This inclusion can also be easily converted into an equivalent matrix-norm inequality. Let \(A=[a_{ij}]\in\mathcal{S}_{n}-\mathcal{D}_{n}\), and \(M=\max_{i\neq j}|a_{ij}|\). Define the matrix \(B\) as below, \[B_{ij}=\left\{\begin{array}{ll}0&i=j\\ \frac{a_{ij}}{M}&i\neq j.\end{array}\right.\] Then for every \(i\neq j\), \(|B_{ij}|\leq 1\), and so \(B\in\mathcal{B}_{(\infty)}\). In addition, \(A=MB+D_{1}\) for a diagonal matrix \(D_{1}\). Now, from (17), \(B=C+D_{2}\) for some \(C\in\mathcal{S}_{n}\) with \(\|C\|_{\infty}\leq\frac{n}{2}\) and a diagonal matrix \(D_{2}\). Therefore, \[\|A-(D_{1}+MD_{2})\|_{\infty}=\|MC\|_{\infty}\leq\frac{n}{2}\max_{i\neq j}|a_{ij}|,\] and hence, \[\min_{D\in\mathcal{D}_{n}}\|A-D\|_{\infty}\leq\frac{n}{2}\max_{i\neq j}|a_{ij}|. \tag{18}\] In the case of diagonal matrices \(A\), the inequality above is trivial; therefore, it is valid for every \(A\in\mathcal{S}_{n}\). For the sharpness of the inequality, note that all of the above arguments are reversible for any constant in place of \(\frac{n}{2}\). Therefore, if inequality (18) remains valid for some other constant smaller than \(\frac{n}{2}\), this is also the case for Theorem 1, which we know is impossible. 
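As an added illustration (not part of the original paper), the sketch below checks the equality case \(A=J_{n}\) and probes the bound of Theorem 3 for a random symmetric matrix with entries in \([-1,1]\) by a crude simplex search over diagonal matrices; since, as noted in the introduction, there is no effective construction of the nearest diagonal matrix, the search only yields an upper bound on the true minimum.

```python
import numpy as np
from scipy.optimize import minimize

def spectral_norm(M):
    return np.abs(np.linalg.eigvalsh(M)).max()

n = 6
# Equality case of Theorem 3: for A = J_n the nearest diagonal matrix is (n/2) I_n.
J = np.ones((n, n))
print(spectral_norm(J - (n / 2) * np.eye(n)))          # prints n/2 = 3.0

# Crude probe of the bound for a random symmetric A with |a_ij| <= 1.
rng = np.random.default_rng(2)
A = rng.uniform(-1, 1, (n, n)); A = (A + A.T) / 2
bound = (n / 2) * np.abs(A - np.diag(np.diag(A))).max()

# ||A - diag(d)||_inf is convex in d; the simplex search returns an upper bound
# on the true minimum, which Theorem 3 guarantees is at most `bound`.
res = minimize(lambda d: spectral_norm(A - np.diag(d)), x0=np.diag(A),
               method="Nelder-Mead")
print(res.fun, "vs. Theorem 3 bound", bound)
```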
**Remark 3**.: By the equality \[\left(\mathcal{B}_{1}\cap\mathcal{D}_{n}^{\perp}\right)^{*}=\mathcal{B}_{ \infty}+\mathcal{D}_{n},\] in above proof, for every matrix \(A=[a_{ij}]\in\mathcal{S}_{n}\), \[\min_{D\in\mathcal{D}_{n}}\|A-D\|_{\infty} =\min\{r\geq 0:A\in r\mathcal{B}_{\infty}+\mathcal{D}_{n}\}\] \[=\min\{r\geq 0:A\in r\left(\mathcal{B}_{1}\cap\mathcal{D}_{n}^{ \perp}\right)^{*}\}\] \[=\max_{B\in\mathcal{B}_{1}\cap\mathcal{D}_{n}^{\perp}}\langle A, B\rangle.\] Therefore if \(A^{\prime}\) be the matrix obtained from \(A\) by just changing the diagonal entries to zero, we have \[\min_{D\in\mathcal{D}_{n}}\|A-D\|_{\infty} =\max_{B\in\mathcal{B}_{1}\cap\mathcal{D}_{n}^{\perp}}\langle A,B\rangle\] \[\leq\max_{B\in\mathcal{B}_{1}\cap\mathcal{D}_{n}^{\perp}}\|A^{ \prime}\|_{(\infty)}\|B\|_{(1)} \tag{19}\] \[\leq\frac{n}{2}\|A^{\prime}\|_{(\infty)}=\frac{n}{2}\max_{i\neq j }|a_{ij}|. \tag{20}\] (The second inequality is by Theorem 1, which gives \(\|B\|_{(1)}\leq\frac{n}{2}\) for every \(B\in\mathcal{B}_{1}\cap\mathcal{D}_{n}^{\perp}\).) Now, if \(A=[a_{ij}]\), be an equality case in Theorem 3; that is, \[\min_{D\in\mathcal{D}_{n}}\|A-D\|_{\infty}=\frac{n}{2}\max_{i\neq j}|a_{ij}|, \tag{21}\] we must have equality in (19) and (20). So there must be some \(B\in\mathcal{D}_{n}^{\perp}\), such that \[\|B\|_{(1)}=\frac{n}{2}\|B\|_{1},\] and \[\langle A,B\rangle=\|A^{\prime}\|_{(\infty)}\|B\|_{(1)},\] which means that if \(M=\max_{i,j}|A^{\prime}_{ij}|\), for every \(i,j\) with \(B_{ij}\neq 0\), \(A^{\prime}_{ij}=\pm M\mathrm{sgn}(B_{ij})\). Therefore, all solutions to (21) can be constructed as follows. Let \(B=[b_{ij}]\in\mathcal{S}_{n}\) be a nonzero matrix with diagonal zero and \(\|B\|_{(1)}=\frac{n}{2}\|B\|_{1}\). For an arbitrary \(M\geq 0\), let \(A=[a_{ij}]\in\mathcal{S}_{n}\) be the matrix for which, \[\forall i\neq j,\;|a_{ij}|\leq M,\quad\text{and if }b_{ij}\neq 0\text{ then }a_{ ij}=M\,\mathrm{sgn}(b_{ij}).\] The above construction can be applied to every introduced equality case of Theorem 1 in Remark 1. For example if \(n\) be even, we can define \[B=\mathbb{1}\,\mathbb{1}^{*}-\mathbf{v}\mathbf{v}^{*},\quad\mathbf{v}_{i}= \left\{\begin{array}{cc}1&i\leq\frac{n}{2}\\ -1&i>\frac{n}{2}\end{array}\right..\] Then for \(B\), we have \(\|B\|_{(1)}=\frac{n}{2}\|B\|_{1}\), and \[B=\begin{bmatrix}0&2I_{\frac{n}{2}}\\ 2I_{\frac{n}{2}}&0\end{bmatrix}.\] So for every matrix \(A\) of the form \[A=\begin{bmatrix}*&I_{\frac{n}{2}}\\ I_{\frac{n}{2}}&*\end{bmatrix},\] the equality (21) holds; that is, \(\min_{D\in\mathcal{D}_{n}}\|A-D\|_{\infty}=\frac{n}{2}\). Proof of Theorem 4.: The proof is completely similar to that of Theorem 3, and we omit the details. One shows first that Theorem 2, implies the inclusion \[\mathcal{B}_{1}\cap\mathcal{D}_{n}^{\perp}\subset\cot(\frac{\pi}{2n})\mathcal{ B}_{(1)}, \tag{22}\] in the space \(\mathcal{H}_{n}\). Applying the duality transformation, gives an equivalent inclusion, \[\mathcal{B}_{(\infty)}\;\subset\;\cot(\frac{\pi}{2n})\mathcal{B}_{\infty}+ \mathcal{D}_{n}, \tag{23}\] which can be rewritten as the following inequality for every \(A=[a_{ij}]\in\mathcal{H}_{n}\), \[\min_{D\in\mathcal{D}_{n}}\|A-D\|_{\infty}\leq\cot(\frac{\pi}{2n})\max_{i\neq j }|a_{ij}|. \tag{24}\] **Remark 4**.: To find the equality cases of Theorem 4, a completely similar analysis to Remark 3 can be made, and we only provide the final answer here. 
All of solutions of the equation, \[\min_{D\in\mathcal{D}_{n}}\|A-D\|_{\infty}=\cot(\frac{\pi}{2n})\max_{i\neq j}| a_{ij}|, \tag{25}\] for \(A\in\mathcal{H}_{n}\) can be constructed as follows. Let \(B=[b_{ij}]\in\mathcal{H}_{n}\) be a nonzero matrix with diagonal zero, and \(\|B\|_{1}=\tan(\frac{\pi}{2n})\|B\|_{(1)}\). For an arbitrary \(M\geq 0\), let \(A=[a_{ij}]\in\mathcal{H}_{n}\) be the matrix for which, \[\forall i\neq j,\;|a_{ij}|\leq M,\;\text{and if}\;b_{ij}\neq 0\;\text{then}\;a_{ ij}=M\,\frac{b_{ij}}{|b_{ij}|}.\] For example, if we consider \(B=\mathbb{1}\,\mathbb{1}^{*}-\boldsymbol{\alpha}\boldsymbol{\alpha}^{*}\) with the same notation in the proof of Theorem 4, then we can obtain the following solution \(A=[a_{ij}]\) for (25), \[\forall i,\;a_{ii}=0,\quad\forall i\neq j,\;a_{ij}=\frac{1-\zeta^{2(i-j)}}{|1- \zeta^{2(i-j)}|}=-\mathrm{i}\zeta^{i-j}\mathrm{sgn}(i-j),\] when \(\zeta=\exp(\frac{\mathrm{i}\pi}{n})\). Now if we define \(E\in\mathcal{H}_{n}\) by, \[E_{ij}=\left\{\begin{array}{ll}0&i=j\\ \mathrm{sgn}(i-j)\mathrm{i}&i\neq j\end{array}\right.,\] we have \(A=-UEU^{*}\) for the diagonal matrix \(U\) with diagonal entries \((\zeta,\zeta^{2},\ldots,\zeta^{n})\). Also \(U\) is a unitary matrix and so, \[\cot(\frac{\pi}{2n}) =\min_{D\in\mathcal{D}_{n}}\|A-D\|_{\infty}=\min_{D\in\mathcal{D}_ {n}}\|-UEU^{*}-D\|_{\infty}\] \[=\min_{D\in\mathcal{D}_{n}}\|E+U^{*}DU\|_{\infty}=\min_{D\in \mathcal{D}_{n}}\|E-D\|_{\infty}.\] Therefore, \(E\) is a simple solution to (25). By further computation, \(\|E\|_{\infty}=\cot(\frac{\pi}{2n})\); thus, the nearest diagonal matrix to \(E\) in the spectral norm is the zero matrix. ## 6 Proof of Lemma 5 Proof of Lemma 5.: Define \(\mathcal{X}\) to be the set of all of pairs \((\mathbf{d},\boldsymbol{\omega})\) with \[\mathbf{d}=(d_{1},\ldots,d_{n})\in\mathbb{R}_{\geq 0}^{n},\;\sum_{i}d_{i}=1, \quad\boldsymbol{\omega}=(\omega_{1},\ldots,\omega_{n})\in\mathbb{S}^{n},\] and define the function \(F:\mathcal{X}\to\mathbb{R}\) by \[F(\mathbf{d},\boldsymbol{\omega})=\sum_{i,j}d_{i}d_{j}|\omega_{i}-\omega_{j}|.\] Also define \[\mathbb{1}=(1,\ldots,1),\quad\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n }),\;\alpha_{k}=\exp(\mathrm{i}\frac{2(k-1)\pi}{n}),1\leq k\leq n. \tag{26}\] Now, \(\mathcal{X}\) is a compact set and \(F\) has a maximum value \(M_{n}\) on \(X\). The statement of lemma is equivalent to showing that \(M_{n}=\gamma_{n}\) and \(F(\frac{1}{n}\mathbb{1},\boldsymbol{\alpha})=\gamma_{n}\). The proof is performed by induction on \(n\). The case \(n=1\) is trivial; therefore, we can assume \(n>1\) and \(M_{n-1}=\gamma_{n-1}\). We have, \[F(\frac{1}{n}\mathbb{1},\boldsymbol{\alpha}) =\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}|1-\overline{\alpha_{i}} \alpha_{j}|\] \[=\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}|1-\exp(\mathrm{i}\frac{2(j- i)\pi}{n})|\] \[=\frac{1}{n^{2}}\times n\sum_{k=0}^{n-1}|1-\exp(\mathrm{i}\frac{2 k\pi}{n})|\] \[=\frac{2}{n}\sum_{k=0}^{n-1}\sin(\frac{k\pi}{n})=\gamma_{n}.\] (By the identity \(|1-\exp(\mathrm{i}t)|=\sqrt{2-2\cos(t)}=\sqrt{4\sin^{2}(\frac{t}{2})}=2\sin( \frac{t}{2})\) for \(t\in[0,2\pi]\).) Hence, \(\gamma_{n}=F(\frac{1}{n}\mathbb{1},\boldsymbol{\alpha})\leq M_{n}\) and because \(M_{n-1}=\gamma_{n-1}<\gamma_{n}\), \(M_{n-1}<M_{n}\). Now let \[(\tilde{\mathbf{d}}=(\tilde{d}_{1},\ldots,\tilde{d}_{n}),\tilde{\boldsymbol{ \omega}}=(\tilde{\omega}_{1},\ldots,\tilde{\omega}_{n}))\in\mathcal{X},\] be a maximum point for \(F\). 
By a rotation and rearrangement of \(\tilde{\omega}_{i}\)'s and applying the same rearrangement to \(\tilde{d}_{i}\)'s, we can assume that \(\tilde{\omega}_{1}=1\) and for every \(1\leq k\leq n\), \[\tilde{\omega}_{k}=\exp(2\mathrm{i}\theta_{k}),\quad 0=\theta_{1}\leq\theta_{2} \leq\cdots\leq\theta_{n}<\pi.\] By a sequence of steps, we prove that \((\tilde{\mathbf{d}},\tilde{\boldsymbol{\omega}})=(\frac{1}{n}\mathbb{1}, \boldsymbol{\alpha})\), and so \(M_{n}=F(\tilde{\mathbf{d}},\tilde{\boldsymbol{\omega}})=\gamma_{n}\), and we are done. **Step 1**. \(\tilde{\omega}_{i}\)'s are pairwise distinct. Suppose, on the contrary, \(\tilde{\omega}_{k}=\tilde{\omega}_{k+1}\) for some \(k\). Define the vectors \(\mathbf{d}^{\prime}=(d_{1}^{\prime},\ldots,d_{n-1}^{\prime})\) and \(\boldsymbol{\omega}^{\prime}=(\omega_{1}^{\prime},\ldots,\omega_{n-1}^{\prime})\) by, \[d_{i}^{\prime}=\left\{\begin{array}{ll}\tilde{d}_{i}&i<k\\ \tilde{d}_{k}+\tilde{d}_{k+1}&i=k\\ \tilde{d}_{i+1}&i>k\end{array}\right.,\quad\omega_{i}^{\prime}=\left\{ \begin{array}{ll}\tilde{\omega}_{i}&i\leq k\\ \tilde{\omega}_{i+1}&i>k\end{array}\right..\] Then \[F(\tilde{\mathbf{d}},\tilde{\boldsymbol{\omega}}) =\sum_{i,j\neq k,k+1}\tilde{d}_{i}\tilde{d}_{j}|\tilde{\omega}_{i }-\tilde{\omega}_{j}|+2\sum_{j}\tilde{d}_{k}\tilde{d}_{j}|\tilde{\omega}_{k}- \tilde{\omega}_{j}|\] \[\quad+2\sum_{j}\tilde{d}_{k+1}\tilde{d}_{j}|\tilde{\omega}_{k+1} -\tilde{\omega}_{j}|\] \[=\sum_{i,j\neq k,k+1}\tilde{d}_{i}\tilde{d}_{j}|\tilde{\omega}_{i }-\tilde{\omega}_{j}|+2\sum_{j}(\tilde{d}_{k}+\tilde{d}_{k+1})\tilde{d}_{j}| \tilde{\omega}_{k}-\tilde{\omega}_{j}|\] \[=F(\mathbf{d}^{\prime},\boldsymbol{\omega}^{\prime})\,\leq M_{n-1} \,<M_{n},\] which is a contradiction. **Step 2**. All \(\tilde{d}_{i}\)'s are positive. If say \(d_{k}\)=0, it can be easily seen that \(F(\tilde{\bf d},\tilde{\mathbf{\omega}})\) is equal to the value of \(F\) on the pair \(({\bf d}^{\prime},{\mathbf{\omega}}^{\prime})\) obtained by deleting \(\tilde{d}_{k}\) and \(\tilde{\omega}_{k}\) from the coordinates of \(\tilde{\bf d}\) and \(\tilde{\mathbf{\omega}}\). Again, \(F({\bf d}^{\prime},{\mathbf{\omega}}^{\prime})\leq M_{n-1}<M_{n}\), which is not possible. **Step 3**. \(\tilde{d}_{1}=\tilde{d}_{2}=\cdots=\tilde{d}_{n}\). For vectors \({\mathbf{\omega}}=(\omega_{1},\ldots,\omega_{n})\), define the symmetric matrix \(A=A({\mathbf{\omega}})=[|\omega_{i}-\omega_{j}|]_{i,j}\). Then we have \[F({\bf d},{\mathbf{\omega}})={\bf d}^{T}A{\bf d}.\] Because \(\tilde{d}_{i}>0\) for every \(1\leq i\leq n\), \(\tilde{\bf d}\) is an interior point of the set \(\{{\bf d}\in\mathbb{R}_{\geq 0}^{n}:{\mathbbm{1}}^{T}{\bf d}=0\}\) and maximizes the function \(F(\cdot,\tilde{\mathbf{\omega}})\) in this set. So by Lagrange multiplier theorem, there is a real scalar \(\lambda\) such that \[\nabla_{\bf d}F(\tilde{\bf d},\tilde{\mathbf{\omega}})=\lambda{\mathbbm{1}}\,.\] On the other hand, \[\nabla_{\bf d}F({\bf d},{\mathbf{\omega}})=\nabla_{\bf d}({\bf d}^{T}A{\bf d})=2A{ \bf d}.\] Hence, \[A(\tilde{\mathbf{\omega}})\,\tilde{\bf d}=\frac{\lambda}{2}{\mathbbm{1}}\,. 
\tag{27}\] Now for every \(i\), \(\tilde{\omega}_{i}=\exp(2{\rm i}\theta_{i})\) and \(\tilde{\omega}_{i}\)'s are pairwise distinct, so \[0=\theta_{1}<\cdots<\theta_{n}<\pi.\] So in a sufficiently small neighborhood \({\cal B}\) of \({\mathbf{\theta}}:=(\theta_{1},\ldots,\theta_{n})\) in \(\mathbb{R}^{n}\), every \({\mathbf{\sigma}}=(\sigma_{1},\ldots,\sigma_{n})\in B\) satisfies, \[\sigma_{1}<\sigma_{2}<\cdots<\sigma_{n}<\pi,\quad\sigma_{n}-\sigma_{1}<\pi.\] Consider \({\mathbf{\omega}}=(\omega_{1},\ldots,\omega_{n})\in(\mathbb{S}^{1})^{n}\) as a function of \({\mathbf{\sigma}}=(\sigma_{1},\ldots,\sigma_{n})\in{\cal B}\) by the formula, \[{\mathbf{\omega}}({\mathbf{\sigma}})=(\omega_{1},\ldots,\omega_{n}),\quad\omega_{i}= \exp(2{\rm i}\sigma_{i}),\quad 1\leq i\leq n.\] Then for every \({\mathbf{\sigma}}\in{\cal B}\), and \({\mathbf{\omega}}={\mathbf{\omega}}({\mathbf{\sigma}})\), there is a simpler formula for \(A({\mathbf{\omega}})_{ij}\) in terms of \(\sigma_{k}\)'s, \[A({\mathbf{\omega}})_{ij}=|\omega_{i}-\omega_{j}| =|\exp(2{\rm i}\sigma_{i})-\exp(2{\rm i}\sigma_{j})|\] \[=\sqrt{2-2\cos(2(\sigma_{i}-\sigma_{j}))}\] \[=2\sin(|\sigma_{i}-\sigma_{j}|)=\left\{\begin{array}{ll}2\sin( \sigma_{i}-\sigma_{j})&i\geq j\\ 2\sin(\sigma_{j}-\sigma_{i})&i<j\end{array}\right.. \tag{28}\] Now, \(\tilde{\mathbf{\omega}}=\mathbf{\omega}(\mathbf{\theta})\); hence, \(\mathbf{\theta}\) is a maximum for the function \(F(\tilde{\mathbf{d}},\mathbf{\omega}(\cdot))\) on the set \(\mathcal{B}\). Thus derivation with respect to any of variables \(\sigma_{i}\) for \(1\leq i\leq n\), gives, \[0=\frac{\partial}{\partial\sigma_{i}}F(\tilde{\mathbf{d}},\tilde {\mathbf{\omega}}) =2\sum_{j=1}^{n}\tilde{d}_{i}\tilde{d}_{j}\frac{\partial}{\partial \sigma_{i}}A_{ij}(\tilde{\mathbf{\omega}})\] \[=4\tilde{d}_{i}\left(\sum_{j<i}\tilde{d}_{j}\cos(\theta_{i}- \theta_{j})-\sum_{j>i}\tilde{d}_{j}\cos(\theta_{i}-\theta_{j})\right),\] and because \(\tilde{d}_{i}>0\) for all \(i\), \[\sum_{j<i}\tilde{d}_{j}\cos(\theta_{i}-\theta_{j})-\sum_{j>i}\tilde{d}_{j}\cos (\theta_{i}-\theta_{j})=0,\quad 1\leq i\leq n. \tag{29}\] Now define the anti-symmetric matrix \(E\) as follows, \[E=[e_{ij}],\quad e_{ij}=\left\{\begin{array}{ll}0&i=j\\ -1&i<j\\ 1&i>j\end{array}\right..\] So we can rewrite the equations (27) and (29) as bellow, \[\sum_{j=1}^{n}e_{ij}\tilde{d}_{j}\sin(\theta_{i}-\theta_{j})= \frac{\lambda}{4},\quad 1\leq i\leq n, \tag{30}\] \[\sum_{j=i}^{n}e_{ij}\tilde{d}_{j}\cos(\theta_{i}-\theta_{j})=0, \quad 1\leq i\leq n. \tag{31}\] Let's define \[\phi_{i}=\exp(\mathrm{i}\theta_{i}),\quad 1\leq i\leq n.\] Then we have \[\phi_{i}\overline{\phi_{j}}=\exp(\mathrm{i}(\theta_{i}-\theta_{j}))=\cos( \theta_{i}-\theta_{j})+\mathrm{i}\sin(\theta_{i}-\theta_{j}).\] Hence the equations (30) and (31) are equivalent to one equation, \[\sum_{j=1}^{n}(\phi_{i}e_{ij}\overline{\phi_{j}})\,\tilde{d}_{j}=\frac{ \mathrm{i}\lambda}{4},\quad 1\leq i\leq n. \tag{32}\] After a conjugation and multiplication of both sides of the above equation by \(\phi_{i}\) for every \(i\), it becomes equivalent to the vector equation below, \[E\,[\phi_{1}\tilde{d}_{1}\,\dots\,\phi_{n}\tilde{d}_{n}]^{T}=-\frac{\mathrm{i }\lambda}{4}\,[\phi_{1}\,\dots\,\phi_{n}]^{T}. \tag{33}\] We claim that (33) implies \(\tilde{d}_{1}=\tilde{d}_{2}=\cdots=\tilde{d}_{n}\). Let \(1\leq k<n\). 
By (33), for every \(i\), \[\sum_{j=1}^{n}e_{ij}\tilde{d}_{j}\phi_{j}=-\frac{\mathrm{i}\lambda}{4}\phi_{i}.\] Subtraction of the above equation for \(i=k\) from the equation for \(i=k+1\) gives \[\tilde{d}_{k}\phi_{k}+\tilde{d}_{k+1}\phi_{k+1}=-\frac{\mathrm{i}\lambda}{4}(\phi_{k+1}-\phi_{k}).\] The right-hand side is perpendicular to \(\phi_{k+1}-\phi_{k}\). But for the Euclidean inner product of the left hand side with \(\phi_{k+1}-\phi_{k}\) we have, \[\mathrm{Re}\left((\tilde{d}_{k}\phi_{k}+\tilde{d}_{k+1}\phi_{k+1})\overline{(\phi_{k+1}-\phi_{k})}\right) =\tilde{d}_{k+1}-\tilde{d}_{k}-\tilde{d}_{k+1}\cos(\theta_{k+1}-\theta_{k})\] \[\quad+\tilde{d}_{k}\cos(\theta_{k}-\theta_{k+1})\] \[=(\tilde{d}_{k+1}-\tilde{d}_{k})(1-\cos(\theta_{k+1}-\theta_{k})).\] However, \(0<\theta_{k+1}-\theta_{k}<\pi\); therefore, \(\cos(\theta_{k+1}-\theta_{k})\neq 1\). Thus we have \(\tilde{d}_{k+1}-\tilde{d}_{k}=0\), for all \(k<n\) and by \(\sum_{i}\tilde{d}_{i}=1\), we conclude \[\mathbf{\tilde{d}}=\frac{1}{n}\mathbb{1}.\] (End of Step 3.) Now for \(\mu=-\frac{n\lambda}{4}\), the equation (33) can be simplified to \[E\boldsymbol{\phi}=(\mathrm{i}\mu)\,\boldsymbol{\phi},\quad\boldsymbol{\phi}=(\phi_{1},\dots,\phi_{n}), \tag{34}\] which indicates that \(\boldsymbol{\phi}\) is an eigenvector of \(E\). **Step 4**. \(\tilde{\boldsymbol{\omega}}=\boldsymbol{\alpha}\). Let \(\zeta\) be a root of the equation \(z^{n}=-1\). Then for every \(1\leq i\leq n\), we have, \[\sum_{j=1}^{n}e_{ij}\zeta^{j-1} =\sum_{1\leq j<i}\zeta^{j-1}-\sum_{i<j\leq n}\zeta^{j-1}\] \[=-\left(\sum_{1\leq j<i}\zeta^{n+j-1}+\sum_{i<j\leq n}\zeta^{j-1}\right)\] \[=-\zeta^{i-1}\left(\zeta+\cdots+\zeta^{n-1}\right)\] \[=\zeta^{i-1}\left((-\zeta)\cdot\frac{1-\zeta^{n-1}}{1-\zeta}\right)\] \[=\zeta^{i-1}\left((-\zeta)\cdot\frac{1+\frac{1}{\zeta}}{1-\zeta}\right)=\zeta^{i-1}\left(\frac{\zeta+1}{\zeta-1}\right).\] Hence \[E\,[1\;\zeta\;\dots\;\zeta^{n-1}]^{T}=\left(\frac{\zeta+1}{\zeta-1}\right)[1\;\zeta\;\dots\;\zeta^{n-1}]^{T},\] and \((1,\zeta,\ldots,\zeta^{n-1})\) is an eigenvector of \(E\) for the eigenvalue \(\frac{\zeta+1}{\zeta-1}\). Now the equation \(z^{n}=-1\) has \(n\) distinct roots, \[\zeta_{i}=\exp(\mathrm{i}\frac{(2i-1)\pi}{n}),\quad\text{for}\;1\leq i\leq n,\] and because of the non-singularity of the Vandermonde matrix \([\zeta_{i}^{j-1}]_{i,j}\), the vectors \(\mathbf{z}_{i}:=(1,\zeta_{i},\ldots,\zeta_{i}^{n-1})\) constitute a basis of the eigenvectors of \(E\). On the other hand, the mapping \(z\to\frac{z+1}{z-1}\) is one-to-one; thus, \(E\) has \(n\) distinct eigenvalues \(\frac{\zeta_{i}+1}{\zeta_{i}-1}\). Now \(\boldsymbol{\phi}=(\phi_{1},\ldots,\phi_{n})\) is also an eigenvector of \(E\), so for some \(i\), \(\boldsymbol{\phi}\) is a scalar multiple of \(\mathbf{z}_{i}\), and because \(\phi_{1}=\exp(\mathrm{i}\theta_{1})=1\), this scalar must be equal to \(1\). Thus \(\boldsymbol{\phi}=\mathbf{z}_{i}\). Now note that the arguments of the coordinates of \(\boldsymbol{\phi}\) are \(0=\theta_{1}<\cdots<\theta_{n}<\pi\). Also the argument of the \(j\)-th coordinate of \(\mathbf{z}_{i}\) is \(a_{j}:=(j-1)\left(\frac{(2i-1)\pi}{n}\right)\) for \(1\leq j\leq n\).
For \(j=2\), \(0<a_{2}=\frac{(2i-1)\pi}{n}<2\pi\), so \(0<a_{2}=\theta_{2}<\pi\) and for every \(1\leq j<n\), \[0<a_{j+1}-a_{j}=\frac{(2i-1)\pi}{n}<2\pi,\quad a_{j+1}=\theta_{j+1}+2m\pi\;\text{for some}\;m\in\mathbb{Z}.\] So by induction it can be easily seen that \(a_{j}=\theta_{j}\) (for all \(1\leq j\leq n\)), which implies \[\forall 1\leq j\leq n,\;(j-1)\left(\frac{(2i-1)\pi}{n}\right)<\pi \implies(n-1)\left(\frac{(2i-1)\pi}{n}\right)<\pi\] \[\implies 2i-1<\frac{n}{n-1}\leq 2\] \[\implies i=1.\] So, \(\boldsymbol{\phi}=\mathbf{z}_{1}=(1,\zeta_{1},\ldots,\zeta_{1}^{n-1})\). Now for every \(1\leq i\leq n\) we have, \[\tilde{\omega}_{i}=\exp(2\mathrm{i}\theta_{i})=\phi_{i}^{2}=\zeta_{1}^{2(i-1)}=\exp\left(\mathrm{i}\,\frac{2(i-1)\pi}{n}\right)=\alpha_{i}.\] Thus, \(\tilde{\boldsymbol{\omega}}=\boldsymbol{\alpha}\) and the proof is complete. **Acknowledgements**. This work was supported in part by a grant (no. 1400050046) from the School of Mathematics, Institute for Research in Fundamental Sciences (IPM).
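As a quick numerical sanity check of the argument above (not part of the original proof), the following sketch verifies, for a small placeholder value of \(n\), the eigen-structure of \(E\) used in Step 4 and that \(F\) at the conjectured maximizer \((\frac{1}{n}\mathbb{1},\boldsymbol{\alpha})\) is never exceeded by random admissible pairs \((\mathbf{d},\boldsymbol{\omega})\); the definition of \(F\) is taken from the expansion used in the proof.

```python
import numpy as np

def F(d, w):
    # F(d, w) = sum_{i,j} d_i d_j |w_i - w_j|, the functional maximized in the proof.
    return np.sum(np.outer(d, d) * np.abs(w[:, None] - w[None, :]))

n = 7
# Anti-symmetric matrix E: e_ij = 1 for i > j, -1 for i < j, 0 on the diagonal.
E = np.tril(np.ones((n, n)), -1) - np.triu(np.ones((n, n)), 1)

# Step-4 claim: for every root zeta of z^n = -1, (1, zeta, ..., zeta^{n-1})
# is an eigenvector of E with eigenvalue (zeta + 1) / (zeta - 1).
for i in range(1, n + 1):
    zeta = np.exp(1j * (2 * i - 1) * np.pi / n)
    z = zeta ** np.arange(n)
    assert np.allclose(E @ z, (zeta + 1) / (zeta - 1) * z)

# Compare F at (uniform weights, n-th roots of unity) with random admissible pairs.
alpha = np.exp(2j * np.pi * np.arange(n) / n)
best = F(np.full(n, 1.0 / n), alpha)
rng = np.random.default_rng(0)
for _ in range(2000):
    d = rng.random(n); d /= d.sum()
    w = np.exp(2j * np.pi * rng.random(n))
    assert F(d, w) <= best + 1e-9
print("gamma_n =", best)
```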
2309.04593
Non-convex regularization based on shrinkage penalty function
Total Variation regularization (TV) is a seminal approach for image recovery. TV involves the norm of the image's gradient, aggregated over all pixel locations. Therefore, TV leads to piece-wise constant solutions, resulting in what is known as the "staircase effect." To mitigate this effect, the Hessian Schatten norm regularization (HSN) employs second-order derivatives, represented by the pth norm of eigenvalues in the image hessian, summed across all pixels. HSN demonstrates superior structure-preserving properties compared to TV. However, HSN solutions tend to be overly smoothed. To address this, we introduce a non-convex shrinkage penalty applied to the Hessian's eigenvalues, deviating from the convex lp norm. It is important to note that the shrinkage penalty is not defined directly in closed form, but specified indirectly through its proximal operation. This makes constructing a provably convergent algorithm difficult as the singular values are also defined through a non-linear operation. However, we were able to derive a provably convergent algorithm using proximal operations. We prove the convergence by establishing that the proposed regularization adheres to restricted proximal regularity. The images recovered by this regularization were sharper than the convex counterparts.
Manu Ghulyani, Muthuvel Arigovindan
2023-09-08T20:58:22Z
http://arxiv.org/abs/2309.04593v1
# Non-convex regularization based on shrinkage penalty function ###### Abstract Total Variation regularization (TV) is a seminal approach for image recovery. TV involves the norm of the image's gradient, aggregated over all pixel locations. Therefore, TV leads to piece-wise constant solutions, resulting in what is known as the "staircase effect." To mitigate this effect, the Hessian Schatten norm regularization (HSN) employs second-order derivatives, represented by the pth norm of eigenvalues in the image hessian, summed across all pixels. HSN demonstrates superior structure-preserving properties compared to TV. However, HSN solutions tend to be overly smoothed. To address this, we introduce a non-convex shrinkage penalty applied to the Hessian's eigenvalues, deviating from the convex lp norm. It is important to note that the shrinkage penalty is not defined directly in closed form, but specified indirectly through its proximal operation. This makes constructing a provably convergent algorithm difficult as the singular values are also defined through a non-linear operation. However, we were able to derive a provably convergent algorithm using proximal operations. We prove the convergence by establishing that the proposed regularization adheres to restricted proximal regularity. The images recovered by this regularization were sharper than the convex counterparts. ## 1 Introduction Total Variation (TV) [24] is widely applicable because of its ability to preserve edges. But image reconstruction (restoration) via TV regularization leads to piece-wise constant estimates. This effect is known as the staircase effect. A well-known workaround for the above problem is to use higher-order derivatives [1, 2, 3, 4] of the image rather than only the first-order derivative. The use of higher-order derivatives leads to smooth intensity variations at the edges rather than sharp jumps in the intensity, thereby eliminating staircase artifacts. This workaround has led to many solutions, important ones being TV-2, Total Generalized Variation (TGV), the Hessian-Schatten norm, and others. Hessian-Schatten (HS) norm regularization is an important work because of its theoretical properties and its good performance for a wide variety of inverse problems [5, 6, 7]. Although HS norm regularization leads to good reconstruction quality, it tends to over-smooth the solution images, which is a common drawback of all convex regularizations. Also, it is well known that non-convex regularizations [8] lead to sharper images. But the convergence of non-convex and non-smooth optimization algorithms is difficult to establish. Due to these difficulties in the optimization of non-convex functionals, there are very few works that explore higher-order derivative-based non-convex regularization functionals and also provide convergence guarantees. An important work by [9] explores the properties of non-convex potential functionals and also gives an algorithm to solve the image reconstruction problem. It is also important to mention the work by [10] that analyses the properties of edges of images recovered via non-convex regularization functionals. There are many works that explore non-convex first-order total variation, e.g., [11, 12]. The work by [12] is important as it provides convergence as well as recovery guarantees for the proposed reconstruction algorithm. To the best of our knowledge, there is no non-convex regularization that exploits the structural information encoded in the singular values of the image hessian.
This is because computing these singular values involves a non-linear operation without a known closed-form solution. In this work, we derive a non-convex regularization inspired by the Hessian-Schatten norm [4] and the non-convex shrinkage penalty [12]: the shrinkage penalty is applied to the singular values of the hessian. Although non-convex regularizations are designed to better approximate the \(l_{0}\) norm, non-convexity has many drawbacks, such as convergence issues and the lack of recovery guarantees. In addition to being non-convex, the optimization problem for the non-convex formulation of HS is also non-smooth, which further adds to the complexity of the optimization. These problems are addressed by the following contributions of this work: 1. the design of a non-convex regularization retaining the theoretical and structural properties of the original HS norm, 2. an algorithm to solve the image restoration problem with the proposed non-convex functional, 3. convergence results for the algorithm, and 4. various theoretical properties of the restoration cost with the proposed regularization. ### Organization of the paper In section 2, we describe the proposed non-convex regularization. In section 3, we give details of the image restoration problem, and in section 4 we show the corresponding numerical results. Finally, all the theoretical results and proofs are given in section 5. ## 2 Formulation ### Forward model The degradation model for a linear imaging inverse problem is expressed as follows: \[\mathbf{m}=\mathcal{T}(\mathbf{u})+\eta. \tag{1}\] In this work, our approach involves considering images in a (lexicographically) scanned form, departing from the conventional 2-D array perspective. Therefore, the measurement image takes on a vector representation: \(\mathbf{m}\in\mathbb{C}^{N}\), while \(\mathbf{u}\in\mathbb{R}^{N}\) signifies the original image. Additionally, the operator \(\mathcal{T}\) is a linear operator representing the forward model. This paper focuses on MRI image reconstruction. In MRI reconstruction, the forward model \(\mathcal{T}\) can be understood as the composition of two operators: \(\mathcal{T}=\mathcal{M}\circ\mathcal{F}\). The operator \(\mathcal{M}\) corresponds to the sampling trajectory and can be represented through multiplication by a diagonal matrix, which embodies a 2D mask consisting of 1s (where sampling occurs) and zeros (where sampling is absent). On the other hand, \(\mathcal{F}\) symbolizes the 2D Discrete Fourier Transform. Additionally, \(\eta\in\mathbb{C}^{N}\) denotes the Gaussian measurement noise. It is important to note that when referring to a pixel in the square image \(\mathbf{u}\), containing \(N\) pixels, at the coordinates \([r_{1},r_{2}]\), the notation \([\mathbf{u}]_{\mathbf{r}}\) is used, rather than \(u_{r_{1}\sqrt{N}+r_{2}}\). This choice of notation is made to enhance clarity and conciseness when indicating access to the pixel positioned at coordinate \(\mathbf{r}\). Also, \(\sum_{\mathbf{r}}\) denotes the summation over all pixel locations. ### Hessian-Schatten norm regularization The \(q\)-Hessian-Schatten-norm [4] (\(\mathcal{HS}_{q}(\cdot)\)) at any pixel location of an image is defined as the \(l_{q}\) norm of the singular values of the image Hessian. The corresponding \(q\)-Hessian-Schatten-norm (HS) regularization functional for an image is obtained as the sum of these norm values across all pixel locations.
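Before formalizing the Hessian operators, the following is a minimal sketch of the undersampled-Fourier forward model of eq. (1). It is not the authors' code; the mask density, noise level and array size are placeholders chosen only for illustration, and an orthonormal DFT convention is assumed so that the adjoint is a zero-filled inverse transform.

```python
import numpy as np

def forward(u, mask):
    # T(u) = M(F(u)): orthonormal 2D DFT of the image, then point-wise k-space sampling.
    return mask * np.fft.fft2(u, norm="ortho")

def adjoint(m, mask):
    # T*(m): zero-fill the unsampled k-space locations, then inverse orthonormal 2D DFT.
    return np.fft.ifft2(mask * m, norm="ortho")

rng = np.random.default_rng(0)
u = rng.random((256, 256))                    # placeholder "image"
mask = rng.random(u.shape) < 0.18             # illustrative ~18% random sampling density
noise = rng.normal(scale=2.5, size=u.shape) + 1j * rng.normal(scale=2.5, size=u.shape)
m = forward(u, mask) + noise                  # measurements as in eq. (1)
```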
Let \(\mathbf{D}_{xx}\), \(\mathbf{D}_{xy}\), \(\mathbf{D}_{yx}\) and \(\mathbf{D}_{yy}\) denote the discrete second derivative operators (i.e., discrete analogues of the second-order partial derivatives \(\partial^{2}(\cdot)/\partial x\partial y\), etc.); then the discrete Hessian, \(\mathcal{H}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{2\times 2\times N}\), can be defined as: \[[\mathcal{H}(\mathbf{u})]_{\mathbf{r}}=\begin{pmatrix}[\mathbf{D}_{xx}(\mathbf{u})]_{\mathbf{r}}&[\mathbf{D}_{xy}(\mathbf{u})]_{\mathbf{r}}\\ [\mathbf{D}_{yx}(\mathbf{u})]_{\mathbf{r}}&[\mathbf{D}_{yy}(\mathbf{u})]_{\mathbf{r}}\end{pmatrix}\in\mathbb{R}^{2\times 2},\] for all \(N\) pixel locations indexed by \(\mathbf{r}\). Now, let \(\sigma_{1}([\mathcal{H}(\mathbf{u})]_{\mathbf{r}})\) and \(\sigma_{2}([\mathcal{H}(\mathbf{u})]_{\mathbf{r}})\) denote the singular values of \([\mathcal{H}(\mathbf{u})]_{\mathbf{r}}\). With this, the Hessian-Schatten-norm can be defined as: \[\mathcal{HS}_{q}(\mathbf{u})=\sum_{\mathbf{r}}\big{[}|\sigma_{1}([\mathcal{H}(\mathbf{u})]_{\mathbf{r}})|^{q}+|\sigma_{2}([\mathcal{H}(\mathbf{u})]_{\mathbf{r}})|^{q}\big{]}^{1/q},\] where \(q\) is considered to lie in \([1,\infty]\). This is because the above choice of \(q\) makes \(\mathcal{HS}_{q}(\cdot)\) convex, and therefore efficient convex optimization algorithms (e.g., ADMM [5], primal-dual splitting [4], etc.) can be used to obtain the reconstruction. The original work [4] also proposed solutions using proximal operators for \(q\in\{1,2,\infty\}\). Although the convexity of the HS-norm described above is an advantage with respect to optimization and convergence, it has been verified theoretically as well as by numerical experiments that non-convex regularization functionals lead to a better quality of the recovered image. This motivates the extension of the HS penalty to a non-convex formulation, which can lead to better reconstruction. In the following subsection, we describe the non-convex HS (based) regularization obtained by applying the shrinkage penalty [12] on the singular values of the Hessian. ### Shrinkage Penalty The shrinkage penalty, as discussed in [12], possesses theoretical properties, such as the exact recovery of sparse vectors and the convergence of proximal algorithms, that resemble those of the conventional \(l_{1}\) penalty, despite its non-convex nature. The shrinkage penalty (\(g_{q}(\cdot)\)) is not explicitly defined; rather, it is characterized by its proximal operation. The proximal operation (\(s_{q}\)) is the minimizer over \(t\) of the following cost: \[\gamma(x,t)=\rho g_{q}(t)+\frac{1}{2}(t-x)^{2}. \tag{2}\] The function \(s_{q}\) is given by: \[s_{q}(x)=\operatorname*{arg\,min}_{t}\gamma(x,t)=\max\left\{|x|-\rho^{2-q}|x|^{1-q},0\right\}\cdot\operatorname{sign}(x).\] This expression of \(s_{q}(\cdot)\) is known as the \(q\)-shrinkage operation. Notably, when \(q=1\), \(s_{q}(\cdot)\) corresponds to the familiar soft-thresholding operation (which is the solution to eq. (2) with \(g_{q}(t)\) replaced by \(|t|\)). For a given proximal mapping \(s\), the corresponding cost \(g\) exists provided the conditions outlined in Literature Theorem 2.1 are satisfied.
**Literature Theorem 2.1**.: [12] Consider a continuous function \(s:[0,\infty)\rightarrow\mathbb{R}\) that satisfies \[s(x)=\begin{cases}0&x\leq\lambda\\ \text{strictly increasing}&x\geq\lambda,\end{cases}\] and also \(s(x)\leq x.\) With this \(s\), define \(S:\mathbb{R}\rightarrow\mathbb{R}\) such that \(S:x\mapsto s(|x|)\operatorname{sign}(x)\); then \(S(\cdot)\) is a proximal mapping of an even, continuous and strictly increasing function \(g.\) Moreover, \(g(\cdot)\) is differentiable in \((0,\infty)\), and \(g(\cdot)\) is non-differentiable at \(0\) if and only if \(\lambda>0\) with \(\partial g(0)=[-1,1].\) It can be observed that these conditions are satisfied by the \(s_{q}\) defined above. The function \(g_{q}(\cdot)\) derived from the shrinkage function has some interesting properties: * \(g_{q}(\cdot)\) is coercive for \(q\in(0,1)\); * \(g_{q}^{\prime\prime}(x)<0\) for all \(x\in(0,\infty)\), which means that \(g_{q}^{\prime}:(0,\infty)\rightarrow(0,\infty)\) is invertible and \((g_{q}^{\prime})^{-1}:(0,\infty)\rightarrow(0,\infty)\) is well defined. These properties will be used in showing the existence of a solution of the regularized image reconstruction problem, and the restricted proximal regularity of the cost. ### QSHS: q-Shrinkage Hessian-Schatten penalty With this \(g_{q}\), we can define the shrinkage-Schatten penalty (\(f(\cdot)\)) on the singular values of the image Hessian \(\mathcal{H}(\mathbf{u})\) at pixel location \(\mathbf{r}\) as: \[f([\mathcal{H}\mathbf{u}]_{\mathbf{r}})=g_{q}(\sigma_{1}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))+g_{q}(\sigma_{2}([\mathcal{H}\mathbf{u}]_{\mathbf{r}})). \tag{3}\] Even without a closed-form expression for \(g_{q}\), we can still use it for image restoration, because the following optimization problem (which is a step in the ADMM algorithm described in section 3) can be solved in terms of the shrinkage operation \(s_{q}(\cdot)\): \[H^{*}=\operatorname*{arg\,min}_{H}\frac{1}{2}\|M-H\|_{2}^{2}+\rho f(H),\quad M\in\mathbb{R}^{2\times 2}.\] To solve the above problem, we define \(M=USV^{T}\) and \(H=U_{1}S_{1}V_{1}^{T}\) to be the singular value decompositions of \(M\) and \(H\), respectively. Following the approach of [4], we apply Von Neumann's trace inequality to obtain \(\|M-H\|_{2}\geq\|S-S_{1}\|_{2}\). Now, using the result obtained above, we get \(\rho f(H)+\frac{1}{2}\|M-H\|_{2}^{2}\geq 0.5(\sigma_{2}(H)-\sigma_{2}(M))^{2}+0.5(\sigma_{1}(H)-\sigma_{1}(M))^{2}+\rho g_{q}(\sigma_{1}(H))+\rho g_{q}(\sigma_{2}(H))\). Since the problem is separable, we obtain \[\sigma_{i}^{*}=\operatorname*{arg\,min}_{\sigma}\frac{1}{2}(\sigma-\sigma_{i}(M))^{2}+\rho g_{q}(\sigma)=s_{q}(\sigma_{i}(M)),\] for \(i=1,2\). Therefore, \[H^{*}=U\begin{pmatrix}s_{q}(\sigma_{1}(M))&0\\ 0&s_{q}(\sigma_{2}(M))\end{pmatrix}V^{T}. \tag{4}\] Here, the proposed \(H^{*}\) has a sparser set of singular values when compared to the input \(M\). By this formulation it is clear that the proposed non-convex functional will retain the properties of the original HS formulation and lead to sharper results. The above-defined \(f(\cdot)\) has many properties similar to the HS norm: * We prove that the QSHS penalty satisfies a technical condition called restricted proximal regularity (please refer to definition 5.1 for details), which generalizes the concept of convexity. This condition helps us to show the convergence of the proposed algorithm. We prove this result in section 5. * The (continuous analogue of the) QSHS penalty is translation and rotation invariant.
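A minimal numerical sketch of the per-pixel proximal update of eq. (4) is given below. It is not the authors' implementation: the \(q\)-shrinkage expression is taken verbatim from section 2.3, and the 2x2 matrix used in the example is an arbitrary placeholder.

```python
import numpy as np

def s_q(x, rho, q):
    # q-shrinkage applied entry-wise, using the expression stated in section 2.3.
    return np.sign(x) * np.maximum(np.abs(x) - rho ** (2 - q) * np.abs(x) ** (1 - q), 0.0)

def prox_qshs(M, rho, q):
    # Proximal update of eq. (4): shrink the singular values of the 2x2 matrix M.
    U, s, Vt = np.linalg.svd(M)
    return U @ np.diag(s_q(s, rho, q)) @ Vt

# Example on a single 2x2 "Hessian" block (values are arbitrary placeholders).
M = np.array([[3.0, 1.0], [0.5, -2.0]])
print(prox_qshs(M, rho=1.0, q=0.5))
```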
We state the result rigorously in the form of the following proposition. **Proposition 1**.: _Let \(u:\mathbb{R}^{2}\to\mathbb{R}\) be a twice continuously differentiable function and let \(\mathcal{H}\) denote the hessian operator, \(f(u)\stackrel{{ def}}{{=}}\int_{\mathbf{r}}g_{q}(\sigma_{1}(\mathcal{H}u(\mathbf{r})))+g_{q}(\sigma_{2}(\mathcal{H}u(\mathbf{r})))d\mathbf{r}\), then \(f(u)=f(u\circ R_{\theta})\) for any rotation matrix \(R_{\theta}\)._ The proof of the above proposition is similar to the one presented in [4]. The result given in [4] can be directly extended to our formulation as the proposed penalty is based on the singular values of the hessian. Therefore, we skip the proof. ## 3 Image restoration problem The recovered image \((\mathbf{u}^{*})\) can be obtained by solving the following optimization problem: \[\mathbf{u}^{*}=\operatorname*{arg\,min}_{u\in S}\frac{1}{2}\|\mathcal{T}\mathbf{u}-\mathbf{m}\|_{2}^{2}+\rho\sum_{\mathbf{r}}f([\mathcal{H}\mathbf{u}]_{\mathbf{r}}). \tag{5}\] Here, \(f\) is the shrinkage penalty defined in eq. (3) and \(S\) is the set where the desired solution lies. For example, one widely used choice for \(S\) is the positive orthant, i.e., \(\{\mathbf{u}\mid[\mathbf{u}]_{\mathbf{r}}\geq 0\ \ \forall\mathbf{r}\}.\) We prove the following lemma that guarantees the existence of a solution of the optimization problem given in eq. (5). **Lemma 1**.: _If \(\mathcal{N}(\mathcal{T})\cap\mathcal{N}(\mathcal{H})=\{\mathbf{0}\}\), the image restoration cost \(f(\mathbf{u})=\frac{1}{2}\|\mathcal{T}\mathbf{u}-\mathbf{m}\|^{2}+\rho\sum_{\mathbf{r}}g_{q}(\sigma_{1}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))+g_{q}(\sigma_{2}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))\) is coercive._ Also, \(f(\cdot)\) is continuous; therefore, the existence of a minimizer is guaranteed by the Weierstrass theorem. For the complete proof, please refer to section 5.2. We solve the optimization problem by an ADMM approach. Although ADMM is (conventionally) guaranteed to converge only for convex functions, there are recent works (e.g., [13]) that demonstrate the effectiveness of ADMM for non-convex problems. In order to derive the ADMM algorithm, we first write a constrained formulation of eq. (5). It can be verified that eq. (5) is equivalent to the following constrained problem (\(I_{S}\) denotes the indicator function of the set \(S\)): \[\mathbf{u}^{*} =\operatorname*{arg\,min}_{\mathbf{u},\mathbf{H},\mathbf{v}}\frac{1}{2}\|\mathcal{T}\mathbf{u}-\mathbf{m}\|_{2}^{2}+\rho\sum_{\mathbf{r}}f([\mathbf{H}]_{\mathbf{r}})+I_{S}(\mathbf{v}),\] \[\text{subject to }[\mathcal{H}\mathbf{u}]_{\mathbf{r}}=[\mathbf{H}]_{\mathbf{r}}\ \forall\mathbf{r}\text{ and }\mathbf{u}=\mathbf{v}. \tag{6}\] Note that the constrained formulation decouples the two terms in eq. (5).
The ADMM involves minimization of the augmented Lagrangian (\(\mathcal{L}(\cdot)\)), which is given as: \[\mathcal{L}(\mathbf{u},\mathbf{H},\mathbf{v},\hat{\mathbf{u}},\hat{\mathbf{H}})=\frac{1}{2}\|\mathcal{T}\mathbf{u}-\mathbf{m}\|_{2}^{2}+\rho\sum_{\mathbf{r}}f([\mathbf{H}]_{\mathbf{r}})+I_{S}(\mathbf{v})+\sum_{\mathbf{r}}\Big{(}\frac{\beta}{2}\|[\mathcal{H}\mathbf{u}]_{\mathbf{r}}-[\mathbf{H}]_{\mathbf{r}}\|_{F}^{2}+\langle[\hat{\mathbf{H}}]_{\mathbf{r}},[\mathcal{H}\mathbf{u}]_{\mathbf{r}}-[\mathbf{H}]_{\mathbf{r}}\rangle\Big{)}+\frac{\beta}{2}\|\mathbf{u}-\mathbf{v}\|_{2}^{2}+\langle\hat{\mathbf{u}},\mathbf{u}-\mathbf{v}\rangle. \tag{7}\] The ADMM algorithm is composed of the minimization of \(\mathcal{L}(\cdot)\) w.r.t. \(\mathbf{u}\), \(\mathbf{H}\) and \(\mathbf{v}\) cyclically, followed by the update of the Lagrange multipliers \(\hat{\mathbf{H}}\) and \(\hat{\mathbf{u}}\). For any iteration \(k\in\mathbb{N}\), the algorithm advances through the following four steps: **Step 1, minimization w.r.t. v:** \(\mathbf{v}\) is updated as \(\mathbf{v}^{(k+1)}=\operatorname*{arg\,min}_{\mathbf{v}}\mathcal{L}(\mathbf{u}^{(k)},\mathbf{H}^{(k)},\mathbf{v},\hat{\mathbf{u}}^{(k)},\hat{\mathbf{H}}^{(k)})\). This reduces to the following minimization on completing the squares: \[\mathbf{v}^{(k+1)} =\operatorname*{arg\,min}_{\mathbf{v}}I_{S}(\mathbf{v})+\frac{\beta}{2}\ \|\mathbf{u}^{(k)}-\mathbf{v}+\frac{\hat{\mathbf{u}}^{(k)}}{\beta}\|_{2}^{2}\] \[=P_{S}(\mathbf{u}^{(k)}+\frac{\hat{\mathbf{u}}^{(k)}}{\beta}) \tag{8}\] Here, \(P_{S}(\cdot)\) is the projection on the set \(S\). **Step 2, minimization w.r.t. H:** In step 2, \(\mathbf{H}\) is updated as: \[\mathbf{H}^{(k+1)}=\operatorname*{arg\,min}_{\mathbf{H}\in\mathbb{R}^{2\times 2\times N}}\mathcal{L}(\mathbf{u}^{(k)},\mathbf{H},\mathbf{v}^{(k+1)},\hat{\mathbf{u}}^{(k)},\hat{\mathbf{H}}^{(k)}).\] Since the above minimization is separable for each pixel location \(\mathbf{r}\), we solve the minimization for a fixed \(\mathbf{r}\). This minimization reduces to \[\operatorname*{arg\,min}_{[\mathbf{H}]_{\mathbf{r}}\in\mathbb{R}^{2\times 2}}\frac{\beta}{2}\|[\mathcal{H}\mathbf{u}^{(k)}]_{\mathbf{r}}-[\mathbf{H}]_{\mathbf{r}}+\frac{[\hat{\mathbf{H}}^{(k)}]_{\mathbf{r}}}{\beta}\|_{F}^{2}+\rho f([\mathbf{H}]_{\mathbf{r}}).\] The solution of the above problem has already been derived in the previous section (eq. (4)), where \(M\) plays the role of \([\mathcal{H}\mathbf{u}^{(k)}]_{\mathbf{r}}+[\hat{\mathbf{H}}^{(k)}]_{\mathbf{r}}/\beta\) and \(\rho\) is replaced by \(\rho/\beta\). **Step 3, minimizing w.r.t. u:** Updating \(\mathbf{u}\) is essentially minimizing \(\frac{1}{2}\|\mathcal{T}\mathbf{u}-\mathbf{m}\|_{2}^{2}+\frac{\beta}{2}\sum_{\mathbf{r}}\|[\mathcal{H}\mathbf{u}]_{\mathbf{r}}-[\mathbf{H}^{(k+1)}]_{\mathbf{r}}+\frac{[\hat{\mathbf{H}}^{(k)}]_{\mathbf{r}}}{\beta}\|_{F}^{2}+\frac{\beta}{2}\|\mathbf{u}-\mathbf{v}^{(k+1)}+\frac{\hat{\mathbf{u}}^{(k)}}{\beta}\|_{2}^{2}\) w.r.t. \(\mathbf{u}\).
The minimizer of the above cost can be written as: \[\begin{split}&\big{[}\mathcal{T}^{*}\mathcal{T}+\beta(\mathbf{D}_{xx}^{T}\mathbf{D}_{xx}+\mathbf{D}_{xy}^{T}\mathbf{D}_{xy}+\mathbf{D}_{yx}^{T}\mathbf{D}_{yx}+\mathbf{D}_{yy}^{T}\mathbf{D}_{yy}+\mathcal{I})\big{]}\mathbf{u}^{(k+1)}=\\ &\qquad\qquad\mathcal{T}^{*}\mathbf{m}+\beta\mathbf{v}^{(k+1)}-\hat{\mathbf{u}}^{(k)}+\beta(\mathbf{D}_{xx}^{T}\bar{\mathbf{H}}_{11}+\mathbf{D}_{xy}^{T}\bar{\mathbf{H}}_{12}+\mathbf{D}_{yx}^{T}\bar{\mathbf{H}}_{21}+\mathbf{D}_{yy}^{T}\bar{\mathbf{H}}_{22}).\end{split} \tag{9}\] Here, \(\left[\begin{array}{cc}\bar{\mathbf{H}}_{11}&\bar{\mathbf{H}}_{12}\\ \bar{\mathbf{H}}_{21}&\bar{\mathbf{H}}_{22}\end{array}\right]=\mathbf{H}^{(k+1)}-\hat{\mathbf{H}}^{(k)}/\beta\). The above problem is linear, and as all the operators involved are block circulant, the equation can be solved efficiently by using 2D-FFTs. **Step 4, updating multipliers:** After the above steps, the multipliers are updated as: * \(\hat{\mathbf{u}}^{(k+1)}=\hat{\mathbf{u}}^{(k)}+\beta(\mathbf{u}^{(k+1)}-\mathbf{v}^{(k+1)})\), and * \(\forall\mathbf{r}:[\hat{\mathbf{H}}^{(k+1)}]_{\mathbf{r}}=[\hat{\mathbf{H}}^{(k)}]_{\mathbf{r}}+\beta[\mathcal{H}\mathbf{u}^{(k+1)}]_{\mathbf{r}}-\beta[\mathbf{H}^{(k+1)}]_{\mathbf{r}}\). ### Convergence Guarantees ADMM was proposed in [14, 15]. ADMM typically converges for convex problems [16], but can fail to converge for multi-block (3 or more) splitting. The behaviour of ADMM for non-convex and non-smooth problems was largely unknown and many questions are still unanswered. But, owing to successful results of the algorithm in many applications (especially in the signal processing literature, see, e.g., [17, 18, 19]), there has been a lot of interest in understanding the convergence of ADMM for non-convex and non-smooth problems. There are many frameworks that establish the convergence of non-convex ADMM [20, 21, 13, 22, 23]. The works by [22] and [23] need restrictive assumptions on the iterates, which are difficult to verify. The work in [20] requires that the hessian of the smooth (data-fitting) part of the cost be lower-bounded; this does not hold here, as \(\mathcal{T}\) has a non-trivial null space for most imaging inverse problems. [21] prove the convergence for only a special class of optimization problems, and the framework is not general. [13] provide the most general framework and allow us to prove that the algorithm is (subsequentially) convergent. We prove the following theorem that guarantees that any sub-sequential limit of the sequence generated by the above algorithm is a stationary point of the image restoration cost. **Theorem 1**.: _If \(\mathcal{N}(\mathcal{T})\cap\mathcal{N}(\mathcal{H})=\{\mathbf{0}\}\) and \(\beta\) is sufficiently large, the iterates generated by the algorithm defined in section 3 (steps 1-4) are bounded. Moreover, each limit point of the sequence generated by the iterations is a stationary point of the image restoration cost \(f(\mathbf{u})=\frac{1}{2}\|\mathcal{T}\mathbf{u}-\mathbf{m}\|^{2}+\rho\sum_{\mathbf{r}}g_{q}(\sigma_{1}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))+g_{q}(\sigma_{2}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))\) (defined in eq. (5))._ The theorem provides the following assurance: when the sequence produced by the algorithm converges, its convergence will occur at a point where the sub-gradient of the restoration cost reaches zero. Given that the sequence is bounded, the existence of a converging subsequence is guaranteed.
Consequently, the limit of this subsequence will correspond to the point where the sub-gradient of the cost becomes zero. For the proof of the above theorem, please see section 5.2. ## 4 Simulation Results To demonstrate the effectiveness of the proposed method, we compare the reconstruction results with the q-Hessian Schatten norm [4] (for \(q=1\) and \(2\)) and TV-1 [24]. The Hessian Schatten norm for \(q=2\) is popularly known as TV-2. We use two sampling masks (\(\mathcal{M}\)) with sampling densities of 18 and 9 percent. We add noise with \(\sigma=2.5.\) In the numerical simulations, we use a data-set with 5 typical MRI images (see fig. 1) of size \(256\times 256.\) For the proposed shrinkage penalty we use \(q=0.5\) (in eq. (2)). The optimal regularization parameter was tuned (by the golden-section method) to obtain the minimum Mean Squared Error (MSE). The SSIM scores of the reconstructions are given in table 1. The table clearly shows that the proposed method performs better than all other methods by a significant margin. To demonstrate the visual difference between the images, we show the result for image 2 with mask 2 (shown in fig. 3, with a zoomed view in fig. 4). Clearly, the proposed method recovers sharper images, as dot-like structures are much sharper in the proposed method. Also, from the algorithm it is clear that there is no significant computational cost associated with the \(q\)-shrinkage step. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Im & Mask & TV-1 & TV-2 & HS-1 & Proposed \\ \hline \multirow{2}{*}{1} & 1 & 0.915 & 0.920 & 0.925 & 0.958 \\ \cline{2-6} & 2 & 0.790 & 0.804 & 0.816 & 0.884 \\ \hline \multirow{2}{*}{2} & 1 & 0.960 & 0.964 & 0.966 & 0.980 \\ \cline{2-6} & 2 & 0.893 & 0.903 & 0.905 & 0.953 \\ \hline \multirow{2}{*}{3} & 1 & 0.937 & 0.937 & 0.940 & 0.957 \\ \cline{2-6} & 2 & 0.815 & 0.815 & 0.820 & 0.849 \\ \hline \multirow{2}{*}{4} & 1 & 0.936 & 0.938 & 0.941 & 0.966 \\ \cline{2-6} & 2 & 0.866 & 0.871 & 0.875 & 0.920 \\ \hline \multirow{2}{*}{5} & 1 & 0.924 & 0.930 & 0.932 & 0.949 \\ \cline{2-6} & 2 & 0.816 & 0.817 & 0.825 & 0.859 \\ \hline \end{tabular} \end{table} Table 1: Table showing SSIM values of reconstructions Figure 1: Test Images ## 5 Theoretical Results and Proofs ### Properties of QSHS penalty Now, we formally define the concept of restricted proximal regularity. **Definition 5.1**.: (Restricted proximal regularity) A lower semi-continuous function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\cup\{\infty\}\) is restricted proximal regular if for any \(M>0\) and any bounded set \(\Omega\) there exists \(\gamma\equiv\gamma(M,\Omega)\) such that the following holds for all \(y\in\Omega,x\in\{\,x\in\Omega\mid\|p\|\leq M\ \forall p\in\partial f(x)\,\}\), and for all \(d\in\partial f(x)\): \[f(y)-f(x)-\langle d,(y-x)\rangle\geq-\frac{\gamma}{2}\|y-x\|^{2}.\] The following proposition establishes the restricted proximal regularity of the proposed QSHS penalty. The proof proceeds by following the methodology outlined in the proof of restricted proximal regularity for \(l_{q}\) norms (\(q\in(0,1)\)) as presented in [13]. However, a notable challenge in this context is the lack of a closed-form expression for the penalty. Consequently, we use the abstract properties of \(g_{q}\) to prove the result.
Figure 2: Masks for sampling trajectories. Figure 3: Result for Im2 and Mask 2. Figure 4: Zoomed view of the result. **Proposition 2**.: _Consider any \(\mathbf{H}\in\mathbb{R}^{2\times 2},\) then \(r(\mathbf{H})\stackrel{{ def}}{{=}}g_{q}(\sigma_{1}(\mathbf{H}))+g_{q}(\sigma_{2}(\mathbf{H}))\) is restricted proximal regular._ Proof.: We use the following result [25] for the sub-gradient: Let \(H\in\mathbb{R}^{2\times 2}\) and \(H=[U\ U_{1}]\begin{bmatrix}S&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{bmatrix}[V\ V_{1}]^{T}\) be the singular value decomposition (SVD) of \(H\), \(r(H)=\sum_{i=1}^{2}g_{q}(S_{ii}(\mathbf{H}))\); then \(UDV^{T}+U_{1}\Theta V_{1}^{T}\in\partial r(H)\), where \(D\) is a diagonal matrix with entries \((D)_{ii}=g_{q}^{\prime}(S_{ii})\), and \(\Theta\) is an arbitrary matrix. Without loss of generality we choose \(\Omega=\{\,X\in\mathbb{R}^{2\times 2}\mid\|X\|\leq P\,\}\). Now, for any \(M,P>0\), we intend to show that \[r(B)-r(A)-\langle T,B-A\rangle\geq-\frac{\gamma}{2}\|B-A\|^{2}\] for all \(B\in\Omega,A\in\Omega_{M}\stackrel{{ def}}{{=}}\{\,X\in\Omega\mid\|T\|\leq M\ \forall T\in\partial r(X)\,\}\), and for all \(T\in\partial r(A)\). We do this in the following cases: **Case 1:** \(\|B-A\|\geq\epsilon_{0}=\frac{1}{3}(g_{q}^{\prime})^{-1}(M)\). Note that the above condition is equivalent to \[\frac{-\|B-A\|}{\epsilon_{0}}\leq-1. \tag{11}\] First, it can be observed that, \[r(B)-r(A)-\langle T,B-A\rangle \stackrel{{ a}}{{\geq}}-r(A)-\|T\|\,\|B-A\|, \tag{12}\] \[\stackrel{{ b}}{{\geq}}-R_{max}-M\|B-A\|. \tag{13}\] Here, \((a)\) is true as \(r(\cdot)\) is non-negative and by the Cauchy-Schwarz inequality; while \((b)\) is true as \(T\) is bounded and \(r(\cdot)\) is a continuous function on a bounded and closed set, therefore it attains its maximum \(R_{max}.\) Now, using eq. (11) we obtain \[r(B)-r(A)-\langle T,B-A\rangle\!\geq\!-\big{(}\frac{R_{max}+M\epsilon_{0}}{\epsilon_{0}^{2}}\big{)}\|B-A\|^{2}. \tag{14}\] **Case 2:** \(\|B-A\|<\epsilon_{0}\). To prove this, we first define \(\Omega^{\prime}=\{\,T\in\mathbb{R}^{n\times n}\mid\|T\|\leq P,\ \min_{i}\sigma_{i}(T)\geq\epsilon_{0}\,\}.\) Now, we decompose \(B=U_{B}\Sigma_{B}V_{B}^{T}\) as \(B=U_{B}\Sigma_{B}^{(1)}V_{B}^{T}+U_{B}\Sigma_{B}^{(2)}V_{B}^{T}=B_{1}+B_{2},\) where all singular values of \(B_{1}\) are greater than \(\epsilon_{0}.\) Clearly, \(B_{1}\in\Omega^{\prime}.\) We show that \(A\in\Omega^{\prime}.\) This is proved by contradiction. Assume, on the contrary, that \(A\notin\Omega^{\prime}.\) This means \(\exists\ i\) such that \(\sigma_{i}(A)<\epsilon_{0}\implies\sigma_{i}(A)<3\epsilon_{0}.\) Now, since \(g_{q}^{\prime}(\cdot)\) is decreasing, we have \(g_{q}^{\prime}(\sigma_{i}(A))>g_{q}^{\prime}(3\epsilon_{0})=g_{q}^{\prime}((g_{q}^{\prime})^{-1}(M))=M.\) Let \(A=U_{A}S_{A}V_{A}^{T}\) be the singular value decomposition of \(A\). Define \(T_{1}\stackrel{{ def}}{{=}}U_{A}S^{\prime}V_{A}^{T},\) where \(\{S^{\prime}\}_{kk}=g_{q}^{\prime}(\{S_{A}\}_{kk})\) for all \(k\). Now, by the sub-gradient result quoted above, \(T_{1}\in\partial r(A).\) But \(\|T_{1}\|\geq\|S^{\prime}\|\geq g_{q}^{\prime}(\sigma_{i}(A))>M.\) This contradicts the fact that \(A\in\Omega_{M}.\) Hence, \(A\in\Omega^{\prime}.\) Now, we define a function \(F:\mathbb{R}^{n\times n}\rightarrow\mathbb{R}^{n\times n}\) as \[F:X\mapsto U_{X}D_{X}^{\prime}V_{X}^{T}.\] Here, \(X=U_{X}D_{X}V_{X}^{T}\) is the singular value decomposition of \(X\) and \(D_{X}^{\prime}\) is a diagonal matrix which is defined as \((D_{X}^{\prime})_{ii}=g_{q}^{\prime}((D_{X})_{ii})\).
Since \(F\) is continuous on the compact set \(\Omega^{\prime}\), it is Lipschitz continuous on \(\Omega^{\prime}\) [26]; this means \(\|F(B)-F(A)\|\leq L\|B-A\|\). Now, by Taylor's expansion we have: \[r(B_{1})-r(A)-\langle B_{1}-A,U_{A}S^{\prime}V_{A}^{T}\rangle\geq\frac{-L}{2}\|B_{1}-A\|^{2}. \tag{15}\] Now, \(\|U_{2}^{T}U_{B}\|\leq\frac{\|A-B_{1}\|}{\epsilon_{0}}\) and \(\|V_{2}^{T}V_{B}\|\leq\frac{\|A-B_{1}\|}{\epsilon_{0}}\) [27]. Now, \[\langle U_{2}^{T}\Theta V_{2},B_{1}-A\rangle=\langle\Theta,U_{2}U_{B}^{T}\Sigma_{B}^{(1)}V_{B}V_{2}^{T}\rangle\geq-\frac{M^{2}\|B_{1}-A\|^{2}}{\epsilon_{0}^{2}}. \tag{16}\] Also, \(r(B_{2})-\langle T,B_{2}\rangle\geq 0\), and by the triangle inequality we get \(\|B_{1}-A\|\leq\|B_{1}-B\|+\|B-A\|\leq 2\|B-A\|.\) Adding eq. (15) and eq. (16) we get \[r(B)-r(A)-\langle B-A,T\rangle\geq-\Big{(}\frac{L}{2}+\frac{4M^{2}}{\epsilon_{0}^{2}}\Big{)}\|B-A\|^{2}. \tag{17}\] ### Properties of Restoration cost The following claim helps us to establish that the restoration cost is coercive. **Claim 1.1**.: _If \(\mathcal{N}(\mathcal{T})\cap\mathcal{N}(\mathcal{H})=\{\mathbf{0}\}\), then \(\|\mathcal{T}\mathbf{u}\|+\|\mathcal{H}\mathbf{u}\|_{*}\geq\gamma\|\mathbf{u}\|\) for some \(\gamma>0.\) Here, \(\|\mathcal{H}\mathbf{u}\|_{*}\) is \(\mathcal{HS}_{1}(\mathbf{u}),\) the conventional \(l_{1}\) Hessian-Schatten norm._ Proof.: The above statement is trivial if \(\mathbf{u}=\mathbf{0}.\) If \(\mathbf{u}\neq\mathbf{0},\) let \(\hat{\mathbf{u}}=\frac{\mathbf{u}}{\|\mathbf{u}\|}\); then \(\|\mathcal{T}\hat{\mathbf{u}}\|+\|\mathcal{H}\hat{\mathbf{u}}\|_{*}\geq\inf_{\|p\|=1}\|\mathcal{T}\mathbf{p}\|+\|\mathcal{H}\mathbf{p}\|_{*}\). Since \(\|\mathbf{p}\|=1\) is a compact set, \(\exists\mathbf{p}_{min}\) (with \(\|\mathbf{p}_{min}\|=1\)) such that \(\inf_{\|p\|=1}\|\mathcal{T}\mathbf{p}\|+\|\mathcal{H}\mathbf{p}\|_{*}=\|\mathcal{T}\mathbf{p}_{min}\|+\|\mathcal{H}\mathbf{p}_{min}\|_{*}\). Define \(\gamma=\|\mathcal{T}\mathbf{p}_{min}\|+\|\mathcal{H}\mathbf{p}_{min}\|_{*}.\) Clearly, \(\gamma\neq 0\), since otherwise \(\mathbf{p}_{min}\) would be a vector in the intersection of the null spaces, i.e. \(\mathbf{p}_{min}\in\mathcal{N}(\mathcal{T})\cap\mathcal{N}(\mathcal{H})\), which contradicts the hypothesis. Re-substituting \(\hat{\mathbf{u}}=\frac{\mathbf{u}}{\|\mathbf{u}\|}\) completes the proof. **Lemma 1**.: _If \(\mathcal{N}(\mathcal{T})\cap\mathcal{N}(\mathcal{H})=\{\mathbf{0}\}\), the image restoration cost \(f(\mathbf{u})=\frac{1}{2}\|\mathcal{T}\mathbf{u}-\mathbf{m}\|^{2}+\rho\sum_{\mathbf{r}}g_{q}(\sigma_{1}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))+g_{q}(\sigma_{2}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))\) is coercive._ Proof.: Without loss of generality, we prove the theorem for \(\rho=1.\) Consider the level set \(\mathcal{L}_{\eta}(f)\stackrel{{ def}}{{=}}\{\mathbf{u}\mid f(\mathbf{u})\leq\eta\,\}.\) Now, \(f(\mathbf{u})\leq\eta\implies\frac{1}{2}\|\mathcal{T}\mathbf{u}-\mathbf{m}\|^{2}\leq\eta.\) Then, by the triangle inequality, \[\|\mathcal{T}\mathbf{u}\|\leq\sqrt{2\eta}+\|\mathbf{m}\|. \tag{18}\] Also, \(\mathbf{u}\in\mathcal{L}_{\eta}(f)\implies g_{q}(\sigma_{i}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))\leq\eta\quad\forall\mathbf{r}\text{ and }i=1,2.\) As \(g_{q}(\cdot)\) is coercive for \(q\in(0,1),\) we have \(\sigma_{i}([\mathcal{H}\mathbf{u}]_{\mathbf{r}})\leq M\) for some \(M>0.\) The above statement is true because any level set of a coercive function is compact.
By the mean-value form of Taylor's expansion of \(g_{q}\) around \(0\) we can see that \(g_{q}(\sigma_{i}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))=g_{q}^{\prime}(\gamma_{i,\mathbf{r}})\sigma_{i}([\mathcal{H}\mathbf{u}]_{\mathbf{r}})\) for some \(\gamma_{i,\mathbf{r}}\in[0,2M)\). Since \(g_{q}^{\prime}(\cdot)\) is decreasing, \(g_{q}^{\prime}(\gamma_{i,\mathbf{r}})\geq g_{q}^{\prime}(2M)\stackrel{{ def}}{{=}}C_{2M}.\) Hence, \(\mathbf{u}\in\mathcal{L}_{\eta}(f)\implies C_{2M}\|\mathcal{H}\mathbf{u}\|_{*}\leq\sum_{\mathbf{r}}g_{q}(\sigma_{1}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))+g_{q}(\sigma_{2}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))\leq\eta.\) Now, we use claim 1.1 to show that the level set \(\mathcal{L}_{\eta}(f)\) is bounded: \[\gamma\|\mathbf{u}\|\leq\|\mathcal{T}\mathbf{u}\|+\|\mathcal{H}\mathbf{u}\|_{*}\leq\frac{\eta}{C_{2M}}+\sqrt{2\eta}+\|\mathbf{m}\|. \tag{19}\] This means the level set \(\mathcal{L}_{\eta}(f)\) is bounded. Combined with the fact that the level set is closed (as \(f(\cdot)\) is continuous), this implies that the level set is compact. As this is true for any level set, \(f\) is coercive. ### Proof of convergence We will use the following theorem by [13] to show the convergence of our algorithm. **Literature Theorem 5.1**.: ([13], theorem 2.2) Consider the minimization of the function \(\phi(\mathbf{u},\mathbf{v})=h(\mathbf{u})+g(\mathbf{v})\) subject to \(\mathbf{A}\mathbf{u}+\mathbf{B}\mathbf{v}=\mathbf{0}\) by the non-convex ADMM algorithm. Define the augmented Lagrangian \(\mathcal{L}_{\beta}(\mathbf{u},\mathbf{v},\mathbf{w})\stackrel{{ def}}{{=}}\phi(\mathbf{u},\mathbf{v})+\mathbf{w}^{T}(\mathbf{A}\mathbf{u}+\mathbf{B}\mathbf{v})+\frac{\beta}{2}\|\mathbf{A}\mathbf{u}+\mathbf{B}\mathbf{v}\|^{2}.\) If * **(C1)** \(\phi(\mathbf{u},\mathbf{v})\) is coercive on the set \(\{\,(\mathbf{u},\mathbf{v})\mid\mathbf{A}\mathbf{u}+\mathbf{B}\mathbf{v}=\mathbf{0}\,\}\); * **(C2)** \(Im(\mathbf{A})\subset Im(\mathbf{B})\), where \(Im\) denotes the image of the linear operator; * **(C3)** \(\mathbf{A}\) and \(\mathbf{B}\) are full column rank; * **(C4)** \(g\) is restricted proximal regular; and * **(C5)** \(h\) is Lipschitz smooth, then the algorithm generates a bounded sequence that has at least one limit point, and each limit point is a stationary point of \(\mathcal{L}_{\beta}(\cdot)\). Now, we show the convergence of the algorithm using the above theorem. **Theorem 1**.: _If \(\mathcal{N}(\mathcal{T})\cap\mathcal{N}(\mathcal{H})=\{\mathbf{0}\}\) and \(\beta\) is sufficiently large, the iterates generated by the algorithm defined in section 3 (steps 1-4) are bounded. Moreover, each limit point of the sequence generated by the iterations is a stationary point of the image restoration cost \(f(\mathbf{u})=\frac{1}{2}\|\mathcal{T}\mathbf{u}-\mathbf{m}\|^{2}+\rho\sum_{\mathbf{r}}g_{q}(\sigma_{1}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))+g_{q}(\sigma_{2}([\mathcal{H}\mathbf{u}]_{\mathbf{r}}))\) (defined in eq. (5))._ Proof.: We establish the validity of the aforementioned theorem by fulfilling the conditions **(C1)**-**(C5)** of Literature Theorem 5.1. In terms of the splitting presented in section 3, we set \(h\equiv\frac{1}{2}\|\mathcal{T}(\cdot)-\mathbf{m}\|_{2}^{2}\) and \(g\equiv\rho\sum_{\mathbf{r}}f(\cdot)+I_{S}(\cdot)\). The operator \(\mathbf{A}\) corresponds to the linear operator \(\mathcal{A}\) in the constraint \(\mathcal{A}(\mathbf{u})-\mathbf{v}=\mathbf{0}\). For each pixel location \(\mathbf{r}\), \(\mathcal{A}:[\mathbf{u}]_{\mathbf{r}}\rightarrow([\mathcal{H}\mathbf{u}]_{\mathbf{r}},[\mathbf{u}]_{\mathbf{r}})\).
Regarding the constraints, it is evident that \(\mathbf{B}\) corresponds to the negative identity matrix. **(C1)** follows from lemma 1, and **(C4)** follows from proposition 2. **(C2)** is trivial since \(\mathbf{B}\) is the negative identity. **(C5)** holds true due to the quadratic nature of the data-fitting cost. To ascertain **(C3)**, we must demonstrate that \(\mathcal{N}(\mathbf{B})=\mathbf{0}\) and \(\mathcal{N}(\mathbf{A})=\mathbf{0}\). Since \(\mathbf{B}\) is the negative identity matrix, \(\mathcal{N}(\mathbf{B})=\mathbf{0}\). For \(\mathbf{A}\), let \(\mathcal{A}(\mathbf{z})=\mathbf{0}\), implying \([\mathcal{H}\mathbf{z}]_{\mathbf{r}}=\mathbf{0}\) and \([\mathbf{z}]_{\mathbf{r}}=\mathbf{0}\) for all \(\mathbf{r}\), which gives \(\mathbf{z}=\mathbf{0}\). Hence, all the conditions are met. The next step is to establish that a stationary point of the augmented Lagrangian coincides with a stationary point of the restoration cost. Suppose \((\mathbf{u}^{*},\mathbf{v}^{*},\mathbf{w}^{*})\) is a stationary point of the augmented Lagrangian; this implies: 1. \(\mathcal{A}\mathbf{u}^{*}-\mathbf{v}^{*}=\mathbf{0}\), 2. \(\nabla h(\mathbf{u}^{*})+\mathcal{A}^{T}\mathbf{w}^{*}=0\), and 3. \(\partial g(\mathbf{v}^{*})-\mathbf{w}^{*}\ni\mathbf{0}\). Rearranging 2 and 3 we obtain \(\nabla h(\mathbf{u}^{*})\in-\mathcal{A}^{T}\partial g(\mathbf{v}^{*})\). Now, we use 1 to get \[-\nabla h(\mathbf{u}^{*})-\mathcal{A}^{T}\partial g(\mathcal{A}\mathbf{u}^{*})\ni\mathbf{0} \tag{20}\] \[\implies -\partial f(\mathbf{u}^{*})\ni\mathbf{0} \tag{21}\] \[\implies \partial f(\mathbf{u}^{*})\ni\mathbf{0}. \tag{22}\]
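To make the reconstruction procedure concrete, the following is a compact sketch that strings together steps 1-4 of section 3 for the MRI forward model \(\mathcal{T}=\mathcal{M}\circ\mathcal{F}\). It is not the authors' implementation: the forward second-difference filters, the periodic boundary conditions, the orthonormal DFT convention and all parameter values are assumptions made only for illustration.

```python
import numpy as np

def s_q(x, rho, q):
    # q-shrinkage applied entry-wise, using the expression given in section 2.3.
    return np.sign(x) * np.maximum(np.abs(x) - rho ** (2 - q) * np.abs(x) ** (1 - q), 0.0)

def admm_qshs(m, mask, rho=1e-2, q=0.5, beta=1.0, iters=100):
    # Schematic ADMM (steps 1-4) for T = (sampling mask) o (orthonormal 2D DFT),
    # assuming periodic boundaries so that step 3 reduces to a 2D-FFT solve.
    N1, N2 = m.shape
    F2 = lambda x: np.fft.fft2(x, norm="ortho")
    iF2 = lambda x: np.fft.ifft2(x, norm="ortho")

    def tfun(taps):
        # Transfer function of a small circular-convolution kernel {(i, j): value}.
        k = np.zeros((N1, N2))
        for (i, j), v in taps.items():
            k[i % N1, j % N2] += v
        return np.fft.fft2(k)

    D = [tfun({(0, 0): -2, (1, 0): 1, (-1, 0): 1}),                  # D_xx
         tfun({(0, 0): 1, (1, 0): -1, (0, 1): -1, (1, 1): 1}),       # D_xy
         tfun({(0, 0): 1, (1, 0): -1, (0, 1): -1, (1, 1): 1}),       # D_yx
         tfun({(0, 0): -2, (0, 1): 1, (0, -1): 1})]                  # D_yy
    denom = mask + beta * sum(np.abs(d) ** 2 for d in D) + beta      # Fourier denominator of eq. (9)

    def hessian(x):
        out = np.stack([np.fft.ifft2(d * np.fft.fft2(x)).real for d in D], axis=-1)
        return out.reshape(N1, N2, 2, 2)

    u = iF2(mask * m).real                                           # zero-filled initial guess
    H = np.zeros((N1, N2, 2, 2)); Hm = np.zeros_like(H)              # split variable and its multiplier
    um = np.zeros_like(u)                                            # multiplier for u = v

    for _ in range(iters):
        # Step 1: projection onto the positive orthant.
        v = np.maximum(u + um / beta, 0.0)
        # Step 2: per-pixel SVD shrinkage of the shifted Hessian blocks, eq. (4) with rho/beta.
        M2 = hessian(u) + Hm / beta
        U, s, Vt = np.linalg.svd(M2)
        H = np.einsum('...ij,...j,...jk->...ik', U, s_q(s, rho / beta, q), Vt)
        # Step 3: FFT solve of the block-circulant normal equations, eq. (9).
        Hbar = (H - Hm / beta).reshape(N1, N2, 4)
        rhs = mask * m + beta * F2(v) - F2(um)
        for c, d in enumerate(D):
            rhs = rhs + beta * np.conj(d) * F2(Hbar[..., c])
        u = iF2(rhs / denom).real
        # Step 4: multiplier updates, using the Hessian of the freshly updated u.
        um = um + beta * (u - v)
        Hm = Hm + beta * (hessian(u) - H)
    return u
```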
2309.14780
Transferring climate change knowledge
Accurate and precise climate projections are required for climate adaptation and mitigation, but Earth system models still exhibit great uncertainties. Several approaches have been developed to reduce the spread of climate projections and feedbacks, yet those methods cannot capture the non-linear complexity inherent in the climate system. Using a Transfer Learning approach, we show that Machine Learning can be used to optimally leverage and merge the knowledge gained from Earth system models simulations and historical observations to more accurately project global surface air temperature fields in the 21st century. We reach an uncertainty reduction of more than 50% with respect to state-of-the-art approaches. We give evidence that our novel method provides narrower projection uncertainty together with more accurate mean climate projections, urgently required for climate adaptation.
Francesco Immorlano, Veronika Eyring, Thomas le Monnier de Gouville, Gabriele Accarino, Donatello Elia, Giovanni Aloisio, Pierre Gentine
2023-09-26T09:24:53Z
http://arxiv.org/abs/2309.14780v4
# Transferring climate change knowledge ###### Abstract Accurate climate projections are required for climate adaptation and mitigation. Earth system model simulations, used to project climate change, inherently make approximations in their representation of small-scale physical processes, such as clouds, that are at the root of the uncertainties in global mean temperature's response to increased greenhouse gas concentrations. Several approaches have been developed to use historical observations to constrain future projections and reduce uncertainties in climate projections and climate feedbacks. Yet those methods cannot capture the non-linear complexity inherent in the climate system. Using a Transfer Learning approach, we show that Machine Learning, in particular Deep Neural Networks, can be used to optimally leverage and merge the knowledge gained from Earth system model simulations and historical observations to more accurately project global surface temperature fields in the 21st century. For the Shared Socioeconomic Pathways (SSPs) 2-4.5, 3-7.0 and 5-8.5, we refine regional estimates and the global projection of the average global temperature in 2081-2098 (with respect to the period 1850-1900) to 2.73\({}^{\circ}\)C (2.44-3.11\({}^{\circ}\)C), 3.92\({}^{\circ}\)C (3.5-4.47\({}^{\circ}\)C) and 4.53\({}^{\circ}\)C (3.69-5.5\({}^{\circ}\)C), respectively, compared to the unconstrained 2.7\({}^{\circ}\)C (1.65-3.8\({}^{\circ}\)C), 3.71\({}^{\circ}\)C (2.56-4.97\({}^{\circ}\)C) and 4.47\({}^{\circ}\)C (2.95-6.02\({}^{\circ}\)C). Our findings show that the 1.5\({}^{\circ}\)C threshold of the Paris' agreement will be crossed in 2031 (2028-2034) for SSP2-4.5, in 2029 (2027-2031) for SSP3-7.0 and in 2028 (2025-2031) for SSP5-8.5. Similarly, the 2\({}^{\circ}\)C threshold will be exceeded in 2051 (2045-2059), 2044 (2040-2047) and 2042 (2038-2047) respectively. Our new method provides more accurate climate projections urgently required for climate adaptation. ## 1 Introduction Climate change is affecting all aspects of the Earth system, impacting ecosystems' health, placing new strains on our infrastructures and affecting human migration[1, 2]. In 2016, most countries around the globe ratified the Paris' agreement[3], with the objective to keep global mean temperature well below 2\({}^{\circ}\)C compared to pre-industrial levels and to put in place a strategy to limit the increase to 1.5\({}^{\circ}\)C. Earth System Models (ESMs) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5)[4] have been used to project global mean temperature rise according to several Representative Concentration Pathways (RCPs). These pathways describe different levels of Greenhouse Gases (GHGs) and other radiative forcings, such as short-lived climate forcers (including aerosols and chemically reactive gases), land use and land cover, from more pessimistic to more optimistic scenarios. The Coupled Model Intercomparison Project Phase 6 (CMIP6)[5] models have further advanced, through increased resolution and the inclusion of additional Earth System processes and feedbacks, and with future projections starting from 2015 (instead of 2005 as in CMIP5)[6]. CMIP6 climate projections are based on the Shared Socioeconomic Pathways (SSPs), which better represent future pathways of socioeconomic development linked to societal actions, such as climate change mitigation, adaptation or impacts[7]. Yet, even for prescribed Carbon Dioxide (CO\({}_{2}\)) concentrations, ESMs still exhibit substantial uncertainties in their projections of global mean temperatures. However, these uncertainties have not been reduced with the evolution of ESMs; instead they increased in CMIP6[8, 9].
For instance, the Transient Climate Response (TCR) (i.e., the surface temperature warming at the time of CO\({}_{2}\) doubling in response to a 1% per-year increase in CO\({}_{2}\) concentration) produced by CMIP6 simulations is 1.7\({}^{\circ}\)C, with a spread from 1.3\({}^{\circ}\)C to 3.0\({}^{\circ}\)C that is larger than in the CMIP3 and CMIP5 model ensembles[10]. In CMIP6, the range of the Equilibrium Climate Sensitivity (ECS), i.e. the global temperature increase at equilibrium for a doubling of CO\({}_{2}\), was the largest of any generation of models since the 1990s, ranging from 1.8\({}^{\circ}\)C to 5.6\({}^{\circ}\)C[10]. This large uncertainty range is a major roadblock for policy makers and for climate change adaptation strategies. It is well known that most of those uncertainties in climate projections can be attributed to small-scale and "fast" physical processes, such as clouds, convection or ocean turbulence[9, 11, 12, 13]. Better constraining these physical processes, observable on a day-to-day basis, would enable reduction in their associated uncertainties. In this study, we show that using Transfer Learning (TL), a recent branch of Machine Learning (ML), we can more accurately project global temperature maps in the 21\({}^{\text{st}}\) century. By means of TL, the knowledge gained by a pre-trained model on a data-rich task can be used as a starting point to boost the performance on a new but related task in the same domain with limited data[14]. We show that, using this approach, multi-model climate projections can be improved by optimally fusing (uncertain) model projections and historical observations. This helps enhance the representation of future projections and their associated spatial patterns, which are critical to climate sensitivity[15, 16, 17]. ## Constraining climate projections To reduce climate model projection uncertainties, various approaches have been proposed to leverage current or past climate observations for refining estimates of climate sensitivity[18, 19]. One group of approaches has exploited paleoclimate proxies (i.e., surrogates for climate variables, such as temperature), especially chemical tracers that are now routinely simulated in ESMs, to reduce and better constrain the range of climate sensitivity[20]. Paleoclimate records offer tremendous potential, but the use of paleoclimate proxies is not exempt from potential issues. First, proxies are only surrogates of the actual variables of interest, and sometimes strong assumptions might be required, e.g., relating stomatal density to atmospheric CO\({}_{2}\) concentrations[21], to link proxies to climate variables. In addition, paleoclimate forcings are significantly different from those of the recent past or the future and have also occurred on longer time scales. A second group of approaches has used more recent climate observations, such as from the 20th century, which do not require proxies but cover a shorter time period, to constrain the range of climate sensitivity. One such prominent approach has been the use of emergent constraints. This approach relates a physical process that is an important regulator of climate sensitivity (e.g., low cloud reflectivity), and its spread across models, to an observation which is used to constrain future climate sensitivity within a Bayesian framework[13, 22, 23, 24, 25]. These methods, however, also suffer from several issues.
Firstly, they typically assume a linear relationship between the constraining variable and target variable, whereas many important climate feedbacks are nonlinear (e.g., cloud response to warming, convective aggregation or chemistry)[25, 27, 28, 29]. Secondly, the emergent constraint is typically cast in terms of a univariate constraint, whereas many processes can be interacting. For instance, low cloud cover is interacting with deep convection in the tropics through circulation feedback[12, 13] so that using only one variable (e.g., low cloud cover) as an emergent constraint may implicitly be hiding cross-correlations with other important processes acting on climate sensitivity (e.g., deep convection)[30]. Finally, emergent constraints have been shown to be critically dependent on the model ensemble used[31]. Indeed, many emergent constraints developed for CMIP5 are statistically non-significant in CMIP6[25]. In addition, they typically do not account for the pattern effect, which is critical to climate sensitivity. Simple toy zero-order models of the Earth's climate can also be used to understand the response of the global climate[32, 33], and especially the role of the different climate feedbacks such as from water vapor or clouds. Yet, it has now become clear that the spatial patterns of climate response and sea surface temperature or the subtle response of cloud-circulation feedback are important for the overall climate sensitivity response. These subtleties cannot directly be resolved if toy climate models are used[32, 34]. Even though ESMs still exhibit biases (e.g., in low cloud cover), they simulate many of the key inherent feedbacks and their spatial variability that play an important role in the climate system. More accurate projections can be achieved by means of optimal corrections based on historical observations. Indeed, available observed warming trends over the last decades have been used in several studies to directly constrain model-based temperature projections over the 21st century. Tokarska et al., 2020[35] reduced the uncertainty in future projections by downweighting those CMIP6 models whose simulation results are not in agreement with historical warming. For instance, they obtained a 30% lower uncertainty under SSP3-7.0 scenario by reducing the 5-95% range in projected end-of-century warming from 2.20-4.8\({}^{\circ}\)C to 1.84-3.69\({}^{\circ}\)C[35]. Ribes et al., 2021[36] constrained global mean temperature projections by means of a linear inverse method (kriging) combining CMIP6 simulations and historical warming observations since 1850. They reduced the uncertainty range of projected temperatures by up to 50%[36]. Liang et al., 2020[38] exploited a weighting method which takes both model quality and independence into account[37] to give more weight to CMIP6 models which better match the observed 1970-2014 warming. The authors found a lower mean and lower 95th percentile warming under all scenarios and reduced the 5-95% warming range uncertainty for 2081-2100 by 19% and 28% under SSP1-2.6 and SSP5-8.5, respectively[38]. It is worth noting that these constraints do not consider the "pattern effect" in their temperature projections[39]. Finally, the Intergovernmental Panel on Climate Change Working Group I (IPCC WGI) assessed global surface air temperature change in the Sixth Assessment Report (AR6) by using multiple lines of evidence including CMIP6 projections up to 2100. 
CMIP6 projections, driven by SSP scenarios, were combined with observational constraints on simulated past warming to update estimates in the AR6[40]. Compared with these approaches, this study leverages advances in ML, and especially in TL, to improve the accuracy of climate projections and their patterns, and to further reduce the uncertainty of global temperature projections. ## Leave-one-out cross-validation approach In the proposed approach, 22 CMIP6 ESMs are gathered for the SSP2-4.5, 3-7.0 and 5-8.5 scenarios. For each scenario, 22 Deep Neural Networks (DNNs), one for each ESM, emulate the spatial fields of the surface air temperature response to GHG and aerosol forcing (Methods). The DNNs share the same architecture and hyperparameters, which are optimized, and are trained on the corresponding ESM simulation data over the entire 1850-2098 time period (Methods). Training on the 1850-2098 period allows the DNNs to learn both historical and future CO\({}_{2}\) temperature patterns. This approach results in a total of 66 (22 ESMs \(\times\) 3 SSP scenarios) DNNs trained to emulate annual surface air temperature maps. These DNNs are then each "tuned" to match historical observations, so as to better project the future by optimally merging the physical model projections and historical observational data. In order to prove the effectiveness of the approach, the Leave-One-Out Cross-Validation technique is used[41]. For a specific scenario, one ESM is taken out from the ensemble and its simulation data are used as ground truth (i.e., as synthetic historical observations). The remaining 21 pre-trained DNNs for the considered scenario are then adjusted using a TL procedure, which consists in fine-tuning each DNN by further training its weights on the synthetic observation data from 1850 to 2022. We then use the 21 fine-tuned DNNs for projecting the global average surface air temperature maps from 2023 to 2098. The aforementioned leave-one-out procedure is repeated for all 22 ESMs and for all three SSP scenarios. The global average temperature bias, Root Mean Squared Error (RMSE), as well as 5\({}^{\text{th}}\) and 95\({}^{\text{th}}\) percentiles in 2081-2098 are then computed for the three SSP scenarios considered (Supplementary Table 1). In the following, we use SSP2-4.5 as the reference scenario since low-emission scenarios are currently more likely by the end of the century than the high-emission SSP5-8.5[42]. Our approach shows a global average bias of only 0.08\({}^{\circ}\)C and a global average RMSE of 0.48\({}^{\circ}\)C with respect to the synthetic CMIP6 observations in the 2081-2098 time period for the SSP2-4.5 scenario, across all 22 leave-one-out simulations (Supplementary Table 1). These results are consistent with the small 5-95% confidence range (1.67-2.4\({}^{\circ}\)C) of the global average temperature projections from 2081 to 2098 relative to the 1850-1900 base period (Fig. 1a). The DNNs ensemble accurately reproduces the climate change signals from the leave-one-out ESM historical simulations over the training years (1850-2022), and exploits them to make reliable projections up to the end of the century, correcting the spatial pattern of both historical and future projections. This confirms that the TL approach is not only correcting the ESM model bias but also the tendency of the ESM simulations' response to GHG forcing, and its associated pattern.
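The actual architecture, predictors and hyperparameters of the DNN emulators are described in the Methods and are not reproduced here; the sketch below only illustrates the pre-train / fine-tune / project sequence of the transfer-learning procedure, with a hypothetical emulator and placeholder tensors standing in for the ESM simulations, the observations and the scenario forcings.

```python
import torch
import torch.nn as nn

# Hypothetical emulator: maps a per-year forcing vector to a (lat x lon) temperature map.
class Emulator(nn.Module):
    def __init__(self, n_in, n_lat=64, n_lon=128):
        super().__init__()
        self.n_lat, self.n_lon = n_lat, n_lon
        self.net = nn.Sequential(nn.Linear(n_in, 256), nn.ReLU(),
                                 nn.Linear(256, n_lat * n_lon))
    def forward(self, x):
        return self.net(x).view(-1, self.n_lat, self.n_lon)

def train(model, x, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

# 1) Pre-training on one ESM's simulated maps over 1850-2098 (one DNN per ESM and scenario).
x_esm, y_esm = torch.randn(249, 4), torch.randn(249, 64, 128)      # placeholders
model = train(Emulator(n_in=4), x_esm, y_esm, epochs=500, lr=1e-3)

# 2) Transfer learning: fine-tune the same weights on the (synthetic or real) observations
#    available over the historical window, typically with a smaller learning rate.
x_obs, y_obs = torch.randn(44, 4), torch.randn(44, 64, 128)        # e.g. 1979-2022
model = train(model, x_obs, y_obs, epochs=200, lr=1e-4)

# 3) Projection: evaluate the fine-tuned emulator on the scenario forcings for 2023-2098.
with torch.no_grad():
    projected_maps = model(torch.randn(76, 4))                      # placeholder forcings
```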
The role of the TL approach is to transfer the prior information coming from the CMIP6 models and combine it with the retraining on the historical (1850-2022) taken-out ESM simulation. In addition, using our TL approach we are able to spatially project all of the complexity of the surface air temperature, consistently replicating the details of its spatial features, such as the land-ocean contrast, the Arctic Amplification, the gradient of warming between the Tropics and the mid-latitudes, or colder temperatures over Greenland (Fig. 1b, c and d). The global average bias and global RMSE for SSP3-7.0 between the end-of-century (2081-2098) predicted and actual temperature across all leave-one-out models are 0.1\({}^{\circ}\)C and 0.66\({}^{\circ}\)C respectively; for SSP5-8.5, the values are 0.15\({}^{\circ}\)C and 0.96\({}^{\circ}\)C respectively, confirming that our approach can correct the spatial structure of the projected climate response for SSP3-7.0 and SSP5-8.5 as well (Extended Data Table 1).

## Transfer Learning on Observational Data

The Leave-One-Out Cross-Validation described in the previous section demonstrates the effectiveness of transferring knowledge from the climate models to the observations, allowing extrapolation beyond the historical regime. The same strategy is then applied to the actual global surface air temperature maps taken from the Berkeley Earth Surface Temperatures (BEST) dataset, including observational noise (Methods). Observational data are used for transferring knowledge and constraining ESM simulations on a common historical period, in order to reduce the ESM uncertainty in future projections. Specifically, the DNNs pre-trained on ESM simulation data over the 1850-2022 period are "transfer learned" on the BEST data covering the 1979-2022 time period. The resulting fine-tuned DNNs are then used to project global surface air temperature maps over the 2023-2098 time period, for each SSP scenario. In the following, the SSP2-4.5 scenario is used again as reference, and predicted future warming values are computed with respect to the 1850-1900 baseline period. The ensemble mean and spread across the DNNs are used to project future climate change. Our estimate of the projected global annual-mean temperature increase by 2098 is 2.8\({}^{\circ}\)C (2.5-3.21\({}^{\circ}\)C, 5-95% range) (Fig. 2). This can be compared to the inter-model ESM equal-weight mean of 2.79\({}^{\circ}\)C (1.65-3.92\({}^{\circ}\)C, 5-95% range). The equal-weight mean ESM projection of global temperature is comparable to the one projected by the DNNs ensemble, building further confidence in our approach. Concerning the 2081-2098 time period, the ESM and DNN predictions are 2.7\({}^{\circ}\)C (1.65-3.8\({}^{\circ}\)C, 5-95% range) and 2.73\({}^{\circ}\)C (2.44-3.11\({}^{\circ}\)C, 5-95% range), respectively. The difference is slightly higher under the SSP3-7.0 scenario, with 3.71\({}^{\circ}\)C (2.56-4.97\({}^{\circ}\)C, ESMs) and 3.92\({}^{\circ}\)C (3.5-4.47\({}^{\circ}\)C, DNNs). However, there is substantially more spread in the global mean temperature projection of the CMIP6 ensemble, and a major strength of our method is its ability to reduce these future uncertainties. It is worth noting that the spread in the ESM mean global temperature projection is sensitive to the subset of models used for the ensemble in the standard CMIP6 projections.
This is not the case in our approach, as all of the DNNs trained on independent models and then fine-tuned on historical temperature data project nearly the same global temperature rise after transfer learning, with a 90% confidence range across models equal to 2.5-3.21\({}^{\circ}\)C by 2098. This shows that the DNNs, when exposed to historical observational data, are able to correct the ESMs' structural errors, such as the ones due to low clouds or convection. In particular, the DNNs have learned the non-linear dependence on CO\({}_{2}\) equivalent when exposed to historical data. We also note that the skill of the prediction is similar across DNNs, emphasizing that they are able to implicitly learn the impact of spatial variability on warming, as it differs across models but does not reduce the skill; nor does the model filiation impact the result, as the same skill is reached across models whether or not they share some lineage. The strong reduction in our projections' spread is reflected in the reduction of the 90% confidence range of the projected surface air temperature in 2081-2098, relative to 1995-2014, under SSP2-4.5 from 0.51-3.07\({}^{\circ}\)C (with the unconstrained CMIP6 ensemble) to 1.53-2.2\({}^{\circ}\)C (this work): this represents a 70% reduction of the overall uncertainty range with respect to the unconstrained CMIP6. Our predicted ranges for the SSP3-7.0 and SSP5-8.5 emissions scenarios are 2.59-3.56\({}^{\circ}\)C and 2.78-4.59\({}^{\circ}\)C, respectively (Fig. 3). In comparison to other state-of-the-art work aimed at narrowing down the model-based projection uncertainty, we find a 46% reduction with respect to Ribes et al., 2021 [36], 52% with respect to Liang et al., 2020 [38], and 56% with respect to Tokarska et al., 2020 [35] under SSP2-4.5. Moreover, we obtained a 52% reduction under SSP2-4.5 with respect to the 5-95% range (1.2-2.6\({}^{\circ}\)C, relative to 1995-2014) assessed by the IPCC WGI [40]. Even for SSP3-7.0 and 5-8.5 we observe a reduction in the uncertainty range with respect to other recent estimates, in particular equal to 32% for SSP3-7.0 and 17% for SSP5-8.5 with respect to Ribes et al., 2021 [36]; the reduction is equal to 43% for SSP3-7.0 and 24% for SSP5-8.5 with respect to the IPCC WGI [40] (Fig. 3). Concerning near-term (2021-2040) projections with respect to 1995-2014, our TL approach predicts an average mean temperature value of 0.59\({}^{\circ}\)C (0.52-0.66\({}^{\circ}\)C, 5-95% range), 0.65\({}^{\circ}\)C (0.58-0.73\({}^{\circ}\)C, 5-95% range) and 0.67\({}^{\circ}\)C (0.58-0.79\({}^{\circ}\)C, 5-95% range) for SSP2-4.5, 3-7.0 and 5-8.5, respectively. On the other hand, the temperatures predicted in the mid-term (2041-2060), relative to 1995-2014, are 1.09\({}^{\circ}\)C (0.93-1.27\({}^{\circ}\)C, 5-95% range), 1.33\({}^{\circ}\)C (1.2-1.49\({}^{\circ}\)C, 5-95% range) and 1.46\({}^{\circ}\)C (1.21-1.75\({}^{\circ}\)C, 5-95% range) for SSP2-4.5, 3-7.0 and 5-8.5 respectively. These values agree with the IPCC AR6 evaluation of 2041-2060 temperatures, but with a reduced uncertainty: the corresponding AR6 90% confidence ranges are 0.8-1.9\({}^{\circ}\)C, 0.9-2.3\({}^{\circ}\)C and 1.2-2.5\({}^{\circ}\)C for SSP2-4.5, 3-7.0 and 5-8.5, respectively [40]. This corresponds to an uncertainty reduction equal to 69%, 79% and 58% for SSP2-4.5, 3-7.0 and 5-8.5 respectively.
Concerning the near-term projections, the IPCC AR6 WGI predictions are 0.4-1.2\({}^{\circ}\)C, 0.5-1.2\({}^{\circ}\)C and 0.5-1.3\({}^{\circ}\)C for SSP2-4.5, 3-7.0 and 5-8.5, respectively [40]. This corresponds to a percentage uncertainty reduction equal to 83%, 79% and 74% for SSP2-4.5, 3-7.0 and 5-8.5 respectively. The Paris Agreement is aimed at "holding the increase in the global average temperature to well below 2\({}^{\circ}\)C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5\({}^{\circ}\)C above pre-industrial levels" [40]. According to our results, the 1.5\({}^{\circ}\)C global threshold (relative to 1850-1900) will be exceeded in 2031 (with a range of 2028-2034) under SSP2-4.5, in 2029 (2027-2031) under SSP3-7.0 and in 2028 (2025-2031) under SSP5-8.5. Similarly, the 2\({}^{\circ}\)C threshold will be exceeded in 2051 (2045-2059), 2044 (2040-2047) and 2042 (2038-2047) for SSP2-4.5, 3-7.0 and 5-8.5 respectively. Each of those years is computed as the first year at which the 21-year running average of surface air temperature exceeds the given global warming level, as done in Chapter 4 of the IPCC WGI AR6 [40]. From the analysis made by the IPCC WGI in the AR6, the central estimate of crossing the 1.5\({}^{\circ}\)C threshold (relative to 1850-1900) is found to be in the "early 2030s" (in all SSP scenarios except 5-8.5), about 10 years earlier than the midpoint of the likely range (2030 to 2052) communicated in the Special Report [43] on 1.5\({}^{\circ}\)C, in which continuation of the current warming rate was assumed [40]. Estimates based on linear extrapolation of the recent global warming trend indicate that 1.5\({}^{\circ}\)C will be reached in the first years after 2030 and up to 2035 [44]. Our estimates suggest that 1.5\({}^{\circ}\)C will be crossed earlier with respect to both of the aforementioned results and to the time-to-threshold (i.e., the time until the threshold is met) computed by Diffenbaugh and Barnes, 2023 (mean: 2033; range: 2028 to 2039) for SSP2-4.5 [45]. The same analysis holds for the 2\({}^{\circ}\)C threshold, considering a predicted time-to-threshold equal to 2050 (2043 to 2058) in SSP3-7.0 and 2049 (2043 to 2055) in SSP2-4.5 by Diffenbaugh and Barnes, 2023 [45]. Our TL DNNs ensemble predicts earlier time-to-thresholds with respect to other state-of-the-art estimates, likely because of the stronger retrieved spatial pattern, especially the Arctic Amplification, which most ESMs, even CMIP6 models, still tend to underestimate [46]. Nonetheless, our results are in agreement with the mean and the 5-95% range of the warming (relative to 1850-1900) projected by the CMIP6 ensemble for 2041-2060 as assessed by the IPCC WGI AR6, in SSP2-4.5 (mean: 2.1\({}^{\circ}\)C; 5-95%: 1.5 to 3.0\({}^{\circ}\)C), SSP3-7.0 (mean: 2.3\({}^{\circ}\)C; 5-95%: 1.6 to 3.2\({}^{\circ}\)C) and SSP5-8.5 (mean: 2.6\({}^{\circ}\)C; 5-95%: 1.8 to 3.4\({}^{\circ}\)C) [40].
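For concreteness, the crossing-year computation described above (the first year whose centered 21-year running mean of the global surface air temperature anomaly exceeds a given warming level) can be sketched as follows. This is an illustrative Python snippet, not the authors' code, and the temperature series used in the example is a synthetic placeholder.

```python
import numpy as np

def crossing_year(years, gsat_anomaly, level, window=21):
    """Return the first year whose centered `window`-year running mean of the
    global surface air temperature anomaly (relative to 1850-1900) exceeds
    `level` (in degrees C), or None if the threshold is never crossed."""
    half = window // 2
    for i in range(half, len(years) - half):
        if gsat_anomaly[i - half:i + half + 1].mean() > level:
            return int(years[i])
    return None

# Synthetic example: linear warming of 0.03 degrees C per year, starting from
# an anomaly of 1.0 degrees C in the year 2000.
years = np.arange(2000, 2099)
gsat = 1.0 + 0.03 * (years - 2000)
print(crossing_year(years, gsat, 1.5), crossing_year(years, gsat, 2.0))  # 2017 2034
```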
## Conclusions

Deep neural networks initially trained to emulate Earth System Models (ESMs) are fine-tuned using Transfer Learning on historical global surface air temperature maps, to better project climate change for prescribed GHG concentrations. In particular, we reached a 46% reduction in the uncertainty in the global surface air temperature projection with respect to the best state-of-the-art approaches [36], based on the 5-95% range of global surface air temperature in 2081-2098 for the SSP2-4.5 scenario. For the SSP3-7.0 and SSP5-8.5 scenarios, a reduction in the uncertainty of 32% and 17%, respectively, was achieved. Moreover, we observed a reduction in the projection uncertainty of 52%, 32% and 17% for SSP2-4.5, 3-7.0 and 5-8.5 respectively, compared to the assessment made by the IPCC WGI in the AR6 [40] (based on a 5-95% range of global surface air temperature in 2081-2098). Our 2098 estimate of the global surface air temperature increase (relative to 1850-1900) is 2.8\({}^{\circ}\)C (2.5-3.21\({}^{\circ}\)C, 5-95% range) for SSP2-4.5, 4.39\({}^{\circ}\)C (3.87-5.09\({}^{\circ}\)C, 5-95% range) for SSP3-7.0 and 5.12\({}^{\circ}\)C (4.06-6.26\({}^{\circ}\)C, 5-95% range) for SSP5-8.5. Our models give an estimate of the TCR equal to 2.29\({}^{\circ}\)C (2.08-2.54\({}^{\circ}\)C, 5-95% range) under SSP2-4.5, which is slightly higher but still in line with the recent TCR best estimate of 1.8\({}^{\circ}\)C (1.2-2.4\({}^{\circ}\)C, very likely range) from the IPCC WGI in the AR6 [47]. Our estimates translate into exceeding the 1.5\({}^{\circ}\)C threshold (with respect to 1850-1900) established by the Paris Agreement in 2031 (2028-2034) under SSP2-4.5, in 2029 (2027-2031) under SSP3-7.0 and in 2028 (2025-2031) under SSP5-8.5. Similarly, the 2\({}^{\circ}\)C threshold will be exceeded in 2051 (2045-2059), 2044 (2040-2047) and 2042 (2038-2047), respectively. Moreover, one of the key features of our work is the projection of annual surface air temperature maps with global coverage instead of simply providing annual average scalar values. However, there are still substantial uncertainties related to future GHG concentration scenarios. Some of those uncertainties relate to projections of the ocean and terrestrial carbon uptake [48, 49], even though there have been recent estimates to refine those model estimates [50]. Yet, reducing GHG emissions is clearly the only path forward to reaching the limits set by the Paris Agreement. It is upon the international community to achieve this goal.

**Fig. 1 | Leave-one-model-out cross-validation approach example (here for FGOALS-f3-L).** **a**, Global average surface air temperature projected by the Deep Neural Networks (DNNs) (thin dotted lines), corresponding averages across the DNNs (bold green line) for each scenario and FGOALS-f3-L simulation data (orange bold line). The projections are generated after transfer learning each DNN on the FGOALS-f3-L historical model simulations. Red shadings show the training set (1850-2022) and green shadings show the 5-95% range (numerical values for the 5-95% range of the temperature prediction in 2098 are reported as well). **b,c,d**, Maps of surface air temperature projected in 2081-2098 by FGOALS-f3-L (**b**) and by the DNNs ensemble (average computed across maps generated by each DNN) (**c**) under the SSP2-4.5 scenario. The bias map (difference between DNNs and CMIP6 average temperature maps) is also reported (**d**).

Figure 2: **Transfer Learning on observations.** Deep Neural Networks (DNNs) multi-model mean projections (bold blue line) of global average surface air temperature for each scenario. The projections are generated after transfer learning (training set: 1979–2022, red shading) each DNN on BEST historical observational data (black dots). The red line in each plot represents the year the 2\({}^{\circ}\)C Paris Agreement threshold will be reached according to the DNNs projections. The 5–95% ranges of DNNs (light blue shading) and CMIP6 unconstrained (orange shading) projections are reported.
The unconstrained CMIP6 multi-model mean (bold orange line) is shown as well. For each plot, numerical values of the 5–95% range for the temperature prediction in 2098 are reported in square brackets.

Figure 3: **Global surface air temperature changes for the long-term period (2081–2098).** Global 5–95% warming ranges for the long-term period (2081–2098) relative to the average over 1995–2014 (left y axis) and 1850–1900 (right y axis) for SSP2-4.5, 3-7.0 and 5-8.5 scenarios. White lines for each box plot represent the median values. These results extend those reported in Chapter 4 of the IPCC AR6[40].

Figure 4: **Long-term surface air temperature anomaly maps.** **a–c**, Surface air temperature anomaly maps in 2081–2098 with respect to the 1850–1900 base period for the SSP3-7.0 scenario. They are computed by averaging in time (between 2081 and 2098) the temperature maps generated by the DNNs (**a**) and CMIP6 models (**b**). The difference between DNNs and CMIP6 average maps is also reported (**c**). The maps are produced by the DNNs after transfer learning them on observations.

## References

* [1] Intergovernmental Panel on Climate Change. _Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change_ (eds. Stocker, T. F. _et al._) (Cambridge Univ. Press, 2013). * [2] Intergovernmental Panel on Climate Change. _Climate Change 2014: Impacts, Adaptation, and Vulnerability: Working Group II Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change._ (eds. Field, C. B. _et al._) (Cambridge Univ. Press, 2014). * [3] UNFCCC. _Adoption of the Paris Agreement_. Report No. FCCC/CP/2015/L.9/Rev.1, 21932 (UNFCCC, 2015). * [4] Taylor, K. E., Stouffer, R. J. & Meehl, G. A. An Overview of CMIP5 and the Experiment Design. _B. Am. Meteorol. Soc._**93**(4), 485-498. [https://doi.org/10.1175/BAMS-D-11-00094.1](https://doi.org/10.1175/BAMS-D-11-00094.1) (2012). * [5] Eyring, V. _et al._ Overview of the coupled model intercomparison project phase 6 (CMIP6) experimental design and organization. _Geosci. Model Dev._**9**, 1937-1958. [https://doi.org/10.5194/gmd-9-1937-2016](https://doi.org/10.5194/gmd-9-1937-2016) (2016). * [6] Hamed, M. M., Nashwan, M. S., Shiru, M. S. & Shahid, S. Comparison between CMIP5 and CMIP6 models over MENA region using historical simulations and future projections. _Sustainability_**14**, 10375. [https://doi.org/10.3390/su141610375](https://doi.org/10.3390/su141610375) (2022). * [7] O'Neill, B. C. _et al._ The Scenario Model Intercomparison Project (ScenarioMIP) for CMIP6. _Geosci. Model Dev._**9**, 3461-3482. [https://doi.org/10.5194/gmd-9-3461-2016](https://doi.org/10.5194/gmd-9-3461-2016) (2016). * [8] Tebaldi, C. _et al._ Climate model projections from the Scenario Model Intercomparison Project (ScenarioMIP) of CMIP6. _Earth Syst. Dyn._**12**(1), 253-293. [https://doi.org/10.5194/esd-12-253-2021](https://doi.org/10.5194/esd-12-253-2021) (2021). * [9] Zelinka, M. D. _et al._ Causes of higher climate sensitivity in CMIP6 models. _Geophys. Res. Lett._**47**, e2019GL085782. [https://doi.org/10.1029/2019GL085782](https://doi.org/10.1029/2019GL085782) (2020). * [10] Meehl, G. A. _et al._ Context for interpreting equilibrium climate sensitivity and transient climate response from the CMIP6 Earth system models. _Sci. Adv._**6**, eaba1981. [https://doi.org/10.1126/sciadv.aba1981](https://doi.org/10.1126/sciadv.aba1981) (2020). * [11] Schneider, T.
_et al._ Climate goals and computing the future of clouds. _Nat. Clim. Change_**7**, 3-5. [https://doi.org/10.1038/nclimate3190](https://doi.org/10.1038/nclimate3190) (2017). * [12] Bony, S. _et al._ Clouds, circulation and climate sensitivity. _Nat. Geosci._**8**, 261-268. [https://doi.org/10.1038/ngeo2398](https://doi.org/10.1038/ngeo2398) (2015). * [13] Sherwood, S. C., Bony, S. & Dufresne, J. L. Spread in model climate sensitivity traced to atmospheric convective mixing. _Nature_**505**, 37-42. [https://doi.org/10.1038/nature12829](https://doi.org/10.1038/nature12829) (2014). * [14] Weiss, K., Khoshgoftaar, T. M. & Wang, D. A survey of transfer learning. _J. Big Data_**3**, 9. [https://doi.org/10.1186/s40537-016-0043-6](https://doi.org/10.1186/s40537-016-0043-6) (2016). * [15] Rose, B. E. J., Armour, K. C., Battisti, D. S., Feldl, N. & Koll, D. D. B. The dependence of transient climate sensitivity and radiative feedbacks on the spatial pattern of ocean heat uptake. _Geophys. Res. Lett._**41**, 1071-1078. [https://doi.org/10.1002/2013GL058955](https://doi.org/10.1002/2013GL058955) (2014). * [16] Andrews, T. _et. al._ Accounting for Changing Temperature Patterns Increases Historical Estimates of Climate Sensitivity. _Geophys. Res. Lett._**45**, 8490-8499. [https://doi.org/10.1029/2018GL078887](https://doi.org/10.1029/2018GL078887) (2018). * [17] Armour, K. C. Energy budget constraints on climate sensitivity in light of inconstant climate feedbacks. _Nat. Clim. Change_**7**, 331-335. [https://doi.org/10.1038/nclimate3278](https://doi.org/10.1038/nclimate3278) (2017). * [18] Eyring, V. _et al._ Taking climate model evaluation to the next level. _Nat. Clim. Change_**9**, 102-110. [https://doi.org/10.1038/s41558-018-0355-y](https://doi.org/10.1038/s41558-018-0355-y) (2019). * [19] Sherwood, S. _et al._ An assessment of Earth's climate sensitivity using multiple lines of evidence. _Rev. Geophys._**58**, e2019RG000678. [https://doi.org/10.1029/2019RG000678](https://doi.org/10.1029/2019RG000678) (2020). * [20] Tierney, J. E. _et al._ Past climates inform our future. _Science_**370**, eaay3701. [https://doi.org/10.1126/science.aay370](https://doi.org/10.1126/science.aay370) (2020). * [21] de Boer, H. J. _et al._ Climate forcing due to optimization of maximal leaf conductance in subtropical vegetation under rising CO2. _Proc. Natl Acad. Sci. USA_**108**, 4041-4046. [https://doi.org/10.1073/pnas.1100555108](https://doi.org/10.1073/pnas.1100555108) (2011). * [22] Hall, A., Cox, P., Huntingford, C. & Klein, S. Progressing emergent constraints on future climate change. _Nat. Clim. Change_**9**, 269-278. [https://doi.org/10.1038/s41558-019-0436-6](https://doi.org/10.1038/s41558-019-0436-6) (2019). * [23] Caldwell, P. M., Zelinka, M. D. & Klein, S. A. Evaluating Emergent Constraints on Equilibrium Climate Sensitivity. _J. Clim._**31**, 3921-3942. [https://doi.org/10.1175/JCLI-D-17-0631.1](https://doi.org/10.1175/JCLI-D-17-0631.1) (2018). * [24] Cox, P. M., Huntingford, C. & Williamson, M. S. Emergent constraint on equilibrium climate sensitivity from global temperature variability. _Nature_**553**, 319-322. [https://doi.org/10.1038/nature25450](https://doi.org/10.1038/nature25450) (2018). * [25] Schlund, M., Lauer, A., Gentine, P., Sherwood, S. C. & Eyring, V. Emergent constraints on equilibrium climate sensitivity in CMIP5: do they hold for CMIP6? _Earth Syst. Dyn._**11**, 1233-1258. [https://doi.org/10.5194/esd-11-1233-2020](https://doi.org/10.5194/esd-11-1233-2020) (2020). 
* [26] Friedrich, T., Timmermann, A., Tigchelaar, M., Timm, O. E. & Ganopolsji, A. Nonlinear climate sensitivity and its implications for future greenhouse warming. _Sci. Adv._**2**, e1501923. [https://doi.org/10.1126/sciadv.1501923](https://doi.org/10.1126/sciadv.1501923) (2016). * [27] Coppin, D. & Bony, S. On the Interplay Between Convective Aggregation, Surface Temperature Gradients, and Climate Sensitivity. _J. Adv. Model. Earth Syst._**10**, 3123-3138. [https://doi.org/10.1029/2018MS001406](https://doi.org/10.1029/2018MS001406) (2018). * [28] Mauritsen, T. & Stevens, B. Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models. _Nat. Geosci._**8**, 346-351. [https://doi.org/10.1038/ngeo2414](https://doi.org/10.1038/ngeo2414) (2015). * [29] Schneider, T., Kaul, C. M. & Pressel, K. G. Possible climate transitions from breakup of stratocumulus decks under greenhouse warming. _Nat. Geosci._**12**, 163-167. [https://doi.org/10.1038/s41561-019-0310-1](https://doi.org/10.1038/s41561-019-0310-1) (2019). * [30] Wing, A. A. _et al._ Clouds and convective self-aggregation in a multimodel ensemble of radiative-convective equilibrium simulations. _J. Adv. Model. Earth Syst._**12**, e2020MS002138. [https://doi.org/10.1029/2020MS002138](https://doi.org/10.1029/2020MS002138) (2020). * [31] Nijsse, F. J. M. M., Cox, P. M. & Williamson, M. S. Emergent constraints on transient climate response (TCR) and equilibrium climate sensitivity (ECS) from historical warming in CMIP5 and CMIP6 models. _Earth Syst. Dyn._**11**, 737-750. [https://doi.org/10.5194/esd-11-737-2020](https://doi.org/10.5194/esd-11-737-2020) (2020). * [32] Knutti, R. & Rugenstein, M. A. Feedbacks, climate sensitivity and the limits of linear models. _Phil. Trans. R. Soc. A_**373**, 20150146. [https://doi.org/10.1098/rsta.2015.0146](https://doi.org/10.1098/rsta.2015.0146) (2015). * [33] Knutti, R., Rugenstein, M. A. A. & Hegerl, G. C. Beyond equilibrium climate sensitivity. _Nat. Geosci._**10**, 727-736. [https://doi.org/10.1038/ngeo3017](https://doi.org/10.1038/ngeo3017) (2017). * [34] Wills, R. C. J., Battisti, D. S., Armour, K. C., Schneider, T. & Deser, C. Pattern Recognition Methods to Separate Forced Responses from Internal Variability in Climate Model Ensembles and Observations. _J. Clim._**33**, 8693-8719. [https://doi.org/10.1175/JCLI-D-19-0855.1](https://doi.org/10.1175/JCLI-D-19-0855.1) (2020). * [35] Tokarska, K. B. _et al._ Past warming trend constrains future warming in CMIP6 models. _Sci. Adv._**6**, eaaz9549. [https://doi.org/10.1126/sciadv.aaz9549](https://doi.org/10.1126/sciadv.aaz9549) (2020). * [36] Ribes, A., Qasmi, S. & Gillet, N. P. Making climate projections conditional on historical observations. _Sci. Adv._**7**, eabc0671. [https://doi.org/10.1126/sciadv.abc0671](https://doi.org/10.1126/sciadv.abc0671) (2021). * [37] Knutti, R. _et al._ A climate model projection weighting scheme accounting for performance and interdependence. _Geophys. Res. Lett._**44**, 1909-1918. [https://doi.org/10.1002/2016gl072012](https://doi.org/10.1002/2016gl072012) (2017). * [38] Liang, Y., Gillett, N. P., & Monahan, A. H. Climate model projections of 21st century global warming constrained using the observed warming trend. _Geophys. Res. Lett._**47**, e2019GL086757. [https://doi.org/10.1029/2019GL086757](https://doi.org/10.1029/2019GL086757) (2020). * [39] Andrews, T. _et al._ On the Effect of Historical SST Patterns on Radiative Feedback. 
_Journal of Geophysical Research: Atmospheres_**127**, e2022JD036675. [https://doi.org/10.1029/2022JD036675](https://doi.org/10.1029/2022JD036675) (2022). * [40] Lee, J.-Y. _et al._ Future Global Climate: Scenario-Based Projections and Near- Term Information. In _Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change_ (eds Masson-Delmotte, V. et al.). pp. 553-672. [https://doi.org/10.1017/9781009157896.006](https://doi.org/10.1017/9781009157896.006) (Cambridge University Press, 2021). * [41] Kohavi, R. A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection. _Appear. Int. Jt. Conf. Artificial Intell._**5**, 1-7. (1995). * [42] Huard, D., Fyke, J., Capellan-Perez, I., Matthews, H. D. & Partanen, A.-I. Estimating the Likelihood of GHG Concentration Scenarios From Probabilistic Integrated Assessment Model Simulations. _Earth's Future_**10**, e2022EF002715. [https://doi.org/10.1029/2022EF002715](https://doi.org/10.1029/2022EF002715) (2022). * [43] IPCC, 2018: Summary for Policymakers. In: _Global Warming of 1.5\({}^{\circ}\)C. An IPCC Special Report on the impacts of global warming of 1.5\({}^{\circ}\)C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty._ Cambridge University Press, Cambridge, UK and New York, NY, USA, pp. 3-24. [https://doi.org/10.1017/9781009157940.001](https://doi.org/10.1017/9781009157940.001) (2018). * [44] A. Amici, Copernicus Climate Change Service. European Centre for Medium-Range Weather Forecasts (ECMWF) -- Global Temperature Trend Monitor: User Guide (2021). * [45] Diffenbaugh, N. S. & Barnes, E. A. Data-driven predictions of the time remaining until critical global warming thresholds are reached. _Proc. Natl. Acad. Sci._**120**(6), e2207183120. [https://doi.org/10.1073/pnas.2207183120](https://doi.org/10.1073/pnas.2207183120) (2023). * [46] Rantanen, M. _et al._ The Arctic has warmed nearly four times faster than the globe since 1979. _Commun. Earth Environ._**3**, 168. [https://doi.org/10.1038/s43247-022-00498-3](https://doi.org/10.1038/s43247-022-00498-3) (2022). * [47] Forster, P. M. _et al._ The Earth's energy budget, climate feedbacks, and climate sensitivity. In Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (eds Masson-Delmotte, V. et al.). pp. 923-1054. [https://doi.org/10.1017/9781009157896.009](https://doi.org/10.1017/9781009157896.009) (Cambridge University Press, 2021). * [48] Friedlingstein, P. _et al._ Climate-carbon cycle feedback analysis: results from the C4MIP model intercomparison. _J. Clim._**19**, 3337-3353. [https://doi.org/10.1175/JCLI3800.1](https://doi.org/10.1175/JCLI3800.1) (2006). * [49] Jones, C. D. _et al._ C4MIP-The coupled climate-carbon cycle model intercomparison project: experimental protocol for CMIP6. _Geosci. Model Dev._**9**, 2853-2880. [https://doi.org/10.5194/gmd-9-2853-2016](https://doi.org/10.5194/gmd-9-2853-2016) (2016). * [50] Wenzel, S., Cox, P. M., Eyring, V. & Friedlingstein, P. Projected land photosynthesis constrained by changes in the seasonal cycle of atmospheric CO\({}_{2}\). _Nature_**538**, 499-501. [https://doi.org/10.1038/nature19772](https://doi.org/10.1038/nature19772) (2016). 
## Methods

### Earth System Models

We use global surface air temperature maps simulated from 1850 to 2098 by Earth System Models (ESMs) from the Coupled Model Intercomparison Project Phase 6 (CMIP6) ensembles under the Shared Socioeconomic Pathways (SSPs) 2-4.5, 3-7.0 and 5-8.5, using SSP2-4.5 as the baseline. The chosen ESMs are: ACCESS-CM2, AWI-CM-1-1-MR, BCC-CSM2-MR, CAMS-CSM1-0, CanESM5-CanOE, CMCC-CM2-SR5, CNRM-CM6-1, CNRM-ESM2-1, FGOALS-f3-L, FGOALS-g3, GFDL-ESM4, IITM-ESM, INM-CM4-8, INM-CM5-0, IPSL-CM6A-LR, KACE-1-0-G, MIROC6, MPI-ESM1-2-LR, MRI-ESM2-0, NorESM2-MM, TaiESM1, UKESM1-0-LL. The models are selected according to the ensemble member r11p1f1 and the availability of the simulations for the three SSPs. Some of the ESM simulations after the aforementioned selection are available at 250 km spatial resolution and others at about 100 km. All of the simulations are remapped to the CanESM5-CanOE grid, which has the lowest spatial resolution among them, with 64\(\times\)128 grid points. All maps must have the same size and spatial resolution to be used with the Deep Neural Networks (DNNs), and the coarsest spatial resolution is selected to avoid the synthetic information that would be added by remapping to a higher-resolution grid. The CMIP6 simulated maps are provided with a monthly temporal resolution but, with the aim of matching the carbon dioxide (CO\({}_{2}\)) input data, they are averaged over time to generate the corresponding annual version.

### CO\({}_{2}\) equivalent data

One single CO\({}_{2}\) equivalent (CO\({}_{2}\)e) value is fed to the DNNs for each year and for each SSP scenario. The CO\({}_{2}\)e values are computed from Effective Radiative Forcing (ERF) values that take into account aerosols and Greenhouse Gases (GHG) such as CO\({}_{2}\), methane, nitrous oxide, etc., and are estimated by the MCE v1.2 reduced-complexity model[53]. We have one ERF value per year for each SSP scenario. For each ERF value we iteratively compute the CO\({}_{2}\)e value such that, when fed to a CO\({}_{2}\) radiative forcing formula, the output differs from the ERF value by less than 1e-5. The CO\({}_{2}\) radiative forcing formula used in this work is the one introduced by Meinshausen et al., 2020[54] (for radiative forcing after stratospheric adjustments and relative to pre-industrial (1750) levels) and is an optimized modification of the simplified formula presented by Etminan et al., 2016[55]. It is the following:

\[RF_{\mathrm{CO_{2}}}=(\alpha^{\prime}+\alpha_{\mathrm{N_{2}O}})\cdot\ln\left(\frac{C}{C_{0}}\right),\]

where \(C\) is the CO\({}_{2}\)(-equivalent) concentration in ppm, \(N\) is the N\({}_{2}\)O concentration in ppb, and

\[C_{\alpha\mathrm{max}}=C_{0}-\frac{b_{1}}{2a_{1}}\approx 1808\ \mathrm{ppm},\]
\[\alpha^{\prime}=d_{1}-\frac{b_{1}^{2}}{4a_{1}}\quad\text{for }C>C_{\alpha\mathrm{max}},\]
\[\alpha^{\prime}=d_{1}+a_{1}(C-C_{0})^{2}+b_{1}(C-C_{0})\quad\text{for }C_{0}<C<C_{\alpha\mathrm{max}},\]
\[\alpha^{\prime}=d_{1}\quad\text{for }C<C_{0},\]
\[\alpha_{\mathrm{N_{2}O}}=c_{1}\cdot\sqrt{N},\]

with the constants \(a_{1}=-2.4785\times 10^{-7}\ \mathrm{W\,m^{-2}\,ppm^{-2}}\), \(b_{1}=0.00075906\ \mathrm{W\,m^{-2}\,ppm^{-1}}\), \(c_{1}=-0.0021492\ \mathrm{W\,m^{-2}\,ppb^{-0.5}}\), \(d_{1}=5.2488\ \mathrm{W\,m^{-2}}\) and \(C_{0}=277.15\ \mathrm{ppm}\).
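As an illustration of this inversion step, the sketch below implements the forcing formula above and recovers a CO\({}_{2}\)e concentration from a given ERF value by simple bisection. It is not the authors' code: the choice of bisection, the search interval, and the fixed N\({}_{2}\)O concentration (here 273 ppb, a pre-industrial-like value) are assumptions made only for this example.

```python
import math

# Constants of the Meinshausen et al. (2020) CO2 forcing formula, as quoted above.
a1, b1, c1, d1 = -2.4785e-7, 7.5906e-4, -2.1492e-3, 5.2488
C0 = 277.15                        # pre-industrial CO2 concentration [ppm]
C_alpha_max = C0 - b1 / (2 * a1)   # ~1808 ppm

def rf_co2(C, N):
    """CO2 radiative forcing [W m^-2] for a CO2(-equivalent) concentration C [ppm]
    and an N2O concentration N [ppb]."""
    if C > C_alpha_max:
        alpha = d1 - b1 ** 2 / (4 * a1)
    elif C > C0:
        alpha = d1 + a1 * (C - C0) ** 2 + b1 * (C - C0)
    else:
        alpha = d1
    return (alpha + c1 * math.sqrt(N)) * math.log(C / C0)

def co2e_from_erf(erf, N=273.0, lo=150.0, hi=10000.0, tol=1e-5):
    """Find C such that |rf_co2(C, N) - erf| < tol by bisection
    (rf_co2 is monotonically increasing in C over the search interval)."""
    while hi - lo > 1e-9:
        mid = 0.5 * (lo + hi)
        if abs(rf_co2(mid, N) - erf) < tol:
            return mid
        if rf_co2(mid, N) < erf:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: an ERF of ~3.75 W m^-2 corresponds to roughly a doubling of CO2.
print(round(co2e_from_erf(3.75), 1))
```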
### BEST observational data

We use historical surface air temperature estimates from the monthly Berkeley Earth Surface Temperatures (BEST)[56] gridded data, which are provided on a 1\({}^{\circ}\) latitude/longitude grid from 1850 to 2022 with a monthly temporal resolution. Specifically, we select the BEST maps with air temperatures at sea ice, in which temperatures in the presence of sea ice are extrapolated from land-surface air temperature. This proved to be a more sensible approach for capturing climate change, especially at the poles. Indeed, the change of air temperatures over sea ice can be large even if the Sea Surface Temperature (SST) under sea ice is not changing, since the latter is tied to the freezing point of water and can only vary with changes in sea-ice cover. Over the last decades, the Arctic region was characterized by a very strong warming trend during the winter season, and this translated into an additional \(\sim\)0.1\({}^{\circ}\)C global-average temperature rise during the 19th century with respect to estimates not including such changes (i.e., estimates based on SST under sea ice)[56]. The BEST data are remapped to the same CanESM5-CanOE grid used for the ESM data, thus generating gridded data of size 64\(\times\)128, and averaged over time to get one map per year. Although the temporal coverage of the BEST dataset starts from 1850, maps prior to 1979 are excluded after the remapping because of the lack of data in many regions at the time and the resulting reduced accuracy prior to 1979. The temporal domain used is therefore 1979-2022. In order to take into account aleatoric uncertainty (i.e., uncertainty related to the inherent randomness of a given phenomenon), noise is added to each annual BEST map by sampling from a Gaussian distribution with zero mean and standard deviation equal to the annual uncertainties associated with the BEST dataset, which represent statistical and spatial undersampling effects as well as ocean biases[57]. To include epistemic uncertainty (i.e., uncertainty due to lack of knowledge by the model about the phenomenon of interest), 5 datasets are built for each ESM and for each scenario by adding the random Gaussian noise to the BEST temperature maps, thus obtaining an ensemble of 330 (i.e., 5\(\times\)22\(\times\)3) datasets of historical observations used to estimate structural and epistemic uncertainties. We tried 10 and 20 BEST datasets per model and scenario as well, but did not obtain substantial improvements.
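A rough sketch of this perturbation step is given below; it is not the authors' code, and the map shape, uncertainty values, and number of copies are placeholders chosen to mirror the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_best(best_maps, annual_sigma, n_copies=5):
    """best_maps: (n_years, 64, 128) annual BEST temperature maps.
    annual_sigma: (n_years,) published annual uncertainties (one value per year).
    Returns n_copies datasets, each perturbed with zero-mean Gaussian noise whose
    standard deviation is the uncertainty of the corresponding year."""
    return [best_maps + rng.normal(0.0, annual_sigma[:, None, None], best_maps.shape)
            for _ in range(n_copies)]

# Placeholder example for the 1979-2022 period (44 years).
best = rng.normal(14.0, 1.0, size=(44, 64, 128))   # fake annual maps
sigma = np.full(44, 0.05)                          # fake annual uncertainties
noisy_datasets = perturb_best(best, sigma)
print(len(noisy_datasets), noisy_datasets[0].shape)
```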
### Transfer Learning Approach

The first step of the algorithm involves the use of Deep Neural Networks (DNNs) to emulate and replicate the annual surface air temperature maps simulated by 22 CMIP6 ESMs participating in the Scenario Model Intercomparison Project (ScenarioMIP)[58]. These models cover the global surface and are selected for the SSP2-4.5, 3-7.0 and 5-8.5 scenarios. To achieve this, an individual DNN is trained for each ESM simulation. Each DNN is trained to predict temperature maps starting from the annual CO\({}_{2}\) equivalent concentration, which includes aerosols and the radiative effect of other GHG, such as methane. These inputs are readily available in most models, and the objective is to generate one surface air temperature map per year. In total, 66 DNNs are implemented and trained, representing the combination of 22 models and 3 scenarios. The training is performed using data from 1850 to 2098, since the latter represents the last projection year available in all the selected ESMs' simulations. The year 2095 was left out from training and used for validation purposes. To evaluate the potential of our Transfer Learning (TL) strategy, for each scenario we first take one ESM simulation out of the CMIP6 ensemble (N = 22) and use the time period from 2023 to 2098 as a synthetic truth for validation in a Leave-One-Out Cross-Validation approach. Specifically, each of the DNNs trained on the remaining N-1 ESMs is "transfer learned" on the left-out simulation for the corresponding scenario by updating its weights on the (simulated) historical data from 1850 to 2022. In other words, the DNNs that were initially trained to reproduce the remaining ESMs are now fine-tuned on the simulated historical data of the left-out model. The goal of the TL described above, applied to simulation data, is to test the capacity of the approach before using the same method on observational data to constrain the warming projections, which is done in the next step. As for the Leave-One-Out Cross-Validation, one DNN per ESM is first trained on the corresponding global surface air temperature maps from 1850 to 2098 for the three SSP scenarios, thus implementing and training a total of 66 DNNs. For each scenario, this results in one climate temperature projection per model. Then, using the TL strategy, the DNN model weights and biases are fine-tuned on the 5 historical BEST datasets (1979-2022) generated for the ESM on which the DNN was trained, reserving the years 1985, 1995, 2005, 2015 and 2022 for validation.

### Deep Neural Networks

The DNNs designed and implemented for each model and scenario share the same architecture and hyperparameter configuration. Deconvolutional (or transposed convolutional) layers[58] are used to generate temperature maps from CO\({}_{2}\) equivalent scalar values. The scalar input is fed to a dense layer made up of 4\(\times\)8\(\times\)128 neurons. Then, four deconvolutional layers model the correlated spatial information and upsample it in order to reach the spatial resolution of the target map. Specifically, each deconvolutional layer is characterized by 128 kernels with size 10\(\times\)10 and stride equal to 2: this allows doubling the dimension of rows and columns of the activation volume the layer receives as input. The last deconvolutional layer returns an activation volume of size 64\(\times\)128\(\times\)128. A final convolutional layer with one kernel of size 5\(\times\)5 and stride equal to 1 is needed to refine the spatial information generated by the previous deconvolutional layers and output the corresponding near-surface air temperature map of size 64\(\times\)128. The best set of hyperparameters was found after a trial-and-error procedure involving several configurations. We tested different learning rates for the first training by progressively increasing the value from 1e-8 to 1e-2. We selected 1e-4 as it provided a good trade-off between generalization accuracy and convergence time, even across different hyperparameter configurations. In the end, the Adam optimizer [59], a learning rate of 1e-4, a batch size of 8 and 500 epochs are used for the first training. During TL, we fine-tune the pre-trained layers with a lower learning rate so as not to dramatically change the weights adjusted during the first training. This is done when training on new data with the aim of keeping the knowledge acquired from the previous training and transferring it to the new learning. We found good performance with a learning rate an order of magnitude smaller than the one used during the first training, which is a common practice in fine-tuning. The strategy of freezing some layers during transfer learning was tested as well, but it led to worse results. The final set of hyperparameters for TL is a learning rate of 1e-5, the Adam optimizer [59], a batch size of 16 and 500 epochs. The DNN architecture is the same for both the training and TL phases. The loss is a standard Mean Absolute Error, and both the annual CO\({}_{2}\) equivalent values and the surface air temperature maps are scaled using Min-Max Normalization to the 0-1 range. The Leaky Rectified Linear Unit activation function [60] is selected for the hidden layers and a sigmoid for the output layer, because of the 0-1 range of the Min-Max Normalization of both input and output.
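The following is a minimal PyTorch sketch consistent with the layer sizes described above. It is not the authors' implementation (which may use a different framework): in particular, the padding of the transposed convolutions is an assumption chosen so that each layer exactly doubles the spatial dimensions, and the placement of the Leaky ReLU after the dense layer is likewise assumed.

```python
import torch
import torch.nn as nn

class TemperatureDecoder(nn.Module):
    """Maps one scalar CO2-equivalent value to a 64x128 surface air temperature map."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(1, 4 * 8 * 128)            # dense layer with 4*8*128 units
        self.deconvs = nn.ModuleList([
            nn.ConvTranspose2d(128, 128, kernel_size=10, stride=2, padding=4)
            for _ in range(4)                           # 4x8 -> 8x16 -> 16x32 -> 32x64 -> 64x128
        ])
        self.out = nn.Conv2d(128, 1, kernel_size=5, stride=1, padding=2)
        self.act = nn.LeakyReLU()

    def forward(self, co2e):                            # co2e: (batch, 1), min-max scaled
        h = self.act(self.fc(co2e)).view(-1, 128, 4, 8)
        for layer in self.deconvs:
            h = self.act(layer(h))
        return torch.sigmoid(self.out(h))               # (batch, 1, 64, 128), values in [0, 1]

model = TemperatureDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # 1e-5 during transfer learning
loss_fn = nn.L1Loss()                                       # mean absolute error
print(model(torch.rand(8, 1)).shape)                        # torch.Size([8, 1, 64, 128])
```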
## Data availability

The datasets used in this study are freely accessible from the following public repositories:

- CMIP6 data: [https://esgf-node.llnl.gov/search/cmip6/](https://esgf-node.llnl.gov/search/cmip6/)
- BEST data: [https://berkeleyearth.org/data/](https://berkeleyearth.org/data/)

## Code availability

Following paper publication, all code and data accompanying this manuscript will be made publicly available at [https://github.com/francescoimmorlano/transferring-climate-change-knowledge](https://github.com/francescoimmorlano/transferring-climate-change-knowledge).

## References

* [53] Tsutsui, J. Minimal CMIP Emulator (MCE v1.2): a new simplified method for probabilistic climate projections. _Geosci. Model Dev._**15**, 951-970. [https://doi.org/10.5194/gmd-15-951-2022](https://doi.org/10.5194/gmd-15-951-2022) (2022). * [54] Meinshausen, M. _et al._ The shared socio-economic pathway (SSP) greenhouse gas concentrations and their extensions to 2500. _Geosci. Model Dev._**13**, 3571-3605. [https://doi.org/10.5194/gmd-13-3571-2020](https://doi.org/10.5194/gmd-13-3571-2020) (2020). * [55] Myhre, G., Stordal, F., Gausemel, I., Nielsen, C. J. & Mahieu, E. Line-by-line calculations of thermal infrared radiation representative for global condition: CFC-12 as an example. _J. Quant. Spectrosc. Radiat. Transf._**97**, 317-331. [https://doi.org/10.1016/j.jqsrt.2005.04.015](https://doi.org/10.1016/j.jqsrt.2005.04.015) (2006). * [56] Rohde, R. A. & Hausfather, Z. The Berkeley Earth Land/Ocean Temperature Record. _Earth Syst. Sci. Data_**12**, 3469-3479. [https://doi.org/10.5194/essd-12-3469-2020](https://doi.org/10.5194/essd-12-3469-2020) (2020). * [57] Rohde, R. A. & Hausfather, Z. Berkeley Earth Combined Land and Ocean Temperature Field, Jan 1850-Nov 2019. Zenodo [https://doi.org/10.5281/zenodo.3634713](https://doi.org/10.5281/zenodo.3634713) (2020). * [58] Zeiler, M. D., Taylor, G. W. & Fergus, R. Adaptive deconvolutional networks for mid and high level feature learning. In _International Conference on Computer Vision_ 2018-2025. [https://doi.org/10.1109/ICCV.2011.6126474](https://doi.org/10.1109/ICCV.2011.6126474) (IEEE, 2011). * [59] Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. arXiv preprint. [https://doi.org/10.48550/arXiv.1412.6980](https://doi.org/10.48550/arXiv.1412.6980) (2014). * [60] Maas, A. L., Hannun, A. Y. & Ng, A. Y. Rectifier nonlinearities improve neural network acoustic models. In _Proc. ICML_**30**(1), 3 (2013).

## Acknowledgements

V.E. and P.G.'s research for this study was supported by the European Research Council (ERC) Synergy Grant "Understanding and modeling the Earth System with Machine Learning (USMILE)" under the Horizon 2020 research and innovation programme (Grant agreement No. 855187). P.G.'s research for this study was additionally supported by the National Science Foundation Science and Technology Center, Learning the Earth with Artificial Intelligence and Physics (LEAP) (Grant number 2019625).
F.I.'s research for this study was supported by the H2020-MSCA-RISE project GEMCLIME-2020 (Grant agreement No. 681228). We acknowledge the World Climate Research Programme, which, through its Working Group on Coupled Modelling, coordinated and promoted CMIP. We thank the climate modeling groups for producing and making available their model output, the Earth System Grid Federation (ESGF) for archiving the data and providing access, and the multiple funding agencies who support CMIP6 and ESGF. The authors want to thank Prof. Ryan Abernathey for his help with the Pangeo infrastructure and CMIP6 data download, as well as Prof. Tom Beucler for his help on transfer learning and Dr. Lucas Gloege for his help in processing the CMIP6 data. The authors thank Dr. Manuel Schlund for providing several of the datasets used here and Dr. Zebedee Nichols for the radiative forcing data for the RCP and SSP scenarios.

## Author contributions

P.G. conceived and designed the approach. P.G. and F.I. collected and processed the data, and implemented the approach. F.I., V.E., T.M.G., G.Ac., D.E., G.Al., and P.G. contributed to results analysis and interpretation. P.G. drafted the manuscript. All authors edited and revised the manuscript.

## Competing interests

The authors declare no competing interests.

## Additional information

Extended Data is available for this paper. Supplementary Information is available for this paper. Correspondence and requests for materials should be addressed to P.G. Reprints and permissions information is available at www.nature.com/reprints.

**Extended Data Fig. 1 | Long-term surface air temperature anomaly maps and temperature variation across latitudes.** **a-c,** Surface air temperature anomaly maps in 2081-2098 with respect to the 1850-1900 base period. They are computed by averaging in time the temperature maps generated by the DNNs (left) and CMIP6 models (right) for SSP2-4.5 (**a**), SSP3-7.0 (**b**) and SSP5-8.5 (**c**) scenarios. The maps are produced by the DNNs after transfer learning them on observations. The variation of temperatures across latitudes is also reported (center).

**Extended Data Fig. 2 | Historical surface air temperature maps.** **a-c**, Surface air temperature maps in 1980-2020 for SSP2-4.5 (**a**), SSP3-7.0 (**b**) and SSP5-8.5 (**c**) scenarios. They are computed by averaging in time the temperature maps generated by the DNNs (left) and CMIP6 models (center). The difference between DNNs and CMIP6 average maps is also reported (right). The maps are produced by the DNNs after transfer learning them on observations.

Supplementary Information for Transferring climate change knowledge

Francesco Immorlano1,2, Veronika Eyring3,4, Thomas le Monnier de Gouville5,6, Gabriele Accarino1, Donatello Elia1, Giovanni Aloisio1,2, Pierre Gentine5*

1 Euro-Mediterranean Center on Climate Change (CMCC) Foundation, Lecce, Italy 2 Department of Engineering for Innovation, University of Salento, Lecce, Italy 3 Deutsches Zentrum für Luft- und Raumfahrt e.V.
(DLR), Institut für Physik der Atmosphäre, Oberpfaffenhofen, Germany 4 University of Bremen, Institute of Environmental Physics (IUP), Bremen, Germany 5 Columbia University, New York, NY, USA 6 École Polytechnique, Palaiseau, France

* Correspondence to: Pierre Gentine ([email protected])

## Supplementary Table

| Model | Scenario | Global average bias [\({}^{\circ}\)C] | Global RMSE [\({}^{\circ}\)C] | 5% with respect to average | 95% with respect to average |
| --- | --- | --- | --- | --- | --- |
| ACCESS-CM2 | SSP2-4.5 | -0.13 | 0.34 | -0.33 | 0.59 |
| ACCESS-CM2 | SSP3-7.0 | 0.63 | 0.74 | -0.58 | 0.67 |
| ACCESS-CM2 | SSP5-8.5 | 0.59 | 0.83 | -0.75 | 1.23 |
| AWI-CM-1-1-MR | SSP2-4.5 | 0.26 | 0.4 | -0.31 | 0.57 |
| AWI-CM-1-1-MR | SSP3-7.0 | -0.23 | 0.48 | -0.57 | 0.68 |
| AWI-CM-1-1-MR | SSP5-8.5 | -0.41 | 0.73 | -0.73 | 1.1 |
| BCC-CSM2-MR | SSP2-4.5 | 0.33 | 0.43 | -0.3 | 0.57 |
| BCC-CSM2-MR | SSP3-7.0 | 0.04 | 0.4 | -0.5 | 0.61 |
| BCC-CSM2-MR | SSP5-8.5 | 0.56 | 0.86 | -0.76 | 1.17 |
| CAMS-CSM1-0 | SSP2-4.5 | 0.19 | 0.35 | -0.31 | 0.45 |
| CAMS-CSM1-0 | SSP3-7.0 | 0.06 | 0.48 | -0.61 | 0.69 |
| CAMS-CSM1-0 | SSP5-8.5 | 0.35 | 0.68 | -0.72 | 0.98 |
| CanESM5-CanOE | SSP2-4.5 | 0.28 | 0.47 | -0.38 | 0.66 |
| CanESM5-CanOE | SSP3-7.0 | -0.05 | 0.48 | -0.61 | 0.86 |
| CanESM5-CanOE | SSP5-8.5 | -1.15 | 1.3 | -0.82 | 0.97 |
| CMCC-CM2-SR5 | SSP2-4.5 | -1.05 | 1.09 | -0.39 | 0.56 |
| CMCC-CM2-SR5 | SSP3-7.0 | -0.21 | 0.52 | -0.62 | 0.88 |
| CMCC-CM2-SR5 | SSP5-8.5 | -1.33 | 1.52 | -0.9 | 1.45 |
| CNRM-CM6-1 | SSP2-4.5 | -0.19 | 0.38 | -0.27 | 0.52 |
| CNRM-CM6-1 | SSP3-7.0 | -0.42 | 0.54 | -0.48 | 0.54 |
| CNRM-CM6-1 | SSP5-8.5 | -0.92 | 1.06 | -0.74 | 0.76 |
| CNRM-ESM2-1 | SSP2-4.5 | 0.11 | 0.36 | -0.32 | 0.5 |
| CNRM-ESM2-1 | SSP3-7.0 | 0.21 | 0.41 | -0.44 | 0.6 |
| CNRM-ESM2-1 | SSP5-8.5 | 0.05 | 0.62 | -0.79 | 1.11 |
| FGOALS-f3-L | SSP2-4.5 | 0.17 | 0.34 | -0.26 | 0.47 |
| FGOALS-f3-L | SSP3-7.0 | -0.11 | 0.42 | -0.66 | 0.67 |
| FGOALS-f3-L | SSP5-8.5 | -0.07 | 0.62 | -0.71 | 1.0 |
| FGOALS-g3 | SSP2-4.5 | 0.42 | 0.48 | -0.28 | 0.4 |
| FGOALS-g3 | SSP3-7.0 | 0.2 | 0.46 | -0.61 | 0.7 |
| FGOALS-g3 | SSP5-8.5 | 1.0 | 1.14 | -0.73 | 0.92 |
| GFDL-ESM4 | SSP2-4.5 | 0.61 | 0.69 | -0.38 | 0.65 |
| GFDL-ESM4 | SSP3-7.0 | 0.58 | 0.74 | -0.53 | 0.84 |
| GFDL-ESM4 | SSP5-8.5 | 0.91 | 1.12 | -0.78 | 1.21 |
| IITM-ESM | SSP2-4.5 | -0.16 | 0.31 | -0.37 | 0.46 |
| IITM-ESM | SSP3-7.0 | -0.62 | 0.79 | -0.63 | 0.65 |
| IITM-ESM | SSP5-8.5 | -0.29 | 0.78 | -1.01 | 1.18 |
| INM-CM4-8 | SSP2-4.5 | -0.06 | 0.31 | -0.38 | 0.54 |
| INM-CM4-8 | SSP3-7.0 | -0.6 | 0.8 | -0.8 | 0.99 |
| INM-CM4-8 | SSP5-8.5 | 0.16 | 0.65 | -0.86 | 1.23 |
| INM-CM5-0 | SSP2-4.5 | 0.19 | 0.36 | -0.37 | 0.54 |
| INM-CM5-0 | SSP3-7.0 | 0.05 | 0.44 | -0.56 | 0.77 |
| INM-CM5-0 | SSP5-8.5 | 0.2 | 0.64 | -0.7 | 1.11 |
| IPSL-CM6A-LR | SSP2-4.5 | -0.39 | 0.51 | -0.36 | 0.54 |
| IPSL-CM6A-LR | SSP3-7.0 | -0.98 | 1.05 | -0.63 | 0.54 |
| IPSL-CM6A-LR | SSP5-8.5 | -1.19 | 1.32 | -0.75 | 0.86 |
| KACE-1-0-G | SSP2-4.5 | 0.48 | 0.63 | -0.43 | 0.74 |
| KACE-1-0-G | SSP3-7.0 | -0.45 | 0.61 | -0.57 | 0.9 |
| KACE-1-0-G | SSP5-8.5 | -0.76 | 1.05 | -0.92 | 1.25 |
**Supplementary Table 1 | Leave-one-out cross-validation results.** Global average bias, global Root Mean Squared Error (RMSE), and the 5% and 95% deviations with respect to the average of the temperatures (predicted by the Deep Neural Networks) in 2081-2098, computed in the leave-one-model-out cross-validation approach for each of the 22 models and the three SSP scenarios considered. Each model is the taken-out model considered as synthetic ground truth.

## Supplementary discussion

### Structural and parametric errors

Two natural questions come to mind after seeing the performance of the Deep Neural Networks (DNNs). First, why can the neural network project climate change so well? And, second, isn't the historical data used twice, given that some of it is used during the model tuning? Those two questions boil down to the same underlying causes. Earth System Models (ESMs) are a simplified representation of the complex physical, chemical and biological processes of the real world. As such, they inherently make assumptions regarding the representation of the processes in terms of the equations and their structure (e.g., the complexity), as well as the values of the parameters used in those equations. Some of the available historical data are used to tune the major model parameters (e.g., cloud entrainment rate or microphysical parameters) to match the historical climatology or some modes of climate variability such as El Niño[59, 60]. Yet, each ESM is inherently limited by its structural assumptions and thus cannot optimally use existing data, as it can only work within a subspace restricted by its inherent structures. Our DNNs, instead, learn how to best leverage both the structurally deficient physics of the climate system and the historical data to improve some of the temperature biases characterizing most of the ESMs. One of the major biases of the ESMs is the cold tongue bias and its extension along the equatorial band, which is typically too cold by about 2\({}^{\circ}\)C[61] and present in all three generations of Coupled Model Intercomparison Project (CMIP) models[62]. It is critical for climate change assessment as it can deteriorate El Niño-Southern Oscillation projections[63]. The DNNs ensemble improves the cold tongue bias by predicting higher surface air temperature values than the CMIP6 ensemble in the historical period (Extended Data Fig. 2). Another bias typically present in climate models concerns the Arctic Amplification[64, 46]. It has been shown that, during 1979-2021, the Arctic has warmed nearly four times faster than the globe, and both CMIP5 and CMIP6 models underestimate this[46]. The maximum warming is observed in the Eurasian sector of the Arctic Ocean, near Svalbard and Novaya Zemlya, while large continental regions in North America and Western Siberia do not reveal statistically significant trends[46].
These patterns are captured and corrected by the DNNs ensemble after the inclusion of the Berkeley Earth Surface Temperatures data, and they are exploited to predict both end-of-century (Fig. 4, Extended Data Fig. 1) and historical (Extended Data Fig. 2) regional variations of surface air temperature. Furthermore, coupled ESMs are affected by Sea Surface Temperature (SST) biases in the location and structure of the Gulf Stream[65, 66]. In particular, warmer temperatures are simulated in the North Atlantic region centered on the Mid-Atlantic Bight, where the modeled Gulf Stream separates from the coast further north than in observations[67, 68]. In addition, a well-known and long-standing issue in ocean modeling concerns the cold bias located to the east of the Grand Banks of Newfoundland[67], where the Gulf Stream ends and the North Atlantic Current begins, even though in some models[69, 70] it is improved by using a higher horizontal resolution[65]. Our DNNs improved it as well, generating lower surface air temperatures in the aforementioned region (Fig. 4, Extended Data Fig. 1, 2). High SSTs in the western Pacific warm pool and low SSTs in the eastern Pacific cold tongue define a zonal contrast in the tropical Pacific atmosphere-ocean state [71], which is a point of contention in future projections [70]. Most CMIP models project a higher warming in the equatorial central-eastern Pacific than in the western Pacific, which corresponds to a weakening of the SST gradient and is often called an "El Niño-like" warming pattern [71, 72, 73, 74, 75, 76]. This must be reconciled with the strengthening of the SST gradient observed since the mid-twentieth century ("La Niña-like" warming) [71, 72, 76] and the weakening of the east-west temperature gradient simulated by CMIP6 models [73]. Although the observational dataset shows a clear La Niña-like trend [77], determining whether the future response will arise from unforced natural multidecadal variability or partly from a forced response to anthropogenic climate change is not trivial [75, 76]. Yet, the contribution of natural variability to multi-decadal trends appears relatively small in this region; thus the large discrepancy would suggest a systematic bias that models have in the response of SST patterns to anthropogenic forcing [75, 77] and that observations are truly outside the model range [72]. Furthermore, it has been shown that a physically consistent response to warming could be La Niña-like and that it could have been detectable since the late twentieth century [76], which is aligned with our results (Fig. 4). Our results reveal that the Transfer Learning approach corrects SST biases that ESMs exhibit in the historical period (Extended Data Fig. 2). Clearly, this does not imply that biases in the future response (which is still unknown) are fully corrected as well, as we have no information about them yet, but this is an encouraging result.
2309.05855
Instabilities in Convnets for Raw Audio
What makes waveform-based deep learning so hard? Despite numerous attempts at training convolutional neural networks (convnets) for filterbank design, they often fail to outperform hand-crafted baselines. These baselines are linear time-invariant systems: as such, they can be approximated by convnets with wide receptive fields. Yet, in practice, gradient-based optimization leads to suboptimal approximations. In our article, we approach this phenomenon from the perspective of initialization. We present a theory of large deviations for the energy response of FIR filterbanks with random Gaussian weights. We find that deviations worsen for large filters and locally periodic input signals, which are both typical for audio signal processing applications. Numerical simulations align with our theory and suggest that the condition number of a convolutional layer follows a logarithmic scaling law between the number and length of the filters, which is reminiscent of discrete wavelet bases.
Daniel Haider, Vincent Lostanlen, Martin Ehler, Peter Balazs
2023-09-11T22:34:06Z
http://arxiv.org/abs/2309.05855v4
# Instabilities in Convnets for Raw Audio ###### Abstract What makes waveform-based deep learning so hard? Despite numerous attempts at training convolutional neural networks (convnets) for filterbank design, they often fail to outperform hand-crafted baselines. These baselines are linear time-invariant systems: as such, they can be approximated by convnets with wide receptive fields. Yet, in practice, gradient-based optimization leads to suboptimal approximations. In our article, we approach this phenomenon from the perspective of initialization. We present a theory of large deviations for the energy response of FIR filterbanks with random Gaussian weights. We find that deviations worsen for large filters and locally periodic input signals, which are both typical for audio signal processing applications. Numerical simulations align with our theory and suggest that the condition number of a convolutional layer follows a logarithmic scaling law between the number and length of the filters, which is reminiscent of discrete wavelet bases. Convolutional neural networks, digital filters, audio processing, statistical learning, frame theory. ## I Introduction Filterbanks are linear time-invariant systems which decompose a signal \(\mathbf{x}\) into \(J>1\) subbands. By convolution with filters \((\mathbf{w}_{j})_{j=1,\ldots,J}\), the output of a filterbank \(\mathbf{\Phi}\) is a multivariate time series \((\mathbf{\Phi}\mathbf{x})\left[n,j\right]=(\mathbf{x}*\mathbf{w}_{j})[n]\). Filterbanks play a key role in speech and music processing: constant-Q transforms, third-octave spectrograms, and Gammatone filterbanks are some well-known examples [1, 2, 3]. Beyond the case of audio, filterbanks are also used in other domains such as seismology [4], astrophysics [5], and neuroscience [6]. In deep learning, filterbanks serve as a preprocessing step to signal classification and generation. In this context, filterbank design is a form of feature engineering. Yet, in recent years, several authors have proposed to replace feature engineering with feature learning: i.e., to optimize filterbank parameters jointly with the rest of the pipeline [7, 8, 9]. So far, prior work on filterbank learning has led to mixed results. For example, on the TIMIT dataset, using a convolutional neural network (convnet) with 1-D filters on the "raw waveform" was found to fare poorly (29.2% phone error rate or PER) compared to the mel-spectrogram baseline (17.8% PER) [10]. Interestingly, fixing the convnet weights to form a filterbank on the mel-scale brings the PER to 18.3%, and fine-tuning them by gradient descent, to 17.8%. Similar findings have been reported with Gammatone filterbanks [11]. Arguably, such a careful initialization procedure defeats the purpose of deep learning; i.e., sparing the effort of feature engineering. Furthermore, it contrasts with other domains (e.g., image processing) in which all layers may be initialized as random finite impulse responses (FIR). Yet, in audio processing, filterbank design may outperform filterbank learning, particularly from a random initialization; a fact that is increasingly well-documented [12, 13, 14]. A model known as multiresolution neural network (MuReNN) [15] has recently circumvented this issue in practice; however, the theory which underlies its empirical success remains unclear as of yet. To understand the gap in performance in [10] and [11], we must distinguish neural network architecture design from iterative optimization.
Simply put: just because a convnet _can_ represent a human-engineered filterbank does not mean it _will_. This issue is not just of purely theoretical interest: in some emerging topics of machine listening such as bioacoustics, it would be practically useful to train a FIR filterbank with random initialization to learn something about acoustic events of interest with minimal domain-specific knowledge [16, 17]. Our article aims to explain the difficulties of deep learning in the raw waveform by offering a theoretical study of undecimated uniform filterbanks \(\mathbf{\Phi}\) with large 1-D filters under random Gaussian initialization. Within the paradigm of filterbank learning, \(\mathbf{\Phi}\) may be interpreted as the first layer of an untrained convnet with a stride of one. Prior publications have shown that stability is a crucial prerequisite for robustness to perturbations in the input [18] and stable dynamics in gradient-based optimization [19]. We characterize numerical stability in terms of energy preservation, i.e., when the ratio \((\|\mathbf{\Phi}\mathbf{x}\|^{2}/\|\mathbf{x}\|^{2})\) is close to one with high probability. In Section II, we prove explicit formulas for the expected value and variance of \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\), given a deterministic input sequence \(\mathbf{x}\), and derive upper bounds for the probability of large deviations. In Section III, we bound the expected values and variances of the optimal frame bounds of \(\mathbf{\Phi}\), i.e., \(A=\min_{\|x\|=1}\|\mathbf{\Phi}\mathbf{x}\|^{2}\) and \(B=\max_{\|x\|=1}\|\mathbf{\Phi}\mathbf{x}\|^{2}\). We conclude with an asymptotic analysis of the stability of \(\mathbf{\Phi}\) by means of its condition number \(\kappa=B/A\). Fig. 1: Autocorrelation in the input signal \(\mathbf{x}\) increases the variance of the filterbank response energy \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) across random initializations. We compare audio signals with different autocorrelation profiles. Left to right: Snare (low), speech (medium), and flute (high). Top: Spectrograms of the signals. Bottom: Empirical histogram of \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) for 1000 independent realizations of \(\mathbf{\Phi}\). ## II FIR Filterbank with Random Gaussian Weights Throughout this article, we use finite circulant convolution of signals \(\mathbf{x}\in\mathbb{R}^{N}\) with filters \(\mathbf{w}\in\mathbb{R}^{T}\), \(T\leq N\), given by \[(\mathbf{x}\ast\mathbf{w})[n]=\sum_{k=0}^{T-1}\mathbf{w}[k]\mathbf{x}[(n-k)\bmod N]. \tag{1}\] We denote the circular autocorrelation of \(\mathbf{x}\) for \(0\leq t<T\) by \[\mathbf{R}_{\mathbf{x}\mathbf{x}}(t)=\sum_{k=0}^{N-1}\mathbf{x}[k]\mathbf{x}[(k-t)\bmod N]. \tag{2}\] ### _Moments of the squared Euclidean norm_ **Proposition II.1**.: _Let \(\mathbf{x}\in\mathbb{R}^{N}\) and \(\mathbf{\Phi}\) a random filterbank with \(J\) i.i.d. filters \(\mathbf{w}_{j}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})\) of length \(T\leq N\). Then_ \[\mathbb{E}\left[\|\mathbf{\Phi}\mathbf{x}\|^{2}\right]=JT\sigma^{2}\|\mathbf{x}\|^{2} \tag{3}\] _and_ \[\mathbb{V}\left[\|\mathbf{\Phi}\mathbf{x}\|^{2}\right]=2J\sigma^{4}\sum_{\tau=-T}^{T} \left(T-|\tau|\right)\mathbf{R}_{\mathbf{x}\mathbf{x}}(\tau)^{2}. \tag{4}\] We note that (3) is known for \(J=1\) and \(T=N\)[20]. Setting \(\sigma^{2}=(JT)^{-1}\) implies \(\mathbb{E}\left[\|\mathbf{\Phi}\mathbf{x}\|^{2}\right]=\|\mathbf{x}\|^{2}\). 
In other words, if the variance of each parameter \(\mathbf{w}_{j}\) scales in inverse proportion with the total number of parameters (i.e., \(JT\)), then \(\mathbf{\Phi}\) satisfies energy preservation on average. However, it is important to see that the variance of the random variable \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) depends also on the content of the input \(\mathbf{x}\): specifically, its autocorrelation \(\mathbf{R}_{\mathbf{x}\mathbf{x}}\). This is a peculiar property of convnets, unlike fully connected layers with random Gaussian initialization; see Proposition V.1 in the appendix. This can be explained by the fact that the entries of the random matrix associated with \(\mathbf{\Phi}\) are not independent. In this context, we note that natural audio signals are often locally periodic and thus highly autocorrelated. Hence, we interpret Proposition II.1 as follows: untrained convnets are particularly unstable in the presence of vowels in speech or pitched notes in music. Figure 1 illustrates this phenomenon for three real-world signals. Our proof of Proposition II.1 hinges on the following lemmata, which are proven in the appendix. **Lemma II.2**.: _Let \(\mathbf{x}\in\mathbb{R}^{N}\) and \(\mathbf{w}\in\mathbb{R}^{T}\), \(T\leq N\). The circular convolution of \(\mathbf{x}\) and \(\mathbf{w}\) satisfies \(\|\mathbf{x}\ast\mathbf{w}\|^{2}=\mathbf{w}^{\top}\mathbf{Q}_{T}(\mathbf{x})\mathbf{w}\), where the entries of the matrix \(\mathbf{Q}_{T}(\mathbf{x})\) are given by \(\mathbf{Q}_{T}(\mathbf{x})[n,t]=\mathbf{R}_{\mathbf{x}\mathbf{x}}((t-n)\bmod N)\) for each \(0\leq n,t<T\)._ **Lemma II.3**.: _Let \(\mathbf{x}\in\mathbb{R}^{N}\). All diagonal entries of the matrix \(\mathbf{Q}_{T}(\mathbf{x})\) from Lemma II.2 are equal to \(\|\mathbf{x}\|^{2}\)._ Proof of Proposition II.1.: Given a filter \(\mathbf{w}_{j}\) for \(1\leq j\leq J\), we apply Lemma II.2 and use the cyclic property of the trace: \[\|\mathbf{x}\ast\mathbf{w}_{j}\|^{2}=\operatorname{Tr}\left(\mathbf{w}_{j}^{\top}\mathbf{Q}_{T}(\mathbf{x})\mathbf{w}_{j}\right)=\operatorname{Tr}\left(\mathbf{Q}_{T}(\mathbf{x})\mathbf{w}_{j}\mathbf{w}_{j}^{\top}\right). \tag{5}\] We take the expected value on both sides and recognize the term \(\mathbb{E}[\mathbf{w}_{j}\mathbf{w}_{j}^{\top}]\) as the covariance matrix of \(\mathbf{w}_{j}\), i.e., \(\sigma^{2}\mathbf{I}\). Hence: \[\mathbb{E}\left[\|\mathbf{x}\ast\mathbf{w}_{j}\|^{2}\right]=\operatorname{Tr}\left(\mathbf{Q}_{T}(\mathbf{x})\mathbb{E}\left[\mathbf{w}_{j}\mathbf{w}_{j}^{\top}\right]\right)=\sigma^{2}\operatorname{Tr}(\mathbf{Q}_{T}(\mathbf{x})). \tag{6}\] By Lemma II.3, \(\operatorname{Tr}(\mathbf{Q}_{T}(\mathbf{x}))=T\|\mathbf{x}\|^{2}\), hence \(\mathbb{E}[\|\mathbf{x}\ast\mathbf{w}_{j}\|^{2}]=\sigma^{2}T\|\mathbf{x}\|^{2}\). For the variance, we recall Theorem 5.2 from [21], which states that if \(\mathbf{v}\sim\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\), then for any matrix \(\mathbf{A}\), \[\mathbb{V}\big{[}\mathbf{v}^{\top}\mathbf{A}\mathbf{v}\big{]}=2\operatorname{Tr}\left(\mathbf{A}\mathbf{\Sigma}\mathbf{A}\mathbf{\Sigma}\right)+4\mathbf{\mu}^{\top}\mathbf{A}\mathbf{\Sigma}\mathbf{A}\mathbf{\mu}. \tag{7}\] We set \(\mathbf{v}=\mathbf{w}_{j}\), \(\mathbf{\mu}=\mathbf{0}\), \(\mathbf{\Sigma}=\sigma^{2}\mathbf{I}\), and \(\mathbf{A}=\mathbf{Q}_{T}(\mathbf{x})\). 
We obtain: \[\mathbb{V}\big{[}\|\mathbf{x}\ast\mathbf{w}_{j}\|^{2}\big{]} =2\sigma^{4}\operatorname{Tr}\left(\mathbf{Q}_{T}(\mathbf{x})^{2}\right)\] \[=2\sigma^{4}\sum_{t=0}^{T-1}\sum_{t^{\prime}=0}^{T-1}\mathbf{R}_{\mathbf{x}\mathbf{x}}(t^{\prime}-t)\mathbf{R}_{\mathbf{x}\mathbf{x}}(t-t^{\prime})\] \[=2\sigma^{4}\sum_{t=0}^{T-1}\sum_{\tau=-t}^{T-1-t}\mathbf{R}_{\mathbf{x}\mathbf{x}}(\tau)^{2}. \tag{8}\] By a combinatorial argument, the double sum above rewrites as \(\sum_{\tau=-T}^{T}\left(T-|\tau|\right)\mathbf{R}_{\mathbf{x}\mathbf{x}}(\tau)^{2}\). The proof concludes by linearity of the variance, given the independence of the \(J\) filters in \(\mathbf{\Phi}\). After scaling \(\mathbf{\Phi}\) such that it preserves energy on average, i.e., \(\mathbb{E}\left[\|\mathbf{\Phi}\mathbf{x}\|^{2}\right]=\|\mathbf{x}\|^{2}\), we now derive upper bounds on the probability of large deviations of \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) given \(\mathbf{x}\neq\mathbf{0}\). **Proposition II.4** (Cantelli bound).: _Let \(\mathbf{\Phi}\) be a random filterbank with \(J\) i.i.d. filters \(\mathbf{w}_{j}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})\) of length \(T\) and \(\sigma^{2}=(JT)^{-1}\). Given a deviation \(\alpha\geq 0\), the probability of \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) exceeding \((1+\alpha)\|\mathbf{x}\|^{2}\) is bounded from above as_ \[\mathbb{P}\left[\|\mathbf{\Phi}\mathbf{x}\|^{2}\geq(1+\alpha)\|\mathbf{x}\|^{2}\right]\leq\frac{\mathbb{V}\left[\|\mathbf{\Phi}\mathbf{x}\|^{2}\right]}{\mathbb{V}[\|\mathbf{\Phi}\mathbf{x}\|^{2}]+\alpha^{2}\mathbf{R}_{\mathbf{x}\mathbf{x}}(0)^{2}}. \tag{9}\] **Proposition II.5** (Chernoff bound).: _Let \(\mathbf{\lambda}\) denote the vector of eigenvalues of \(\mathbf{Q}_{T}(\mathbf{x})\). Under the same assumptions as Proposition II.4, and given a deviation \(\alpha\geq 0\), the probability of \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) exceeding \((1+\alpha)\|\mathbf{x}\|^{2}\) is bounded from above as_ \[\mathbb{P}\left[\|\mathbf{\Phi}\mathbf{x}\|^{2}\geq(1+\alpha)\|\mathbf{x}\|^{2}\right]\leq\exp\left(-\frac{\alpha^{2}JT^{2}\|\mathbf{x}\|^{4}}{2\alpha T\|\mathbf{\lambda}\|_{\infty}\|\mathbf{x}\|^{2}+2\|\mathbf{\lambda}\|_{2}^{2}}\right). \tag{10}\] Fig. 2: Large deviations of filterbank response energy (\(\|\mathbf{\Phi}\mathbf{x}\|^{2}-\|\mathbf{x}\|^{2}\)) for three synthetic signals of length \(N=1024\) (top) and three natural signals of length \(N=22050\) (bottom). Blue: empirical mean and \(95^{\text{th}}\) percentile across 100 realizations of \(\mathbf{\Phi}\). We show two theoretical bounds from Propositions II.4 and II.5: Cantelli (Equation 9, orange) and Chernoff (Equation 10, green). Each filterbank contains \(J=10\) filters of length \(T=2^{k}\) where \(3\leq k\leq 10\). The two propositions above have their own merits. Proposition II.4, based on Cantelli's inequality [22], is straightforward and interpretable in terms of the autocorrelation of \(\mathbf{x}\). Meanwhile, Proposition II.5, based on Chernoff's inequality [23], is closer to empirical percentiles, yet is expressed in terms of the eigenvalues of \(\mathbf{Q}_{T}(\mathbf{x})\), for which there is no general formula. In the particular case of full-length filters (\(T=N\)), \(\mathbf{Q}_{T}(\mathbf{x})\) is a circulant matrix: hence, we interpret these eigenvalues as the energy spectral density of the input signal, i.e., \(\mathbf{\lambda}=|\hat{\mathbf{x}}|^{2}\) where \(\hat{\mathbf{x}}\) is the discrete Fourier transform of \(\mathbf{x}\). 
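As a concrete check on Proposition II.1, the following minimal NumPy sketch (an illustration added here, not taken from the article or its released code) estimates the mean and variance of \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) by Monte Carlo and compares them against Equations (3) and (4); the test signal, sizes, and number of trials are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, J = 256, 32, 8           # signal length, filter length, number of filters
sigma2 = 1.0 / (J * T)         # scaling that gives energy preservation on average
x = rng.standard_normal(N)     # any fixed input signal

def filterbank_energy(x, J, T, sigma2, rng):
    """One draw of ||Phi x||^2 for a random Gaussian filterbank, circular convolution as in Eq. (1)."""
    w = rng.normal(0.0, np.sqrt(sigma2), size=(J, T))
    W = np.fft.fft(w, n=N, axis=1)                    # zero-pad filters to length N
    y = np.fft.ifft(np.fft.fft(x) * W, axis=1).real   # (J, N) subband signals
    return np.sum(y ** 2)

energies = np.array([filterbank_energy(x, J, T, sigma2, rng) for _ in range(5000)])

# theoretical moments from Eqs. (3) and (4)
taus = np.arange(-T, T + 1)
R = np.array([np.dot(x, np.roll(x, tau)) for tau in taus])   # circular autocorrelation, Eq. (2)
mean_theory = J * T * sigma2 * np.sum(x ** 2)                # equals ||x||^2 here
var_theory = 2 * J * sigma2 ** 2 * np.sum((T - np.abs(taus)) * R ** 2)

print(energies.mean(), mean_theory)
print(energies.var(), var_theory)
```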
### _Numerical simulation_ We now compute empirical probabilities of relative energy deviations between \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) and \(\|\mathbf{x}\|^{2}\) for different signals \(\mathbf{x}\) and various filter lengths \(T\). Specifically, for each \(\mathbf{x}\) and each \(T\), we simulate 1000 independent realizations of \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) and retain the closest 95%, displayed as a shaded area in Figure 2. Additionally, we set the right-hand side of Propositions II.4 and II.5 to 5% and solve for \(\alpha\), yielding upper bounds for this area. The upper part of Figure 2 illustrates our findings for three synthetic signals: \((i)\) a single impulse, which has low autocorrelation, \((ii)\) a realization of Brownian noise, which has medium autocorrelation, and \((iii)\) a sine wave with frequency \(\omega=\pi\), which has high autocorrelation. In the lower part of the same figure, we use real-world sounds: a snare drum hit, a spoken utterance, and a sustained note on the concert flute. As predicted by the theory, large deviations of \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) become less probable as the filters grow in length \(T\) if the input \(\mathbf{x}\) has little autocorrelation (e.g., snare). The rate of decay is slower for highly autocorrelated signals (e.g., flute). These findings explain the observations we already made in Figure 1. ## III Extreme Value Theory meets Frame Theory In the previous section, we have described the probability distribution of \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) for a known input signal \(\mathbf{x}\). We now turn to inquire about the properties of \(\mathbf{\Phi}\) as a linear operator; i.e., independently of \(\mathbf{x}\). If there exist two positive numbers \(A\) and \(B\) such that the double inequality \(A\|\mathbf{x}\|^{2}\leq\|\mathbf{\Phi}\mathbf{x}\|^{2}\leq B\|\mathbf{x}\|^{2}\) holds for any \(\mathbf{x}\in\mathbb{R}^{N}\), \(\mathbf{\Phi}\) is said to be a _frame_ for \(\mathbb{R}^{N}\) with frame bounds \(A\) and \(B\). The optimal frame bounds are given by \(A=\min_{\|\mathbf{x}\|_{2}=1}\|\mathbf{\Phi}\mathbf{x}\|^{2}\) and \(B=\max_{\|\mathbf{x}\|_{2}=1}\|\mathbf{\Phi}\mathbf{x}\|^{2}\). ### _From quadratic forms to chi-squared distributions_ Although the expected frame bounds \(\mathbb{E}[A]\) and \(\mathbb{E}[B]\) do not have closed-form expressions, we can relate them to the expected order statistics of the chi-squared distribution with \(J\) degrees of freedom, denoted by \(\chi^{2}(J)\). **Theorem III.1**.: _Let \(\mathbf{\Phi}\) be a random filterbank with \(J\) i.i.d. filters \(\mathbf{w}_{j}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})\) with \(\sigma^{2}=(JT)^{-1}\). The expectations of the optimal frame bounds \(A,B\) of \(\mathbf{\Phi}\) are bounded by the order statistics of \(Y_{0},\ldots,Y_{T-1}\sim\chi^{2}(J)\) i.i.d., as follows_ \[J^{-1}\mathbb{E}[Y_{T}^{\min}]\leq\mathbb{E}\left[A\right]\leq 1\leq\mathbb{E}\left[B\right]\leq J^{-1}\mathbb{E}\left[Y_{T}^{\max}\right], \tag{11}\] _where \(Y_{T}^{\min}=\min_{0\leq k<T}Y_{k}\) and \(Y_{T}^{\max}=\max_{0\leq k<T}Y_{k}\)._ Proof.: The inner inequalities \((\mathbb{E}\left[A\right]\leq 1\leq\mathbb{E}\left[B\right])\) are a direct consequence of Proposition II.1. 
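The simulation just described can be condensed as follows; this sketch (again an illustration, not the paper's exact code) uses a single Brownian-like signal, estimates the one-sided 95th percentile of the energy deviation for each filter length \(T\), and inverts the Cantelli bound (9) for \(\alpha\) at the 5% level. Parameter values are illustrative and need not match those of Figure 2.

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, n_trials = 1024, 10, 1000

def energy(x, J, T, rng):
    """One draw of ||Phi x||^2 with sigma^2 = 1/(J*T) and circular convolution."""
    w = rng.normal(0.0, np.sqrt(1.0 / (J * T)), size=(J, T))
    W = np.fft.fft(w, n=N, axis=1)
    y = np.fft.ifft(np.fft.fft(x) * W, axis=1).real
    return np.sum(y ** 2)

x = np.cumsum(rng.standard_normal(N))   # Brownian-like signal (medium/high autocorrelation)
x /= np.linalg.norm(x)                  # normalize so that ||x||^2 = 1

for T in [2 ** k for k in range(3, 11)]:
    devs = np.array([energy(x, J, T, rng) - 1.0 for _ in range(n_trials)])
    taus = np.arange(-T, T + 1)
    R = np.array([np.dot(x, np.roll(x, tau)) for tau in taus])
    V = 2 * J * (1.0 / (J * T)) ** 2 * np.sum((T - np.abs(taus)) * R ** 2)   # Eq. (4)
    alpha_cantelli = np.sqrt(19.0 * V)  # solve V / (V + alpha^2) = 0.05 with ||x||^2 = 1
    print(T, np.quantile(devs, 0.95), alpha_cantelli)
```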
Regarding the outer inequalities, we perform an eigenvalue decomposition of \(\mathbf{Q}_{T}(\mathbf{x})=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\top}\), where the columns of \(\mathbf{U}\) are the eigenvectors of \(\mathbf{Q}_{T}(\mathbf{x})\) and the diagonal matrix \(\mathbf{\Lambda}\) contains the spectrum of eigenvalues, \(\mathbf{\lambda}\). For each filter \(\mathbf{w}_{j}\) with \(1\leq j\leq J\), let us use the shorthand \(\mathbf{y}_{j}=\mathbf{U}^{\top}\mathbf{w}_{j}\). By Lemma II.2 we obtain \[\|\mathbf{x}*\mathbf{w}_{j}\|^{2}=\mathbf{w}_{j}^{\top}\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\top}\mathbf{w}_{j}=\sum_{k=0}^{T-1}\lambda_{k}\mathbf{y}_{j}[k]^{2}. \tag{12}\] We define \(Y_{k}=\sum_{j=1}^{J}(\mathbf{y}_{j}[k]^{2}/\sigma^{2})\). Equation (12) yields \[\|\mathbf{\Phi}\mathbf{x}\|^{2}=\sigma^{2}\sum_{k=0}^{T-1}\lambda_{k}\sum_{j=1}^{J}\frac{\mathbf{y}_{j}[k]^{2}}{\sigma^{2}}=\sigma^{2}\sum_{k=0}^{T-1}\lambda_{k}Y_{k}. \tag{13}\] Since \(\mathbf{Q}_{T}(\mathbf{x})\) is a real symmetric matrix, \(\mathbf{U}\) is an orthogonal matrix. Thus, \(\mathbf{y}_{j}\) follows the same distribution as \(\mathbf{w}_{j}\): \[\mathbf{U}^{\top}\mathbf{w}_{j}\sim\mathcal{N}(0,\sigma^{2}\mathbf{U}\mathbf{U}^{\top})=\mathcal{N}(0,\sigma^{2}\mathbf{I}). \tag{14}\] For all \(k\) with \(0\leq k<T\), the \(\mathbf{y}_{j}[k]/\sigma\) are i.i.d. standard Gaussian random variables. Thus, the \(Y_{k}\)'s are also i.i.d. and follow a \(\chi^{2}(J)\) distribution. Let us define the associated order statistics \[Y_{T}^{\min}=\min_{0\leq k<T}Y_{k}\quad\text{ and }\quad Y_{T}^{\max}=\max_{0\leq k<T}Y_{k}. \tag{15}\] Lemma II.3 implies \(\sum_{k=0}^{T-1}\lambda_{k}=\operatorname{Tr}(\mathbf{Q}_{T}(\mathbf{x}))=T\|\mathbf{x}\|^{2}\). Hence \[\min_{\|\mathbf{x}\|_{2}=1}\|\mathbf{\Phi}\mathbf{x}\|^{2}-\sigma^{2}TY_{T}^{\min}\geq 0, \tag{16}\] \[\max_{\|\mathbf{x}\|_{2}=1}\|\mathbf{\Phi}\mathbf{x}\|^{2}-\sigma^{2}TY_{T}^{\max}\leq 0,\] where the inequalities are understood as almost sure. Taking the expectation and setting \(\sigma^{2}=(JT)^{-1}\) yields the claim. In Figure 3, we have performed numerical simulations that align with the statement of Theorem III.1. We observe that the optimal frame bounds \(A\) and \(B\) typically diverge away from 1 as \(T\) grows up to \(2^{10}\), a common value in audio applications. Fig. 3: Empirical means \(\overline{A}\) and \(\overline{B}\) (solid lines) and \(95^{\text{th}}\) percentiles (shaded area) of frame bounds \(A\) and \(B\) for 1000 instances of \(\mathbf{\Phi}\) with \(\sigma^{2}=(TJ)^{-1}\), \(J=40\) and different values of \(T\). Dashed lines denote the bounds of \(\mathbb{E}[A]\) and \(\mathbb{E}[B]\) from Theorem III.1. Dotted lines denote the asymptotic bounds proposed in (20). This phenomenon is evidence of instabilities at initialization of a convnet and, consequently, also during training. After bounding the expected values of \(A\) and \(B\), we now turn to their variances. We refer to the appendix for a proof. **Proposition III.2**.: _Let \(\mathbf{\Phi}\) be a random filterbank with \(J\) i.i.d. filters \(\mathbf{w}_{j}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})\) with \(\sigma^{2}=\left(JT\right)^{-1}\). The variances of the optimal frame bounds \(A\) and \(B\) can be bounded as_ \[2(TJ)^{-1}\leq\mathbb{V}\left[A\right],\mathbb{V}\left[B\right]\leq 2J^{-1}. \tag{17}\] ### _Asymptotics of the condition number_ The ratio \(\kappa=B/A\), known as the condition number, characterizes the numerical stability of \(\mathbf{\Phi}\). 
In particular, \(\kappa\) equals one if and only if there exists \(C>0\) such that \(\|\mathbf{\Phi}\mathbf{x}\|^{2}=C\|\mathbf{x}\|^{2}\) for every \(\mathbf{x}\). However, its expected value, \(\mathbb{E}\left[\kappa\right]\), may be strictly greater than one even though \(\mathbb{E}\left[\|\mathbf{\Phi}\mathbf{x}\|^{2}\right]=C\|\mathbf{x}\|^{2}\) holds for every \(\mathbf{x}\). Since \(A\) and \(B\) are dependent random variables, \(\mathbb{E}\left[\kappa\right]\) is difficult to study analytically [24]. We conjecture that \(1\leq\mathbb{E}\left[\kappa\right]\leq(\mathbb{E}[B]/\mathbb{E}[A])\), which is equivalent to \(\mathrm{cov}(\kappa,A)\geq 0\). Unfortunately, the expected values of \(Y_{T}^{\min}\) and \(Y_{T}^{\max}\) that are used for the bounds of \(\mathbb{E}\left[A\right]\) and \(\mathbb{E}\left[B\right]\) in Theorem III.1 are not available in closed form for finite values of \(T\)[25]. Nevertheless, for a large number of degrees of freedom \(J\), \(\chi^{2}(J)\) resembles a normal distribution with mean \(J\) and variance \(2J\), such that we propose to replace \(Y_{T}^{\min}\) and \(Y_{T}^{\max}\) by \[\bar{Y}_{T}^{\min}=\min_{0\leq k<T}\bar{Y}_{k}\quad\text{and}\quad\bar{Y}_{T}^{\max}=\max_{0\leq k<T}\bar{Y}_{k}, \tag{18}\] where the \(\bar{Y}_{k}\)'s are i.i.d. drawn from \(\mathcal{N}(J,2J)\)[26]. From the extreme value theorem for the standard normal distribution (see, e.g., Theorem 1.5.3 in [27]) we know that for large \(T\), we can asymptotically approximate the expectations of (18) by \[\mathbb{E}\left[\bar{Y}_{T}^{\min}\right]\approx J-2\sqrt{J\log T}\quad\text{and}\quad\mathbb{E}\left[\bar{Y}_{T}^{\max}\right]\approx J+2\sqrt{J\log T}. \tag{19}\] The equations above suggest approximate bounds for \(\mathbb{E}[A]\) and \(\mathbb{E}[B]\). We draw inspiration from them to propose the value \[\tilde{\kappa}(J,T)=\left(1+2\sqrt{\frac{\log T}{J}}\right)\Bigg{/}\left(1-2\sqrt{\frac{\log T}{J}}\right), \tag{20}\] as an asymptotic bound for \(\mathbb{E}[\kappa]\), subject to \(T\rightarrow\infty\) and \(J>4\log T\). Interestingly, the level sets of \(\tilde{\kappa}\) satisfy \(J\propto\log T\), a scaling law which is reminiscent of the theory underlying the construction of discrete wavelet bases [28]. ### _Numerical simulation_ Figure 4 (top) shows empirical means of \(\kappa\) for 1000 independent realizations of \(\mathbf{\Phi}\) and various settings of \(J\) and \(T\). Qualitatively speaking, we observe that convnets with few long filters (small \(J\), large \(T\)) suffer from ill-conditioning, as measured by a large \(\kappa\). This is despite having set \(\sigma^{2}=(JT)^{-1}\), which implies that \(\mathbf{\Phi}\) satisfies energy preservation on average (Proposition II.1). Figure 4 (bottom) shows the result of the same simulation with \(J\) on the horizontal axis, together with our proposed scaling law \(J\propto\log T\). We observe that filterbanks that follow this scaling law have approximately the same condition number \(\kappa\) on average. ## IV Conclusion This article presents a large deviations theory of energy dissipation in random filterbanks. We have found that the variance of the output energy \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) grows with the autocorrelation of the input sequence \(\mathbf{x}\) (Proposition II.1). Thus, natural audio signals, which typically have high short-term autocorrelation, are _adversarial examples_ to 1-D convnets, in the sense that they trigger numerical instabilities with high probability. 
Furthermore, we have shown that numerical stability depends strongly on architecture design for \(\mathbf{\Phi}\); specifically, the number of filters \(J\) and their lengths \(T\). By combining frame theory with extreme value theory, we have explained why the most stable convnets are those with many short filters (large \(J\), small \(T\)). For large convnets, we have identified a scaling law (\(J\propto\log T\)) which roughly preserves the condition number of \(\mathbf{\Phi}\). Characterizing the probability distribution of the condition number for non-asymptotic values of \(J\) and \(T\) remains an open problem. As the next step, we plan to study the potential numerical instabilities that arise due to aliasing effects from strided convolution in decimated random filterbanks.1 Footnote 1: The source code for reproducing all numerical simulations may be found at [https://github.com/danedane-haider/Random-Filterbanks](https://github.com/danedane-haider/Random-Filterbanks). ## Acknowledgment D. Haider is a recipient of a DOC Fellowship of the Austrian Academy of Sciences at the Acoustics Research Institute (A 26355). V. Lostanlen is supported by ANR MuReNN. The work of M. Ehler was supported by the WWTF project CHARMED (VRG12-009) and P. Balazs was supported by the FWF projects LoFT (P 34624) and NoMASP (P 34922). Fig. 4: We denote by \(\overline{A},\overline{B}\), and \(\overline{\kappa}\) the empirical means of the respective quantities over 1000 instances of \(\mathbf{\Phi}\) with \(\sigma^{2}=(TJ)^{-1}\). Top: Comparison of \(\overline{\kappa}\) (solid) and \(\overline{B}/\overline{A}\) (dashed) for increasing filter length \(T\) and different values of \(J\). Bottom: Empirical mean \(\overline{\kappa}\) for increasing numbers of filters \(J\) and different values of \(T\). For \(J=\log_{2}T\) (solid black), \(\overline{\kappa}\) remains approximately constant.
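For completeness, here is one way the empirical frame bounds behind Figures 3 and 4 could be computed; it is a sketch added for illustration, not the released implementation linked in the footnote. It assumes circular convolution as in (1), in which case \(\|\mathbf{\Phi}\mathbf{x}\|^{2}\) diagonalizes in the discrete Fourier basis, so that \(A\) and \(B\) are the minimum and maximum over frequencies of the total power spectrum of the filters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096   # length of the signals on which Phi acts

def condition_number(J, T, rng):
    """kappa = B/A for one random filterbank with sigma^2 = 1/(J*T).

    For circular convolution, ||Phi x||^2 = (1/N) * sum over frequencies of
    |x_hat|^2 * p, with p = sum_j |w_j_hat|^2, so A = min(p) and B = max(p).
    """
    w = rng.normal(0.0, np.sqrt(1.0 / (J * T)), size=(J, T))
    p = np.sum(np.abs(np.fft.fft(w, n=N, axis=1)) ** 2, axis=0)
    return p.max() / p.min()

# along the level set J = log2(T), kappa should stay roughly constant (cf. Fig. 4, bottom)
for k in range(4, 11):
    T, J = 2 ** k, k
    kappas = [condition_number(J, T, rng) for _ in range(20)]
    print(f"T={T:5d}  J={J:2d}  mean kappa = {np.mean(kappas):.2f}")
```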
2309.13951
Monotone $T$-convex $T$-differential fields
Let $T$ be a complete, model complete o-minimal theory extending the theory of real closed ordered fields and assume that $T$ is power bounded. Let $K$ be a model of $T$ equipped with a $T$-convex valuation ring $\mathcal{O}$ and a $T$-derivation $\partial$ such that $\partial$ is monotone, i.e., weakly contractive with respect to the valuation induced by $\mathcal{O}$. We show that the theory of monotone $T$-convex $T$-differential fields, i.e., the common theory of such $K$, has a model completion, which is complete and distal. Among the axioms of this model completion, we isolate an analogue of henselianity that we call $T^{\partial}$-henselianity. We establish an Ax--Kochen/Ershov theorem and further results for monotone $T$-convex $T$-differential fields that are $T^{\partial}$-henselian.
Elliot Kaplan, Nigel Pynn-Coates
2023-09-25T08:32:14Z
http://arxiv.org/abs/2309.13951v2
# Monotone \(T\)-convex \(T\)-differential fields ###### Abstract. Let \(T\) be a complete, model complete o-minimal theory extending the theory of real closed ordered fields and assume that \(T\) is power bounded. Let \(K\) be a model of \(T\) equipped with a \(T\)-convex valuation ring \(\mathcal{O}\) and a \(T\)-derivation \(\vartheta\) such that \(\vartheta\) is monotone, i.e., weakly contractive with respect to the valuation induced by \(\mathcal{O}\). We show that the theory of monotone \(T\)-convex \(T\)-differential fields, i.e., the common theory of such \(K\), has a model completion, which is complete and distal. Among the axioms of this model completion, we isolate an analogue of henselianity that we call \(T^{\vartheta}\)-henselianity. We establish an Ax-Kochen/Ershov theorem and further results for monotone \(T\)-convex \(T\)-differential fields that are \(T^{\vartheta}\)-henselian. ## 1. Introduction Let \(\mathbb{R}_{\mathrm{an}}\) denote the expansion of the real field \(\mathbb{R}\) by all globally subanalytic sets. Explicitly, \(\mathbb{R}_{\mathrm{an}}\) is the structure obtained by adding a new function symbol for each \(n\)-ary function \(F\) which is real analytic on a neighborhood of \([-1,1]^{n}\), and by interpreting this function symbol as the restriction of \(F\) to \([-1,1]^{n}\). Let \(T_{\mathrm{an}}\) be the elementary theory of \(\mathbb{R}_{\mathrm{an}}\) in the language extending the language of ordered rings by the function symbols described above. This theory is model complete and o-minimal [11, 5]. Let \(K\) be a model of \(T_{\mathrm{an}}\). A map \(\vartheta\colon K\to K\) is said to be a \(T_{\mathrm{an}}\)-_derivation_ if it is a field derivation on \(K\) which satisfies the identity \[\partial F(u)\ =\ \frac{\partial F}{\partial Y_{1}}(u)\partial u_{1}+\cdots+\frac{\partial F}{\partial Y_{n}}(u)\partial u_{n}\] for each restricted analytic function \(F\) and each \(u=(u_{1},\ldots,u_{n})\in K^{n}\) with \(|u_{i}|<1\) for each \(i\). The only \(T_{\mathrm{an}}\)-derivation on \(\mathbb{R}_{\mathrm{an}}\) itself is the trivial derivation, which takes constant value zero. However, there are a number of interesting examples of nonstandard models of \(T_{\mathrm{an}}\) with nontrivial \(T_{\mathrm{an}}\)-derivations. 1. Consider \(\mathbb{R}((t^{1/\infty}))\coloneqq\bigcup_{n}\mathbb{R}((t^{1/n}))\), the field of Puiseux series over \(\mathbb{R}\). We totally order \(\mathbb{R}((t^{1/\infty}))\) by taking \(t\) to be a positive infinitesimal element. Then \(\mathbb{R}((t^{1/\infty}))\) admits an expansion to a model of \(T_{\mathrm{an}}\), by extending each restricted analytic function on \(\mathbb{R}\) to the corresponding box in \(\mathbb{R}((t^{1/\infty}))\) via Taylor expansion. We put \(x\coloneqq 1/t\), and we let \(\vartheta\colon\mathbb{R}((t^{1/\infty}))\to\mathbb{R}((t^{1/\infty}))\) be the derivation with respect to \(x\), so \[\vartheta\sum_{q\in\mathbb{Q}}r_{q}x^{q}\ \coloneqq\ \sum_{q\in\mathbb{Q}}r_{q}qx^{q-1}.\] As \(\vartheta\) commutes with infinite sums, it is routine to verify that \(\vartheta\) is indeed a \(T_{\mathrm{an}}\)-derivation. 2. The field \(\mathbb{T}\) of logarithmic-exponential transseries also admits a canonical expansion to a model of \(T_{\mathrm{an}}\) using Taylor expansion [8, Corollary 2.8]. The usual derivation on \(\mathbb{T}\) is a \(T_{\mathrm{an}}\)-derivation. 3. 
Let \(\boldsymbol{k}\vDash T_{\mathrm{an}}\) and let \(\vartheta_{\boldsymbol{k}}\) be a \(T_{\mathrm{an}}\)-derivation on \(\boldsymbol{k}\). Let \(\Gamma\) be a divisible ordered abelian group, and consider the Hahn field \(\boldsymbol{k}((t^{\Gamma}))\), ordered so that \(0<t<\boldsymbol{k}^{>}\). We expand \(\boldsymbol{k}((t^{\Gamma}))\) to a model of \(T_{\mathrm{an}}\) using Taylor expansion (see [16, Proposition 2.13] for details). Let \(c\colon\Gamma\to\boldsymbol{k}\) be an additive map. We use \(c\) to define a derivation \(\vartheta\) on \(\boldsymbol{k}((t^{\Gamma}))\) as follows: \[\vartheta\sum_{\gamma}f_{\gamma}t^{\gamma}\ \coloneqq\ \sum_{\gamma}\big{(}\partial_{\boldsymbol{k}}f_{\gamma}+f_{\gamma}c(\gamma)\big{)}t^{\gamma}.\] The map \(\vartheta\) is the unique derivation which extends \(\partial_{\boldsymbol{k}}\), commutes with infinite sums, and satisfies the identity \(\partial t^{\gamma}=c(\gamma)t^{\gamma}\) for each \(\gamma\in\Gamma\). By [16, Proposition 3.14], this derivation is even a \(T_{\mathrm{an}}\)-derivation. We denote this expansion of \(\boldsymbol{k}((t^{\Gamma}))\) by \(\boldsymbol{k}((t^{\Gamma}))_{\mathrm{an},c}\). In each of the above examples, there is a natural convex valuation ring \(\mathcal{O}\) (the convex hull of \(\mathbb{R}\) in the first two examples, the convex hull of \(\boldsymbol{k}\) in the third). These convex valuation rings are even \(T_{\mathrm{an}}\)_-convex_, as defined by van den Dries and Lewenberg [7], and the derivation in each example is continuous. In this paper, we work not just with the theory \(T_{\rm an}\), but with any complete, model complete, power bounded o-minimal \(\mathcal{L}\)-theory \(T\) (where _power boundedness_ is the assumption that every definable function is eventually dominated by a power function \(x\mapsto x^{\lambda}\)). Let \(K\) be a model of \(T\). A \(T\)_-derivation_ is a map \(\partial\colon K\to K\) which satisfies the identity \[\partial F(u)\ =\ \frac{\partial F}{\partial Y_{1}}(u)\partial u_{1}+\cdots+\frac{\partial F}{\partial Y_{n}}(u)\partial u_{n}\] for all \(\mathcal{L}(\emptyset)\)-definable functions \(F\) which are \(\mathcal{C}^{1}\) at \(u\), and a \(T\)_-convex valuation ring_ is a nonempty convex subset \(\mathcal{O}\subseteq K\) which is closed under all \(\mathcal{L}(\emptyset)\)-definable continuous functions. A \(T\)**-convex \(T\)-differential field** is the expansion of \(K\) by a \(T\)-convex valuation ring and a continuous \(T\)-derivation. In examples (1) and (3) above, the derivation \(\partial\) is **monotone**: the logarithmic derivative \(\partial a/a\) is in the valuation ring \(\mathcal{O}\) for each nonzero \(a\). The derivation in example (2) is not monotone. In this paper, our focus is primarily on monotone \(T\)-convex \(T\)-differential fields, and in this setting, our assumption that \(T\) is power bounded comes almost for free; see Remark 3.7. We prove the following: **Theorem**.: _The theory of monotone \(T\)-convex \(T\)-differential fields has a model completion. This model completion is complete and distal (in particular, it has NIP)._ Our model completion is quite similar to Scanlon's model completion for the theory of monotone valued differential fields [23]. 
In the case \(T=T_{\rm an}\), a model of this model completion can be constructed as follows: consider the \(T_{\rm an}\)-convex \(T_{\rm an}\)-differential field \(\boldsymbol{k}(\!(\!t^{\Gamma}\!)\!)_{\!{\rm an},c}\) in example (3), where the derivation \(\partial_{\boldsymbol{k}}\) on \(\boldsymbol{k}\) is _generic_, as defined in [10], and where \(c\) is taken to be the zero map. In axiomatizing this model completion, we introduce an analogue of differential-henselianity for \(T\)-convex \(T\)-differential fields, which we call \(T^{\partial}\)_-henselianity_. This condition states that \(\partial\) maps the maximal ideal \(\circ\) of \(\mathcal{O}\) into itself (small derivation); that for any \(a_{0},\ldots,a_{r}\in\mathcal{O}\), not all in \(\circ\), there is \(y\in K\) such that \(a_{0}y+a_{1}\partial y+\ldots+a_{r}\partial^{r}y-1\in\circ\) (linearly surjective differential residue field); and that for every \(\mathcal{L}(K)\)-definable function \(F\colon K^{r}\to K\), if \(a\in K\) is an approximate zero of the function \(y\mapsto y^{(r)}-F(y,\ldots,y^{(r-1)})\), and if this function is well-approximated by a linear differential operator on a neighborhood of \(a\), then it has an actual zero in this neighborhood. This last condition is inspired by Rideau-Kikuchi's definition of \(\sigma\)_-henselianity_ for analytic difference valued fields [22]. We prove an Ax-Kochen/Ershov theorem for monotone \(T^{\partial}\)-henselian fields, from which we derive the following: **Theorem**.: _Any monotone \(T^{\partial}_{\rm an}\)-henselian field \((K,\mathcal{O},\partial)\) is elementarily equivalent to some \(\boldsymbol{k}(\!(\!t^{\Gamma}\!)\!)_{\!{\rm an},c}\)._ Hakobyan [13] previously established a similar statement for monotone differential-henselian fields. Let \(K\) be a \(T\)-convex \(T\)-differential field with small derivation and linearly surjective differential residue field. With an eye toward future applications, we develop the theory of \(T^{\partial}\)-henselian fields as much as possible without the assumption that \(\partial\) is monotone. We show that if \(K\) is spherically complete, then \(K\) is \(T^{\partial}\)-henselian, and that any \(T^{\partial}\)-henselian \(K\) admits a _lift of the differential residue field_: an elementary \(\mathcal{L}\)-substructure \(\boldsymbol{k}\prec_{\mathcal{L}}K\) which is closed under \(\partial\) and which maps isomorphically onto the differential residue field \({\rm res}(K)\). An essential ingredient in this proof is the fact, due to Garcia Ramirez, that \(\mathcal{L}\)-definable functions enjoy the _Jacobian property_[12]. We also show that if each immediate extension of \(K\) has a property we call the \(T^{\partial}\)_-henselian configuration property_, then \(K\) has a unique spherically complete immediate \(T\)-convex \(T\)-differential field extension with small derivation. The existence of such a spherical completion was previously established by the first author [17, Corollary 6.4], and the \(T^{\partial}\)-henselian configuration property is similar to the differential-henselian configuration property of van den Dries and the second author [9]. Again making use of the Jacobian property, we show that monotone fields enjoy the \(T^{\partial}\)-henselian configuration property, thereby establishing the following: **Theorem**.: _Let \(K\) be a monotone \(T\)-convex \(T\)-differential field with linearly surjective differential residue field._
Then \(K\) has a unique spherically complete immediate monotone \(T\)-convex \(T\)-differential field extension, up to isomorphism over \(K\)._ This uniqueness plays an essential role in our Ax-Kochen/Ershov theorem. One may be able to adapt our definition of \(T^{\partial}\)-henselianity to study compatible derivations on other tame expansions of equicharacteristic zero valued fields. One broad class where our methods may generalize is the class of _Hensel minimal_ expansions of valued fields, introduced by Cluckers, Halupczok, and Rideau-Kikuchi [3]. The setting of Hensel minimal fields (1-h-minimal fields, to be precise) includes our present o-minimal setting, as well as the fields with analytic structure studied by Cluckers and Lipshitz [4]. As an indicator that generalizing to this class may be possible, we note that the Jacobian property, which proves so useful in this paper, holds in the setting of 1-h-minimal fields [3, Theorem 5.4.10]. ### Organization of the paper Section 2 contains background information on \(T\)-convex valuation rings and \(T\)-convex differential fields and results that we need, drawn from [7, 6, 24, 12, 10, 1, 17]. In Section 3 we introduce \(T^{\partial}\)-henselianity and record basic consequences in Section 3.1. In Section 3.2, we show that \(T^{\partial}\)-henselianity yields a lift of the differential residue field (Theorem 3.12). Section 3.3 relates \(T^{\partial}\)-henselianity to immediate extensions and shows that every spherically complete \(T\)-convex \(T\)-differential field with small derivation and linearly surjective differential residue field is \(T^{\partial}\)-henselian (Corollary 3.15). The next subsection develops further technical material for use in Section 3.5, which establishes the uniqueness of spherically complete immediate extensions and related results, conditional on the \(T^{\partial}\)-hensel configuration property introduced there. Section 4 focuses on monotone fields. In Section 4.1, we show that monotone \(T\)-convex \(T\)-differential fields have the \(T^{\partial}\)-hensel configuration property and summarize the consequences. The next subsection records variants of the results in Section 3.5, of which only Corollary 4.7 is needed later, in the proof of the Ax-Kochen/Ershov theorem. The short Section 5 explains what \(T^{\partial}\)-henselianity means when \(T\) is the theory of real closed ordered fields. Section 6 contains the Ax-Kochen/Ershov theorem and related three-sorted results, while Section 7 contains the model completion result and related one-sorted results. ## 2. Background ### Notation and conventions Throughout, \(m,n,q,r\) range over \(\mathbb{N}\coloneqq\{0,1,2,\dots\}\). For a unital ring \(R\), we let \(R^{\times}\) denote the multiplicative group of units of \(R\), and for an ordered set \(S\) equipped with a distinguished element \(0\), we set \(S^{>}\coloneqq\{s\in S:s>0\}\). In this paper, we fix a complete, model complete o-minimal theory \(T\) extending the theory of real closed ordered fields in an appropriate language \(\mathcal{L}\supseteq\{0,1,+,\cdot,<\}\). Throughout, \(K\) is a model of \(T\). Given a subset \(A\subseteq K\), we let \(\operatorname{dcl}_{\mathcal{L}}(A)\) denote the \(\mathcal{L}\)-definable closure of \(A\). Since \(T\) has definable Skolem functions, \(\operatorname{dcl}_{\mathcal{L}}(A)\) is (the underlying set of) an elementary substructure of \(K\). 
It is well-known that \(\operatorname{dcl}_{\mathcal{L}}\) is a pregeometry, and we denote the corresponding rank by \(\operatorname{rk}_{\mathcal{L}}\). Let \(M\) be a \(T\)-extension of \(K\) (that is, a model of \(T\) which extends \(K\)) and let \(A\subseteq M\). We denote by \(K\langle A\rangle\) the intermediate extension of \(K\) with underlying set \(\operatorname{dcl}_{\mathcal{L}}(K\cup A)\). We say that \(A\) is \(\mathcal{L}(K)\)**-independent** if \(A\) is independent over \(K\) with respect to the pregeometry \(\operatorname{dcl}_{\mathcal{L}}\). Otherwise \(A\) is said to be \(\mathcal{L}(K)\)**-dependent**. If \(A\) is \(\mathcal{L}(K)\)-independent and \(M=K\langle A\rangle\), then \(A\) is said to be a \(\operatorname{dcl}_{\mathcal{L}}\)**-basis for \(M\) over \(K\)**. Given \(a=(a_{1},\dots,a_{n})\in M^{n}\), we write \(K\langle a\rangle\) in place of \(K\langle\{a_{1},\dots,a_{n}\}\rangle\), and we say that \(a\) is \(\mathcal{L}(K)\)**-independent** if the set \(\{a_{1},\dots,a_{n}\}\) is \(\mathcal{L}(K)\)-independent and no components are repeated. Equivalently, \(a\) is \(\mathcal{L}(K)\)**-independent** if \(\operatorname{rk}_{\mathcal{L}}(a|K)=n\). For tuples \(a,b\in K^{n}\), we let \(a\cdot b\coloneqq a_{1}b_{1}+\dots+a_{n}b_{n}\) denote the inner product of \(a\) and \(b\). A **power function** is an \(\mathcal{L}(K)\)-definable endomorphism of the ordered multiplicative group \(K^{>}\). Each power function \(f\) is uniquely determined by its derivative at \(1\), and if \(f^{\prime}(1)=\lambda\), then we suggestively write \(a^{\lambda}\) instead of \(f(a)\) for \(a\in K^{>}\). The collection \(\Lambda\coloneqq\{f^{\prime}(1):f\text{ is a power function}\}\) forms a subfield of \(K\), called the **field of exponents of \(K\)**. If every \(\mathcal{L}(K)\)-definable function is eventually bounded by a power function, then \(K\) is said to be **power bounded**. Suppose \(K\) is power bounded. Then by Miller's dichotomy [20], every model of \(T\) is power bounded and every power function is \(\mathcal{L}(\emptyset)\)-definable, so \(\Lambda\) does not depend on \(K\). We just say that \(T\) is power bounded, and we call \(\Lambda\) the **field of exponents of \(T\)**. For the remainder of the paper, we assume that \(T\) is power bounded with field of exponents \(\Lambda\). ### Background on \(T\)-convex valuation rings In this subsection, let \(\mathcal{O}\) be a \(T\)**-convex valuation ring** of \(K\); that is, \(\mathcal{O}\subseteq K\) is convex and nonempty and \(F(\mathcal{O})\subseteq\mathcal{O}\) for every \(\mathcal{L}(\emptyset)\)-definable continuous \(F\colon K\to K\). We call \((K,\mathcal{O})\) a \(T\)**-convex valued field**. These structures were introduced and studied by van den Dries and Lewenberg [7] and additionally by van den Dries [6]. We briefly review notation, following [17, Section 1], and general facts we use; additional relevant facts on \(T\)-convex valuation rings can be found there, or in the original papers. For valuation-theoretic notation we follow [1, Chapter 3]. Let \(\mathcal{L}^{\mathcal{O}}\coloneqq\mathcal{L}\cup\{\mathcal{O}\}\) be the extension of \(\mathcal{L}\) by a unary predicate \(\mathcal{O}\) and \(T^{\mathcal{O}}\) be the theory extending \(T\) by axioms stating that \(\mathcal{O}\) is a _proper_\(T\)-convex valuation ring. 
This \(T^{\mathcal{O}}\) is complete and model complete [7, Corollary 3.13]; if \(T\) has quantifier elimination and a universal axiomatization, then \(T^{\mathcal{O}}\) has quantifier elimination [7, Theorem 3.10]. Let \(\Gamma\) be the value group of the valuation \(v\colon K^{\times}\to\Gamma\) induced by \(\mathcal{O}\), which moreover is an ordered \(\Lambda\)-vector space with scalar multiplication \(\lambda va\coloneqq v(a^{\lambda})\) for \(a\in K^{>}\) and \(\lambda\in\Lambda\). We extend \(v\) to \(v\colon K\to\Gamma\cup\{\infty\}\) by \(v(0)\coloneqq\infty\), where \(\infty\) is a symbol not in \(\Gamma\), and extend the ordering and addition of \(\Gamma\) to \(\Gamma\cup\{\infty\}\) in the natural way. We also extend \(v\) to \(K^{n}\) by \(va\coloneqq\min\{va_{1},\ldots,va_{n}\}\) for \(a\in K^{n}\). For \(a,b\in K\), we set: \[a\preccurlyeq b\ \Leftrightarrow\ va\geqslant vb,\qquad a\prec b\ \Leftrightarrow\ va>vb,\] \[a\asymp b\ \Leftrightarrow\ va=vb,\qquad a\sim b\ \Leftrightarrow\ a-b\prec b.\] The relations \(\asymp\) and \(\sim\) are equivalence relations on \(K\) and \(K^{\times}\), respectively. If \(a\sim b\), then \(a\asymp b\) and also \(a>0\) if and only if \(b>0\). The unique maximal ideal of \(\mathcal{O}\) is \(\circ\) and \(\operatorname{res}(K)\coloneqq\mathcal{O}/\circ\) is the **residue field** of \(K\). We usually denote \(\operatorname{res}(K)\) by \(\boldsymbol{k}\) (unlike in [17]) and let \(\bar{a}\) or \(\operatorname{res}(a)\) denote the image of \(a\in\mathcal{O}\) under the residue map to \(\boldsymbol{k}\). In fact, \(\boldsymbol{k}\) can be expanded to a model of \(T\)[7, Remark 2.16], and we always construe \(\boldsymbol{k}\) this way. Related is the fact that the convex hull of an elementary \(\mathcal{L}\)-substructure of \(K\) is a \(T\)-convex valuation ring of \(K\); cf. Section 3.2 on lifting the residue field. If we need to indicate the dependence of these objects on \(K\), we do so using a subscript, as in \(\Gamma_{K}\) and \(\boldsymbol{k}_{K}\). Given a \(T^{\mathcal{O}}\)-extension \(M\) of \(K\), we identify \(\boldsymbol{k}\) with a \(T\)-submodel of \(\boldsymbol{k}_{M}\) and \(\Gamma\) with an ordered \(\Lambda\)-subspace of \(\Gamma_{M}\) in the natural way. If \(\Gamma_{M}=\Gamma\) and \(\boldsymbol{k}_{M}=\boldsymbol{k}\), then the \(T^{\mathcal{O}}\)-extension \(M\) is said to be **immediate**. As a consequence of our power boundedness assumption, we have the following analogue of the Abhyankar-Zariski inequality, established by van den Dries and referred to in the literature as the **Wilkie inequality**. **Fact 2.1** ([6, Section 5]).: _Let \(M\) be a \(T^{\mathcal{O}}\)-extension of \(K\) and suppose \(\operatorname{rk}_{\mathcal{L}}(M|K)\) is finite. Then_ \[\operatorname{rk}_{\mathcal{L}}(M|K)\ \geqslant\ \operatorname{rk}_{\mathcal{L}}(\boldsymbol{k}_{M}|\boldsymbol{k})+\dim_{\Lambda}(\Gamma_{M}/\Gamma).\] An **open \(v\)-ball** is a set of the form \(B(a,\gamma)\coloneqq\{b\in K:v(b-a)>\gamma\}\) for \(a\in K\) and \(\gamma\in\Gamma\). These open \(v\)-balls form a basis for the _valuation topology_ on \(K\), which coincides with its order topology since \(\mathcal{O}\) is assumed to be a proper subring. Similarly, a **closed \(v\)-ball** is a set of the form \(\{b\in K:v(b-a)\geqslant\gamma\}\); in this latter definition we allow \(\gamma\in\Gamma\cup\{\infty\}\) so that singletons are closed \(v\)-balls. 
Later we only use open \(v\)-balls, but closed \(v\)-balls are needed for the next definitions. A collection of closed \(v\)-balls is **nested** if any two balls in the collection have nonempty intersection (recall that any two \(v\)-balls are either disjoint or one is contained in the other). We call \(K\)**spherically complete** if every nonempty nested collection of closed \(v\)-balls has nonempty intersection. For the relationship between spherical completeness and immediate extensions, see the next subsection and the beginning of Section 3.3. In places, we expand \(K\) by two quotients. We denote the quotient \(K^{\times}/(1+\circ)\) by \(\operatorname{RV}_{K}^{\times}\), with corresponding quotient map \(\operatorname{rv}\colon K^{\times}\to\operatorname{RV}_{K}^{\times}\). We set \(\operatorname{RV}_{K}\coloneqq\operatorname{RV}_{K}^{\times}\cup\{0\}\), and we extend \(\operatorname{rv}\) to all of \(K\) by setting \(\operatorname{rv}(0)\coloneqq 0\). The residue map \(\mathcal{O}^{\times}\to\boldsymbol{k}^{\times}\) induces a bijection \(\operatorname{rv}(\mathcal{O}^{\times})\to\boldsymbol{k}^{\times}\), which we also call \(\operatorname{res}\), by \(\operatorname{res}\operatorname{rv}(a)=\bar{a}\) for \(a\in\mathcal{O}^{\times}\); conversely, in the same way we can recover \(\operatorname{res}\colon\mathcal{O}^{\times}\to\boldsymbol{k}^{\times}\) from \(\operatorname{res}\colon\operatorname{rv}(\mathcal{O}^{\times})\to\boldsymbol{k}^ {\times}\). We extend \(\operatorname{res}\) to a map \(\operatorname{rv}(\mathcal{O}^{\times})\cup\{0\}\to\boldsymbol{k}\) by setting \(\operatorname{res}(0)\coloneqq 0\), and we view \(\operatorname{res}\) as a partial map from \(\operatorname{RV}_{K}\) to \(\boldsymbol{k}\). Let \(\mathcal{L}^{\operatorname{RV}}\) be the language extending \(\mathcal{L}^{\mathcal{O}}\) by a sort for \(\operatorname{RV}_{K}\) (in the language \((\cdot,\,^{-1},1,0,<)\)), a sort for \(\boldsymbol{k}\) (in the language \(\mathcal{L}\)), and the maps \(\operatorname{rv}\) and \(\operatorname{res}\). Let \(T^{\operatorname{RV}}\) be the following \(\mathcal{L}^{\operatorname{RV}}\)-theory. 1. \((K,\mathcal{O})\models T^{\mathcal{O}}\); 2. \(\operatorname{RV}_{K}^{\times}\) is an abelian (multiplicative) group in the language \((\cdot,\,^{-1},1)\); 3. \(\operatorname{rv}\colon K^{\times}\to\operatorname{RV}_{K}^{\times}\) is a surjective group homomorphism with \(\operatorname{ker}\operatorname{rv}=1+\circ\), extended to \(K\) by \(\operatorname{rv}(0)=0\); 4. \(<\) is interpreted in \(\operatorname{RV}_{K}\) by \(\operatorname{rv}(a)<\operatorname{rv}(b)\) if \(a<b\) and \(a\not\sim b\), for \(a,b\in K\); 5. \(\operatorname{res}\colon\operatorname{rv}(\mathcal{O}^{\times})\to\boldsymbol{k}^ {\times}\) is a group isomorphism extended to \(\operatorname{RV}_{K}\) by \(\operatorname{res}(\operatorname{RV}_{K}\setminus\operatorname{rv}(\mathcal{O}^{ \times}))=\{0\}\); 6. \(\boldsymbol{k}\models T\) and if \(F\colon K^{n}\to K\) is an \(\mathcal{L}(\emptyset)\)-definable continuous function (with the corresponding function \(\boldsymbol{k}^{n}\to\boldsymbol{k}\) also denoted by \(F\)), then \(\operatorname{res}(F(a))=F(\operatorname{res}(a))\) for all \(a\in\mathcal{O}^{n}\). We denote such a structure simply by \((K,\operatorname{RV}_{K})\). 
Yin [24] introduced the language \(\mathcal{L}^{\operatorname{RV}}\) and the theory \(T^{\operatorname{RV}}\) (he denoted them by \(\mathcal{L}_{\operatorname{TRV}}\) and TCVF, respectively), although for Yin, the residue field \(\boldsymbol{k}\) is a predicate on the sort for \(\operatorname{RV}_{K}\), not a sort in its own right, and \(\mathcal{O}\) is not in the language \(\mathcal{L}^{\operatorname{RV}}\) (but it is definable). Here are some initial observations about a model \((K,\operatorname{RV}_{K})\) of \(T^{\operatorname{RV}}\). The relation \(<\) in \(\operatorname{RV}_{K}\) is a linear order induced by \(K\) via \(\operatorname{rv}\), which makes \(\operatorname{RV}_{K}^{>}\coloneqq\{d\in\operatorname{RV}_{K}:d>0\}\) an ordered abelian group. The map \(\operatorname{res}\colon\mathcal{O}\to\boldsymbol{k}\) has kernel \(\circ\), so it induces an isomorphism \(\operatorname{res}(K)\to\boldsymbol{k}\) of (ordered) fields. Moreover, it follows from (6) that the map \(\operatorname{res}(K)\to\boldsymbol{k}\) is an isomorphism of \(\mathcal{L}\)-structures by [7, Lemma 1.13] and [24, Remark 2.3] (the latter is a syntactic manoeuvre replacing all primitives of \(\mathcal{L}\) except \(<\) by function symbols interpreted as continuous functions). Observe that, again using [24, Remark 2.3], \(\operatorname{RV}_{K}\) and \(\boldsymbol{k}\) are interpretable in \((K,\mathcal{O})\), so \((K,\operatorname{RV}_{K})\) is a reduct of \((K,\mathcal{O})^{\operatorname{eq}}\), the expansion of \((K,\mathcal{O})\) by all imaginaries; more precisely, \((K,\operatorname{RV}_{K})\) is an expansion by definitions of such a reduct. It follows on general model-theoretic grounds that \((K,\operatorname{RV}_{K})\) is the unique expansion of \((K,\mathcal{O})\) to a model of \(T^{\operatorname{RV}}\) and that if \((M,\mathcal{O}_{M})\) is an elementary \(T^{\mathcal{O}}\)-extension of \((K,\mathcal{O})\), then \((M,\operatorname{RV}_{M})\) is an elementary \(T^{\operatorname{RV}}\)-extension of \((K,\operatorname{RV}_{K})\); these facts were first observed by Yin in [24, Proposition 2.13] and [24, Corollary 2.17], respectively. Next we consider the further extension of \(\mathcal{L}^{\operatorname{RV}}\) by all the imaginary sorts coming from \(\operatorname{RV}_{K}\). The corresponding language is denoted by \(\mathcal{L}^{\operatorname{RV}^{\operatorname{eq}}}\) and the corresponding theory is denoted by \(T^{\operatorname{RV}^{\operatorname{eq}}}\). We let \(\operatorname{RV}_{K}^{\operatorname{eq}}\) be the structure consisting of the sort \(\operatorname{RV}_{K}\), the sort \(\boldsymbol{k}\), and all of these imaginary sorts. As before, \((K,\mathcal{O})\) admits a unique expansion to a model \((K,\operatorname{RV}_{K}^{\operatorname{eq}})\) of \(T^{\operatorname{RV}^{\operatorname{eq}}}\), and if \((M,\mathcal{O}_{M})\) is an elementary \(T^{\operatorname{\mathcal{O}}}\)-extension of \((K,\mathcal{O})\), then \((M,\operatorname{RV}_{M}^{\operatorname{eq}})\) is an elementary \(T^{\operatorname{RV}^{\operatorname{eq}}}\)-extension of \((K,\operatorname{RV}_{K}^{\operatorname{eq}})\). Also note that if \((M,\mathcal{O}_{M})\) is an _immediate_\(T^{\mathcal{O}}\)-extension of \((K,\mathcal{O})\), then \(\operatorname{RV}_{M}^{\operatorname{eq}}=\operatorname{RV}_{K}^{\operatorname {eq}}\). In addition, any \(\mathcal{L}^{\operatorname{RV}^{\operatorname{eq}}}(K\cup\operatorname{RV}_{K} ^{\operatorname{eq}})\)-definable subset of \(K^{n}\) is already \(\mathcal{L}^{\mathcal{O}}(K)\)-definable. 
We use these facts in combination with the following key result, the _Jacobian property_ for definable functions in models of \(T^{\mathcal{O}}\): **Fact 2.2** ([12, Theorem 3.18]).: _Let \(A\subseteq K\cup\operatorname{RV}_{K}^{\operatorname{eq}}\) and let \(F\colon K^{n}\to K\) be an \(\mathcal{L}^{\operatorname{RV}^{\operatorname{eq}}}(A)\)-definable function. Then there is an \(\mathcal{L}^{\operatorname{RV}^{\operatorname{eq}}}(A)\)-definable map \(\chi\colon K^{n}\to\operatorname{RV}_{K}^{\operatorname{eq}}\) such that for each \(s\in\chi(K^{n})\), if \(\chi^{-1}(s)\) contains an open \(v\)-ball, then either \(F\) is constant on \(\chi^{-1}(s)\) or there is \(d\in K^{n}\) such that_ \[v\big{(}F(x)-F(y)-d\cdot(x-y)\big{)}\ >\ vd+v(x-y)\] _for all \(x,y\in\chi^{-1}(s)\) with \(x\neq y\)._ ### Background on \(T\)-derivations Let \(\partial\colon K\to K\) be a map. For \(a\in K\), we use \(a^{\prime}\) or \(\partial a\) in place of \(\partial(a)\) and if \(a\neq 0\), then we set \(a^{\dagger}\coloneqq a^{\prime}/a\). Given \(r\in\mathbb{N}\), we write \(a^{(r)}\) in place of \(\partial^{r}(a)\), and we let \(\mathcal{J}_{\partial}^{r}(a)\) denote the tuple \((a,a^{\prime},\ldots,a^{(r)})\). We use \(\mathcal{J}_{\partial}^{\infty}(a)\) for the infinite tuple \((a,a^{\prime},a^{\prime\prime},\ldots)\). Let \(F\colon U\to K\) be an \(\mathcal{L}(\emptyset)\)-definable \(\mathcal{C}^{1}\)-function with \(U\subseteq K^{n}\) open. We say that \(\partial\) is **compatible with**\(F\) if \[F(u)^{\prime}\ =\ \frac{\partial F}{\partial Y_{1}}(u)u_{1}^{\prime}+\cdots+\frac{\partial F}{\partial Y_{n}}(u)u_{n}^{\prime}\] for each \(u=(u_{1},\ldots,u_{n})\in U\). We say that \(\partial\) is a \(T\)**-derivation on**\(K\) if \(\partial\) is compatible with every \(\mathcal{L}(\emptyset)\)-definable \(\mathcal{C}^{1}\)-function with open domain. Let \(T^{\partial}\) be the \(\mathcal{L}^{\partial}\coloneqq\mathcal{L}\cup\{\partial\}\)-theory which extends \(T\) by axioms stating that \(\partial\) is a \(T\)-derivation. The study of \(T\)-derivations was initiated in [10]. Any \(T\)-derivation on \(K\) is a _derivation_ on \(K\), that is, a map satisfying the identities \((a+b)^{\prime}=a^{\prime}+b^{\prime}\) and \((ab)^{\prime}=a^{\prime}b+ab^{\prime}\) for \(a,b\in K\) (this follows from compatibility with the functions \((x,y)\mapsto x+y\) and \((x,y)\mapsto xy\)). For the remainder of this subsection, \(\partial\) is a \(T\)-derivation on \(K\). We also call \((K,\partial)\) a \(T\)**-differential field**. It is straightforward to verify the following fact. **Fact 2.3**.: _If \(x\mapsto x^{\lambda}\colon K^{>}\to K\) is an \(\mathcal{L}(\emptyset)\)-definable power function on \(K\), then \((y^{\lambda})^{\prime}=\lambda y^{\lambda-1}y^{\prime}\) for all \(y\in K^{>}\)._ Given an element \(a\) in a \(T^{\partial}\)-extension of \(K\), we write \(K\langle\!\langle a\rangle\!\rangle\) in place of \(K\langle\mathcal{J}_{\partial}^{\infty}(a)\rangle\). Then \(K\langle\!\langle a\rangle\!\rangle\) is itself a \(T^{\partial}\)-extension of \(K\). We say that \(a\) is \(T^{\partial}\)**-algebraic over**\(K\) if \(\mathcal{J}_{\partial}^{r}(a)\) is \(\mathcal{L}(K)\)-dependent for some \(r\); otherwise, \(a\) is said to be \(T^{\partial}\)**-transcendental**. Equivalently, \(a\) is \(T^{\partial}\)-algebraic over \(K\) if \(a\in\operatorname{cl}^{\partial}(K)\), where \(\operatorname{cl}^{\partial}\) is the \(\partial\)-closure pregeometry considered in [10, Section 3]. 
More generally, a \(T^{\partial}\)-extension \(L\) of \(K\) is said to be \(T^{\partial}\)**-algebraic over**\(K\) if each \(a\in L\) is \(T^{\partial}\)-algebraic over \(K\) or, equivalently, if \(L\subseteq\operatorname{cl}^{\partial}(K)\). As a consequence of [10, Lemma 3.2 (4)], we have the following: **Fact 2.4**.: _Let \(a\) be an element in a \(T^{\partial}\)-extension of \(K\). If the tuple \(\mathcal{J}_{\partial}^{r}(a)\) is \(\mathcal{L}(K)\)-dependent, then \(K\langle\!\langle a\rangle\!\rangle=K\langle\mathcal{J}_{\partial}^{r-1}(a)\rangle\)._ Any \(T\)-extension of \(K\) can be expanded to a \(T^{\partial}\)-extension of \(K\), and this expansion depends entirely on how the derivation is extended to a \(\operatorname{dcl}_{\mathcal{L}}\)-basis: **Fact 2.5** ([10, Lemma 2.13]).: _Let \(M\) be a \(T\)-extension of \(K\), let \(A\) be a \(\operatorname{dcl}_{\mathcal{L}}\)-basis for \(M\) over \(K\), and let \(s\colon A\to M\) be a map. Then there is a unique extension of \(\partial\) to a \(T\)-derivation on \(M\) such that \(a^{\prime}=s(a)\) for all \(a\in A\)._ The \(T\)-derivation \(\partial\) is said to be **generic** if for each \(\mathcal{L}(K)\)-definable function \(F\colon U\to K\) with \(U\subseteq K^{r}\) open, there is \(a\in K\) with \(\mathcal{J}_{\partial}^{r-1}(a)\in U\) such that \(a^{(r)}=F(\mathcal{J}_{\partial}^{r-1}(a))\). Let \(T^{\partial}_{\mathcal{G}}\) be the \(\mathcal{L}^{\partial}\)-theory which extends \(T^{\partial}\) by axioms stating that \(\partial\) is generic. In [10], it was shown that the theory \(T^{\partial}_{\mathcal{G}}\) is the model completion of \(T^{\partial}\). We record here the main extension and embedding results which go into this proof, for use in Section 7. **Fact 2.6** ([10, Proposition 4.3 and Lemma 4.7]).: _There is a model \(M\models T^{\partial}_{\mathcal{G}}\) which extends \(K\) with \(|M|=|K|\). Given any such extension \(M\) and any \(|K|^{+}\)-saturated model \(M^{*}\models T^{\partial}_{\mathcal{G}}\) extending \(K\), there is an \(\mathcal{L}^{\partial}(K)\)-embedding \(M\to M^{*}\)._ Next we recall some terminology and notation about linear differential operators from [1, Chapter 5]. Let \(K\{Y\}\coloneqq K[Y,Y^{\prime},\dots]\) be the differential ring of differential polynomials over \(K\) and let \(K[\partial]\) be the ring of linear differential operators over \(K\). An element in \(K[\partial]\) corresponds to a \(C\)-linear operator \(y\mapsto A(y)\) on \(K\), where \(A(Y)=a_{0}Y+a_{1}Y^{\prime}+\dots+a_{r}Y^{(r)}\in K\{Y\}\) is a homogeneous linear differential polynomial. Viewed as an operator in \(K[\partial]\), we write \(A=a_{0}+a_{1}\partial+\dots+a_{r}\partial^{r}\), and we freely pass between viewing \(A\) as an element of \(K[\partial]\) and of \(K\{Y\}\). Henceforth, \(A=a_{0}+a_{1}\partial+\dots+a_{r}\partial^{r}\in K[\partial]\). If \(a_{r}\neq 0\), we say that \(A\) has **order**\(r\). Multiplication in the ring \(K[\partial]\) is given by composition. In particular, for \(g\in K^{\times}\) the linear differential operator \(Ag\in K[\partial]\) corresponds to the differential polynomial \(A_{\times g}(Y)=A(gY)\in K\{Y\}\). We call \(K\)**linearly surjective** if every \(A\in K[\partial]^{\neq}\coloneqq K[\partial]\setminus\{0\}\) is surjective. 
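For instance (a small illustration added here, not taken from [1]): if \(A=a_{0}+a_{1}\partial\) has order \(1\) and \(g\in K^{\times}\), then the Leibniz rule gives \[A_{\times g}(Y)\ =\ A(gY)\ =\ a_{0}gY+a_{1}(gY)^{\prime}\ =\ (a_{0}g+a_{1}g^{\prime})Y+a_{1}gY^{\prime},\] so \(Ag=(a_{0}g+a_{1}g^{\prime})+a_{1}g\partial\) in \(K[\partial]\); multiplicative conjugation reshuffles the coefficients via \(g^{\prime}\) but, since \(a_{1}g\neq 0\), it does not change the order.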
### Background on \(T\)-convex \(T\)-differential fields

Let \(\mathcal{L}^{\mathcal{O},\partial}\coloneqq\mathcal{L}^{\mathcal{O}}\cup\mathcal{L}^{\partial}=\mathcal{L}\cup\{\mathcal{O},\partial\}\), and let \(T^{\mathcal{O},\partial}\) be the \(\mathcal{L}^{\mathcal{O},\partial}\)-theory which extends \(T^{\partial}\) and \(T^{\mathcal{O}}\) by the additional axiom "\(\partial\mathcal{o}\subseteq\mathcal{o}\)".

**Assumption 2.7**.: _For the remainder of this paper, let \(K=(K,\mathcal{O},\partial)\models T^{\mathcal{O},\partial}\)._

This additional axiom, called **small derivation**, ensures that \(\partial\) is continuous with respect to the valuation topology (equivalently, order topology) on \(K\) [1, Lemma 4.4.6], so every model of \(T^{\mathcal{O},\partial}\) is a \(T\)-convex \(T\)-differential field, as defined in the introduction. Small derivation also gives \(\partial\mathcal{O}\subseteq\mathcal{O}\) [1, Lemma 4.4.2], so \(\partial\) induces a derivation on \(\boldsymbol{k}\). Moreover, \(\boldsymbol{k}\) is a \(T\)-differential field (see [17, p. 280]). In this paper we are interested in the case that the derivation induced on \(\boldsymbol{k}\) is nontrivial; indeed, often it will be linearly surjective or even generic. A consequence of the main results of [17] provides spherically complete extensions in this case:

**Fact 2.8** ([17, Corollary 6.4]).: _If the derivation induced on \(\boldsymbol{k}\) is nontrivial, then \(K\) has a spherically complete immediate \(T^{\mathcal{O},\partial}\)-extension._

It follows that if the derivation induced on \(\boldsymbol{k}\) is nontrivial, then \(K\) is spherically complete if and only if \(K\) has no proper immediate \(T^{\mathcal{O},\partial}\)-extension. The notion of an \(\mathcal{L}(K)\)-definable function being in _implicit form_, defined in the next section, plays an important role in the result above and in our work here, as does the notion of _vanishing_, which we define in Section 3.3 when it is needed.

Now we define an important function on \(\Gamma\) associated to a linear differential operator \(A\in K[\partial]^{\neq}\); for details see [1, Section 4.5]. The valuation of \(K\) induces a valuation \(v\) on \(K[\partial]^{\neq}\) given by \(vA\coloneqq\min\{va_{i}:0\leqslant i\leqslant r\}\). Combining this with multiplicative conjugation yields a strictly increasing function \(v_{A}\colon\Gamma\to\Gamma\) defined by \(v_{A}(\gamma)\coloneqq v(A_{\times g})\) for any \(g\in K^{\times}\) with \(vg=\gamma\). As a consequence of the Equalizer Theorem [1, Theorem 6.0.1], \(v_{A}\) is bijective. We extend \(v_{A}\) to \(\Gamma\cup\{\infty\}\) by \(v_{A}(\infty)=v(A_{\times 0})\coloneqq\infty\).

## 3. \(T^{\partial}\)-henselianity

Given an \(\mathcal{L}(K)\)-definable function \(F\colon K^{1+r}\to K\), a linear differential operator \(A\in K[\partial]^{\neq}\), and an open \(v\)-ball \(B\subseteq K\), we say that \(A\) **linearly approximates \(F\) on \(B\)** if \[v\big{(}F(\mathcal{J}_{\partial}^{r}b)-F(\mathcal{J}_{\partial}^{r}a)-A(b-a)\big{)}\ >\ vA_{\times(b-a)}\] for all \(a,b\in B\) with \(a\neq b\). Note that if \(A\) linearly approximates \(F\) on \(B\), then \(A\) linearly approximates \(F\) on any open \(v\)-ball contained in \(B\).
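As a simple illustration with \(r=0\), so that no derivatives are involved: let \(c\in K^{\times}\), let \(\gamma>vc\), and let \(F\colon K\to K\) be given by \(F(Y_{0})=Y_{0}^{2}\). For \(a,b\in B(c,\gamma)\) with \(a\neq b\) we have \(v(a-c),v(b-c)\geqslant\gamma\), so \(v(a+b-2c)\geqslant\gamma>vc\), and hence, as \(2\asymp 1\), \[v\big{(}F(b)-F(a)-2c(b-a)\big{)}\ =\ v(b-a)+v(a+b-2c)\ >\ v(b-a)+vc\ =\ vA_{\times(b-a)}\] for the order \(0\) operator \(A\coloneqq 2c\in K[\partial]^{\neq}\). Thus \(A\) linearly approximates \(F\) on \(B(c,\gamma)\), playing the role of the derivative \(2Y_{0}\) evaluated at \(c\).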
In this section, \(F\colon K^{1+r}\to K\) will always be an \(\mathcal{L}(K)\)-definable function in **implicit form**, that is, \[F\ =\ \mathfrak{m}_{F}\big{(}Y_{r}-I_{F}(Y_{0},\ldots,Y_{r-1})\big{)}\] for some \(\mathfrak{m}_{F}\in K^{\times}\) and \(\mathcal{L}(K)\)-definable function \(I_{F}\colon K^{r}\to K\). Additionally, let \(A\) range over \(K[\partial]\), \(a\) range over \(K\), and \(\gamma\) range over \(\Gamma\). We say that \((F,A,a,\gamma)\)**is in \(T^{\mathfrak{a}}\)-hensel configuration** if \(A\) linearly approximates \(F\) on \(B(a,\gamma)\) and \(vF(\mathfrak{J}_{\partial}^{r}a)>v_{A}(\gamma)\). We say that \(K\) is \(T^{\mathfrak{a}}\)**-henselian** if: 1. its differential residue field \(\boldsymbol{k}\) is linearly surjective; 2. for every \((F,A,a,\gamma)\) in \(T^{\mathfrak{a}}\)-hensel configuration, there exists \(b\in B(a,\gamma)\) with \(F(\mathfrak{J}_{\partial}^{r}b)=0\) and \(vA_{\times(b-a)}\geqslant vF(\mathfrak{J}_{\partial}^{r}a)\). We allow all valuations to be infinite, so if \(F(\mathfrak{J}_{\partial}^{r}a)=0\), then we may take \(b=a\). The definition of \(T^{\mathfrak{a}}\)-henselianity is inspired by that of \(\sigma\)_-henselianity_ for analytic difference valued fields from [22]. Implicit form of definable functions was introduced and exploited in [17]. ### Basic consequences Here are some consequences of \(T^{\mathfrak{a}}\)-henselianity needed later. The proofs are mostly routine adaptations of results from [1, Sections 7.1 and 7.5], using \(T^{\mathfrak{a}}\)-henselianity instead of d-henselianity, so we give a couple as an illustration and omit most of the others. For the next lemma, we call \(A\in K[\partial]^{\neq}\)**nearly surjective** if for every \(b\in K^{\times}\) there is \(a\in K^{\times}\) with \(A(a)=b\) and \(v_{A}(va)=vb\). **Lemma 3.1**.: _Suppose that \(K\) is \(T^{\mathfrak{a}}\)-henselian. Then every \(A\in K[\partial]^{\neq}\) is neatly surjective._ Proof.: Let \(A\in K[\partial]^{\neq}\) have order \(r\). To see that \(A\) is neatly surjective, let \(b\in K^{\times}\) and take \(\alpha\in\Gamma\) with \(v_{A}(\alpha)=\beta\coloneqqvb\). We need to find \(a\in K^{\times}\) with \(va=\alpha\) and \(A(a)=b\). Take \(\phi\in K^{\times}\) with \(v\phi=\alpha\) and let \(D\coloneqq b^{-1}A\phi\in K[\partial]^{\neq}\), so \(v(D)=0\). Take an \(\mathcal{L}(K)\)-definable function \(F\colon K^{1+r}\to K\) in implicit form such that \(F(\mathfrak{J}_{\partial}^{r}a)=D(a)-1\) for all \(a\in K\). Since \(\boldsymbol{k}\) is linearly surjective, we have \(u\in\mathcal{O}^{\times}\) with \(D(u)-1\prec 1\). As \(D\) linearly approximates \(F\) on \(K\), \((F,D,u,0)\) is in \(T^{\mathfrak{a}}\)-hensel configuration, and thus we have \(y\sim u\) with \(F(\mathfrak{J}_{\partial}^{r}y)=0\). Then \(a\coloneqq\phi y\) works. The next corollary follows from Lemma 3.1 as [1, Corollary 7.1.9] follows from [1, Lemma 7.1.8]. **Corollary 3.2**.: _If \(K\) is \(T^{\mathfrak{a}}\)-henselian, then \(\o=(1+\o)^{\dagger}\)._ Under the assumption of monotonicity this yields additional information as in [1, Corollary 7.1.11]. We say that \(K\) has **many constants** if \(v(C^{\times})=\Gamma\). Note that if \(K\) has many constants (and small derivation), then \(K\) is monotone. **Corollary 3.3**.: _Suppose that \(K\) is \(T^{\mathfrak{a}}\)-henselian and monotone, and \((\boldsymbol{k}^{\times})^{\dagger}=\boldsymbol{k}\). 
Then \(K\) has many constants and \((K^{\times})^{\dagger}=(\mathcal{O}^{\times})^{\dagger}=\mathcal{O}\)._ Rather opposite to monotonicity, we say that \(K\) has **few constants** if \(C\subseteq\mathcal{O}\). Now we record several lemmas about \(K\) with few constants adapted from [1, Section 7.5]. By [1, Lemma 9.1.1] and Lemma 3.1, any \(T^{\mathfrak{a}}\)-henselian \(K\) with few constants is **asymptotic** in the sense of [1, Chapter 9]; that is, for all nonzero \(a,b\in\o\), we have \(a\prec b\iff a^{\prime}\prec b^{\prime}\). Conversely, any asymptotic \(K\) obviously satisfies \(C\subseteq\mathcal{O}\). The next lemma is analogous to [1, Lemma 7.5.1] but has a simpler proof. **Lemma 3.4**.: _Suppose that \(K\) is \(T^{\mathfrak{a}}\)-henselian and let \(A\in K[\partial]^{\neq}\) have order \(r\). If \(A(1)\prec A\), then \(A(y)=0\) for some \(y\in K\) with \(y\sim 1\) and \(vA_{\times(y-1)}\geqslant vA(1)\)._ Proof.: Take an \(\mathcal{L}(K)\)-definable function \(F\colon K^{1+r}\to K\) in implicit form such that \(F(\mathfrak{J}_{\partial}^{r}a)=A(a)\) for all \(a\in K\). If \(A(1)\prec A\), then \((F,A,1,0)\) is in \(T^{\mathfrak{a}}\)-hensel configuration, so \(T^{\mathfrak{a}}\)-henselianity yields the desired \(y\). The next lemma follows from Lemma 3.4 as [1, Lemma 7.5.2] follows from [1, Lemma 7.5.1]. **Lemma 3.5**.: _Suppose that \(K\) is \(T^{\mathfrak{a}}\)-henselian and \(C\subseteq\mathcal{O}\). Let \(A\in K[\partial]^{\neq}\) have order \(r\). There do not exist \(b_{0},\ldots,b_{r}\in K^{\times}\) such that \(b_{0}\succ b_{1}\succ\cdots\succ b_{r}\) and \(A(b_{i})\prec Ab_{i}\) for \(i=0,\ldots,r\)._ The proof of Lemma 3.6 is adapted from that of [1, Lemma 7.5.5]. **Lemma 3.6**.: _Suppose that \(K\) is \(T^{\mathfrak{a}}\)-henselian and \(C\subseteq\mathcal{O}\). Let \(F\colon K^{1+r}\to K\) be an \(\mathcal{L}(K)\)-definable function in implicit form and \(A\in K[\partial]^{\neq}\) have order \(q\). There do not exist \(y_{0},\ldots,y_{q+1}\in K\) such that_ 1. \(y_{i-1}-y_{i}\succ y_{i}-y_{i+1}\) _for_ \(i=1,\ldots,q\) _and_ \(y_{q}\neq y_{q+1}\)_;_ 2. \(F(\vec{q}_{\_}{\sigma}^{\,\sigma}y_{i})=0\) _for_ \(i=0,\ldots,q+1\)_;_ 3. \((F,A,y_{q+1},\gamma)\) _is in_ \(T^{\mathfrak{d}}\)_-hensel configuration and_ \(v(y_{0}-y_{q+1})>\gamma\) _for some_ \(\gamma\in\Gamma\)_._ Proof.: Suppose towards a contradiction that we have \(y_{0},\ldots,y_{q+1}\in K\) and \(\gamma\in\Gamma\) satisfying (i)-(iii). Below, \(i\) ranges over \(\{0,\ldots,q\}\). Set \(G\coloneqq F_{+y_{q+1}}\) and \(b_{i}\coloneqq y_{i}-y_{q+1}\). Then \(b_{i}\sim y_{i}-y_{i+1}\), so \(b_{0}\succ b_{1}\succ\cdots\succ b_{q}\neq 0\). Also, \(G(\vec{q}_{\_}{\sigma}^{\,\sigma}b_{i})=F(\vec{q}_{\_}{\sigma}^{\,\sigma}y_{i })=0\) and \(G(0)=F(\vec{q}_{\_}{\sigma}^{\,\sigma}y_{q+1})=0\). Since \(vb_{\_}{0}>\gamma\), by (iii) we have \(\varepsilon_{\_}i\) such that \(v\varepsilon_{\_}i>vA\times_{b_{\_}i}\) and \(0=G(\vec{q}_{\_}{\sigma}^{\,\sigma}b_{i})=A(b_{\_}i)+\varepsilon_{\_}i\). In particular, \(vA(b_{\_}i)=v(\varepsilon_{\_}i)>vA\times_{b_{\_}i}\), contradicting Lemma 3.5 with \(q\) in the role of \(r\). **Remark 3.7**.: What of our assumption that \(T\) is power bounded? The definitions of \(T\)-convex valuation ring, \(T\)-derivation, and \(T^{\mathfrak{d}}\)-henselianity do not use it, nor do the above consequences of \(T^{\mathfrak{d}}\)-henselianity. Suppose temporarily that \(T\) is not power bounded. 
Then by Miller's dichotomy [20], we have an \(\mathcal{L}(\emptyset)\)-definable exponential function \(E\colon K\to K^{>}\) (i.e., \(E\) is an isomorphism from the ordered additive group of \(K\) to the ordered multiplicative group \(K^{>}\) which is equal to its own derivative). As we now explain, this \(E\) is not compatible with monotonicity, at least if we suppose in addition that the derivation of \(\boldsymbol{k}\) is nontrivial and that \(K\) is nontrivially valued. Let \(a\in K\) with \(a\succ 1\). If \(a^{\prime}\succ 1\), then \(E(a)^{\prime}=a^{\prime}E(a)\succ E(a)\), so \(K\) is not monotone. If \(a^{\prime}\preccurlyeq 1\), taking \(u\in K\) with \(u\asymp u^{\prime}\asymp 1\) yields \((au)^{\prime}=a^{\prime}u+au^{\prime}\sim au^{\prime}\asymp a\), so \(b\coloneqq au\) satisfies \(b^{\prime}\succ 1\) and \(E(b)^{\prime}\succ E(b)\). ### Lifting the residue field In this subsection we show that every \(T^{\mathfrak{d}}\)-henselian \(T^{\mathcal{O},\mathfrak{d}}\)-model admits a lift of its differential residue field as an \(\mathcal{L}^{\mathfrak{d}}\)-structure in the sense defined before Corollary 3.11 (cf. [1, Proposition 7.1.3]). A **partial \(T\)-lift of the residue field \(\boldsymbol{k}\)** is a \(T\)-submodel \(E\subseteq K\) which is contained in \(\mathcal{O}\). If \(E\) is a partial \(T\)-lift of \(\boldsymbol{k}\), then the residue map induces an \(\mathcal{L}\)-embedding \(E\to\boldsymbol{k}\). If this embedding is surjective onto \(\boldsymbol{k}\), then \(E\) is called a \(T\)**-lift of \(\boldsymbol{k}\)**. By [7, Theorem 2.12], \(\boldsymbol{k}\) always admits a \(T\)-lift. **Lemma 3.8**.: _Let \(E\subseteq K\) be a partial \(T\)-lift of \(\boldsymbol{k}\) and let \(a,b\) be tuples in \(\mathcal{O}^{n}\) with \(a-b\in\mathcal{o}^{n}\). Suppose that \(a\) is \(\mathcal{L}(E)\)-independent and that \(E\langle a\rangle\) is a partial \(T\)-lift of \(\boldsymbol{k}\). Then \(a\) and \(b\) have the same \(\mathcal{L}^{\mathrm{RV}}\)-type over \(E\cup\mathrm{rv}\big{(}E\langle a\rangle\big{)}\). In particular, \(E\langle b\rangle\) is also a partial \(T\)-lift of \(\boldsymbol{k}\)._ Proof.: We proceed by induction on \(n\), with the case \(n=0\) holding trivially. Assume that \((a_{\_}1,\ldots,a_{\_}{n-1})\) and \((b_{\_}1,\ldots,b_{\_}{n-1})\) have the same \(\mathcal{L}^{\mathrm{RV}}\)-type over \(E\cup\mathrm{rv}\big{(}E\langle a_{\_}1,\ldots,a_{\_}{n-1}\rangle\big{)}\). This assumption yields a partial elementary \(\mathcal{L}^{\mathrm{RV}}\)-embedding \(\imath\colon E\langle a_{\_}1,\ldots,a_{\_}{n-1}\rangle\to E\langle b_{\_}1, \ldots,b_{\_}{n-1}\rangle\) which fixes \(E\) and \(\mathrm{rv}\big{(}E\langle a_{\_}1,\ldots,a_{\_}{n-1}\rangle\big{)}\) and which sends \(a_{\_}i\) to \(b_{\_}i\) for each \(i<n\). Let \(g\in E\langle a_{\_}1,\ldots,a_{\_}{n-1}\rangle\). We claim that \(g<a_{\_}n\iff\imath(g)<b_{\_}n\). Take some \(\mathcal{L}(E)\)-definable function \(G\) with \(g=G(a_{\_}1,\ldots,a_{\_}{n-1})\). Since \(a\) is \(\mathcal{L}(E)\)-independent, the function \(G\) is continuous on some \(\mathcal{L}(E)\)-definable neighborhood \(U\) of \((a_{\_}1,\ldots,a_{\_}{n-1})\). Note that \((b_{\_}1,\ldots,b_{\_}{n-1})\in U\) as well, so \(G(a_{\_}1,\ldots,a_{\_}{n-1})-G(b_{\_}1,\ldots,b_{\_}{n-1})\prec 1\) by [7, Lemma 1.13]. Since \(E\langle a\rangle\) is a partial \(T\)-lift of \(\boldsymbol{k}\), we have \(a_{\_}n-G(a_{\_}1,\ldots,a_{\_}{n-1})\asymp 1\). 
By assumption, \(a_{n}-b_{n}\prec 1\), so \[a_{n}-g\ =\ a_{n}-G(a_{1},\ldots,a_{n-1})\ \sim\ b_{n}-G(b_{1},\ldots,b_{n-1})\ =\ b_{n}-\imath(g).\] In particular, \(a_{n}-g\) and \(b_{n}-\imath(g)\) have the same sign. This allows us to extend \(\imath\) to an \(\mathcal{L}(E)\)-embedding \(\jmath\colon E\langle a\rangle\to E\langle b\rangle\) by mapping \(a_{n}\) to \(b_{n}\). To see that \(\jmath\) is even an elementary \(\mathcal{L}^{\mathrm{RV}}\)-embedding over \(E\cup\mathrm{rv}\big{(}E\langle a\rangle\big{)}\), it suffices to show that \(\mathrm{rv}\,h=\mathrm{rv}\,\jmath(h)\) for each \(h\in E\langle a\rangle\). We may assume that \(h\neq 0\), so \(h\asymp 1\). Take some \(\mathcal{L}(E)\)-definable function \(H\) with \(h=H(a)\). Again, \(H\) is continuous on an open set containing \(a\) and \(b\), so we may use [7, Lemma 1.13] to get that \(H(a)-H(b)\prec 1\). Thus, \(H(a)\sim H(b)\), so \[\mathrm{rv}\,h\ =\ \mathrm{rv}\,H(a)\ =\ \mathrm{rv}\,H(b)\ =\ \mathrm{rv}\,\jmath(h).\qed\]

**Lemma 3.9**.: _Let \(n\) be given, let \(E\) be a partial \(T\)-lift of \(\boldsymbol{k}\), let \(a\in\mathcal{O}^{\times}\) with \(\bar{a}\not\in\mathrm{res}(E)\), and suppose that \(\mathcal{J}_{\partial}^{n-1}(a)\) is \(\mathcal{L}(E)\)-independent and that \(E\langle\mathcal{J}_{\partial}^{n-1}(a)\rangle\) is a partial \(T\)-lift of \(\boldsymbol{k}\). Let \(G\colon K^{1+n}\to K\) be an \(\mathcal{L}(E)\)-definable function in implicit form with \(\mathfrak{m}_{G}=1\). Then there is \(A\in K[\partial]\) with \(vA=0\) which linearly approximates \(G\) on \(a+\mathcal{o}\)._

Proof.: By applying Fact 2.2 to the function \(I_{G}\), we find an \(\mathcal{L}^{\mathrm{RV}^{\mathrm{eq}}}(E)\)-definable map \(\chi\colon K^{n}\to\operatorname{RV}_{K}^{\operatorname{eq}}\) such that for each \(s\in\chi(K^{n})\), if \(\chi^{-1}(s)\) contains an open \(v\)-ball, then either \(I_{G}\) is constant on \(\chi^{-1}(s)\) or there is \(d\in K^{n}\) such that \[v\big{(}I_{G}(x)-I_{G}(y)-d\cdot(x-y)\big{)}\ >\ vd+v(x-y)\] for all \(x,y\in\chi^{-1}(s)\) with \(x\neq y\). Let \(s_{0}\coloneqq\chi\big{(}\mathcal{J}_{\partial}^{n-1}(a)\big{)}\) and let \(U\coloneqq\chi^{-1}(s_{0})\). Note that if \(x\in\mathcal{J}_{\partial}^{n-1}(a)+\mathcal{o}^{n}\), then \(x\) and \(\mathcal{J}_{\partial}^{n-1}(a)\) have the same \(\mathcal{L}^{\mathrm{RV}}\)-type over \(E\cup\mathrm{rv}\big{(}E\langle\mathcal{J}_{\partial}^{n-1}(a)\rangle\big{)}\) by Lemma 3.8, so \(x\in U\).
In particular, \(\mathcal{J}_{\partial}^{n-1}(a)+\mathcal{o}^{n}\subseteq U\). [...]

Let \(E_{0}\subseteq E\) be a \(T\)-lift of \(\operatorname{res}(E)\), so \(E_{0}\langle\mathcal{J}_{\partial}^{n-1}b\rangle\) is a \(T\)-lift of \(\operatorname{res}(E)\langle\mathcal{J}_{\partial}^{n-1}a\rangle\). Since \(a^{(n)}\in\operatorname{res}(E)\langle\mathcal{J}_{\partial}^{n-1}a\rangle\), there is some \(\mathcal{L}(E_{0})\)-definable function \(G\colon K^{n}\to K\) with \(b^{(n)}-G(\mathcal{J}_{\partial}^{n-1}b)\prec 1\). Let \(H\colon K^{n+1}\to K\) be the function \[H(Y_{0},\ldots,Y_{n})\ \coloneqq\ Y_{n}-G(Y_{0},\ldots,Y_{n-1}),\] so \(H\) is in implicit form with \(\mathfrak{m}_{H}=1\) and \(H(\mathcal{J}_{\partial}^{n}b)\prec 1\). Applying Lemma 3.9, we get a linear differential operator \(A\) with \(vA=0\) which linearly approximates \(H\) on \(b+\mathcal{o}\). Then \((H,A,b,0)\) is in \(T^{\partial}\)-hensel configuration, and since \(K\) is assumed to be \(T^{\partial}\)-henselian, we may replace \(b\) with some element of \(b+\mathcal{o}\) and arrange that \(H(\mathcal{J}_{\partial}^{n}b)=0\). Note that then \(E\langle\mathcal{J}_{\partial}^{n-1}b\rangle=E\langle\!\langle b\rangle\!\rangle\), so (i) holds. As above, (ii) follows from (i) and the Wilkie inequality.
For (iii), let \(K^{*}\) and \(a\) be given. Arguing as we did with \(K\), we can take a lift \(b^{*}\) of \(u(a)\) in \(K^{*}\) such that \(H(\beta_{\delta}^{n}b^{*})=0\). Let \(j\colon E\langle\!\langle b\rangle\!\rangle\to K^{*}\) be the map that fixes each element of \(E\) and sends \(\beta_{\delta}^{n-1}(b)\) to \(\beta_{\delta}^{n-1}(b^{*})\). As above, this map is easily seen to be an \(\mathcal{L}^{\mathcal{O},\beta}(E)\)-embedding lifting \(\imath\). A **partial \(T^{\flat}\)-lift of the differential residue field \(k\)** is a \(T^{\flat}\)-submodel \(E\subseteq K\) which is contained in \(\mathcal{O}\). Equivalently, a partial \(T^{\flat}\)-lift is a partial \(T\)-lift which is closed under \(\partial\). If \(E\) is a partial \(T^{\flat}\)-lift of \(k\), then the residue map induces an \(\mathcal{L}^{\flat}\)-embedding \(E\to k\). If \(\operatorname{res}(E)=k\), then \(E\) is called a \(T^{\flat}\)**-lift of \(k\)**. **Corollary 3.11**.: _Suppose that \(K\) is \(T^{\flat}\)-henselian. Then any partial \(T^{\flat}\)-lift of \(k\) can be extended to a \(T^{\flat}\)-lift of \(k\)._ Proof.: Let \(E\) be a partial \(T^{\flat}\)-lift of \(k\) and let \(a\in k\setminus\operatorname{res}(E)\). Proposition 3.10 gives us \(b\in\mathcal{O}\) with \(\operatorname{res}(b)=a\) such that \(E\langle\!\langle b\rangle\!\rangle\) is a partial \(T^{\flat}\)-lift of \(k\). The corollary follows by Zorn's lemma. Note that the prime model of \(T\) (identified with \(\operatorname{dcl}_{\mathcal{L}}(\emptyset)\subseteq K\) and equipped with the trivial derivation) is a partial \(T^{\flat}\)-lift of \(k\), so Corollary 3.11 has the following consequence: **Theorem 3.12**.: _Suppose that \(K\) is \(T^{\flat}\)-henselian. Then \(k\) admits a \(T^{\flat}\)-lift._ ### Spherically complete implies \(T^{\flat}\)-henselian In this subsection we assume that the derivation on \(k\) is nontrivial. To establish the result claimed in the subsection heading, under the assumption that \(k\) is linearly surjective, we work as usual with pseudocauchy sequences, which we abbreviate as _pc-sequences_; see [1, SS2.2 and SS3.2] for definitions and basic facts about them. In particular, recall that \(K\) is spherically complete if and only if every pc-sequence in \(K\) has a pseudolimit in \(K\), and that if \(L\) is an immediate \(T^{\mathcal{O},\flat}\)-extension of \(K\), then every element of \(L\) is the pseudolimit of a divergent pc-sequence in \(K\) (i.e., a pc-sequence in \(K\) that has no pseudolimit in \(K\)). We are interested in the behavior of \(\mathcal{L}(K)\)-definable functions along pc-sequences, which will allow us to study immediate \(T^{\mathcal{O},\flat}\)-extensions of \(K\). **Lemma 3.13**.: _Suppose that \(k\) is linearly surjective, \((F,A,a,\gamma)\) is in \(T^{\flat}\)-hensel configuration, and \(F(\beta_{\delta}^{r}a)\neq 0\). Then there is \(b\in B(a,\gamma)\) such that \((F,A,b,\gamma)\) is in \(T^{\flat}\)-hensel configuration, \(vA_{\times(b-a)}=vF(\beta_{\delta}^{r}a)\), and \(F(\beta_{\delta}^{r}b)\prec F(\beta_{\delta}^{r}a)\). Moreover, if \(b^{*}\) is any element of \(K\) with \(b^{*}-a\sim b-a\), then \((F,A,b^{*},\gamma)\) is in \(T^{\flat}\)-hensel configuration, \(vA_{\times(b^{*}-a)}=vF(\beta_{\delta}^{r}a)\), and \(F(\beta_{\delta}^{r}b^{*})\prec F(\beta_{\delta}^{r}a)\)._ Proof.: Let \(\alpha\coloneqq vF(\beta_{\delta}^{r}a)>v_{A}(\gamma)\) and take \(g\in K\) with \(v_{A}(vg)=\alpha\), so \(vg>\gamma\). 
Then for \(y\in\mathcal{O}^{\times}\), we have \(a+gy\in B(a,\gamma)\) and \[F\big{(}\beta_{\delta}^{r}(a+gy)\big{)}\ =\ F(\beta_{\delta}^{r}a)+A(gy)+\varepsilon,\] where \(v\varepsilon>vA_{\times gy}=v_{A}(vg)=\alpha\). Let \(D\) be the linear differential operator \(F(\beta_{\delta}^{r}a)^{-1}A_{\times g}\), so \(D\asymp 1\). Since \(k\) is linearly surjective, we can find \(u\in\mathcal{O}^{\times}\) with \(1+D(u)\prec 1\). Let \(b\coloneqq a+gu\), so \[F(\beta_{\delta}^{r}b)\ =\ F(\beta_{\delta}^{r}a)+A(gu)+\varepsilon\ =\ F(\beta_{\delta}^{r}a)\big{(}1+D(u)\big{)}+ \varepsilon\ \prec\ F(\beta_{\delta}^{r}a).\] Since \(b-a\asymp g\), we have \(vA_{\times(b-a)}=v_{A}(vg)=\alpha\). Since \(B(b,\gamma)=B(a,\gamma)\) and \(vF(\beta_{\delta}^{r}b)>vF(\beta_{\delta}^{r}a)>v_{A}(\gamma)\), the tuple \((F,A,b,\gamma)\) is in \(T^{\flat}\)-hensel configuration. Now, let \(b^{*}\in K\) with \(b^{*}-a\sim b-a=gu\). Then we have \(u^{*}\in K\) with \(b^{*}=a+gu^{*}\) and \(u^{*}\sim u\). In particular, \(1+D(u^{*})\prec 1\), so the same argument as above gives that \((F,A,b^{*},\gamma)\) is in \(T^{\flat}\)-hensel configuration, \(vA_{\times(b^{*}-a)}=vF(\beta_{\delta}^{r}a)\), and \(F(\beta_{\delta}^{r}b^{*})\prec F(\beta_{\delta}^{r}a)\). **Lemma 3.14**.: _Suppose that \(k\) is linearly surjective, \((F,A,a,\gamma)\) is in \(T^{\flat}\)-hensel configuration, and \(F(\beta_{\delta}^{r}b)\neq 0\) for all \(b\in B(a,\gamma)\) with \(vA_{\times(b-a)}\geqslant vF(\beta_{\delta}^{r}a)\). Then there is a divergent pc-sequence \((a_{\rho})\) in \(K\) such that \(F(\beta_{\delta}^{r}a_{\rho})\rightsquigarrow 0\)._ Proof.: Suppose we have a nonzero ordinal \(\lambda\) and a sequence \((a_{\rho})_{\rho<\lambda}\) in \(B(a,\gamma)\) satisfying 1. \(a_{0}=a\) and \((F,A,a_{\rho},\gamma)\) is in \(T^{\natural}\)-hensel configuration for each \(\rho<\lambda\); 2. \(v_{A}(\gamma_{\rho})=vF(\mathscr{F}_{\circ}^{x}a_{\rho})\) whenever \(\rho+1<\lambda\), where \(\gamma_{\rho}\coloneqq v(a_{\rho+1}-a_{\rho})\); 3. \(vF(\mathscr{F}_{\circ}^{x}a_{\rho})\) is strictly increasing as a function of \(\rho\), for \(\rho<\lambda\). Such a sequence exists when \(\lambda=1\), so it suffices to extend \((a_{\rho})_{\rho<\lambda}\) to a sequence \((a_{\rho})_{\rho<\lambda+1}\) in \(B(a,\gamma)\) satisfying (1)-(3) with \(\lambda+1\) in place of \(\lambda\). If \(\lambda=\mu+1\) is a successor ordinal, then we use Lemma 3.13 to find \(a_{\lambda}\in B(a,\gamma)\) such that \((F,A,a_{\lambda},\gamma)\) is in \(T^{\natural}\)-hensel configuration, \(vA_{\times(a_{\lambda}-a_{\mu})}=vF(\mathscr{F}_{\circ}^{x}a_{\mu})\), and \(F(\mathscr{F}_{\circ}^{x}a_{\lambda})\prec F(\mathscr{F}_{\circ}^{x}a_{\mu})\). This extended sequence \((a_{\rho})_{\rho<\lambda+1}\) satisfies conditions (1)-(3). Suppose that \(\lambda\) is a limit ordinal. Then \(v_{A}(\gamma_{\rho})\) is strictly increasing as a function of \(\rho\), so \(\gamma_{\rho}\) is also strictly increasing by [1, Lemma 4.5.1(iii)]. Hence, \((a_{\rho})_{\rho<\lambda}\) is a pc-sequence with \(F(\mathscr{F}_{\circ}^{x}a_{\rho})\leadsto 0\). If \((a_{\rho})_{\rho<\lambda}\) is divergent, then we are done. Otherwise, let \(a_{\lambda}\) be any pseudolimit of \((a_{\rho})_{\rho<\lambda}\) in \(K\). 
Since \(a_{\lambda}-a_{\rho}\sim a_{\rho+1}-a_{\rho}\) for each \(\rho<\lambda\), Lemma 3.13 gives that \((F,A,a_{\lambda},\gamma)\) is in \(T^{\natural}\)-hensel configuration and that \(F(\mathscr{F}_{\circ}^{x}a_{\lambda})\prec F(\mathscr{F}_{\circ}^{x}a_{\rho})\) for each \(\rho<\lambda\). Thus, the extended sequence \((a_{\rho})_{\rho<\lambda+1}\) satisfies (1)-(3). **Corollary 3.15**.: _If \(\boldsymbol{k}\) is linearly surjective and \(K\) is spherically complete, then \(K\) is \(T^{\natural}\)-henselian._ In fact, the divergent pc-sequence we build in Lemma 3.14 is of a specific form analogous to "differential-algebraic type" in valued differential fields with small derivation (see [1, Sections 4.4 and 6.9]). To introduce this notion and refine Corollary 3.15, let \((a_{\rho})\) be a divergent pc-sequence in \(K\), and let \(\ell\) be a pseudolimit of \((a_{\rho})\) in some \(T^{\mathcal{O},\vartheta}\)-extension \(L\) of \(K\). Then \(v(\ell-K)\subseteq\Gamma\) has no greatest element, and we say that a property holds **for all \(y\in K\) sufficiently close to \(\ell\)** if there exists \(\gamma\in v(\ell-K)\) such that the property holds for all \(y\in K\) with \(v(\ell-y)>\gamma\). Under the assumptions that in this paper \(K\) has small derivation and in this section \(\boldsymbol{k}\) has nontrivial derivation, the definition of vanishing from [17] simplifies as follows. We say that \(F\)**vanishes at \((K,\ell)\)** if whenever \(a\in K\) and \(d\in K^{\times}\) satisfy \(\ell-a\prec d\), we have \(I_{F_{+a,\times d}}(\mathscr{F}_{\circ}^{-1}y)\prec 1\) for all \(y\in K\) sufficiently close to \(d^{-1}(\ell-a)\). Let \(Z_{r}(K,\ell)\) be the set of all \(F\) of arity \(1+r\) that vanish at \((K,\ell)\) and \(Z(K,\ell)\coloneqq\bigcup_{r}Z_{r}(K,\ell)\); we have \(Z_{0}(K,\ell)=\emptyset\) by [17, Lemma 5.2]. We say that \((a_{\rho})\) is of \(T^{\natural}\)**-algebraic type over \(K\)** if \(Z(K,\ell)\neq\emptyset\). Otherwise, \((a_{\rho})\) is said to be of \(T^{\natural}\)**-transcendental type over \(K\)**. Note that \(Z(K,\ell)\) does not depend on \(\ell\) or even \((a_{\rho})\), only on \(v(\ell-K)\subseteq\Gamma\). Thus, if \((a_{\rho})\) is of \(T^{\natural}\)-algebraic type over \(K\), then so is \((b_{\sigma})\) for any pc-sequence \((b_{\sigma})\) in \(K\) equivalent to \((a_{\rho})\) (recall the various characterizations of equivalence of pc-sequences in [1, Lemma 2.2.17]). If \((a_{\rho})\) is of \(T^{\natural}\)-algebraic type over \(K\) and \(r\) is minimal with \(Z_{r}(K,\ell)\neq\emptyset\), then any member of \(Z_{r}(K,\ell)\) is called a **minimal \(\mathcal{L}^{\natural}\)-function** of \((a_{\rho})\) over \(K\). We use the following two propositions to construct immediate extensions. **Fact 3.16**.: _[_17_, Proposition 6.1]_ _Suppose that \((a_{\rho})\) is of \(T^{\natural}\)-transcendental type over \(K\). Then \(\ell\) is \(T^{\natural}\)-transcendental over \(K\) and \(K\langle\!\langle\ell\rangle\!\rangle\) is an immediate \(T^{\mathcal{O},\vartheta}\)-extension of \(K\). If \(b\) is a pseudolimit of \((a_{\rho})\) in a \(T^{\mathcal{O},\vartheta}\)-extension \(M\) of \(K\), then there is a unique \(\mathcal{L}^{\mathcal{O},\vartheta}(K)\)-embedding \(K\langle\!\langle\ell\rangle\!\rangle\to M\) sending \(\ell\) to \(b\)._ **Fact 3.17**.: _[_17_, Proposition 6.2]_ _Suppose that \((a_{\rho})\) is of \(T^{\natural}\)-algebraic type over \(K\), and let \(F\) be a minimal \(\mathcal{L}^{\flat}\)-function of \((a_{\rho})\) over \(K\). 
Then \(K\) has an immediate \(T^{\mathcal{O},\vartheta}\)-extension \(K\langle\!\langle a\rangle\!\rangle\) with \(F(\mathscr{F}_{\circ}^{x}a)=0\) and \(a_{\rho}\leadsto a\). If \(b\) is a pseudolimit of \((a_{\rho})\) in a \(T^{\mathcal{O},\vartheta}\)-extension \(M\) of \(K\) with \(F(\mathscr{F}_{\circ}^{x}b)=0\), then there is a unique \(\mathcal{L}^{\mathcal{O},\vartheta}(K)\)-embedding \(K\langle\!\langle a\rangle\!\rangle\to M\) sending \(a\) to \(b\)._ Let us connect pc-sequences of \(T^{\natural}\)-algebraic type to \(T^{\natural}\)-algebraic \(T^{\mathcal{O},\vartheta}\)-extensions. We say that \(K\) is \(T^{\natural}\)**-algebraically maximal** if \(K\) has no proper immediate \(T^{\mathcal{O},\vartheta}\)-extension that is \(T^{\natural}\)-algebraic over \(K\). **Lemma 3.18**.: _The following are equivalent:_ 1. \(K\) _is_ \(T^{\flat}\)_-algebraically maximal;_ 2. _every pc-sequence of_ \(T^{\flat}\)_-algebraic type over_ \(K\) _has a pseudolimit in_ \(K\)_._ Proof.: If \((a_{\rho})\) is of \(T^{\flat}\)-algebraic type over \(K\), then Fact 3.17 provides a proper immediate \(T^{\mathcal{O},\vartheta}\)-extension of \(K\) that is \(T^{\flat}\)-algebraic over \(K\). Conversely, if \(a\) is an element in a proper immediate \(T^{\mathcal{O},\vartheta}\)-extension that is \(T^{\flat}\)-algebraic over \(K\), then any divergent pc-sequence in \(K\) with pseudolimit \(a\) is necessarily of \(T^{\flat}\)-algebraic type over \(K\) by Fact 3.16. Now we show that the divergent pc-sequence constructed in Lemma 3.14 is of \(T^{\flat}\)-algebraic type over \(K\). **Lemma 3.19**.: _If \(Z_{q}(K,\ell)=\emptyset\) for all \(q<r\) and \(F(\mathscr{J}_{\delta}^{r}a_{\rho})\rightsquigarrow 0\), then \(F\in Z_{r}(K,\ell)\). In particular, if \(F(\mathscr{J}_{\delta}^{r}a_{\rho})\rightsquigarrow 0\), then \(Z(K,\ell)\neq\emptyset\)._ Proof.: Suppose that \(Z_{q}(K,\ell)=\emptyset\) for all \(q<r\) and \(F\not\in Z_{r}(K,\ell)\). By [17, Proposition 5.4], \(F(\mathscr{J}_{\delta}^{r}\ell)\neq 0\) and \(F(\mathscr{J}_{\delta}^{r}a_{\rho})\sim F(\mathscr{J}_{\delta}^{r}\ell)\) for all sufficiently large \(\rho\), and thus \(F(\mathscr{J}_{\delta}^{r}a_{\rho})\not\rightsquigarrow 0\). In light of Lemmas 3.18 and 3.19, Lemma 3.14 yields the following refinement of Corollary 3.15. **Corollary 3.20**.: _If \(\mathbf{k}\) is linearly surjective and \(K\) is \(T^{\natural}\)-algebraically maximal, then \(K\) is \(T^{\natural}\)-henselian._ ### More on behavior along pc-sequences In this subsection, we prove some technical lemmas regarding behavior along pc-sequences. For the remainder of this subsection, let \((a_{\rho})\) be a divergent pc-sequence in \(K\), let \(L\) be a \(T^{\mathcal{O},\natural}\)-extension of \(K\), and let \(\ell\in L\) be a pseudolimit of \((a_{\rho})\). Set \(\gamma_{\rho}\coloneqq v(a_{\rho+1}-a_{\rho})\), and set \(B_{\rho}\coloneqq B(a_{\rho+1},\gamma_{\rho})\). In this context, _eventually_ means for all sufficiently large \(\rho\). The next lemma is an analogue of [1, Lemma 6.8.1]. **Lemma 3.21**.: _Suppose that \(A\) linearly approximates \(F\) on \(B_{\rho}^{L}\), eventually. 
Then there is a pc-sequence \((b_{\rho})\) in \(K\) equivalent to \((a_{\rho})\) such that \(F(\mathscr{J}_{\delta}^{r}b_{\rho})\rightsquigarrow F(\mathscr{J}_{\delta}^{r }\ell)\)._ Proof.: By removing some initial terms of the sequence, we can assume that for all \(\rho\), \(A\) linearly approximates \(F\) on \(B_{\rho}^{L}\) and \(\gamma_{\rho}=v(\ell-a_{\rho})\), and also that \(\gamma_{\rho}\) is strictly increasing as a function of \(\rho\). Take \(g_{\rho}\in K\) and \(y_{\rho}\in\mathcal{O}_{L}^{\times}\) as in the proof of [1, Lemma 6.8.1] (with \(\ell\), \(F\), and \(A\) replacing \(a\), \(G\), and \(P\), respectively) so that \(vg_{\rho}=\gamma_{\rho}\), \(b_{\rho}\coloneqq\ell+g_{\rho}y_{\rho}\in K\), and \(vA(g_{\rho}y_{\rho})=v_{A}(\gamma_{\rho})\). Then \((b_{\rho})\) is a pc-sequence equivalent to \((a_{\rho})\) and \[F(\mathscr{J}_{\delta}^{r}b_{\rho})-F(\mathscr{J}_{\delta}^{r}\ell)\ =\ A(g_{\rho}y_{\rho})+ \varepsilon_{\rho},\] where \(\varepsilon_{\rho}\in L\) with \(v\varepsilon_{\rho}>vA_{\times g_{\rho}y_{\rho}}=v_{A}(\gamma_{\rho})\). Since \(\gamma_{\rho}\) is strictly increasing and \(vA(g_{\rho}y_{\rho})=v_{A}(\gamma_{\rho})\), we have \(F(\mathscr{J}_{\delta}^{r}b_{\rho})\rightsquigarrow F(\mathscr{J}_{\delta}^{r }\ell)\), as desired. We say that \((F,A,(a_{\rho}))\) is in \(T^{\natural}\)**-hensel configuration** if there is an index \(\rho_{0}\) such that \((F,A,a_{\rho^{\prime}},\gamma_{\rho})\) is in \(T^{\natural}\)-hensel configuration for all \(\rho^{\prime}>\rho\geqslant\rho_{0}\). Tacitly, this is relative to \(K\). We say that \((F,A,(a_{\rho}))\) is in \(T^{\natural}\)**-hensel configuration in \(L\)** if when we identify \(F\) with \(F^{L}\colon L^{1+r}\to L\) and \(A\) with the obvious element of \(L[\partial]\), \((F,A,(a_{\rho}))\) is in \(T^{\natural}\)-hensel configuration relative to \(L\). Modulo passing to cofinal subsequences, being in \(T^{\natural}\)-hensel configuration is preserved by equivalence of pc-sequences: **Lemma 3.22**.: _Suppose that \((F,A,(a_{\rho}))\) is in \(T^{\natural}\)-hensel configuration in \(L\) and let \((b_{\sigma})\) be a pc-sequence in \(K\) equivalent to \((a_{\rho})\). There exists a cofinal subsequence \((b_{\lambda})\) of \((b_{\sigma})\) such that \((F,A,(b_{\lambda}))\) is in \(T^{\natural}\)-hensel configuration in \(L\)._ Proof.: By discarding some initial terms, we can assume that \(\gamma_{\rho}\) and \(\delta_{\sigma}\coloneqq v(b_{\sigma+1}-b_{\sigma})\) are strictly increasing. First, take \(\rho_{0}\) and \(\sigma_{0}\) large enough that \((F,A,a_{\rho^{\prime}},\gamma_{\rho})\) is in \(T^{\natural}\)-hensel configuration in \(L\) for all \(\rho^{\prime}>\rho\geqslant\rho_{0}\) and \(v(a_{\rho}-b_{\sigma})>\gamma_{\rho_{0}}\) for all \(\rho>\rho_{0}\) and \(\sigma>\sigma_{0}\). By increasing \(\sigma_{0}\), we can arrange that \(\delta_{\sigma_{0}}\geqslant\gamma_{\rho_{0}}\). Then for any \(\rho>\rho_{0}\) and \(\sigma>\sigma_{0}\), \(A\) linearly approximates \(F\) on \(B_{\rho_{0}}^{L}\supseteq B_{\sigma_{0}}^{L}\). Second, take \(\sigma_{1}>\sigma_{0}\) and \(\rho_{1}>\rho_{0}\) sufficiently large that \(v(b_{\sigma}-a_{\rho})>\delta_{\sigma_{1}}\) for all \(\sigma>\sigma_{1}\) and \(\rho>\rho_{1}\). Let \(\sigma>\sigma_{1}\). To show that \((F,A,b_{\sigma},\delta_{\sigma_{1}})\) is in \(T^{\natural}\)-hensel configuration in \(L\), suppose towards a contradiction that \(vF(\mathscr{J}_{\delta}^{r}b_{\sigma})\leqslant v_{A}(\delta_{\sigma_{1}})\). 
By increasing \(\rho_{1}\), we can assume that \(\gamma_{\rho_{1}}>\delta_{\sigma_{1}}\). Then for \(\rho>\rho_{1}\) we have \(vF(\mathscr{J}_{\delta}^{r}a_{\rho})>v_{A}(\gamma_{\rho_{1}})>v_{A}(\delta_{ \sigma_{1}})\) and \(vA(b_{\sigma}-a_{\rho})\geqslant vA_{\times(b_{\sigma}-a_{\rho})}>v_{A}(\delta_{ \sigma_{1}})\), and thus \[v\big{(}F(\mathscr{J}_{\delta}^{r}b_{\sigma})-F(\mathscr{J}_{\delta}^{r}a_{\rho })-A(b_{\sigma}-a_{\rho})\big{)}\ =\ vF(\mathscr{J}_{\delta}^{r}b_{\sigma})\ \leqslant\ v_{A}(\delta_{\sigma_{1}})\ <\ vA_{\times(b_{\sigma}-a_{\rho})},\] contradicting that \(A\) linearly approximates \(F\) on \(B_{\rho_{0}}^{L}\). Iterating this second step yields a cofinal subsequence of \((b_{\sigma})\) with the desired property. If \((F,A,(a_{\rho}))\) is in \(T^{\natural}\)-hensel configuration in \(L\), then by definition, \(A\) linearly approximates \(F\) on \(B_{\rho}^{L}\), eventually. The converse holds, under the assumption that \(F(\mathscr{J}_{\delta}^{r}a_{\rho})\rightsquigarrow 0\). **Lemma 3.23**.: _Suppose that \(F(\mathscr{J}_{\delta}^{r}a_{\rho})\rightsquigarrow 0\) and that \(A\) linearly approximates \(F\) on \(B_{\rho}^{L}\), eventually. Then \((F,A,(a_{\rho}))\) is in \(T^{\natural}\)-hensel configuration in \(L\)._ Proof.: Take \(\rho_{0}\) such that \(A\) linearly approximates \(F\) on \(B_{\rho_{0}}^{L}\). By increasing \(\rho_{0}\), we may assume that \(vF(\mathscr{F}_{\partial}^{\tau}a_{\rho})\) is strictly increasing for \(\rho>\rho_{0}\). Let \(\rho^{\prime}>\rho>\rho_{0}\) be given. We need to show that \((F,A,a_{\rho^{\prime}},\gamma_{\rho})\) is in \(T^{\mathfrak{d}}\)-hensel configuration in \(L\). Since \(A\) linearly approximates \(F\) on \(B_{\rho}^{L}\), it suffices to show that \(vF(\mathscr{F}_{\partial}^{\tau}a_{\rho^{\prime}})>v_{A}(\gamma_{\rho})\). Since \(a_{\rho^{\prime}}\) and \(a_{\rho}\) are in \(B_{\rho_{0}}\), we have \[v\big{(}F(\mathscr{F}_{\partial}^{\tau}a_{\rho^{\prime}})-F(\mathscr{F}_{ \partial}^{\tau}a_{\rho})-A(a_{\rho^{\prime}}-a_{\rho})\big{)}\ >\ vA_{\times(a_{\rho^{\prime}}-a_{\rho})}\ =\ v_{A}(\gamma_{\rho}).\] Since \(vA(a_{\rho^{\prime}}-a_{\rho})\geqslant v_{A}(\gamma_{\rho})\), we must have \(v\big{(}F(\mathscr{F}_{\partial}^{\tau}a_{\rho^{\prime}})-F(\mathscr{F}_{ \partial}^{\tau}a_{\rho})\big{)}\geqslant v_{A}(\gamma_{\rho})\) as well. This gives \(vF(\mathscr{F}_{\partial}^{\tau}a_{\rho})\geqslant v_{A}(\gamma_{\rho})\), since \(F(\mathscr{F}_{\partial}^{\tau}a_{\rho^{\prime}})\prec F(\mathscr{F}_{ \partial}^{\tau}a_{\rho})\), so \(vF(\mathscr{F}_{\partial}^{\tau}a_{\rho})>v_{A}(\gamma_{\rho})\), as desired. The next lemma shows that \(T^{\mathfrak{d}}\)-hensel configuration in the presence of \(T^{\mathfrak{d}}\)-henselianity allows us to find a pseudolimit that is also a zero of the definable function, a key step towards our results. **Lemma 3.24**.: _Suppose that \(L\) is \(T^{\mathfrak{d}}\)-henselian and that \((F,A,(a_{\rho}))\) is in \(T^{\mathfrak{d}}\)-hensel configuration in \(L\). Then there exists \(b\in L\) such that \(a_{\rho}\rightsquigarrow b\) and \(F(\mathscr{F}_{\partial}^{\tau}b)=0\)._ Proof.: Suppose that \((F,A,a_{\rho^{\prime}},\gamma_{\rho})\) is in \(T^{\mathfrak{d}}\)-hensel configuration in \(L\) for all \(\rho^{\prime}>\rho\geqslant\rho_{0}\). By increasing \(\rho_{0}\), we can assume that \(v(\ell-a_{\rho})=\gamma_{\rho}\) for all \(\rho>\rho_{0}\) and \(\gamma_{\rho}\) is strictly increasing as a function of \(\rho>\rho_{0}\). 
For \(\rho>\rho_{0}\), an argument similar to one in the previous proof shows that \((F,A,\ell,\gamma_{\rho})\) is in \(T^{\mathfrak{d}}\)-hensel configuration in \(L\). Hence \(T^{\mathfrak{d}}\)-henselianity yields \(b\in L\) with \(F(\mathscr{F}_{\partial}^{\tau}b)=0\) and \(vA_{\times(b-\ell)}\geqslant vF(\mathscr{F}_{\partial}^{\tau}\ell)\). From \(vF(\mathscr{F}_{\partial}^{\tau})>v_{A}(\gamma_{\rho})\) for all \(\rho>\rho_{0}\), we get \(v(b-\ell)>\gamma_{\rho}\) for all \(\rho>\rho_{0}\), and thus \(a_{\rho}\rightsquigarrow b\). ### Uniqueness and the \(T^{\mathfrak{d}}\)-hensel configuration property In this subsection, we establish the key property needed for our theorem on the uniqueness of spherically complete immediate \(T^{\mathcal{O},\mathfrak{d}}\)-extensions of monotone \(T^{\mathcal{O},\mathfrak{d}}\)-models. We say that \(K\) has the \(T^{\mathfrak{d}}\)**-hensel configuration property** if whenever we have 1. a divergent pc-sequence \((a_{\rho})\) in \(K\) of \(T^{\mathfrak{d}}\)-algebraic type over \(K\), 2. a minimal \(\mathcal{L}^{\mathfrak{d}}\)-function \(F\) of \((a_{\rho})\) over \(K\), and 3. an immediate \(T^{\mathcal{O},\mathfrak{d}}\)-extension \(L\) of \(K\) containing a pseudolimit of \((a_{\rho})\), there is an \(A\in K[\mathfrak{d}]^{\neq}\) that linearly approximates \(F\) on \(B(a_{\rho+1},\gamma_{\rho})^{L}\) for all sufficiently large \(\rho\), where \(\gamma_{\rho}\coloneqq v(a_{\rho+1}-a_{\rho})\). The \(T^{\mathfrak{d}}\)-hensel configuration property is an analogue of the differential-henselian configuration property for valued differential fields with small derivation that was implicitly used in [1, Chapter 7] and explicitly introduced in [9]. In this subsection, we will show how uniqueness follows from the \(T^{\mathfrak{d}}\)-hensel configuration property, without assuming monotonicity. Then we show in Section 4.1 that monotone \(T^{\mathcal{O},\mathfrak{d}}\)-models have the \(T^{\mathfrak{d}}\)-hensel configuration property. First, we use the \(T^{\mathfrak{d}}\)-hensel configuration property to give an alternative description of minimal \(\mathcal{L}^{\mathfrak{d}}\)-functions. **Lemma 3.25**.: _Suppose that \(K\) has the \(T^{\mathfrak{d}}\)-hensel configuration property, and let \((a_{\rho})\) be a divergent pc-sequence in \(K\). Then the following are equivalent:_ 1. \((a_{\rho})\) _is of_ \(T^{\mathfrak{d}}\)_-algebraic type over_ \(K\) _and_ \(F\) _is a minimal_ \(\mathcal{L}^{\mathfrak{d}}\)_-function for_ \((a_{\rho})\) _over_ \(K\)_._ 2. \(F(\mathscr{F}_{\partial}^{\tau}b_{\sigma})\rightsquigarrow 0\) _for some pc-sequence_ \((b_{\sigma})\) _in_ \(K\) _equivalent to_ \((a_{\rho})\)_, and_ \(G(\mathscr{F}_{\partial}^{\mathfrak{d}}b_{\sigma})\not\rightsquigarrow 0\) _for_ \(q<r\)_, every_ \(\mathcal{L}(K)\)_-definable function_ \(G\colon K^{1+q}\to K\) _in implicit form, and every pc-sequence_ \((b_{\sigma})\) _in_ \(K\) _equivalent to_ \((a_{\rho})\)_._ Proof.: Suppose that \(F\) is a minimal \(\mathcal{L}^{\mathfrak{d}}\)-function for \((a_{\rho})\) over \(K\). Fact 3.17 yields an immediate \(T^{\mathcal{O},\mathfrak{d}}\)-extension \(K\langle\!\langle a\rangle\!\rangle\) of \(K\) such that \(a_{\rho}\rightsquigarrow a\) and \(F(\mathscr{F}_{\partial}^{\tau}a)=0\). As \(K\) has the \(T^{\mathfrak{d}}\)-hensel configuration property, Lemma 3.21 provides a pc-sequence \((b_{\rho})\) in \(K\) equivalent to \((a_{\rho})\) such that \(F(\mathscr{F}_{\partial}^{\tau}b_{\rho})\rightsquigarrow 0\). 
For each \(q<r\), we have \(Z_{q}(K,a)=\emptyset\), so \(G(\mathscr{G}_{\partial}^{\tau}b_{\sigma})\not\rightsquigarrow 0\) for any \(\mathcal{L}(K)\)-definable function \(G\colon K^{1+q}\to K\) in implicit form and any pc-sequence \((b_{\sigma})\) in \(K\) equivalent to \((a_{\rho})\) by Lemma 3.19. Now suppose that \(F\) is not a minimal \(\mathcal{L}^{\mathfrak{d}}\)-function for \((a_{\rho})\) over \(K\), and fix a pseudolimit \(\ell\) of \((a_{\rho})\) in some \(T^{\mathcal{O},\mathfrak{d}}\)-extension of \(K\). If \(G\in Z_{q}(K,\ell)\) is a minimal \(\mathcal{L}^{\mathfrak{d}}\)-function for \((a_{\rho})\) over \(K\) for some \(q<r\), then we have \(G(\mathscr{G}_{\partial}^{\mathfrak{d}}b_{\sigma})\rightsquigarrow 0\) for some pc-sequence \((b_{\sigma})\) in \(K\) equivalent to \((a_{\rho})\) by the first part of this proof. If \(Z_{q}(K,\ell)=\emptyset\) for all \(q<r\), then \(F\not\in Z_{r}(K,\ell)\), so \(F(\mathscr{G}_{\partial}^{\tau}b_{\sigma})\not\rightsquigarrow 0\) for any pc-sequence \((b_{\sigma})\) in \(K\) equivalent to \((a_{\rho})\) by Lemma 3.19. **Assumption 3.26**.: _For the rest of this section, suppose that every immediate \(T^{\mathcal{O},\mathfrak{d}}\)-extension of \(K\) has the \(T^{\mathfrak{d}}\)-hensel configuration property._ **Theorem 3.27**.: _Suppose that \(\boldsymbol{k}\) is linearly surjective. Any two spherically complete immediate \(T^{\mathcal{O},\beta}\)-extensions of \(K\) are \(\mathcal{L}^{\mathcal{O},\beta}\)-isomorphic over \(K\). Any two \(T^{\flat}\)-algebraically maximal immediate \(T^{\mathcal{O},\beta}\)-extensions of \(K\) that are \(T^{\flat}\)-algebraic over \(K\) are \(\mathcal{L}^{\mathcal{O},\beta}\)-isomorphic over \(K\)._ Proof.: Let \(L_{0}\) and \(L_{1}\) be spherically complete immediate \(T^{\mathcal{O},\beta}\)-extensions of \(K\). Let \(\mu\colon K_{0}\to K_{1}\) be a maximal \(\mathcal{L}^{\mathcal{O},\beta}\)-isomorphism between \(T^{\mathcal{O},\beta}\)-extensions \(K_{0}\subseteq L_{0}\) and \(K_{1}\subseteq L_{1}\) of \(K\). For convenience, we identify \(K_{0}\) and \(K_{1}\) via \(\mu\), so \(\mu\) becomes the identity, and assume moreover that \(K_{0}=K_{1}=K\). Suppose that \(K\neq L_{0}\) (equivalently, \(K\neq L_{1}\)), so we have \(\ell\in L_{0}\setminus K\) and a divergent pc-sequence \((a_{\rho})\) in \(K\) with \(a_{\rho}\rightsquigarrow\ell\). If \((a_{\rho})\) is \(T^{\flat}\)-transcendental, then we can take \(b\in L_{1}\) with \(a_{\rho}\rightsquigarrow b\) and extend \(\mu\) to an \(\mathcal{L}^{\mathcal{O},\beta}\)-isomorphism sending \(\ell\) to \(b\) by Fact 3.16, contradicting the maximality of \(\mu\). Now suppose that \((a_{\rho})\) is \(T^{\flat}\)-algebraic, and let \(F\) be a minimal \(\mathcal{L}^{\flat}\)-function of \((a_{\rho})\) over \(K\). Using Lemma 3.25, we replace \((a_{\rho})\) by an equivalent pc-sequence in \(K\) to arrange that \(F(\mathscr{J}_{\partial}^{\tau}a_{\rho})\rightsquigarrow 0\). By assumption \(K\) has the \(T^{\flat}\)-hensel configuration property, so by Lemma 3.23, we have \(A_{0},A_{1}\in K[\partial]\) such that \((F,A_{0},(a_{\rho}))\) is in \(T^{\flat}\)-hensel configuration in \(L_{0}\) and \((F,A_{1},(a_{\rho}))\) is in \(T^{\flat}\)-hensel configuration in \(L_{1}\). By Corollary 3.15 and Lemma 3.24, we have \(b_{0}\in L_{0}\setminus K\) and \(b_{1}\in L_{1}\setminus K\) such that \(a_{\rho}\rightsquigarrow b_{0}\), \(a_{\rho}\rightsquigarrow b_{1}\), \(F(\mathscr{J}_{\partial}^{\tau}b_{0})=0\), and \(F(\mathscr{J}_{\partial}^{\tau}b_{1})=0\). 
Then Fact 3.17 yields an extension of \(\mu\) to an \(\mathcal{L}^{\mathcal{O},\partial}\)-isomorphism \(K\langle\!\langle b_{0}\rangle\!\rangle\to K\langle\!\langle b_{1}\rangle\!\rangle\), contradicting the maximality of \(\mu\). The proof of the second statement is similar, except that it uses Lemma 3.18 and that Corollary 3.20 replaces Corollary 3.15.

In the case of few constants, we have two additional results and an easy corollary. The first is a converse to Corollary 3.20, and it does not need the assumption that proper immediate \(T^{\mathcal{O},\partial}\)-extensions of \(K\) have the \(T^{\partial}\)-hensel configuration property, only that \(K\) itself does.

**Theorem 3.28**.: _Suppose that \(K\) is \(T^{\partial}\)-henselian and \(C\subseteq\mathcal{O}\). Then \(K\) is \(T^{\partial}\)-algebraically maximal._

Proof.: Let \((a_{\rho})\) be a pc-sequence in \(K\) of \(T^{\partial}\)-algebraic type over \(K\) with minimal \(\mathcal{L}^{\partial}\)-function \(F\) over \(K\). By Lemma 3.18, it suffices to show that \((a_{\rho})\) has a pseudolimit in \(K\). Assume towards a contradiction that \((a_{\rho})\) is divergent, and let \(a\) be a pseudolimit of \((a_{\rho})\) in an immediate \(T^{\mathcal{O},\partial}\)-extension of \(K\), as provided by Fact 3.17. We may replace \((a_{\rho})\) by an equivalent pc-sequence in \(K\) to arrange that \(F(\mathcal{J}_{\partial}^{r}a_{\rho})\rightsquigarrow 0\) by Lemma 3.25. By assumption, \(K\) has the \(T^{\partial}\)-hensel configuration property, so by Lemma 3.23 we have \(A\in K[\partial]^{\neq}\) of order \(q\) such that \((F,A,(a_{\rho}))\) is in \(T^{\partial}\)-hensel configuration in \(K\). By removing some initial terms of the sequence, we arrange that \(\gamma_{\rho}\coloneqq v(a_{\rho+1}-a_{\rho})\) is strictly increasing as a function of \(\rho\), that \(\gamma_{\rho}=v(a-a_{\rho})\), and that \((F,A,a_{\rho^{\prime}},\gamma_{\rho})\) is in \(T^{\partial}\)-hensel configuration for all \(\rho^{\prime}>\rho\). By \(T^{\partial}\)-henselianity, take for each \(\rho\) a \(z_{\rho}\in B(a_{\rho+1},\gamma_{\rho})\) with \(F(\mathcal{J}_{\partial}^{r}z_{\rho})=0\). Then \(v(a-z_{\rho})>\gamma_{\rho}\) and \(v(z_{\rho}-a_{\rho})=\gamma_{\rho}\). Since \((\gamma_{\rho})\) is cofinal in \(v(a-K)\), we can take indices \(\rho_{0}<\cdots<\rho_{q+2}\) such that \(a-z_{\rho_{j}}\prec a-z_{\rho_{i}}\) whenever \(0\leqslant i<j\leqslant q+2\). Set \(y_{i}\coloneqq z_{\rho_{i+1}}\) for \(i=0,\ldots,q+1\). If \(1\leqslant i\leqslant q\), then \[y_{i}-y_{i-1}\ \sim\ a-y_{i-1}\ \succ\ a-y_{i}\ \sim\ y_{i+1}-y_{i},\] so conditions (i) and (ii) from Lemma 3.6 are satisfied. Letting \(\gamma\coloneqq\gamma_{\rho_{0}}\), we will reach a contradiction with that lemma by showing that \((F,A,y_{q+1},\gamma)\) is in \(T^{\partial}\)-hensel configuration and \(v(y_{0}-y_{q+1})>\gamma\). The latter holds because \[v(a-y_{q+1})\ >\ v(a-y_{0})\ >\ \gamma_{\rho_{1}}\ >\ \gamma.\] We also have \(B(y_{q+1},\gamma)=B(y_{0},\gamma)=B(a_{\rho_{1}},\gamma)\), so since \((F,A,a_{\rho_{1}},\gamma)\) is in \(T^{\partial}\)-hensel configuration, \(A\) linearly approximates \(F\) on \(B(y_{q+1},\gamma)\). It remains to note that \(vF(\mathcal{J}_{\partial}^{r}y_{q+1})=\infty>v_{A}(\gamma)\).

The previous result fails without the assumption \(C\subseteq\mathcal{O}\), as the next example shows.

**Example 3.29**.: We build an increasing sequence \((\Gamma_{n})\) of divisible ordered abelian groups as follows: set \(\Gamma_{0}\coloneqq\{0\}\) and given \(\Gamma_{n}\), set \(\Gamma_{n+1}\coloneqq\Gamma_{n}\oplus\mathbb{Q}\gamma_{n}\), where \(\gamma_{n}\) is a new element greater than \(\Gamma_{n}\).
Now take a linearly surjective \(\boldsymbol{k}\models T^{\flat}_{\mathrm{an}}\). From \(\boldsymbol{k}\) and \((\Gamma_{n})\) we obtain the ordered Hahn field \(\boldsymbol{k}(\!(t^{\Gamma_{n}})\!)\) with \(0<t<\boldsymbol{k}^{>}\). As in the introduction, we expand \(\boldsymbol{k}(\!(t^{\Gamma_{n}})\!)\) to a model \(E_{n}\coloneqq\boldsymbol{k}(\!(t^{\Gamma_{n}})\!)_{\mathrm{an},c}\models T^{ \mathcal{O},\beta}_{\mathrm{an}}\) with \(c\) the zero map. That is, we expand \(\boldsymbol{k}(\!(t^{\Gamma_{n}})\!)\) to a model of \(T_{\mathrm{an}}\) by Taylor expansion; equip it with the convex hull of \(\boldsymbol{k}\), which is a \(T_{\mathrm{an}}\)-convex valuation ring; and equip it with the derivation given by \(\partial(\sum_{\gamma}f_{\gamma}t^{\gamma})=\sum_{\gamma}\partial(f_{\gamma})t ^{\gamma}\), which is a \(T^{\flat}_{\mathrm{an}}\)-derivation by [16, Proposition 3.14]. Identifying \(E_{n}\) with an \(\mathcal{L}^{\mathcal{O},\beta}\)-substructure of \(E_{n+1}\) in the obvious way, we get an increasing sequence \((E_{n})\) of \(T^{\mathcal{O},\beta}_{\mathrm{an}}\)-models. Then \(E\coloneqq\bigcup_{n}E_{n}\models T^{\mathcal{O},\beta}_{\mathrm{an}}\) by [16, Corollary 3.16]. Note that \(E\) is \(T^{\mathcal{Y}}\)-henselian since each \(E_{n}\) is by Corollary 3.15. Now set \(\Gamma\coloneqq\bigcup_{n}\Gamma_{n}\) and \(L\coloneqq\boldsymbol{k}(\!(t^{\Gamma})\!)_{\mathrm{an},c}\) as before, so \(E\subseteq L\). We have \(\sum_{n}t^{\gamma_{n}}\in L\setminus E\) with \(\partial(\sum_{n}t^{\gamma_{n}})=0\), so \(E\) is not \(T^{\mathcal{Y}}_{\mathrm{an}}\)-algebraically maximal. Recall that any \(T^{\mathcal{Y}}\)-henselian \(K\) with \(C\subseteq\mathcal{O}\) is asymptotic (see Section 3.1), so Theorem 3.28 is really about asymptotic \(T^{\mathcal{O},\beta}\)-models, as are the next results about minimal \(T^{\mathcal{Y}}\)-henselian extensions. An immediate \(T^{\mathcal{O},\beta}\)-extension \(L\) of \(K\) is a \(T^{\mathcal{Y}}\)**-henselization of \(K\)** if \(L\) is \(T^{\mathcal{Y}}\)-henselian and for every immediate \(T^{\mathcal{Y}}\)-henselian \(T^{\mathcal{Y}}\)-extension \(M\) of \(K\), there is an \(\mathcal{L}^{\mathcal{O},\beta}\)-embedding \(L\to M\) that is the identity on \(K\). **Theorem 3.30**.: _Suppose that \(\boldsymbol{k}\) is linearly surjective and \(K\) is asymptotic. Then \(K\) has a \(T^{\mathcal{Y}}\)-henselization that is \(T^{\mathcal{Y}}\)-algebraic over \(K\) and has no proper \(T^{\mathcal{Y}}\)-henselian \(\mathcal{L}^{\mathcal{O},\beta}\)-substructure containing \(K\). In particular, any two \(T^{\mathcal{Y}}\)-henselizations of \(K\) are isomorphic over \(K\)._ Proof.: Let \(L\) be a \(T^{\mathcal{Y}}\)-algebraically maximal immediate \(T^{\mathcal{O},\beta}\)-extension of \(K\) that is \(T^{\mathcal{Y}}\)-algebraic over \(K\). Then \(L\) is \(T^{\mathcal{Y}}\)-henselian by Corollary 3.20 and asymptotic by [2, Lemma 1.12] (or [1, Lemmas 9.4.2 and 9.4.5]). In particular, \(C_{L}\subseteq\mathcal{O}_{L}\), and thus by Theorem 3.28 no proper \(\mathcal{L}^{\mathcal{O},\beta}\)-substructure of \(L\) containing \(K\) is \(T^{\mathcal{Y}}\)-henselian. Now let \(M\) be an immediate \(T^{\mathcal{Y}}\)-henselian \(T^{\mathcal{O},\beta}\)-extension of \(K\), so \(M\) is asymptotic and hence \(T^{\mathcal{Y}}\)-algebraically maximal by Theorem 3.28, making Lemma 3.24 available. To see that there is an \(\mathcal{L}^{\mathcal{O},\beta}\)-embedding of \(L\) into \(M\) over \(K\), argue as in the proof of Theorem 3.27. 
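To spell out the last assertion of Example 3.29: setting \(\ell\coloneqq\sum_{n}t^{\gamma_{n}}\in L\), we have \(v\big{(}\ell-\sum_{n<N}t^{\gamma_{n}}\big{)}=\gamma_{N}\) for each \(N\), the partial sums lie in \(E\), and \(\ell\notin E\) since its support is contained in no \(\Gamma_{N}\). As \(\ell^{\prime}=0\), the tuple \(\mathcal{J}_{\partial}^{1}(\ell)=(\ell,0)\) is \(\mathcal{L}(E)\)-dependent, so \(\ell\) is \(T^{\partial}_{\mathrm{an}}\)-algebraic over \(E\); since \(E\) and \(L\) have the same value group \(\Gamma\) and residue field \(\boldsymbol{k}\), it follows that \(E\langle\!\langle\ell\rangle\!\rangle\subseteq L\) is a proper immediate \(T^{\mathcal{O},\partial}_{\mathrm{an}}\)-extension of \(E\) that is \(T^{\partial}_{\mathrm{an}}\)-algebraic over \(E\), and hence \(E\) is not \(T^{\partial}_{\mathrm{an}}\)-algebraically maximal.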
**Corollary 3.31**.: _Suppose that \(\boldsymbol{k}\) is linearly surjective and \(K\) is asymptotic. Any immediate \(T^{\mathcal{Y}}\)-henselian \(T^{\mathcal{O},\beta}\)-extension of \(K\) that is \(T^{\mathcal{Y}}\)-algebraic over \(K\) is a \(T^{\mathcal{Y}}\)-henselization of \(K\)._ Proof.: Let \(M\) be an immediate \(T^{\mathcal{Y}}\)-henselian \(T^{\mathcal{O},\beta}\)-extension of \(K\) that is \(T^{\mathcal{Y}}\)-algebraic over \(K\) and let \(L\) be a \(T^{\mathcal{Y}}\)-henselization of \(K\). Then the \(\mathcal{L}^{\mathcal{O},\beta}\)-embedding \(L\to M\) over \(K\) is surjective by Theorem 3.28. ## 4. Monotone fields ### \(T^{\mathcal{Y}}\)-hensel configuration property for monotone fields In this subsection, we assume that \(K\) is monotone and the derivation of \(\boldsymbol{k}\) is nontrivial. Let \((a_{\rho})\) be a divergent pc-sequence in \(K\), and let \(\ell\) be a pseudolimit of \((a_{\rho})\) in a monotone \(T^{\mathcal{O},\beta}\)-extension \(M\) of \(K\). Let \(F\colon K^{1+r}\to K\) be an \(\mathcal{L}(K)\)-definable function in implicit form, and assume that \(Z_{q}(K,\ell)=\emptyset\) for all \(q<r\). For each \(\rho\), set \(\gamma_{\rho}\coloneqq v(a_{\rho+1}-a_{\rho})\) and set \(B_{\rho}\coloneqq B(a_{\rho+1},\gamma_{\rho})\). We assume that \(\gamma_{\rho}\) is strictly increasing as a function of \(\rho\), so \(B_{\rho^{\prime}}\subsetneq B_{\rho}\) for \(\rho^{\prime}>\rho\). **Proposition 4.1**.: _There is an \(A\in K[\partial]^{\neq}\) and an index \(\rho_{0}\) such that \(A\) linearly approximates \(F\) on \(B_{\rho_{0}}^{M}\)._ Proof.: Note that if \(A\) linearly approximates \(\mathfrak{m}_{F}^{-1}F\) on an open \(v\)-ball \(B\), then \(\mathfrak{m}_{F}A\) linearly approximates \(F\) on \(B\), so we may assume that \(\mathfrak{m}_{F}=1\). By applying Fact 2.2 to the function \(I_{F}\), we find an \(\mathcal{L}^{\mathrm{RV}^{\mathrm{req}}}(K)\)-definable map \(\chi\colon K^{r}\to\mathrm{RV}^{\mathrm{req}}_{K}\) such that for each \(s\in\chi(K^{r})\), if \(\chi^{-1}(s)\) contains an open \(v\)-ball, then either \(I_{F}\) is constant on \(\chi^{-1}(s)\) or there is \(d\in K^{r}\) such that \[v\big{(}I_{F}(x)-I_{F}(y)-d\cdot(x-y)\big{)}\ >\ vd+v(x-y) \tag{4.1}\] for all \(x,y\in\chi^{-1}(s)\) with \(x\neq y\). By [17, Lemma 5.5], \(L\coloneqq K\langle\mathscr{Y}^{r-1}_{\partial}\ell\rangle\subseteq M\) is an immediate \(T^{\mathcal{O}}\)-extension of \(K\). Let \(\chi^{L}\) and \(\chi^{M}\) denote the natural extensions of \(\chi\) to \(L^{r}\) and \(M^{r}\), respectively, and let \[s_{0}\ \coloneqq\ \chi^{L}(\mathscr{Y}^{r-1}_{\partial}\ell)\ \in\ \mathrm{RV}^{\mathrm{req}}_{\mathrm{e}}\ =\ \mathrm{RV}^{\mathrm{req}}_{K}\,.\] Let \(U\coloneqq\chi^{-1}(s_{0})\subseteq K^{r}\). Then \(U\) is \(\mathcal{L}^{\mathcal{O}}(K)\)-definable by [24, Corollary 2.18]. Since \(\mathscr{Y}^{r-1}_{\partial}(\ell)\in U^{M}\), we can apply [17, Lemma 5.6] to get that \(U\) has nonempty interior and that \(\mathscr{Y}^{r-1}_{\partial}(y)\in U\) for all \(y\in M\) sufficiently close to \(\ell\). Take \(\rho_{0}\) such that \(\mathscr{Y}^{r-1}_{\partial}(y)\in U^{M}\) whenever \(v(\ell-y)>\gamma_{\rho_{0}}\), so \(B_{\rho_{0}}^{M}\subseteq U^{M}\). We choose the linear differential operator \(A\) as follows: If \(I_{F}\) is constant on \(U\), then we let \(A\) be \(\partial^{r}\in K[\partial]\). 
If \(I_{F}\) is not constant on \(U\), then we let \(A\) be \[\partial^{r}-d_{r}\partial^{r-1}-\cdots-d_{1}\ \in\ K[\partial],\] where \(d=(d_{1},\ldots,d_{r})\in K^{r}\) is chosen such that (4.1) holds for \(x,y\in U\) with \(x\neq y\). We claim that \(A\) linearly approximates \(F\) on \(B_{\rho_{0}}^{M}\). This is clear if \(I_{F}\) is constant on \(U\), for then \(I_{F}\) is constant on \(U^{M}\) as well. Suppose that \(I_{F}\) is not constant on \(U\), and let \(a,b\in B_{\rho_{0}}^{M}\) with \(a\neq b\). Then \[v\big{(}F(\mathscr{Y}^{r}_{\partial}b)-F(\mathscr{Y}^{r}_{\partial}a)-A(b-a)\big{)}\ =\ v\big{(}I_{F}(\mathscr{Y}^{r-1}_{\partial}b)-I_{F}(\mathscr{Y}^{r-1}_{\partial}a)-d\cdot\mathscr{Y}^{r-1}_{\partial}(b-a)\big{)}\ >\ vd+v\big{(}\mathscr{Y}^{r-1}_{\partial}(b-a)\big{)},\] where the inequality holds since \(M\) is an elementary \(T^{\mathcal{O}}\)-extension of \(K\). We have \(v\big{(}(b-a)^{(i)}\big{)}\geqslant v(b-a)\) for all \(i<r\) since \(M\) is monotone, and the claim follows.

**Theorem 4.8**.: _Suppose that \(\mathbf{k}\) is linearly surjective, \(\mathcal{C}\) has the \(T^{\vartheta}\)-hensel configuration property, \(\mathcal{I}(K)\subseteq\mathcal{C}\), and every \(M\in\mathcal{C}\) is asymptotic. Then \(K\) has a \(\mathcal{C}\)-\(T^{\vartheta}\)-henselization that is \(T^{\vartheta}\)-algebraic over \(K\) and has no proper \(T^{\vartheta}\)-henselian \(\mathcal{L}^{\mathcal{O},\vartheta}\)-substructure containing \(K\). In particular, any two \(\mathcal{C}\)-\(T^{\vartheta}\)-henselizations of \(K\) are isomorphic over \(K\)._

**Corollary 4.9**.: _Suppose that \(K\) is monotone and asymptotic, and \(\mathbf{k}\) is linearly surjective. Then the \(T^{\vartheta}\)-henselization of \(K\) embeds into every monotone, asymptotic, \(T^{\vartheta}\)-henselian \(T^{\mathcal{O},\vartheta}\)-extension of \(K\)._

### The \(c\)-map

In this subsection, assume that \(K\) is monotone. A **section for** \(K\) is a \(\Lambda\)-vector space embedding \(s\colon\Gamma\to K^{>}\) such that \(v(s\gamma)=\gamma\) for all \(\gamma\in\Gamma\). An **angular component for** \(K\) is a \(\Lambda\)-linear map \(\operatorname{ac}\colon K^{>}\to\mathbf{k}^{>}\) such that \(\operatorname{ac}(a)=\operatorname{res}(a)\) whenever \(a\asymp 1\).
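For orientation (an illustration only, using the Hahn fields already seen in Example 3.29): in \(K=\boldsymbol{k}(\!(t^{\Gamma})\!)\) with \(v\bigl(\sum_{\gamma}f_{\gamma}t^{\gamma}\bigr)=\min\{\gamma:f_{\gamma}\neq 0\}\), the map \(s(\gamma)\coloneqq t^{\gamma}\) is a section, since \(t^{\gamma+\delta}=t^{\gamma}t^{\delta}\), \(t^{\lambda\gamma}=(t^{\gamma})^{\lambda}\), and \(v(t^{\gamma})=\gamma\), and the leading-coefficient map
\[\operatorname{ac}\Bigl(\sum_{\gamma}f_{\gamma}t^{\gamma}\Bigr)\ \coloneqq\ f_{\gamma_{0}},\qquad\gamma_{0}\coloneqq v\Bigl(\sum_{\gamma}f_{\gamma}t^{\gamma}\Bigr),\]
is an angular component for \(K\): it is \(\Lambda\)-linear because leading terms multiply, and it agrees with \(\operatorname{res}\) on elements \(\asymp 1\).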
We extend any angular component map \(\operatorname{ac}\) to a map \(K\to\mathbf{k}\) (also denoted by \(\operatorname{ac}\)) by setting \(\operatorname{ac}(0)\coloneqq 0\) and \(\operatorname{ac}(-a)\coloneqq-\operatorname{ac}(a)\) for \(a\in K^{>}\). Given any section \(s\colon\Gamma\to K^{>}\), the map \(a\mapsto\operatorname{res}\bigl{(}a/s(va)\bigr{)}\colon K^{>}\to\mathbf{k}^{>}\) is an angular component for \(K\), which we say is **induced by \(s\)**. **Lemma 4.10**.: _Let \(A\subseteq K^{>}\) be a \(\Lambda\)-subspace such that \(\{va:a\in A\}=\Gamma\). Then there is a section \(s\colon\Gamma\to K^{>}\) with image contained in \(A\)._ Proof.: The inclusion \(\mathcal{O}^{\times}\cap A\to A\) and the restricted valuation map \(A\to\Gamma\) yield an exact sequence \[1\to\mathcal{O}^{\times}\cap A\to A\to\Gamma\to 0\] of \(\Lambda\)-vector spaces. This exact sequence splits, yielding a section \(s\colon\Gamma\to A\) as claimed. **Corollary 4.11**.: _Let \(\operatorname{ac}\) be an angular component for \(K\). Then \(\operatorname{ac}\) is induced by a section for \(K\)._ Proof.: Apply the previous lemma to the set \(A\coloneqq\{a\in K^{>}:\operatorname{ac}(a)=1\}\). Let \(s\colon\Gamma\to K^{>}\) be a section for \(K\). We define a map \(c\colon\Gamma\to\mathbf{k}\) by setting \(c(\gamma)\coloneqq\operatorname{res}\bigl{(}s(\gamma)^{\dagger}\bigr{)}\) for \(\gamma\in\Gamma\). Since \((a^{\lambda})^{\dagger}=\lambda a^{\dagger}\) for \(a\in K^{>}\) by Fact 2.3, the map \(c\) is \(\Lambda\)-linear. If \(K\) has many constants, then by Lemma 4.10 with \(C^{>}\) in place of \(A\), we can choose \(s\) so that its image is contained in the constant field. In this case, \(c\) is the zero map. The argument above tells us that we can associate to any monotone \(T^{\mathcal{O},\vartheta}\)-model \(K\) a structure \((\mathbf{k},\Gamma,c)\) where \(\mathbf{k}\models T^{\vartheta}\), \(\Gamma\) is an ordered \(\Lambda\)-vector space, and \(c\colon\Gamma\to\mathbf{k}\) is \(\Lambda\)-linear. As a converse, we show below that any such structure \((\mathbf{k},\Gamma,c)\) comes from a monotone \(T^{\mathcal{O},\vartheta}\)-model. We need the following fact. **Fact 4.12** ([18, Proposition 2.6]).: _Let \(E\) be a \(T\)-convex valued field, let \(M\) be a \(T^{\mathcal{O}}\)-extension of \(E\), let \(a\in M\) with \(a\not\sim f\) for all \(f\in E\), and let \(F\colon E\to E\) be an \(\mathcal{L}(E)\)-definable function. Then \(\frac{\partial F}{\partial Y}(a)\preccurlyeq a^{-1}F(a)\)._ **Proposition 4.13**.: _Let \(\mathbf{k}\models T^{\vartheta}\), let \(\Gamma\) be an ordered \(\Lambda\)-vector space, and let \(c\colon\Gamma\to\mathbf{k}\) be a \(\Lambda\)-linear map. Then there is a monotone \(T^{\mathcal{O},\vartheta}\)-model \(K\) with differential residue field \(\mathbf{k}\) and value group \(\Gamma\) and a section \(s\colon\Gamma\to K^{>}\) such that \(\operatorname{res}\bigl{(}s(\gamma)^{\dagger}\bigr{)}=c(\gamma)\) for all \(\gamma\in\Gamma\)._ Proof.: Let \((\gamma_{\alpha})_{\alpha<\beta}\) be a \(\Lambda\)-basis for \(\Gamma\), so we may write each \(\gamma\in\Gamma\) uniquely as a sum \(\gamma=\sum_{\alpha<\beta}\lambda_{\alpha}\gamma_{\alpha}\) where each \(\lambda_{\alpha}\) is in \(\Lambda\) and only finitely many \(\lambda_{\alpha}\) are nonzero. 
Let \(K\coloneqq\mathbf{k}\langle(t_{\alpha})_{\alpha<\beta}\rangle\) be a \(T^{\mathcal{O}}\)-extension of \(\mathbf{k}\), equipped with the convex hull of \(\mathbf{k}\) as its \(T\)-convex valuation ring and ordered so that \(t_{\alpha}>0\) and \(v(t_{\alpha})=\gamma_{\alpha}\) for each \(\alpha\) (such an extension can be built by a transfinite construction, using repeated applications of [18, Lemma 2.3]). Then \(K\) has residue field \(\mathbf{k}\) and value group \(\Gamma\), and the sequence \((t_{\alpha})\) is necessarily \(\mathcal{L}(\mathbf{k})\)-independent. Let \(s\colon\Gamma\to K^{>}\) be the section mapping \(\gamma=\sum_{\alpha<\beta}\lambda_{\alpha}\gamma_{\alpha}\in\Gamma\) to \(\prod_{\alpha<\beta}t_{\alpha}^{\lambda_{\alpha}}\in K^{>}\). Using Fact 2.5, we extend the \(T\)-derivation on \(\mathbf{k}\) to a \(T\)-derivation on \(K\) by setting \(t_{\alpha}^{\prime}=c(\gamma_{\alpha})t_{\alpha}\). Since \((t_{\alpha}^{\lambda})^{\dagger}=\lambda t_{\alpha}^{\dagger}\) by Fact 2.3, we have \(s(\gamma)^{\dagger}=c(\gamma)\in\mathbf{k}\) for all \(\gamma\in\Gamma\). Thus, we need only check that \(K\) is monotone. Each element of \(K\) is of the form \(G(a,t)\) where \(G\colon K^{m+n}\to K\) is \(\mathcal{L}(\emptyset)\)-definable, \(a=(a_{1},\ldots,a_{m})\) is an \(\mathcal{L}(\emptyset)\)-independent tuple from \(\mathbf{k}\), and \(t=(t_{\alpha_{1}},\ldots,t_{\alpha_{n}})\) for some distinct \(\alpha_{1},\ldots,\alpha_{n}<\beta\). We fix such an element \(G(a,t)\), and we need to show that \(G(a,t)^{\prime}\preccurlyeq G(a,t)\). Viewing \(G\) as a function of the variables \(X_{1},\ldots,X_{m},Y_{1},\ldots,Y_{n}\), we have \[G(a,t)^{\prime}\ =\ \frac{\partial G}{\partial X_{1}}(a,t)a_{1}^{\prime}+\cdots+\frac{\partial G}{\partial X_{m}}(a,t)a_{m}^{\prime}+\frac{\partial G}{\partial Y_{1}}(a,t)t_{\alpha_{1}}^{\prime}+\cdots+\frac{\partial G}{\partial Y_{n}}(a,t)t_{\alpha_{n}}^{\prime}.\] We will show that \(\frac{\partial G}{\partial X_{i}}(a,t)a_{i}^{\prime},\frac{\partial G}{\partial Y_{j}}(a,t)t_{\alpha_{j}}^{\prime}\preccurlyeq G(a,t)\) for all \(i\in\{1,\ldots,m\}\) and \(j\in\{1,\ldots,n\}\). By symmetry, it suffices to handle the case \(i=j=1\). We start with \(\frac{\partial G}{\partial X_{1}}(a,t)a_{1}^{\prime}\). Since \(a_{1}^{\prime}\preccurlyeq 1\), it suffices to show that \(\frac{\partial G}{\partial X_{1}}(a,t)\preccurlyeq G(a,t)\). Set \(E\coloneqq\mathrm{dcl}_{\mathcal{L}}(a_{2},\ldots,a_{m},t)\), so \(E\) is an \(\mathcal{L}^{\mathcal{O}}\)-substructure of \(K\), and set \(E_{0}\coloneqq\mathrm{dcl}_{\mathcal{L}}(a_{2},\ldots,a_{m})\subseteq E\). Since \(E_{0}\) is trivially valued and \(a_{1}\notin E_{0}\), we have \(a_{1}\not\sim f\) for any \(f\in E_{0}\). The Wilkie inequality gives \(\mathrm{res}(E)=\mathrm{res}(E_{0})\), so \(a_{1}\not\sim f\) for any \(f\in E\). Applying Fact 4.12 with \(a_{1}\) in place of \(a\) and the function \(x\mapsto G(x,a_{2},\ldots,a_{m},t)\) in place of \(F\) gives \[\frac{\partial G}{\partial X_{1}}(a,t)\preccurlyeq a_{1}^{-1}G(a,t)\asymp G(a,t),\] as desired. Next, we show that \(\frac{\partial G}{\partial Y_{1}}(a,t)t^{\prime}_{\alpha_{1}}\preccurlyeq G(a,t)\). This time, we set \(E\coloneqq\mathrm{dcl}_{\mathcal{L}}(a,t_{\alpha_{2}},\ldots,t_{\alpha_{n}})\); by the Wilkie inequality, \(v(t_{\alpha_{1}})\notin v(E^{\times})\).
In particular, \(t_{\alpha_{1}}\not\sim f\) for any \(f\in E\), so Fact 4.12 (this time with \(x\mapsto G(a,x,t_{\alpha_{2}},\ldots,t_{\alpha_{n}})\) in place of \(F\)) gives \[\frac{\partial G}{\partial Y_{1}}(a,t)\preccurlyeq t_{\alpha_{1}}^{-1}G(a,t).\] Thus, \(\frac{\partial G}{\partial Y_{1}}(a,t)t^{\prime}_{\alpha_{1}}\preccurlyeq t_{ \alpha_{1}}^{\dagger}G(a,t)\preccurlyeq G(a,t)\), since \(t_{\alpha_{1}}^{\dagger}=c(\gamma_{\alpha_{1}})\preccurlyeq 1\). **Lemma 4.14**.: _Let \(K\) be a monotone \(T^{\mathcal{O},\vartheta}\)-model, let \(c\colon\Gamma\to\boldsymbol{k}\) be a \(\Lambda\)-linear map, and suppose that for each \(\gamma\in\Gamma\), there is \(a\in K^{>}\) with \(va=\gamma\) and \(\mathrm{res}(a^{\dagger})=c(\gamma)\). Then there is a section \(s\colon\Gamma\to K^{>}\) such that \(\mathrm{res}\big{(}s(\gamma)^{\dagger}\big{)}=c(\gamma)\) for all \(\gamma\in\Gamma\). The corresponding angular component map \(\mathrm{ac}\colon K\to\boldsymbol{k}\) induced by \(s\) satisfies the equality \(\mathrm{ac}(a)^{\dagger}=\mathrm{res}(a^{\dagger})-c(va)\) for all \(a\in K^{\times}\)._ Proof.: Let \(A\subseteq K^{>}\) be the set of all \(a\in K^{>}\) with \(\mathrm{res}(a^{\dagger})=c(va)\). Then \(A\) is a multiplicative \(\Lambda\)-subspace of \(K^{>}\) and \(\{va:a\in A\}=\Gamma\), so there is a section \(s\colon\Gamma\to K^{>}\) with image contained in \(A\) by Lemma 4.10. Let \(\mathrm{ac}\colon K\to\boldsymbol{k}\) be the angular component map induced by \(s\). Then for \(a\in K^{\times}\), we have \[\mathrm{ac}(a)^{\dagger}\ =\ \mathrm{res}\big{(}a/s(va)\big{)}^{\dagger}\ =\ \mathrm{res}(a^{ \dagger}-s(va)^{\dagger})\ =\ \mathrm{res}(a^{\dagger})-c(va).\qed\] ## 5. When \(T=\mathrm{RCF}\) Let \(T=\mathrm{RCF}\) in the language of ordered rings. In this setting we show that the notions and results in the previous sections specialize to the analogous notions and results of [1, Chapter 7]. By Fact 2.8 and [1, Corollary 6.9.4], if \(\boldsymbol{k}\) has nontrivial derivation, then \(K\) has no proper immediate \(T^{\mathcal{O},\vartheta}\)-extension if and only if it has no proper immediate valued differential field extension with small derivation. Additionally: **Lemma 5.1**.: _The \(T^{\mathcal{O},\vartheta}\)-model \(K\) is \(T^{\vartheta}\)-algebraically maximal if and only if the valued differential field \(K\) is \(\mathrm{d}\)-algebraically maximal in the sense of [1, Chapter 7]._ Proof.: Since \(\mathcal{L}(K)\)-definable functions are semialgebraic, the notions \(T^{\vartheta}\)-algebraic and \(\mathrm{d}\)-algebraic coincide. Thus the right-to-left direction is trivial. Conversely, let \(L\) be a \(\mathrm{d}\)-algebraic immediate valued differential field extension of \(K\) with small derivation. Since \(\Gamma\) is divisible and \(\boldsymbol{k}\) is real closed, the henselization \(L^{\mathrm{h}}\) of \(L\) is real closed, and it has small derivation by [1, Proposition 6.2.1]. Additionally, its valuation ring is convex, and hence \(T\)-convex by (see [7, Proposition 4.2]). Thus \(L^{\mathrm{h}}\) is a \(T^{\vartheta}\)-algebraic immediate \(T^{\mathcal{O},\vartheta}\)-extension of \(K\). Combining this lemma with the observation preceding it shows that in this case Theorem 4.2 becomes [1, Theorem 7.4.3], Theorem 4.3 becomes [1, Theorem 7.0.3], and Theorem 4.4 becomes a special case of [21, Theorem 3.7]. Lemma 5.1 also yields: **Corollary 5.2**.: _Suppose that \(K\) is monotone and asymptotic. 
Then \(K\) is \(T^{\vartheta}\)-henselian if and only if \(K\) is \(\mathrm{d}\)-henselian in the sense of [1, Chapter 7]._ Proof.: Suppose that \(\boldsymbol{k}\) is linearly surjective. Then by [1, Theorems 7.0.1, 7.0.3], \(K\) is \(\mathrm{d}\)-algebraically maximal if and only if \(K\) is \(\mathrm{d}\)-henselian. Likewise, by Theorem 4.3, \(K\) is \(T^{\vartheta}\)-algebraically maximal if and only if \(K\) is \(T^{\vartheta}\)-henselian. The result follows. This raises the question of whether \(T^{\vartheta}\)-henselianity always implies \(\mathrm{d}\)-henselianity. Moreover, at present the only proof we have of this implication in the case that \(K\) is monotone and asymptotic is the roundabout proof given above.

## 6. An AKE theorem for monotone \(T^{\partial}\)-henselian fields

To establish our Ax-Kochen/Ershov theorem for monotone \(T^{\partial}\)-henselian fields, we construe a monotone \(T^{\mathcal{O},\partial}\)-model \(K\) as a three-sorted structure \(\mathcal{K}=(K,\boldsymbol{k},\Gamma;\pi,v,c,\mathrm{ac})\) with a sort f for \(K\) as a structure in the language \(\mathcal{L}_{\mathrm{f}}=\mathcal{L}^{\partial}\), a sort r for \(\boldsymbol{k}\) as a structure in the language \(\mathcal{L}_{\mathrm{r}}=\mathcal{L}^{\partial}\), and a sort v for \(\Gamma\) as a structure in the (one-sorted) language \(\mathcal{L}_{\mathrm{v}}\) of ordered \(\Lambda\)-vector spaces, together with symbols for maps \(\pi,v,c,\mathrm{ac}\) connecting the sorts as follows. Suppose that: 1. \(K\models T^{\partial}\); 2. \(\boldsymbol{k}\models T^{\partial}\); 3. \(\Gamma\) is an ordered \(\Lambda\)-vector space; 4. \(v\colon K^{\times}\to\Gamma\) is a (surjective) valuation making \(K\) a monotone \(T^{\mathcal{O},\partial}\)-model such that \(v(a^{\lambda})=\lambda va\) for all \(a\in K^{>}\) and \(\lambda\in\Lambda\); 5. \(\pi\colon\mathcal{O}\to\boldsymbol{k}\) is a map such that the map \(\mathrm{res}(K)\to\boldsymbol{k}\) induced by \(\pi\) is an \(\mathcal{L}^{\partial}\)-isomorphism; 6. \(c\colon\Gamma\to\boldsymbol{k}\) is \(\Lambda\)-linear and for every \(\gamma\in\Gamma\), there is \(a\in K^{>}\) with \(va=\gamma\) and \(\pi(a^{\dagger})=c(\gamma)\); 7. \(\mathrm{ac}\colon K\to\boldsymbol{k}\) is an angular component map such that \(\mathrm{ac}(a)^{\dagger}=\pi(a^{\dagger})-c(va)\) for all \(a\in K^{\times}\). Let \(\mathcal{L}_{3}\) be this three-sorted language of \(\mathcal{K}\), where we extend \(v\) and \(\pi\) to \(K\) by \(v(0)=0\) and \(\pi(K\setminus\mathcal{O})=\{0\}\), respectively. Note that in \(\mathcal{L}_{3}\) we have two distinct copies of the language \(\mathcal{L}^{\partial}\), one for the sort f and one for the sort r. Let \(T^{\mathrm{ac}}_{\mathrm{mon}}\) be the theory whose models are such \(\mathcal{K}\). Note that by Section 4.3, any monotone \(T^{\mathcal{O},\partial}\)-model can be expanded to a \(T^{\mathrm{ac}}_{\mathrm{mon}}\)-model.

### Back-and-forth

Let \(\mathcal{K}=(K,\boldsymbol{k},\Gamma;\pi,v,c,\mathrm{ac})\) and \(\mathcal{K}^{*}=(K^{*},\boldsymbol{k}^{*},\Gamma^{*};\pi^{*},v^{*},c^{*},\mathrm{ac}^{*})\) be \(T^{\mathrm{ac}}_{\mathrm{mon}}\)-models. Our goal is to construct a back-and-forth system between \(\mathcal{K}\) and \(\mathcal{K}^{*}\), when they are \(T^{\partial}\)-henselian and appropriately saturated. A **good substructure** of \(\mathcal{K}\) is a triple \(\mathcal{E}=(E,\boldsymbol{k}_{\mathcal{E}},\Gamma_{\mathcal{E}})\) such that: 1. \(E\) is a \(T^{\partial}\)-submodel of \(K\); 2.
\(\boldsymbol{k}_{\mathcal{E}}\) is a \(T^{\partial}\)-submodel of \(\boldsymbol{k}\) with \(\mathrm{ac}(E)\subseteq\boldsymbol{k}_{\mathcal{E}}\) (so \(\pi(E)\subseteq\boldsymbol{k}_{\mathcal{E}}\)); 3. \(\Gamma_{\mathcal{E}}\) is an ordered \(\Lambda\)-subspace of \(\Gamma\) with \(v(E^{\times})\subseteq\Gamma_{\mathcal{E}}\) and \(c(\Gamma_{\mathcal{E}})\subseteq\boldsymbol{k}_{\mathcal{E}}\). Note that in this definition we do not require \(\mathrm{ac}(E)=\boldsymbol{k}_{\mathcal{E}}\) (let alone \(\pi(E)=\boldsymbol{k}_{\mathcal{E}}\)), nor do we require \(v(E^{\times})=\Gamma_{\mathcal{E}}\). When needed we construe \(E\) as a \(T^{\mathcal{O},\partial}\)-model \((E,\mathcal{O}_{E})\) with the induced valuation ring \(\mathcal{O}_{E}\coloneqq\mathcal{O}\cap E\). If \(\mathcal{E}_{1}=(E_{1},\boldsymbol{k}_{\mathcal{E}_{1}},\Gamma_{\mathcal{E}_{1}})\) and \(\mathcal{E}_{2}=(E_{2},\boldsymbol{k}_{\mathcal{E}_{2}},\Gamma_{\mathcal{E}_{2}})\) are good substructures of \(\mathcal{K}\), then \(\mathcal{E}_{1}\subseteq\mathcal{E}_{2}\) means \(E_{1}\subseteq E_{2}\), \(\boldsymbol{k}_{\mathcal{E}_{1}}\subseteq\boldsymbol{k}_{\mathcal{E}_{2}}\), and \(\Gamma_{\mathcal{E}_{1}}\subseteq\Gamma_{\mathcal{E}_{2}}\). Now let \(\mathcal{E}=(E,\boldsymbol{k}_{\mathcal{E}},\Gamma_{\mathcal{E}})\) and \(\mathcal{E}^{*}=(E^{*},\boldsymbol{k}_{\mathcal{E}^{*}},\Gamma_{\mathcal{E}^{*}})\) be good substructures of \(\mathcal{K}\) and \(\mathcal{K}^{*}\), respectively. A **good map** \(\boldsymbol{f}\colon\mathcal{E}\to\mathcal{E}^{*}\) is a triple \(\boldsymbol{f}=(f,f_{\mathrm{r}},f_{\mathrm{v}})\) consisting of \(\mathcal{L}^{\partial}\)-isomorphisms \(f\colon E\to E^{*}\) and \(f_{\mathrm{r}}\colon\boldsymbol{k}_{\mathcal{E}}\to\boldsymbol{k}_{\mathcal{E}^{*}}\) and an isomorphism \(f_{\mathrm{v}}\colon\Gamma_{\mathcal{E}}\to\Gamma_{\mathcal{E}^{*}}\) of ordered \(\Lambda\)-vector spaces such that: 1. \(f_{\mathrm{r}}\big{(}\,\mathrm{ac}(a)\big{)}=\mathrm{ac}^{*}\big{(}f(a)\big{)}\) for all \(a\in E\); 2. \(f_{\mathrm{v}}\big{(}v(a)\big{)}=v^{*}\big{(}f(a)\big{)}\) for all \(a\in E^{\times}\); 3. \((f_{\mathrm{r}},f_{\mathrm{v}})\) is a partial elementary map \((\boldsymbol{k},\Gamma;c)\to(\boldsymbol{k}^{*},\Gamma^{*};c^{*})\) (so \(f_{\mathrm{r}}\big{(}c(\gamma)\big{)}=c^{*}\big{(}f_{\mathrm{v}}(\gamma)\big{)}\) for all \(\gamma\in\Gamma_{\mathcal{E}}\)). The next lemma handles residue field extensions. **Lemma 6.1**.: _Suppose that \(\mathcal{K}\) and \(\mathcal{K}^{*}\) are \(T^{\partial}\)-henselian and let \(\boldsymbol{f}\colon\mathcal{E}\to\mathcal{E}^{*}\) be a good map. Let \(d\in\boldsymbol{k}_{\mathcal{E}}\setminus\pi(E)\). Then there are \(b\in\mathcal{O}\) with \(\pi(b)=d\) and a good map \(\boldsymbol{g}\colon(E\langle\!\langle b\rangle\!\rangle,\boldsymbol{k}_{\mathcal{E}},\Gamma_{\mathcal{E}})\to\mathcal{K}^{*}\) extending \(\boldsymbol{f}\)._ Proof.: Proposition 3.10 gives \(b\in\mathcal{O}\) with \(\pi(b)=d\) and \(v\big{(}E\langle\!\langle b\rangle\!\rangle^{\times}\big{)}=v(E^{\times})\), as well as an \(\mathcal{L}^{\mathcal{O},\partial}\)-embedding \(g\colon E\langle\!\langle b\rangle\!\rangle\to K^{*}\) extending \(f\) such that \(\pi^{*}\big{(}g(a)\big{)}=f_{\mathrm{r}}\big{(}\pi(a)\big{)}\) for all \(a\in E\langle\!\langle b\rangle\!\rangle\). Let \(\boldsymbol{g}=(g,f_{\mathrm{r}},f_{\mathrm{v}})\).
To see that \((E\langle\!\langle b\rangle\!\rangle,\boldsymbol{k}_{\mathcal{E}},\Gamma_{\mathcal{E}})\) is a good substructure and that \(\boldsymbol{g}\) is a good map, we only need to show that \(\mathrm{ac}(a)\in\boldsymbol{k}_{\mathcal{E}}\) and that \(\mathrm{ac}^{*}\big{(}g(a)\big{)}=f_{\mathrm{r}}\big{(}\,\mathrm{ac}(a)\big{)}\) for all \(a\in E\langle\!\langle b\rangle\!\rangle\). This holds by [14, Lemma 4.2], but we repeat the short argument here. Take \(y\in E\) with \(va=vy\) and take \(u\in E\langle\!\langle b\rangle\!\rangle\) with \(a=uy\). Then \(u\asymp 1\), so \(\mathrm{ac}(a)=\mathrm{ac}(u)\,\mathrm{ac}(y)=\pi(u)\,\mathrm{ac}(y)\in\boldsymbol{k}_{\mathcal{E}}\) and \[f_{\mathrm{r}}\big{(}\,\mathrm{ac}(a)\big{)}\ =\ f_{\mathrm{r}}\big{(}\pi(u)\big{)}f_{\mathrm{r}}\big{(}\,\mathrm{ac}(y)\big{)}\ =\ \pi^{*}\big{(}g(u)\big{)}\,\mathrm{ac}^{*}\big{(}f(y)\big{)}\ =\ \mathrm{ac}^{*}\big{(}g(a)\big{)}.\qed\] The next lemma handles value group extensions. **Lemma 6.2**.: _Suppose that \(\mathcal{K}\) and \(\mathcal{K}^{*}\) are \(T^{\partial}\)-henselian, let \(\boldsymbol{f}\colon\mathcal{E}\to\mathcal{E}^{*}\) be a good map such that \(E\) contains a \(T^{\flat}\)-lift of \(\boldsymbol{k}_{\mathcal{E}}\), and let \(\gamma\in\Gamma_{\mathcal{E}}\setminus v(E^{\times})\). Then there are \(b\in K^{>}\) with \(vb=\gamma\) and a good map \(\boldsymbol{g}\colon(E\langle\!\langle b\rangle\!\rangle,\boldsymbol{k}_{\mathcal{E}},\Gamma_{\mathcal{E}})\to\mathcal{K}^{*}\) extending \(\boldsymbol{f}\)._ Proof.: Let \(E_{0}\subseteq E\) be a \(T^{\flat}\)-lift of \(\mathbf{k}_{\mathcal{E}}\). We will find \(b\in K^{>}\) with \(b^{\dagger}\in E_{0}\), \(\pi(b^{\dagger})=c(\gamma)\), \(vb=\gamma\), and \(\operatorname{ac}(b)=1\). By assumption, we have \(a\in K^{>}\) with \(\pi(a^{\dagger})=c(\gamma)\) and \(va=\gamma\). Take \(u\in E_{0}\) with \(\pi(u)=\pi(a^{\dagger})=c(\gamma)\), and let \(\varepsilon\coloneqq a^{\dagger}-u\), so \(\varepsilon\prec 1\). By Corollary 3.2, we have \(\delta\prec 1\) with \(\varepsilon=(1+\delta)^{\dagger}\), so by replacing \(a\) with \(a/(1+\delta)\), we arrange that \(a^{\dagger}\in E_{0}\). By Corollary 3.11, extend \(E_{0}\) to a \(T^{\flat}\)-lift \(E_{1}\subseteq K\) of \(\mathbf{k}\). Now, take \(e\in E_{1}\) with \(\pi(e)=\operatorname{ac}(a)\). We have \[\pi(e^{\dagger})\ =\ \pi(e)^{\dagger}\ =\ \operatorname{ac}(a)^{\dagger}\ =\ \pi(a^{\dagger})-c(\gamma)\ =\ 0.\] Since \(e^{\dagger}\in E_{1}\), it follows that \(e^{\dagger}=0\). Let \(b\coloneqq a/e\), so \(b^{\dagger}=a^{\dagger}\in E_{0}\), \(vb=va=\gamma\), and \(\operatorname{ac}(b)=\operatorname{ac}(a)/\pi(e)=1\). Now \(f(E_{0})\subseteq E^{*}\) is a \(T^{\flat}\)-lift of \(\mathbf{k}_{\mathcal{E}^{*}}\), so as above take \(b^{*}\in(K^{*})^{>}\) with \((b^{*})^{\dagger}\in f(E_{0})\), \(\pi^{*}((b^{*})^{\dagger})=c^{*}(f_{\mathrm{v}}\gamma)\), \(vb^{*}=f_{\mathrm{v}}\gamma\), and \(\operatorname{ac}^{*}(b^{*})=1\). Since \(b^{\dagger}\in E_{0}\subseteq E\), we have \(E\langle\!\langle b\rangle\!\rangle=E\langle b\rangle\). Then \(\operatorname{res}(E\langle b\rangle)=\operatorname{res}(E)\) and \(v(E\langle b\rangle^{\times})=v(E^{\times})\oplus\Lambda\gamma\subseteq\Gamma_{\mathcal{E}}\) by the Wilkie inequality. Since \(b\) and \(b^{*}\) have the same sign and realize the same cut over \(v(E^{\times})\), we may use [18, Lemma 2.3] to get an \(\mathcal{L}^{\mathcal{O}}\)-embedding \(g\colon E\langle b\rangle\to K^{*}\) extending \(f\) and satisfying \(gb=b^{*}\). Note that \((b^{*})^{\dagger}=f(b^{\dagger})\), so \(g\) is even an \(\mathcal{L}^{\mathcal{O},\beta}\)-embedding by Fact 2.5. To verify that \(\mathbf{g}\coloneqq(g,f_{\mathrm{r}},f_{\mathrm{v}})\) is a good map, it remains to check that \(\mathbf{g}\) preserves \(\operatorname{ac}\).
For this, use that for \(\mathcal{L}(E)\)-definable \(F\colon K\to K\) with \(F(b)\neq 0\), we have \(\lambda\in\Lambda\) and \(d\in E^{\times}\) with \(F(b)\sim b^{\lambda}d\), so \(\operatorname{ac}(F(b))=\operatorname{ac}(b^{\lambda}d)=\operatorname{ac}(b) ^{\lambda}\operatorname{ac}(d)=\operatorname{ac}(d)\). **Theorem 6.3**.: _Suppose that \(\mathcal{K}\) and \(\mathcal{K}^{*}\) are \(T^{\flat}\)-henselian. Then any good map \(\mathcal{E}\to\mathcal{E}^{*}\) is a partial elementary map \(\mathcal{K}\to\mathcal{K}^{*}\)._ Proof.: Let \(\kappa\) be an uncountable cardinal with \(\max\{|\mathbf{k}_{\mathcal{E}}|,|\Gamma_{\mathcal{E}}|\}<\kappa\). By passing to elementary extensions we arrange that \(\mathcal{K}\) and \(\mathcal{K}^{*}\) are \(\kappa^{+}\)-saturated. We call a good substructure \((E_{1},\mathbf{k}_{1},\Gamma_{1})\) of \(\mathcal{K}\)**small** if \(\max\{|\mathbf{k}_{1}|,|\Gamma_{1}|\}<\kappa\). It suffices to show that the set of good maps with small domain is a back-and-forth system from \(\mathcal{K}\) to \(\mathcal{K}^{*}\). First, we describe several extension procedures. 1. Given \(d\in\mathbf{k}\), arranging that \(d\in\mathbf{k}_{\mathcal{E}}\): By the saturation assumption, we can extend \(f_{\mathrm{r}}\) to a map \(g_{\mathrm{r}}\colon\mathbf{k}_{\mathcal{E}}\langle\!\langle d\rangle\!\rangle \to\mathbf{k}^{*}\) so that \((g_{\mathrm{r}},f_{\mathrm{v}})\) is a partial elementary map. Then \((f,g_{\mathrm{r}},f_{\mathrm{v}})\) is the desired extension of \(\mathbf{f}\). 2. Given \(\gamma\in\Gamma\), arranging that \(\gamma\in\Gamma_{\mathcal{E}}\): First use (1) to arrange \(c(\gamma)\in\mathbf{k}_{\mathcal{E}}\), then use saturation as before to extend \(f_{\mathrm{v}}\) to \(g_{\mathrm{v}}\colon\Gamma_{\mathcal{E}}\oplus\Lambda\gamma\to\Gamma^{*}\) with the desired properties. 3. Arranging \(\pi(E)=\mathbf{k}_{\mathcal{E}}\): If \(d\in\mathbf{k}_{\mathcal{E}}\setminus\pi(E)\), then Lemma 6.1 yields \(b\in K\) and an extension of \(\mathbf{f}\) to a good map \((g,f_{\mathrm{r}},f_{\mathrm{v}})\) with small domain \((E\langle\!\langle b\rangle\!\rangle,\mathbf{k}_{\mathcal{E}},\Gamma_{\mathcal{E}})\). Iterate this procedure to arrange \(\pi(E)=\mathbf{k}_{\mathcal{E}}\). 4. Arranging that \((E,\mathcal{O}_{E})\) is \(T^{\flat}\)-henselian: By (1) and (3) we can assume that \(\mathbf{k}_{\mathcal{E}}\) is linearly surjective and \(\pi(E)=\mathbf{k}_{\mathcal{E}}\). Now use Fact 2.8 to take a spherically complete immediate \(T^{\mathcal{O},\flat}\)-extension \(L\) of \(E\). Then \(L\) is \(T^{\flat}\)-henselian by Corollary 3.15, and \(L\) embeds over \(E\) into both \(K\) and \(K^{*}\) by Corollary 4.7. Let \(g\) be the extension of \(f\) to an \(\mathcal{L}^{\mathcal{O},\flat}\)-isomorphism between these images of \(L\) in \(K\) and \(K^{*}\), respectively. Then \((g,f_{\mathrm{r}},f_{\mathrm{v}})\) is a good map by [14, Corollary 4.4]. 5. Arranging \(v(E^{\times})=\Gamma_{\mathcal{E}}\): We can assume that \(\pi(E)=\mathbf{k}_{\mathcal{E}}\) and that \(\mathcal{E}\) is \(T^{\flat}\)-henselian and is equipped with a \(T^{\flat}\)-lift of \(\mathbf{k}_{\mathcal{E}}\) by Theorem 3.12. If \(\gamma\in\Gamma_{\mathcal{E}}\setminus v(E^{\times})\), then Lemma 6.2 yields \(b\in K\) and an extension of \(\mathbf{f}\) to a good map \((g,f_{\mathrm{r}},f_{\mathrm{v}})\) with small domain \((E\langle\!\langle b\rangle\!\rangle,\mathbf{k}_{\mathcal{E}},\Gamma_{\mathcal{E}})\). Iterate this procedure to arrange \(v(E^{\times})=\Gamma_{\mathcal{E}}\). 
Given \(a\in K\), we need to extend \(\mathbf{f}\) to a good map with small domain containing \(a\). By the above, we can assume that \(\pi(E)=\mathbf{k}_{\mathcal{E}}\) and \(v(E^{\times})=\Gamma_{\mathcal{E}}\). From \[\operatorname{rk}_{\mathcal{L}}\big{(}\pi(E\langle\!\langle a\rangle\!\rangle)|\pi(E)\big{)}\ \leqslant\ \operatorname{rk}_{\mathcal{L}}(E\langle\!\langle a\rangle\!\rangle|E)\ \leqslant\ \aleph_{0},\] we get \(|\pi(E\langle\!\langle a\rangle\!\rangle)|<\kappa\), and from \[\dim_{\Lambda}\big{(}v(E\langle\!\langle a\rangle\!\rangle^{\times})|v(E^{\times})\big{)}\ \leqslant\ \operatorname{rk}_{\mathcal{L}}(E\langle\!\langle a\rangle\!\rangle|E)\ \leqslant\ \aleph_{0},\] we get \(|v(E\langle\!\langle a\rangle\!\rangle^{\times})|<\kappa\). Hence by (1)-(5) we extend \(\mathbf{f}\) to a good map \(\mathbf{f}_{1}=(f_{1},f_{1,\mathrm{r}},f_{1,\mathrm{v}})\) with small domain \(\mathcal{E}_{1}=(E_{1},\mathbf{k}_{1},\Gamma_{1})\supseteq\mathcal{E}\) such that \(\mathbf{k}_{1}\) is linearly surjective and \[\pi\big{(}E\langle\!\langle a\rangle\!\rangle\big{)}\subseteq\mathbf{k}_{1}=\pi(E_{1})\qquad\text{and}\qquad v\big{(}E\langle\!\langle a\rangle\!\rangle^{\times}\big{)}\subseteq\Gamma_{1}=v(E_{1}^{\times}).\] In the same way, we extend \(\boldsymbol{f}_{1}\) to a good map \(\boldsymbol{f}_{2}\) with small domain \(\mathcal{E}_{2}=(E_{2},\boldsymbol{k}_{2},\Gamma_{2})\supseteq\mathcal{E}_{1}\) such that \(\boldsymbol{k}_{2}\) is linearly surjective and \[\pi\big{(}E_{1}\langle\!\langle a\rangle\!\rangle\big{)}\subseteq\boldsymbol{k}_{2}=\pi(E_{2})\qquad\text{and}\qquad v\big{(}E_{1}\langle\!\langle a\rangle\!\rangle^{\times}\big{)}\subseteq\Gamma_{2}=v(E_{2}^{\times}).\] Iterating this procedure and taking unions yields an extension of \(\boldsymbol{f}\) to a good map \(\boldsymbol{f}_{\omega}=(f_{\omega},f_{\omega,\mathrm{r}},f_{\omega,\mathrm{v}})\) with small domain \(\mathcal{E}_{\omega}=(E_{\omega},\boldsymbol{k}_{\omega},\Gamma_{\omega})\supseteq\mathcal{E}\) such that \(\boldsymbol{k}_{\omega}\) is linearly surjective and \[\pi\big{(}E_{\omega}\langle\!\langle a\rangle\!\rangle\big{)}=\boldsymbol{k}_{\omega}=\pi(E_{\omega})\qquad\text{and}\qquad v\big{(}E_{\omega}\langle\!\langle a\rangle\!\rangle^{\times}\big{)}=\Gamma_{\omega}=v(E_{\omega}^{\times}).\] This makes \(\big{(}E_{\omega}\langle\!\langle a\rangle\!\rangle,\mathcal{O}_{E_{\omega}\langle\!\langle a\rangle\!\rangle}\big{)}\) an immediate \(T^{\mathcal{O},\partial}\)-extension of \((E_{\omega},\mathcal{O}_{E_{\omega}})\), so by Fact 2.8 and Corollary 4.7 we can take a spherically complete immediate \(T^{\mathcal{O},\partial}\)-extension \((E_{\omega+1},\mathcal{O}_{E_{\omega+1}})\) of \(\big{(}E_{\omega}\langle\!\langle a\rangle\!\rangle,\mathcal{O}_{E_{\omega}\langle\!\langle a\rangle\!\rangle}\big{)}\) inside \(\mathcal{K}\), which is also an immediate \(T^{\mathcal{O},\partial}\)-extension of \((E_{\omega},\mathcal{O}_{E_{\omega}})\). Then \(\mathcal{E}_{\omega+1}=(E_{\omega+1},\boldsymbol{k}_{\omega},\Gamma_{\omega})\) is a good substructure of \(\mathcal{K}\). Likewise taking a spherically complete immediate \(T^{\mathcal{O},\partial}\)-extension of \(\big{(}f_{\omega}(E_{\omega}),f_{\omega}(\mathcal{O}_{E_{\omega}})\big{)}\) inside \(\mathcal{K}^{*}\), Theorem 4.2 and [14, Corollary 4.4] yield an extension of \(\boldsymbol{f}_{\omega}\) to a good map with small domain \(\mathcal{E}_{\omega+1}\) containing \(a\).
For the next result, we construe \((\boldsymbol{k},\Gamma;c)\) as a structure in the two-sorted language \(\mathcal{L}_{\mathrm{rv},c}=\mathcal{L}_{\mathrm{r}}\cup\mathcal{L}_{\mathrm{v}}\cup\{c\}\). **Corollary 6.4**.: _Suppose that \(\mathcal{K}\) and \(\mathcal{K}^{*}\) are \(T^{\flat}\)-henselian. Then \(\mathcal{K}\equiv\mathcal{K}^{*}\) if and only if \((\boldsymbol{k},\Gamma;c)\equiv(\boldsymbol{k}^{*},\Gamma^{*};c^{*})\)._ Proof.: The left-to-right direction is obvious. For the converse, suppose that \((\boldsymbol{k},\Gamma;c)\equiv(\boldsymbol{k}^{*},\Gamma^{*};c^{*})\). Let \(\mathbb{P}\) be the prime model of \(T\). Then \(\mathcal{E}=(\mathbb{P},\mathbb{P},\{0\})\) is a good substructure of \(\mathcal{K}\) and \(\mathcal{E}^{*}=(\mathbb{P},\mathbb{P},\{0\})\) is a good substructure of \(\mathcal{K}^{*}\), and we have a good map \(\mathcal{E}\to\mathcal{E}^{*}\), which is partial elementary by Theorem 6.3. For \(T=T_{\mathrm{an}}\), this yields a theorem claimed in the introduction. **Corollary 6.5**.: _Any \(T_{\mathrm{an}}^{\flat}\)-henselian monotone \(T_{\mathrm{an}}^{\mathcal{O},\flat}\)-model is elementarily equivalent to \(\boldsymbol{k}(\!(t^{\Gamma})\!)_{\mathrm{an},c}\) for some \(\boldsymbol{k}\models T_{\mathrm{an}}^{\flat}\), divisible ordered abelian group \(\Gamma\), and additive map \(c\colon\Gamma\to\boldsymbol{k}\)._ **Corollary 6.6**.: _Suppose that \(\mathcal{K}\) is \(T^{\flat}\)-henselian and let \(\mathcal{E}=(E,\boldsymbol{k}_{\mathcal{E}},\Gamma_{\mathcal{E}};\pi,v,c,\operatorname{ac})\subseteq\mathcal{K}\) be a \(T^{\flat}\)-henselian \(T_{\mathrm{mon}}^{\mathrm{ac}}\)-model such that \((\boldsymbol{k}_{\mathcal{E}},\Gamma_{\mathcal{E}};c)\preccurlyeq(\boldsymbol{k},\Gamma;c)\). Then \(\mathcal{E}\preccurlyeq\mathcal{K}\)._ Proof.: The identity map on \((E,\boldsymbol{k}_{\mathcal{E}},\Gamma_{\mathcal{E}})\) is a good map from \(\mathcal{E}\) to \(\mathcal{K}\), so \(\mathcal{E}\preccurlyeq\mathcal{K}\) by Theorem 6.3. We can eliminate angular components from the previous corollary. **Corollary 6.7**.: _Suppose that \((K,\boldsymbol{k},\Gamma;\pi,v,c)\) is \(T^{\flat}\)-henselian and let \((E,\boldsymbol{k}_{\mathcal{E}},\Gamma_{\mathcal{E}};\pi,v,c)\subseteq(K,\boldsymbol{k},\Gamma;\pi,v,c)\) be \(T^{\flat}\)-henselian such that \((\boldsymbol{k}_{\mathcal{E}},\Gamma_{\mathcal{E}};c)\preccurlyeq(\boldsymbol{k},\Gamma;c)\). Then \((E,\boldsymbol{k}_{\mathcal{E}},\Gamma_{\mathcal{E}};\pi,v,c)\preccurlyeq(K,\boldsymbol{k},\Gamma;\pi,v,c)\)._ Proof.: Let \(\Delta\) be a \(\Lambda\)-subspace of \(\Gamma\) such that \(\Gamma=\Gamma_{\mathcal{E}}\oplus\Delta\) and let \(B\) be a \(\Lambda\)-subspace of \(K^{>}\) such that \(K^{>}=E^{>}\cdot B\) is the direct sum of \(E^{>}\) and \(B\). By Lemma 4.10, take a section \(s_{E}\colon\Gamma_{\mathcal{E}}\to E^{>}\). By the proof of the same lemma, take a \(\Lambda\)-vector space embedding \(s_{B}\colon\Delta\to B\) such that \(v(s_{B}(\delta))=\delta\) for all \(\delta\in\Delta\). Then \(s\colon\Gamma\to K^{>}\) defined by \(s(\gamma+\delta)=s_{E}(\gamma)s_{B}(\delta)\) for \(\gamma\in\Gamma_{\mathcal{E}}\) and \(\delta\in\Delta\) is a section for \(K\). Letting \(\operatorname{ac}\) be the angular component induced by \(s\), the result now follows from Corollary 6.6.
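As a sample application, which we note only in passing and which relies on the observations recorded in Example 3.29: the union \(E=\bigcup_{n}E_{n}\) from that example and the full Hahn field \(L=\boldsymbol{k}(\!(t^{\Gamma})\!)_{\mathrm{an},c}\) (with \(c\) the zero map) are both monotone and \(T^{\flat}_{\mathrm{an}}\)-henselian (for \(L\), by the same appeal to Corollary 3.15), and they share the residue field \(\boldsymbol{k}\), the value group \(\Gamma=\bigcup_{n}\Gamma_{n}\), and the zero map \(c\). Hence Corollary 6.7 gives \(E\preccurlyeq L\), even though \(E\) is not \(T^{\flat}_{\mathrm{an}}\)-algebraically maximal.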
### Relative quantifier elimination and preservation of NIP

In this subsection we eliminate quantifiers relative to the two-sorted structure \((\boldsymbol{k},\Gamma;c)\) in the language \(\mathcal{L}_{\mathrm{rv},c}\coloneqq\mathcal{L}_{\mathrm{r}}\cup\mathcal{L}_{\mathrm{v}}\cup\{c\}\). Let \(x\) be an \(l\)-tuple of variables of sort f, \(y\) be an \(m\)-tuple of variables of sort r, and \(z\) be an \(n\)-tuple of variables of sort v. A formula in \((x,y,z)\) is **special** if it is of the form \[\psi\big{(}\operatorname{ac}(F_{1}(\mathcal{J}_{\partial}^{r}x)),\ldots,\operatorname{ac}(F_{s}(\mathcal{J}_{\partial}^{r}x)),y,v(G_{1}(\mathcal{J}_{\partial}^{r}x)),\ldots,v(G_{t}(\mathcal{J}_{\partial}^{r}x)),z\big{)}\] where \(F_{1},\ldots,F_{s},G_{1},\ldots,G_{t}\colon K^{(1+r)l}\to K\) are \(\mathcal{L}(\emptyset)\)-definable functions and \(\psi(u_{1},\ldots,u_{s},y,v_{1},\ldots,v_{t},z)\) is an \(\mathcal{L}_{\mathrm{rv},c}\)-formula with \(u_{1},\ldots,u_{s}\) of sort r and \(v_{1},\ldots,v_{t}\) of sort v. **Theorem 6.8**.: _Every \(\mathcal{L}_{3}\)-formula is equivalent to a special formula._ Proof.: For a \(T_{\mathrm{mon}}^{\mathrm{ac}}\)-model \(\mathcal{K}\) and \(a\in K^{l}\), \(d\in\boldsymbol{k}^{m}\), and \(\gamma\in\Gamma^{n}\), define the **special type** of \((a,d,\gamma)\) to be \[\operatorname{sptp}(a,d,\gamma)\ \coloneqq\ \{\theta(x,y,z):\mathcal{K}\models\theta(a,d,\gamma)\text{ and }\theta\text{ is special}\}.\] Now let \(\mathcal{K}\) and \(\mathcal{K}^{*}\) be \(T^{\flat}\)-henselian \(T_{\mathrm{mon}}^{\mathrm{ac}}\)-models and \(a\in K^{l}\), \(a^{*}\in(K^{*})^{l}\), \(d\in\boldsymbol{k}^{m}\), \(d^{*}\in(\boldsymbol{k}^{*})^{m}\), \(\gamma\in\Gamma^{n}\), \(\gamma^{*}\in(\Gamma^{*})^{n}\) such that \(\operatorname{sptp}(a,d,\gamma)=\operatorname{sptp}(a^{*},d^{*},\gamma^{*})\); it suffices to show that \((a,d,\gamma)\) and \((a^{*},d^{*},\gamma^{*})\) satisfy the same \(\mathcal{L}_{3}\)-formulas. Let \(E\) be the \(\mathcal{L}^{\partial}\)-substructure of \(K\) generated by \(a\), \(\Gamma_{\mathcal{E}}\) be the ordered \(\Lambda\)-subspace of \(\Gamma\) generated by \(\gamma\) and \(v(E^{\times})\), and \(\mathbf{k}_{\mathcal{E}}\) be the \(\mathcal{L}^{\partial}\)-substructure of \(\mathbf{k}\) generated by \(\operatorname{ac}(E)\), \(c(\Gamma_{\mathcal{E}})\), and \(d\). Then \(\mathcal{E}=(E,\mathbf{k}_{\mathcal{E}},\Gamma_{\mathcal{E}})\) is a good substructure of \(\mathcal{K}\). Likewise define a good substructure \(\mathcal{E}^{*}=(E^{*},\mathbf{k}_{\mathcal{E}^{*}},\Gamma_{\mathcal{E}^{*}})\) of \(\mathcal{K}^{*}\). Note that for an \(\mathcal{L}(\emptyset)\)-definable \(F\colon K^{(1+r)l}\to K\), we have \(F(\mathcal{J}_{\partial}^{r}a)=0\) if and only if \(\operatorname{ac}(F(\mathcal{J}_{\partial}^{r}a))=0\), and likewise in \(\mathcal{K}^{*}\). From this and the assumption that \((a,d,\gamma)\) and \((a^{*},d^{*},\gamma^{*})\) have the same special type we obtain an \(\mathcal{L}^{\partial}\)-isomorphism \(f\colon E\to E^{*}\) with \(f(a)=a^{*}\). We next get an isomorphism \(f_{\mathrm{v}}\colon\Gamma_{\mathcal{E}}\to\Gamma_{\mathcal{E}^{*}}\) of ordered \(\Lambda\)-vector spaces such that \(f_{\mathrm{v}}(\gamma)=\gamma^{*}\) and \(f_{\mathrm{v}}(vb)=v^{*}(f(b))\) for all \(b\in E^{\times}\). Finally, we get an \(\mathcal{L}^{\partial}\)-isomorphism \(f_{\mathrm{r}}\colon\mathbf{k}_{\mathcal{E}}\to\mathbf{k}_{\mathcal{E}^{*}}\) such that \(f_{\mathrm{r}}(d)=d^{*}\), \(f_{\mathrm{r}}(\operatorname{ac}(b))=\operatorname{ac}^{*}(f(b))\) for all \(b\in E\), and \(f_{\mathrm{r}}(c(\delta))=c^{*}(f_{\mathrm{v}}(\delta))\) for all \(\delta\in\Gamma_{\mathcal{E}}\).
By the assumption on special types, \((f_{\mathrm{r}},f_{\mathrm{v}})\) is a partial elementary map \((\mathbf{k},\Gamma;c)\to(\mathbf{k}^{*},\Gamma^{*};c^{*})\), so \((f,f_{\mathrm{r}},f_{\mathrm{v}})\) is a good map \(\mathcal{E}\to\mathcal{E}^{*}\). The result now follows from Theorem 6.3. It follows that \((\mathbf{k},\Gamma;c)\) is stably embedded in \(\mathcal{K}\): Any subset of \(\mathbf{k}^{m}\times\Gamma^{n}\) definable in \(\mathcal{K}\) is definable in \((\mathbf{k},\Gamma;c)\). Now we turn to preservation of NIP. Consider the following languages, each of which has the same three sorts as \(\mathcal{L}_{3}\): 1. The language \(\mathcal{L}^{\prime}\), where we drop the derivation on the field sort f. Explicitly, the field sort is an \(\mathcal{L}\)-structure, the residue field sort is still an \(\mathcal{L}^{\partial}\)-structure, the value group sort is still an ordered \(\Lambda\)-vector space, and we keep the maps \(\pi\), \(v\), \(c\), and \(\operatorname{ac}\). We let \(T^{\prime}\) be the restriction of \(T^{\operatorname{ac}}_{\mathrm{mon}}\) to the language \(\mathcal{L}^{\prime}\); that is, \(T^{\prime}\) consists of all \(\mathcal{L}^{\prime}\)-sentences which hold in all models of \(T^{\operatorname{ac}}_{\mathrm{mon}}\). 2. The language \(\mathcal{L}^{\operatorname{ac}}\), where we drop the derivation on both the field sort f and the residue field sort r, as well as \(c\). Explicitly, the field sort and the residue field sort are both \(\mathcal{L}\)-structures, the value group sort is still an ordered \(\Lambda\)-vector space, and we keep the maps \(\pi\), \(v\), and \(\operatorname{ac}\). We let \(T^{\operatorname{ac}}\) be the restriction of \(T^{\operatorname{ac}}_{\mathrm{mon}}\) to the language \(\mathcal{L}^{\operatorname{ac}}\). **Lemma 6.10**.: _The theory \(T^{\operatorname{ac}}\) is complete and has NIP. The residue field and value group are stably embedded in any model of \(T^{\operatorname{ac}}\)._ Proof.: Let \(\mathcal{L}^{s}\) extend \(\mathcal{L}^{\operatorname{ac}}\) by an additional map \(s\) from the value group sort to the field sort, and let \(T^{s}\) be the \(\mathcal{L}^{s}\)-theory which extends \(T^{\operatorname{ac}}\) by axioms stating that \(s\) is a section and that \(\operatorname{ac}\) is induced by \(s\). By Corollary 4.11, any model of \(T^{\operatorname{ac}}\) admits an expansion to a model of \(T^{s}\). The theory \(T^{s}\) is complete and has NIP [19, Corollary 2.2 and Proposition 4.2], and the residue field and value group are stably embedded in any model of \(T^{s}\) [19, Corollary 2.4], so the lemma follows. Note that in [19], the language \(\mathcal{L}^{s}\) does not contain a function symbol for the angular component, but this is nonetheless definable by \(\operatorname{ac}(y)=\pi(y/s(vy))\) for nonzero \(y\). **Theorem 6.11**.: _Let \(\mathcal{K}=(K,\mathbf{k},\Gamma;\pi,v,c,\operatorname{ac})\models T^{\operatorname{ac}}_{\mathrm{mon}}\) and suppose that \(\operatorname{Th}(\mathbf{k},\Gamma;c)\) has NIP. Then \(\operatorname{Th}(\mathcal{K})\) has NIP._ Proof.: By Lemma 6.10, the theory of the reduct \(\mathcal{K}|_{\mathcal{L}^{\operatorname{ac}}}\) has NIP, with stably embedded residue field and value group. By [15, Proposition 2.5], we may expand the residue field and value group by any additional NIP structure, and the theory of the resulting expansion of \(\mathcal{K}|_{\mathcal{L}^{\operatorname{ac}}}\) still has NIP. In particular, the theory of the intermediate reduct \(\mathcal{K}|_{\mathcal{L}^{\prime}}\) has NIP.
Suppose now that \(\operatorname{Th}(\mathcal{K})\) has IP. After replacing \(\mathcal{K}\) with an elementary extension, we find an \(\mathcal{L}_{3}\)-indiscernible sequence \((a_{i})_{i<\omega}\), a tuple of parameters \(b\), and an \(\mathcal{L}_{3}\)-formula \(\phi(x,y)\) such that \(\mathcal{K}\models\phi(a_{i},b)\) if and only if \(i\) is even. Note that \(x\) and \(y\) may span multiple sorts, and we let \(x_{\mathrm{f}}\), \(x_{\mathrm{r}}\), and \(x_{\mathrm{v}}\) denote the parts of \(x\) coming from the field sort, residue field sort, and value group sort; likewise for \(y_{\mathrm{f}}\), \(y_{\mathrm{r}}\), and \(y_{\mathrm{v}}\). By Theorem 6.8, we may assume that \(\phi(x,y)\) is of the form \[\psi\big{(}\operatorname{ac}(F_{1}(\mathcal{J}_{\partial}^{r}x_{\mathrm{f}},\mathcal{J}_{\partial}^{r}y_{\mathrm{f}})),\dots,\operatorname{ac}(F_{s}(\mathcal{J}_{\partial}^{r}x_{\mathrm{f}},\mathcal{J}_{\partial}^{r}y_{\mathrm{f}})),x_{\mathrm{r}},y_{\mathrm{r}},v(G_{1}(\mathcal{J}_{\partial}^{r}x_{\mathrm{f}},\mathcal{J}_{\partial}^{r}y_{\mathrm{f}})),\dots,v(G_{t}(\mathcal{J}_{\partial}^{r}x_{\mathrm{f}},\mathcal{J}_{\partial}^{r}y_{\mathrm{f}})),x_{\mathrm{v}},y_{\mathrm{v}}\big{)},\] where \(\psi\) is an \(\mathcal{L}_{\mathrm{rv},c}\)-formula. Now, we augment each \(a_{i}\) to a tuple \(a_{i}^{*}\) as follows: write \(a_{i}=a_{i,\mathrm{f}}a_{i,\mathrm{r}}a_{i,\mathrm{v}}\) where \(a_{i,\mathrm{f}}\), \(a_{i,\mathrm{r}}\), and \(a_{i,\mathrm{v}}\) denote the parts of \(a_{i}\) coming from the relevant sorts, and put \(a_{i}^{*}\coloneqq\mathcal{J}_{\partial}^{r}(a_{i,\mathrm{f}})a_{i,\mathrm{r}}a_{i,\mathrm{v}}\). We augment \(b\) to \(b^{*}\) similarly. Let \(x_{\mathrm{f}}^{*}\) be a field sort variable whose length is \(1+r\) times that of \(x_{\mathrm{f}}\), likewise for \(y_{\mathrm{f}}^{*}\), and let \(\phi^{*}\) be the \(\mathcal{L}^{\prime}\)-formula \[\phi^{*}(x^{*},y^{*})\ \coloneqq\ \psi\big{(}\operatorname{ac}(F_{1}(x_{\mathrm{f}}^{*},y_{\mathrm{f}}^{*})),\dots,\operatorname{ac}(F_{s}(x_{\mathrm{f}}^{*},y_{\mathrm{f}}^{*})),x_{\mathrm{r}},y_{\mathrm{r}},v(G_{1}(x_{\mathrm{f}}^{*},y_{\mathrm{f}}^{*})),\dots,v(G_{t}(x_{\mathrm{f}}^{*},y_{\mathrm{f}}^{*})),x_{\mathrm{v}},y_{\mathrm{v}}\big{)}.\] Then \(\mathcal{K}\models\phi^{*}(a_{i}^{*},b^{*})\) if and only if \(\mathcal{K}\models\phi(a_{i},b)\) (so if and only if \(i\) is even). Since the sequence \((a_{i})_{i<\omega}\) is \(\mathcal{L}_{3}\)-indiscernible, the augmented sequence \((a_{i}^{*})_{i<\omega}\) is \(\mathcal{L}^{\prime}\)-indiscernible, so this contradicts the fact that the theory of \(\mathcal{K}|_{\mathcal{L}^{\prime}}\) has NIP. **Remark 6.12**.: Instead of appealing to the theory \(T^{s}\) to show that \(T^{\mathrm{ac}}\) has NIP, and then using [15, Proposition 2.5] to deduce that the \(\mathcal{L}^{\prime}\)-reduct of \(\mathcal{K}\) has NIP, one could likely show that \(\mathrm{Th}(\mathcal{K}|_{\mathcal{L}^{\prime}})\) has NIP directly, by building an appropriate back-and-forth system between suitable substructures of models of \(T^{\prime}\) and applying [15, Theorem 2.3]. From there, the rest of the proof would proceed as above. It seems likely that distality and \(\mathrm{NTP}_{2}\) also transfer from \(\mathrm{Th}(\boldsymbol{k},\Gamma;c)\) to \(\mathrm{Th}(\mathcal{K})\), but in both cases, certain aspects of the above proof do not work.

## 7. The model completion of monotone \(T\)-convex \(T\)-differential fields

In this section, we return to the one-sorted setting.
Let \(T_{\mathrm{mon}}\) be the theory of monotone \(T\)-convex \(T\)-differential fields, in the language \(\mathcal{L}^{\mathcal{O},\beta}\); note that unlike in \(T^{\mathcal{O},\beta}\), we do not require the \(T\)-convex valuation ring to be proper. Let \(T_{\mathrm{mon}}^{*}\) be the theory of \(T^{\flat}\)-henselian \(T\)-convex \(T\)-differential fields with many constants, nontrivial valuation, and generic \(T\)-differential residue field (that is, with differential residue field a model of \(T_{\mathcal{G}}^{\flat}\); see Section 2.3). Note that any model of \(T_{\mathrm{mon}}^{*}\) is monotone, as it has many constants and small derivation. The purpose of this section is to show that \(T_{\mathrm{mon}}^{*}\) is the model completion of \(T_{\mathrm{mon}}\). For this, we need the following general proposition on residue field extensions. In this section, \(K=(K,\mathcal{O},\mathfrak{d})\) is a \(T\)-convex \(T\)-differential field with small derivation. The only difference with the earlier standing assumption is that we may have \(\mathcal{O}=K\) (equivalently, \(\Gamma=\{0\}\)). **Proposition 7.1**.: _Let \(E\) be a \(T^{\flat}\)-extension of \(\boldsymbol{k}\). Then there is a \(T\)-convex \(T\)-differential field extension \(L\) of \(K\) with small derivation and the following properties:_ 1. \(\boldsymbol{k}_{L}\) _is_ \(\mathcal{L}^{\flat}(\boldsymbol{k})\)_-isomorphic to_ \(E\)_;_ 2. \(\Gamma_{L}=\Gamma\)_;_ 3. _If_ \(K^{*}\) _is any_ \(T^{\flat}\)_-henselian_ \(T^{\mathcal{O},\beta}\)_-extension of_ \(K\)_, then any_ \(\mathcal{L}^{\flat}(\boldsymbol{k})\)_-embedding_ \(\imath\colon\boldsymbol{k}_{L}\to\mathrm{res}(K^{*})\) _lifts to an_ \(\mathcal{L}^{\mathcal{O},\beta}(K)\)_-embedding_ \(\jmath\colon L\to K^{*}\)_._ Proof.: If \(\Gamma=\{0\}\), then we may take \(L=E\), so we will assume that \(\Gamma\neq\{0\}\). We may further assume that \(E=\boldsymbol{k}\langle\!\langle a\rangle\!\rangle\), where \(a\not\in\boldsymbol{k}\). First, we build the extension \(L\), and then we verify that it is indeed a \(T^{\mathcal{O},\beta}\)-extension \(L\) of \(K\) satisfying (i)-(iii). Consider the case that \(a\) is \(T^{\flat}\)-transcendental over \(\boldsymbol{k}\), so \(E=\boldsymbol{k}\langle a,a^{\prime},\ldots\rangle\). We define \(T^{\mathcal{O}}\)-extensions \(K=L_{0}\subseteq L_{1}\subseteq\cdots\) as follows: for each \(i\), let \(L_{i+1}\coloneqq L_{i}\langle b_{i}\rangle\), where \(b_{i}\) realizes the cut \[\{y\in L_{i}:y<\mathcal{O}_{L_{i}}\text{ or }y\in\mathcal{O}_{L_{i}}\text{ and } \bar{y}<a^{(i)}\},\] and equip \(L_{i+1}\) with the \(T\)-convex valuation ring \[\mathcal{O}_{L_{i+1}}\ \coloneqq\ \big{\{}y\in L_{i+1}:|y|<d\text{ for all }d\in L_{i}\text{ with }d>\mathcal{O}_{L_{i}}\big{\}}.\] By [7, Main Lemma 3.6], each \(L_{i+1}\) is indeed a \(T^{\mathcal{O}}\)-extension of \(L_{i}\). Let \(L\coloneqq\bigcup_{i}L_{i}\), and using Fact 2.5, extend \(\mathfrak{d}\) to a \(T\)-derivation on \(L\) such that \(\mathfrak{d}b_{n}=b_{n+1}\) for each \(n\). Now, consider the case that \(a\) is \(T^{\flat}\)-algebraic over \(\boldsymbol{k}\). Let \(r>0\) be minimal with \(E=\boldsymbol{k}\langle\!\langle\mathfrak{d}_{\mathfrak{d}}^{\tau-1}a\rangle\!\rangle\) and build \(T^{\mathcal{O}}\)-extensions \(K=L_{0}\subseteq L_{1}\subseteq\cdots\subseteq L_{r}\) as in the \(T^{\flat}\)-transcendental case, so \(L_{i+1}\coloneqq L_{i}\langle b_{i}\rangle\) for each \(i<r\), where \(b_{i}\) is a lift of \(a^{(i)}\). 
This time, we put \(L\coloneqq L_{r}=K\langle b_{0},\ldots,b_{r-1}\rangle\). Let \(K_{0}\subseteq K\) be a \(T\)-lift of \(\boldsymbol{k}\), so there is an \(\mathcal{L}\)-isomorphism \(K_{0}\langle b_{0},\ldots,b_{r-1}\rangle\to E\) which agrees with the residue map on \(K_{0}\) and sends \(b_{i}\) to \(a^{(i)}\) for each \(i<r\). Let \(b_{r}\in K_{0}\langle b_{0},\ldots,b_{r-1}\rangle\) be the unique element which maps to \(a^{(r)}\in E\), and again using Fact 2.5, extend \(\mathfrak{d}\) to a \(T\)-derivation on \(L\) such that \(\mathfrak{d}b_{i}=b_{i+1}\) for \(i<r\). We claim that \(L\) has small derivation. Put \(b\coloneqq b_{0}\), so \(L=K\langle\!\langle b\rangle\!\rangle\). If \(a\) is \(T^{\flat}\)-algebraic over \(\boldsymbol{k}\), then let \(r\) be as above, and if \(a\) is \(T^{\flat}\)-transcendental over \(\boldsymbol{k}\), then let \(r\) be arbitrary. Let \(F\colon K^{r}\to K\) be an \(\mathcal{L}(K)\)-definable function with \(F(\mathscr{J}^{r-1}_{\partial}b)\prec 1\). We need to verify that \(F(\mathscr{J}^{r-1}_{\partial}b)^{\prime}\prec 1\). By [10, Lemma 2.12], there is some \(\mathcal{L}(K)\)-definable function \(F^{[\partial]}\colon K^{r}\to K\) such that \[F(\mathscr{J}^{r-1}_{\partial}b)^{\prime}\ =\ F^{[\partial]}(\mathscr{J}^{r-1}_{\partial}b)+\frac{\partial F}{\partial Y_{0}}(\mathscr{J}^{r-1}_{\partial}b)b^{\prime}+\cdots+\frac{\partial F}{\partial Y_{r-1}}(\mathscr{J}^{r-1}_{\partial}b)b^{(r)}.\] For each \(i<r\), we note that \(b^{(i)}\asymp 1\) and that \(b^{(i)}\not\sim g\) for any \(g\in K\langle(b^{(j)})_{j<r,j\neq i}\rangle\), so Fact 4.12 gives us that \(\frac{\partial F}{\partial Y_{i}}(\mathscr{J}^{r-1}_{\partial}b)\preccurlyeq F(\mathscr{J}^{r-1}_{\partial}b)\prec 1\). Since \(b^{(i+1)}\prec 1\), we conclude that \(\frac{\partial F}{\partial Y_{i}}(\mathscr{J}^{r-1}_{\partial}b)b^{(i+1)}\prec 1\), so it remains to show that \(F^{[\partial]}(\mathscr{J}^{r-1}_{\partial}b)\prec 1\). Suppose that this is not the case, and let \(U\subseteq K^{r}\) be the \(\mathcal{L}^{\mathcal{O}}(K)\)-definable set \[U\ \coloneqq\ \Big{\{}u\in\mathcal{O}^{r}:F^{[\partial]}(u)\succcurlyeq 1\text{ and }F(u),\frac{\partial F}{\partial Y_{0}}(u),\ldots,\frac{\partial F}{\partial Y_{r-1}}(u)\prec 1\Big{\}}.\] Note that \(U\) is nonempty, since \(L\) is an elementary \(T^{\mathcal{O}}\)-extension of \(K\) and \(\mathscr{J}^{r-1}_{\partial}(b)\in U^{L}\). Take a tuple \(u=(u_{0},\ldots,u_{r-1})\in U\). Then \(u_{0}^{\prime},\ldots,u_{r-1}^{\prime}\prec 1\) since \(K\) has small derivation. But then \[F(u)^{\prime}\ =\ F^{[\partial]}(u)+\frac{\partial F}{\partial Y_{0}}(u)u_{0}^{\prime}+\cdots+\frac{\partial F}{\partial Y_{r-1}}(u)u_{r-1}^{\prime}\ \asymp\ F^{[\partial]}(u)\ \succcurlyeq 1,\] contradicting that \(K\) has small derivation. This concludes the proof of the claim. With the claim taken care of, we see that \(L\) is a \(T^{\mathcal{O},\mathfrak{I}}\)-extension of \(K\) and that the differential residue field \(\boldsymbol{k}_{L}\) is \(\mathcal{L}^{\mathfrak{I}}(\boldsymbol{k})\)-isomorphic to \(E\). By the Wilkie inequality, we have that \(\Gamma_{L}=\Gamma\). Let \(K^{*}\) be a \(T^{\mathfrak{I}}\)-henselian \(T^{\mathcal{O},\mathfrak{I}}\)-extension of \(K\) and let \(\imath\colon\boldsymbol{k}_{L}\to\operatorname{res}(K^{*})\) be an \(\mathcal{L}^{\mathfrak{I}}(\boldsymbol{k})\)-embedding.
The argument that \(\imath\) lifts to an \(\mathcal{L}^{\mathcal{O},\mathfrak{I}}(K)\)-embedding \(L\to K^{*}\) is very similar to the proof of part (iii) of Proposition 3.10. If \(a\) is \(T^{\mathfrak{I}}\)-transcendental over \(\boldsymbol{k}\), then for any lift \(b^{*}\) of \(\imath(a)\), the unique \(\mathcal{L}^{\mathcal{O},\mathfrak{I}}(K)\)-embedding \(L\to K^{*}\) which sends \(b\) to \(b^{*}\) is a lift of \(\imath\). Suppose that \(a\) is \(T^{\mathfrak{I}}\)-algebraic over \(\boldsymbol{k}\). Then by construction \(b^{(r)}=F(\mathfrak{I}_{\mathfrak{I}}^{\tau-1}b)\) for some minimal \(r\) and some \(\mathcal{L}(K_{0})\)-definable function \(F\), where \(K_{0}\subseteq K\) is a lift of \(\boldsymbol{k}\). As \(K^{*}\) is \(T^{\mathfrak{I}}\)-henselian, we find \(b^{*}\in K^{*}\) which lifts \(\imath(a)\) and satisfies the identity \((b^{*})^{(r)}=F(\mathfrak{I}_{\mathfrak{I}}^{\tau-1}b^{*})\). Then the unique \(\mathcal{L}^{\mathcal{O},\mathfrak{I}}(K)\)-embedding \(L\to K^{*}\) which sends \(b\) to \(b^{*}\) is a lift of \(\imath\). **Corollary 7.2**.: _Let \(K\models T_{\mathrm{mon}}\). Then \(K\) has a \(T^{\mathcal{O},\mathfrak{I}}\)-extension \(L\models T_{\mathrm{mon}}^{*}\) which embeds over \(K\) into any \(|K|^{+}\)-saturated \(K^{*}\models T_{\mathrm{mon}}^{*}\) extending \(K\)._ Proof.: We construct \(L\) in three steps. First, we build a nontrivially valued \(T_{\mathrm{mon}}\)-model \(K_{1}\) extending \(K\). If \(K\) itself is nontrivially valued, then we take \(K_{1}\coloneqq K\). If \(K\) is trivially valued, then let \(K_{1}\coloneqq K\langle a\rangle\) where \(a>K\). In this case, we equip \(K_{1}\) with the convex hull of \(K\) as its \(T\)-convex valuation ring and, using Fact 2.5, we uniquely extend the \(T\)-derivation on \(K\) to a \(T\)-derivation on \(K_{1}\) such that \(\partial a=0\). Then \(\operatorname{res}(K_{1})=\boldsymbol{k}\), and it is fairly easy to check that \(K_{1}\) is monotone, arguing as in Proposition 4.13. In fact, one can build \(K_{1}\) explicitly using Proposition 4.13 by taking \(K\) in place of \(\boldsymbol{k}\), taking \(\Lambda\) in place of \(\Gamma\), and taking \(c\colon\Lambda\to K\) to be the zero map. Next we use Fact 2.6 to take a \(T^{\mathcal{O}}_{\mathfrak{I}}\)-model \(E\) extending \(\boldsymbol{k}\) with \(|E|=|\boldsymbol{k}|\). Take a \(T^{\mathcal{O},\mathfrak{I}}\)-extension \(K_{2}\) of \(K_{1}\) as in Proposition 7.1, so \(\Gamma_{K_{2}}=\Gamma_{K_{1}}\) and \(\operatorname{res}(K_{2})\) is \(\mathcal{L}^{\mathfrak{I}}(\boldsymbol{k})\)-isomorphic to \(E\). Finally, we use Fact 2.8 to take a spherically complete immediate \(T^{\mathcal{O},\mathfrak{I}}\)-extension \(L\) of \(K_{2}\). Then \(\Gamma_{L}=\Gamma_{K_{1}}\), so \(L\) is monotone by [1, Corollary 6.3.6]. Moreover, \(L\) is \(T^{\mathfrak{I}}\)-henselian by Corollary 3.15, so it has many constants by Corollary 3.3 (it follows easily from the axioms of \(T^{\mathfrak{I}}_{\mathfrak{I}}\) that \((\boldsymbol{k}_{L}^{\times})^{\dagger}=\boldsymbol{k}_{L}\)). Now let \(K^{*}\models T_{\mathrm{mon}}^{*}\) be a \(|K|^{+}\)-saturated \(T^{\mathcal{O},\mathfrak{I}}\)-extension of \(K\). In the case that \(K_{1}=K\langle a\rangle\) where \(a>K\) and \(a^{\prime}=0\), we take \(a^{*}\in C_{K^{*}}\) with \(a^{*}>\mathcal{O}_{K^{*}}\) and we let \(\imath_{1}\colon K_{1}\to K^{*}\) be the \(\mathcal{L}(K)\)-embedding which sends \(a\) to \(a^{*}\). By Fact 2.5 and [7, Corollary 3.7], \(\imath_{1}\) is even an \(\mathcal{L}^{\mathcal{O},\mathfrak{I}}\)-embedding. 
If \(K_{1}=K\), then we just take \(\imath_{1}\) to be the identity on \(K\). Next, as \(\operatorname{res}(K^{*})\) is a \(|\boldsymbol{k}|^{+}\)-saturated model of \(T^{\mathfrak{I}}_{\mathfrak{I}}\), the inclusion \(\boldsymbol{k}\subseteq\operatorname{res}(K^{*})\) extends to an \(\mathcal{L}^{\mathfrak{I}}\)-embedding \(\operatorname{res}(K_{2})\to\operatorname{res}(K^{*})\) by Fact 2.6. By Proposition 7.1, we may lift this to an \(\mathcal{L}^{\mathcal{O},\mathfrak{I}}(K)\)-embedding \(\imath_{2}\colon K_{2}\to K^{*}\) which extends \(\imath_{1}\). Finally, note that \(K^{*}\) is \(|\Gamma_{K_{2}}|^{+}\)-saturated, since \(|\Gamma_{K_{2}}|=\max\{|\Gamma_{K}|,|\Lambda|\}\leqslant|K|\), so \(\imath_{2}\) extends to an \(\mathcal{L}^{\mathcal{O},\mathfrak{I}}(K)\)-embedding \(\jmath\colon L\to K^{*}\) by Corollary 4.7. **Theorem 7.3**.: \(T_{\mathrm{mon}}^{*}\) _is the model completion of \(T_{\mathrm{mon}}\). Consequently, \(T_{\mathrm{mon}}^{*}\) is complete, and if \(T\) has quantifier elimination and a universal axiomatization, then \(T_{\mathrm{mon}}^{*}\) has quantifier elimination._ Proof.: Let \(E\models T_{\mathrm{mon}}\), and let \(K\) and \(K^{*}\) be models of \(T_{\mathrm{mon}}^{*}\) extending \(E\). In light of Corollary 7.2, it is enough to show that \(K\) and \(K^{*}\) are \(\mathcal{L}^{\mathcal{O},\mathfrak{I}}(E)\)-elementarily equivalent. We may assume that both \(K\) and \(K^{*}\) are \(|E|^{+}\)-saturated. Let \(L\models T_{\mathrm{mon}}^{*}\) be the \(T^{\mathcal{O},\mathfrak{I}}\)-extension of \(E\) provided by Corollary 7.2; since \(K\) and \(K^{*}\) are \(|E|^{+}\)-saturated, \(L\) embeds over \(E\) into both, and iterating this embedding property in a back-and-forth fashion yields that \(K\) and \(K^{*}\) are \(\mathcal{L}^{\mathcal{O},\mathfrak{I}}(E)\)-elementarily equivalent. Now suppose that \(T\) has quantifier elimination and a universal axiomatization. Then \(T_{\mathrm{mon}}\) has a universal axiomatization, so \(T_{\mathrm{mon}}^{*}\) has quantifier elimination. **Corollary 7.4**.: _For every \(\mathcal{L}^{\mathcal{O},\vartheta}\)-formula \(\varphi\) there is some \(r\) and some \(\mathcal{L}^{\mathcal{O}}\)-formula \(\tilde{\varphi}\) such that_ \[T_{\mathrm{mon}}^{*}\ \vdash\ \forall x\big{(}\varphi(x)\leftrightarrow\tilde{\varphi}\big{(}\mathscr{G}_{\vartheta}^{r}(x)\big{)}\big{)}.\] Proof.: Argue as in [10, Lemma 4.11], using that every \(\mathcal{L}^{\mathcal{O}}\)-term is an \(\mathcal{L}\)-term. **Corollary 7.5**.: \(T_{\mathrm{mon}}^{*}\) _is distal._ Proof.: Argue as in [10, Theorem 4.15], using Corollary 7.4 and the fact that \(T^{\mathcal{O}}\) is distal. **Corollary 7.6**.: _Let \(K\models T_{\mathrm{mon}}^{*}\) and let \(C\) be the constant field of \(K\). For every \(\mathcal{L}^{\mathcal{O},\vartheta}(K)\)-definable set \(A\subseteq C^{n}\), there is an \(\mathcal{L}^{\mathcal{O}}(K)\)-definable set \(B\subseteq K^{n}\) such that \(A=B\cap C^{n}\). In particular, the induced structure on \(C\) is weakly o-minimal._ Proof.: Argue as in [10, Lemma 4.23], using Corollary 7.4. ## Acknowledgements Research for this paper was conducted in part at the Fields Institute for Research in Mathematical Sciences. This material is based upon work supported by the National Science Foundation under Grants DMS-2103240 (Kaplan) and DMS-2154086 (Pynn-Coates). Pynn-Coates was also supported by the Austrian Science Fund (FWF) under project ESP 450.
2309.07924
Solving the Problem of Induction
This article solves Hume's problem of induction using a probabilistic approach. From the probabilistic perspective, the core task of induction is to estimate the probability of an event and judge the accuracy of the estimation. Following this principle, the article provides a method for calculating the confidence on a given confidence interval and, furthermore, the degree of confirmation. The law of large numbers shows that as the number of experiments tends to infinity, for any small confidence interval, the confidence approaches 100\% in a probabilistic sense, thus Hume's problem of induction is solved. The foundation of this method is the existence of probability, or in other words, the identity of physical laws. The article points out that it cannot be guaranteed that all things possess identity, but humans only concern themselves with things that possess identity, and identity is built on the foundation of pragmatism. After solving Hume's problem, a novel demarcation of science is proposed, providing science with the legitimacy of being referred to as truth.
Xuezhi Yang
2023-09-08T11:13:33Z
http://arxiv.org/abs/2309.07924v1
# Solving the Problem of Induction ###### Abstract This article solves Hume's problem of induction using a probabilistic approach. From the probabilistic perspective, the core task of induction is to estimate the probability of an event and judge the accuracy of the estimation. Following this principle, the article provides a method for calculating the confidence on a given confidence interval and, furthermore, the degree of confirmation. The law of large numbers shows that as the number of experiments tends to infinity, for any small confidence interval, the confidence approaches 100% in a probabilistic sense, thus Hume's problem of induction is solved. The foundation of this method is the existence of probability, or in other words, the identity of physical laws. The article points out that it cannot be guaranteed that all things possess identity, but humans only concern themselves with things that possess identity, and identity is built on the foundation of pragmatism. After solving Hume's problem, a novel demarcation of science is proposed, providing science with the legitimacy of being referred to as truth. ## I Introduction Hume's problem, named after the Scottish philosopher David Hume, represents a philosophical conundrum which pertains to the rationality of induction and the foundation of science. ### _What is Hume's Problem?_ Hume pointed out that inductive reasoning involves making predictions about future events based on past experiences. However, this kind of reasoning cannot logically ensure its accuracy because the future may differ from the past [1]. Hume argued that no matter how many white swans we have observed, we cannot logically prove that "all swans are white" because future swans may have different colors. This raises the question of why we should believe in the effectiveness of inductive reasoning, especially considering its widespread use in scientific research. The problem of induction is also illustrated vividly by the story of Russell's turkey. A turkey in a village observes the same thing for many days: a hand comes to feed it in the morning. For the turkey, this experience accumulates into such a strong pattern that it develops a high degree of confidence in the proposition "it will be fed every morning", until Thanksgiving arrives and shatters its belief [2]. Closely related to induction is the concept of causality. Causality implies that one event occurs as a result of another. Hume argued that people tend to believe that if one event consistently follows another, the former is the cause of the latter. However, Hume contended that this view is not drawn through logical reasoning but rather is based on habit and psychological inclinations stemming from experience. Thus Hume believed that causality, which involves necessary connections between cause and effect, cannot be established through empirical observation or deduction. ### _Responses to Hume's Problem in History_ #### I-B1 Synthetic a priori To address Hume's problem, Kant argued in his famous work [3] that synthetic a priori knowledge is possible. Kant reversed the empiricist programme espoused by Hume and argued that experience only comes about through the 12 a priori categories of understanding, including the concept of causation. Causality becomes, in Kant's view, a priori and therefore universally necessary, thus sidestepping Hume's skepticism. However, if causality cannot be derived through induction, Kant's direct assertion of causality as an a priori category may seem even more arbitrary.
#### I-B2 Probabilistic Approaches The Bayes-Laplacian solution and Carnap's confirmation theory are probabilistic approaches to Hume's problem, and their foundational ideas are essentially the same. Carnap's goal is to provide a more precise measure of confirmation for scientific theories to assess their credibility [4]. The measure he introduced is \[P(h|e)=\frac{P(e|h)\cdot P(h)}{P(e)}, \tag{1}\] where \(e\) is the observational evidence and \(h\) is a hypothesis. A hypothesis can achieve a higher probability when supported by evidence, making it more confirmed. If the evidence is inconsistent with a hypothesis, the degree of confirmation will be lower. Eq. (1) is actually a conditional probability expressed in the Bayesian form. The Rule of Succession [5] is a formula introduced by Laplace when working on the "sunrise problem". The question here is: if event \(A\) occurred \(N_{A}\) times in \(N\) trials, how likely is it to happen in the next trial? Laplace modelled this problem as a random experiment of drawing balls, with replacement, from an urn. First, a random probability \(p\) is produced uniformly on the [0,1] interval and a fraction \(p\) of the balls in the urn is set black; then balls are drawn. Event \(A\) is that the drawn ball is black. In Carnap's language, the evidence \(e\)= "event \(A\) occurred \(N_{A}\) times in \(N\) trials", and the hypothesis \(h\)= "event \(A\) will happen next time", then the degree of confirmation for \(h\) based on \(e\) is \(P(h|e)\). Laplace worked out the result as \[P(h|e)=\frac{N_{A}+1}{N+2}, \tag{2}\] which is widely accepted because it matches people's expectations. For the sunrise problem, \(N_{A}=N\), then \(P(h|e)=(N+1)/(N+2)\), which is smaller than 1 and approaches 1 when \(N\) approaches infinity. This matches the intuition that finite evidence does not conclusively confirm a hypothesis, but more evidence improves the degree of confirmation. Although promising and influential, Carnap's confirmation theory encountered a wide range of criticisms from scholars, among whom Popper, Quine and Putnam were the most prominent figures. The author will not elaborate on their controversies but point out the essential problem with this approach. Carnap and Laplace's theory is based on a random experiment with an a priori probability distribution. But are we really doing such an experiment? If so, it is easy to calculate that \(e\) is an event with a very small probability. If the sun rises with a random probability \(p\) which is uniformly distributed on [0,1], it will be very rare to see 100 successive sunrises. However, the sun has kept rising every day for millions of years. So the essential problem with this approach is that the model of the random experiment is wrong. That is why Carnap could not work out a clear and cogent theory of confirmation. #### I-B3 Falsificationism Popper proposed "falsifiability" as the demarcation of science from pseudoscience [6]. In Popper's view, a scientific theory is a conjecture or hypothesis that can never be finally confirmed but can be falsified at any time. Propositions like "all swans are white" are not inductively derived but are conjectures. If all observed swans so far are white, the conjecture is provisionally accepted. However, once a black swan is discovered, the proposition is falsified.
From a logical perspective, if a theory \(h\) implies a specific observable statement \(e\), verification infers \(h\) from \(e\), which commits the fallacy of affirming the consequent. On the other hand, falsificationism infers \(\neg h\) from \(\neg e\), which is logically sound (modus tollens). This logical rigor has contributed to the widespread influence of falsificationism. The idea that "science can only be falsified, not verified" has become something of a doctrine. However, falsificationism has its problems, with a major critique coming from Quine's holism [7]. Holism suggests that when scientists design an experiment to test a scientific theory, they rely on various pieces of experimental equipment, which themselves embody a set of theories. For example, observing the trajectories of planets requires telescopes, which involve optical theories. In Popper's view, these theories used in experiments are treated as background knowledge and assumed to be correct. But from Quine's perspective, background knowledge can be wrong. If experimental results do not agree with theoretical predictions, it might be due to problems with the experimental equipment rather than the theory. Thus, using experiments to falsify a theory becomes problematic. In addition to the critique from holism, falsificationism has deeper weaknesses: how do you prove counterexamples are true? For instance, to falsify the proposition "all swans are white", you only need to find one black swan. Suppose you observe a black swan: how can you be sure that the statement "this is a black swan" is true? Popper did not realize that the correctness of this statement relies on the universal proposition "my senses are always correct", so we have no way to evade Hume and ultimately have to face the problem of induction. This article uses modern probability theory to address the problem of induction and provides a method for calculating the confidence of a proposition over a given confidence interval, thereby solving Hume's problem. The article emphasizes that identity is the foundation of this solution. Identity cannot be proven but is established on the basis of pragmatism. After addressing the problem of induction, the article proposes a novel demarcation criterion for science based on probabilistic verification. ## II Solving Hume's Problem Carnap attempted to solve Hume's problem using a probabilistic approach, which was a step in the right direction. Unfortunately, he failed to establish a viable inductive logic because he got the problem wrong in the first place. ### _The Concept of Probability_ From a pure mathematical perspective, probability theory is an axiomatic system. In 1933, the Soviet mathematician Kolmogorov first provided the measure-theoretic definition of probability [8]. Axiomatic Definition of Probability: Let \(\Omega\) be the sample space of a random experiment, and assign a real number \(P(A)\) to every event \(A\). \(P(A)\) is the probability of event \(A\) if \(P(\cdot)\) satisfies the following properties: * Non-negativity: \(\forall A,P(A)\geq 0\); * Normalization: \(P(\Omega)=1\); * Countable Additivity: If \(A_{m}\bigcap A_{n}=\emptyset\) for \(m\neq n\), \(P\left(\bigcup\limits_{n=1}^{+\infty}A_{n}\right)=\sum\limits_{n=1}^{+\infty}P(A_{n})\).
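To make the axioms concrete, the following is a minimal Python sketch (illustrative only; the example sample space and names are ours, not from the text) that checks the three properties for a probability assignment on a finite sample space, where countable additivity reduces to finite additivity.

```python
from itertools import combinations

# A probability assignment on a finite sample space (a fair six-sided die).
omega = {1, 2, 3, 4, 5, 6}
weights = {outcome: 1 / 6 for outcome in omega}

def prob(event):
    """P(A) for an event A, i.e. a subset of the sample space."""
    return sum(weights[outcome] for outcome in event)

def all_events(space):
    """Every subset of the sample space."""
    items = sorted(space)
    return [set(c) for r in range(len(items) + 1) for c in combinations(items, r)]

# Non-negativity: P(A) >= 0 for every event A.
assert all(prob(a) >= 0 for a in all_events(omega))

# Normalization: P(Omega) = 1.
assert abs(prob(omega) - 1.0) < 1e-12

# Additivity for disjoint events: P(A u B) = P(A) + P(B) when A and B are disjoint.
a, b = {1, 2}, {5, 6}
assert abs(prob(a | b) - (prob(a) + prob(b))) < 1e-12
print("The assignment satisfies all three axioms.")
```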
Although the debate on the interpretation of probability is still going on in philosophy [9], probability theory has evolved into a rigorous branch of mathematics and has served as the foundation of information theory, which has been guiding the development of modern communication ever since its birth. When using the language of probability, a universal proposition is transformed into the probability of a singular proposition. For example, "all swans are white" is expressed in probability as "the probability that a swan is white is 100%". The central task of induction from the probability perspective is to estimate this probability and evaluate the accuracy of the estimation. ### _How to estimate a probability?_ If the evidence \(e\) is "event \(A\) occurs \(N_{A}\) times in an \(N\)-fold random experiment", our task is to estimate the probability of event \(A\), which is \(p\). Maximum likelihood is a well-accepted criterion for optimal estimation. Under this criterion, we choose the estimate that maximizes the probability of \(e\), which is expressed as \[P(e)=\binom{N}{N_{A}}p^{N_{A}}(1-p)^{N-N_{A}}. \tag{3}\] Setting \[\frac{dP(e)}{dp}=0, \tag{4}\] we get the maximum likelihood estimate of \(p\), denoted as \[\hat{p}=\frac{N_{A}}{N}. \tag{5}\] To assess the accuracy of this estimate, we need the concept of confidence on a confidence interval. ### _Confidence on Confidence Interval_ Because \(p\) is a real number, the estimate has essentially no chance of being exactly equal to \(p\). Therefore, confidence should be defined on a confidence interval with a non-zero width. Confidence is the probability that the true value falls within this interval. From the perspective of maximum likelihood estimation, the optimization goal should be: given a width of the confidence interval, find the confidence interval that maximizes the confidence. Earlier, we obtained the maximum likelihood estimate of \(p\) as \(N_{A}/N\). In this paper, we simplify the problem by fixing the confidence interval as \(D\) such that \(N_{A}/N\in D\). To calculate the confidence, we first assume that, in the absence of any observed facts, \(p\) is uniformly distributed on \([0,1]\). Under this assumption, for each possible probability, we calculate the probability of \(e\) to form a curve. The ratio of the area under the curve within \(D\) to the total area under the curve is the probability that the true value falls within \(D\), which is the confidence. Then, the confidence is given by \[c=\frac{\int_{D}x^{N_{A}}(1-x)^{N-N_{A}}dx}{\int_{0}^{1}x^{N_{A}}(1-x)^{N-N_{A}}dx}. \tag{6}\] Although the uniform prior over \(p\) is the same as the Principle of Indifference in the Rule of Succession, the ideology is different. Unlike Laplace's solution, in this approach the random experiment is that event \(A\) happens with a fixed probability rather than a random one; we simply do not know the true value of \(p\). The task here is to use evidence \(e\) to estimate \(p\). The uniform prior is used here to define confidence, which is a subjective criterion of evaluation. In contrast, it is constitutive of the random experiment in Laplace's approach [10]. Let's take a simple example. Based on the fact that "all \(N\) swans were observed to be white", calculate the confidence of "the probability that a swan is white is greater than 90%", where the confidence interval is \([0.9,1]\).
The expression for this confidence is \[c=\frac{\int_{0.9}^{1}x^{N}dx}{\int_{0}^{1}x^{N}dx}=1-0.9^{N+1}. \tag{7}\] As shown in Fig. 1, when \(N=10\), the confidence is approximately 68.6%. When \(N=30\), the confidence increases to 96.2%. If the confidence interval is reduced to \([0.99,1]\), then \(N=300\) is required to achieve a confidence of 95.1%. With the concepts of confidence and confidence interval, we can make a response to Russell's turkey. After two months of feeding by the farmer, the conclusion drawn by this turkey should have been "the confidence of 'the probability of feeding is greater than 99%' is \(1-0.99^{61}=45.8\%\)", so it is not surprising that it was killed by the farmer on Thanksgiving. Likewise, a cow that has lived on the farm for 10 years can reasonably conclude that a turkey has a high probability of being slaughtered on Thanksgiving. A person around 30 years old who has seen the sun rise 10000 times can conclude that the confidence of "the probability of sunrise is greater than 99.9%" is \(1-0.999^{10001}=99.995\%\). If he also believes the historical records that humans have seen the sun rise for over a million years, then the confidence interval can be narrowed further and the confidence gets even closer to 100%. ### _The Law of Large Numbers_ The law of large numbers is the fundamental law of probability theory, and it comes in various forms, such as Bernoulli's, Khinchin's and Chebyshev's laws of large numbers, and the strong law of large numbers. Let's take a look at the most fundamental form, Bernoulli's law of large numbers. Bernoulli's Law of Large Numbers: If a random experiment is conducted \(N\) times, event \(A\) occurs \(N_{A}\) times, and \(p\) is the probability of \(A\), then for any \(\varepsilon>0\) \[\lim_{N\rightarrow\infty}P\left(\left|\frac{N_{A}}{N}-p\right|<\varepsilon\right)=1. \tag{8}\] Fig. 1: Confidence on \([0.9,1]\) with respect to \(N\) The proof of this theorem can be found in any textbook of probability theory. Bernoulli's law of large numbers states that when the number of trials is sufficiently large, the value of \(N_{A}/N\) approaches \(p\) arbitrarily closely in probability. From another perspective, \(P\left(\left|N_{A}/N-p\right|<\varepsilon\right)\) is the confidence of \(p\) being located in the confidence interval \(\left[N_{A}/N-\varepsilon,N_{A}/N+\varepsilon\right]\). Bernoulli's law of large numbers states that for any small confidence interval, when \(N\) is large enough, the confidence can be infinitely close to 1. ### _Degree of Confirmation_ We have discussed the concept of confidence on a confidence interval. If the two parameters still seem too complicated, we can further simplify them to one parameter. Suppose the width of the confidence interval is \(d\); then we can introduce a metric, the degree of confirmation, as \[C=\max_{d}(1-d)\cdot c. \tag{9}\] For the sunrise problem, \(N_{A}=N\) and the confidence on \([x,1]\) is \(1-x^{N+1}\); the interval width is \(d=1-x\), so maximizing \(x\,(1-x^{N+1})\) over \(x\) gives \[C=\left(\frac{1}{N+2}\right)^{\frac{1}{N+1}}\frac{N+1}{N+2}. \tag{10}\] When \(N=2\), \(C=0.47247\), and when \(N=10000\), \(C=0.9990\), while the results of the Rule of Succession are 0.75 and 0.9999, respectively. Though it is of little meaning to compare the detailed numbers, the confirmation provided by this solution is more conservative than that of the Rule of Succession.
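The numbers quoted above are easy to reproduce. Below is a minimal Python sketch (illustrative only; the function names are ours) that recomputes the confidence values of Eq. (7), the degree of confirmation of Eq. (9) for the sunrise case, and the corresponding Rule of Succession values for comparison.

```python
import numpy as np

def confidence_all_white(N, x):
    """Confidence that p >= x after N successes in N trials, Eq. (7): 1 - x**(N+1)."""
    return 1.0 - x ** (N + 1)

# Values quoted in the text.
print(confidence_all_white(10, 0.9))       # ~0.686   (N=10, interval [0.9, 1])
print(confidence_all_white(30, 0.9))       # ~0.962   (N=30)
print(confidence_all_white(300, 0.99))     # ~0.951   (N=300, interval [0.99, 1])
print(confidence_all_white(60, 0.99))      # ~0.458   (Russell's turkey, ~60 feedings)
print(confidence_all_white(10000, 0.999))  # ~0.99995 (the sunrise problem)

def degree_of_confirmation(N, grid=10**6):
    """Eq. (9) for the all-successes case: maximize x * (1 - x**(N+1)) over x in [0, 1],
    since the confidence interval [x, 1] has width d = 1 - x."""
    x = np.linspace(0.0, 1.0, grid)
    return np.max(x * (1.0 - x ** (N + 1)))

print(degree_of_confirmation(2))      # ~0.47247
print(degree_of_confirmation(10000))  # ~0.9990

def rule_of_succession(N_A, N):
    """Laplace's Rule of Succession, Eq. (2)."""
    return (N_A + 1) / (N + 2)

print(rule_of_succession(2, 2))          # 0.75
print(rule_of_succession(10000, 10000))  # ~0.9999
```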
## III Identity The basis for solving Hume's problem is the existence of probability. So let's further ask: does probability exist? This involves the concept of identity. The so-called identity means invariance. Only things with identity can humans understand, and only for such things can the acquired knowledge be used to guide practice. If a thing changes after people understand it, then the previous understanding becomes useless. We acknowledge that there are things in the real world that do not have identity. For example, a shooting star in the night sky disappears after a few seconds of brilliance. You see this scene and tell your friend, 'Look, a meteor!'. When your friend looks up, it is already gone. Although your friend may not think you are lying, he has no evidence with which to confirm your claim, because this phenomenon is non-repeatable, and no one will care anymore after its disappearance. What humans are concerned about are things with identity. For example, if you see an apple in front of you, whether you look at it once, twice, or ten times, it will keep being an apple in your eyes. If you let your friend look at it, he will also see an apple. So, the existence of the apple has identity. For this apple, you can say to your friend, "Do you want to eat this apple?", and your friend might say, "That's great!" and then pick it up and eat it. Humans are able to communicate and cooperate on objects with identity. Hume's questioning of induction and causality is essentially a negation of identity. Russell's turkey was killed on Thanksgiving, so how can we guarantee that the apple in front of us won't suddenly turn into a rabbit? We successfully responded to this challenge using a probabilistic approach. That is to say, our assumption of identity is not that the apple has been and will always be an apple, but rather that there is a probability that it is an apple, without requiring this probability to be 100%. This preserves the possibility that an apple suddenly becomes a rabbit and thus avoids dogmatism to the utmost extent. The notion of identity assumes there is an invariant probability, which is the basis of our argument for all other propositions. It is a presupposition and cannot be proven, because there are no more fundamental propositions from which to prove it. ## IV Practicality is the Foundation of Identity If \(p\) exists, then according to the law of large numbers, \(N_{A}/N\) approaches \(p\) arbitrarily closely once \(N\) is sufficiently large. If we conduct experiments, such as flipping a coin, we do observe a phenomenon where the ratio of the number of heads (\(N_{A}\)) to the total number (\(N\)) gradually approaches a constant value. However, this does not prove the existence of probability. According to Hume's objection, although \(N_{A}/N\) tends to a constant value, this is observed only for a finite number of trials and cannot be extrapolated to a conclusion about infinitely many. This is also the reason why we say identity cannot be proven. Let's do a thought experiment. Suppose Descartes' demon wants to subvert our belief in identity and manipulates our coin-flipping experiment. The demon's purpose is to ensure that \(N_{A}/N\) has no limit by making it oscillate with constant amplitude. The demon first lets the coin flip normally, so \(N_{A}/N\) stays around 0.5; it then sets the probability of heads to 0.6 until \(N_{A}/N\) reaches 0.55, then sets it to 0.4 until \(N_{A}/N\) falls to 0.45, and cycles like this. Then \(N_{A}/N\) oscillates between 0.45 and 0.55 without converging, meaning \(N_{A}/N\) has no limit. So can the demon's approach overturn our belief in the existence of probability? It cannot.
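One way to see why it cannot is to simulate the demon's scheme and watch how quickly the cycles lengthen; the next paragraph spells out the underlying reason. The following is a minimal simulation sketch (illustrative only; the warm-up length and the number of cycles are arbitrary choices of ours).

```python
import random

random.seed(0)

# Warm-up: the demon first lets the coin behave fairly.
trials = 1000
heads = sum(random.random() < 0.5 for _ in range(trials))

# Then it alternates: push the running frequency N_A/N up to 0.55 with
# p = 0.6, then down to 0.45 with p = 0.4, and so on.
cycle_lengths = []
going_up = True

for _ in range(6):  # six half-cycles of manipulation
    p, target = (0.6, 0.55) if going_up else (0.4, 0.45)
    start = trials
    while (heads / trials < target) if going_up else (heads / trials > target):
        trials += 1
        heads += random.random() < p
    cycle_lengths.append(trials - start)
    going_up = not going_up

print(cycle_lengths)
# Each half-cycle needs roughly twice as many tosses as have been made so far,
# so the cycle length grows geometrically: a short-term observer sees a stable
# frequency near 0.6 or 0.4, while only a very long-term observer can detect
# the oscillation of N_A/N between 0.45 and 0.55.
```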
Because \(N\) increases with each cycle, changing the overall statistical characteristics in the next cycle requires more trials, which results in an exponential increase in the number of trials per cycle. At the beginning, because the cycles are short, the results are not reliable, so these experimental results cannot guide human activities and no one cares. Later on, as the cycles become longer, those who make short-term predictions will find that the current probability is 0.6 or 0.4, and they can arrange their practical activities based on this result and succeed. It cannot be ruled out that all the physical laws currently mastered by humans are only invariants arranged by demons within a cycle. For long-term observers, it is easy to detect the oscillation pattern of \(N_{A}/N\), which is identity in a broader sense. This is actually the same as how we handle our daily lives. We believe an apple remains unchanged in the short term, and this identity can guide short-term activities, such as answering questions like "Do you want to eat the apple?". But over a long cycle, an apple goes through the process of freshness, loss of luster, wrinkling, decay and blackening. Although the apple has changed, the process of change follows the same pattern and remains unchanged, which is a long-term identity. Based on this identity, humans can derive the most favorable principles of action for themselves, such as eating apples before they become stale. Of course, the demon can also come up with more complex ways to subvert the assumption of identity, but this is not of much use, because in the face of a constantly changing world, humans always extract the unchanging characteristics, identify their patterns, and use them to guide their practical activities. For those changes that cannot be mastered, the human approach is to ignore them and let them go. So, the assumption of identity is based on pragmatism, which is the final foundation of human knowledge [11]. ## V Demarcation of Science The demarcation problem is the fundamental issue in the philosophy of science. The most influential demarcation criterion is Popper's falsification principle. According to Popper's theory, a proposition is scientific if it has falsifiability. The problem with Popper's standard is that it is too loose, so many nonsense propositions are also considered scientific. For example, astrology, which was rightly taken by Popper as an example of pseudoscience, does have falsifiability and has in fact been tested and refuted. Popper's falsification principle is a stopgap measure before Hume's problem is solved, after which we can work out a precise definition of science. The definition of science is shown in Figure 2. The left half plane represents the part of the world with identity, while the right half plane represents the subject, which is human. The third quadrant is the miscellaneous phenomena that trigger our sensations and perceptions, while behind the phenomena are physical laws, located in the second quadrant. Plato believed that there exists a world of ideas, and objects in the real world are copies of ideas. In our model, there exists a world of physical laws. Unlike Plato's ideas, physical laws are not blueprints of phenomena but rather constrain and regulate them. Popper holds that scientific theories are scientists' conjectures about what the laws are; these sit in the first quadrant. Induction is not a logical method, but a way of conjecture.
The results of conjecture appear in the form of axioms, becoming the starting point of reasoning. Conjectures should obey Occam's Razor, also known as the principle of economy of thought, by cutting off their unverifiable parts. Axioms are usually universal propositions and cannot be directly verified. It is then necessary to combine them with actual scenarios to deduce propositions that can be empirically confirmed or falsified. For example, we can conjecture "the sun rises every day" and use this proposition as an axiom. Fig. 2: Definition of science This axiom cannot be directly verified, and verifiable propositions need to be derived from it through logic, such as "the sun rose yesterday", "the sun will rise tomorrow", "the sun will rise the day after tomorrow", and so on. If a sufficiently large number of propositions derived from this axiom are all verified to be correct, then "the sun rises every day" is confirmed with a high degree of confirmation. Newton's laws, the theory of relativity, and quantum mechanics are also judged according to this standard. For example, Newton's laws have many application scenarios, such as free-falling bodies, the trajectories of cannonballs and planets, and so on. In each scenario, a series of verifiable propositions is derived. If these propositions are experimentally confirmed, then Newton's laws are verified with high confirmation in these scenarios. In scenarios in which Newton's laws are yet to be verified, we can conjecture that they will still be valid. But experiments have shown that Newton's laws do not hold true in scenarios close to the speed of light or at the atomic scale, and so they are falsified in those scenarios. But that does not affect conclusions in low-speed macroscopic scenarios. Once a theory is confirmed with a high degree of confirmation in a scenario, it is only theoretically possible and practically impossible to falsify it again under that scenario. Therefore, Newton's laws deserve the title of TRUTH in confirmed scenarios. From the above discussions, we can summarize the four elements of science: Conjecture, Logic, empirical Verification, and Economics. The use of probability methods to address Hume's problem is reflected in the verification part. This standard of demarcation is actually the revival and development of logical positivism or logical empiricism, and is much stricter than the falsification criterion. Probabilistic verification requires a large number of experiments to obtain a high degree of confirmation, so nothing established by mere luck can pass the verification criterion. In addition, according to this standard, not only are Newton's laws, the theory of relativity, and quantum mechanics science, but everyday common-sense propositions such as "the sun rises every day" and "apples are edible" are also science. This way, the notion of science will enter the everyday lives of ordinary people and can play a role in improving the scientific literacy of the general public. ## VI Conclusions Hume's problem is a fundamental problem that runs through epistemology. Kant and Popper were unable to solve this problem and adopted an evasive attitude. Carnap's attempt was an endeavor in the right direction, but it ended in failure. The goal achieved in this article is exactly what Carnap attempted to achieve. For three hundred years, Hume's problem has been a dark cloud over science, relegating science to the position of "never confirmed, only waiting to be falsified".
After the solution of Hume's problem, science is confirmed to deserve the title of truth in a probabilistic sense.
2309.10280
Crowdotic: A Privacy-Preserving Hospital Waiting Room Crowd Density Estimation with Non-speech Audio
Privacy-preserving crowd density analysis finds application across a wide range of scenarios, substantially enhancing smart building operation and management while upholding privacy expectations in various spaces. We propose a non-speech audio-based approach for crowd analytics, leveraging a transformer-based model. Our results demonstrate that non-speech audio alone can be used to conduct such analysis with remarkable accuracy. To the best of our knowledge, this is the first time when non-speech audio signals are proposed for predicting occupancy. As far as we know, there has been no other similar approach of its kind prior to this. To accomplish this, we deployed our sensor-based platform in the waiting room of a large hospital with IRB approval over a period of several months to capture non-speech audio and thermal images for the training and evaluation of our models. The proposed non-speech-based approach outperformed the thermal camera-based model and all other baselines. In addition to demonstrating superior performance without utilizing speech audio, we conduct further analysis using differential privacy techniques to provide additional privacy guarantees. Overall, our work demonstrates the viability of employing non-speech audio data for accurate occupancy estimation, while also ensuring the exclusion of speech-related content and providing robust privacy protections through differential privacy guarantees.
Forsad Al Hossain, Tanjid Hasan Tonmoy, Andrew A. Lover, George A. Corey, Mohammad Arif Ul Alam, Tauhidur Rahman
2023-09-19T03:08:20Z
http://arxiv.org/abs/2309.10280v2
# Crowdotic: A Privacy-Preserving Hospital Waiting Room Crowd Density Estimation with Non-speech Audio ###### Abstract. Privacy-preserving crowd density analysis finds application across a wide range of scenarios, substantially enhancing smart building operation and management while upholding privacy expectations in various spaces. We propose a non-speech audio-based approach for crowd analytics, leveraging a transformer-based model. Our results demonstrate that non-speech audio alone can be used to conduct such analysis with remarkable accuracy. To the best of our knowledge, this is the first time non-speech audio signals have been proposed for predicting occupancy. To accomplish this, we deployed our sensor-based platform in the waiting room of a large hospital with IRB approval over a period of several months to capture non-speech audio and thermal images for the training and evaluation of our models. The proposed non-speech-based approach outperformed the thermal camera-based model and all other baselines. In addition to demonstrating superior performance without utilizing speech audio, we conduct further analysis using differential privacy techniques to provide additional privacy guarantees. Overall, our work demonstrates the viability of employing non-speech audio data for accurate occupancy estimation, while also ensuring the exclusion of speech-related content and providing robust privacy protections through differential privacy guarantees. occupancy estimation, neural networks, audio processing, machine learning ## 1. Introduction Estimating the number of people present within a specific area over a period of time has many applications. These include management of public spaces and events (Wolf et al., 2017), surveillance and security (Sandag et al., 2018), and preventing the spread of infectious diseases (Mohammad et al., 2018). Quantification of crowd density empowers relevant authorities to make well-informed decisions, enabling smart building management to optimize safety, enhance comfort, minimize energy usage, and efficiently allocate resources for the management of diverse indoor and outdoor spaces. In addition to ensuring accuracy and reliability, it is crucial to address the potential concerns related to the intrusiveness and privacy implications of the technology used for such a crowd analytics system. To instill public trust and confidence in the smart building management framework, it is crucial to address potential privacy concerns, particularly in indoor spaces where privacy expectations are higher. Achieving such a highly accurate and privacy-preserving system requires the thoughtful selection of suitable sensing modalities and robust data processing methods, and minimizing the amount of information that is retained for future analysis. This emphasis on privacy-preserving practices contributes to the broader goal of integrating crowd analytics into smart buildings while respecting individuals' privacy rights. Most of the current research and deployed methods primarily utilize computer vision-based techniques for the task (Sandag et al., 2018, 2019). However, there remains significant concern regarding the use of different types of cameras and vision sensors, as they may fail to conform to the privacy expectations of users.
Additionally, these techniques have several technical limitations such as susceptibility to occlusions, limited field of view, and dependence on lighting conditions. Thus many research works have investigated the use of other sensing modalities and their fusion for this task. These modalities include radio frequency (RF) (Sandag et al., 2018), acoustics (Sandag et al., 2018), and environmental sensors (e.g. \(CO_{2}\), temperature, light, motion) (Sandag et al., 2018; Wang et al., 2019). However, all of them have been shown to be location/environment dependent (\(CO_{2}\), thermal, motion, vision, RF), with some also posing potential privacy concerns (vision, audio). The acoustic signatures generated by crowds contain rich information from which useful quantities, such as occupancy, can be extracted. However, one of the main challenges in using audio-based analysis is ensuring user privacy, since the audio may include speech. In order to ensure privacy, it is necessary to record and analyze the audio in a way that makes it impossible to discern any speech content and prevents the identity of the speakers from being revealed. In this work, we deploy a sensor-based platform with an on-device machine-learning model to filter speech and only capture non-speech audio. Our model is able to distinguish speech and non-speech audio with high accuracy (Deng et al., 2017) and we only use the non-speech signal for our analysis. We demonstrate that it is possible to reliably estimate crowd statistics at a coarse time scale using only non-speech audio, ensuring privacy. We use a microphone array to capture the audio, which provides acoustic information in multiple channels along with angle-of-arrival information. Compared to vision-based estimation using a thermal camera, we obtain improved accuracy with non-speech audio. We run a months-long study by deploying the platform to capture non-speech audio in the waiting room of a large hospital. We collected a large-scale dataset of non-speech audio in a realistic environment. We show that it is possible to estimate the occupancy with a high degree of accuracy using only non-speech audio. We compared the results from our non-speech-based models to the thermal camera-based model. Our results showed that the non-speech audio-based models outperformed the thermal-based model, especially for shorter time windows. We used a transformer-based model for the non-speech-based occupancy estimation. With this method, we have successfully designed an occupancy estimation system that harnesses the power of modern deep-learning techniques without requiring complex on-device deep learning models. This approach prioritizes privacy by excluding speech content and implementing differential privacy techniques. As a result, our system strikes a balance between accuracy, efficiency, and safeguarding sensitive information. The main contributions in this paper are as follows: 1. We propose a privacy-preserving non-speech audio-based crowd estimation approach and show that it is possible to obtain high levels of accuracy using only non-speech audio. To the best of our knowledge, this is the first-ever attempt to accomplish occupancy counting from non-verbal audio. 2. We collected a large dataset of non-speech audio by deploying our sensor-based platform in the waiting room of a large hospital and show that our model works robustly with real-life audio data. 3. We compare our non-speech approach with thermal camera-based occupancy estimation and other baseline approaches and obtain better results. 4.
In addition to using the privacy-preserving non-speech modality for analysis, we further enhance privacy preservation by employing differential privacy techniques. ## 2. Related Work Multiple methods can be applied for the estimation of occupancy in a particular location. Traditional methods use a variety of sensing modalities, including vision-based sensors, \(CO_{2}\) detection, EM waves, PIR sensors, and sound, as well as indirect methods such as computer activity or energy consumption. Some of the most popular methods are: ### \(CO_{2}\) based occupancy counting As every occupant in a place exhales carbon dioxide, the concentration of carbon dioxide can be used as an indicator of occupancy. Although this modality has shown highly accurate results for occupancy detection, the results for occupancy estimation are in general poor. To estimate occupancy from CO2, either physics-based models (Dong et al., 2017), (Dong et al., 2017) or data-driven methods (Dong et al., 2017)(Dong et al., 2018), (Dong et al., 2019), or even a hybrid approach utilizing both physics-based and data-based models (Dong et al., 2019), are used. Usually, physics-based models suffer from errors due to real-world complexities not accounted for in the ideal physics equation, including ventilation, exhaust fans, air conditioning, or the frequency of door opening and closing. On the other hand, data-driven methods either perform poorly compared to physics-based models due to a lack of relevant data or require the integration of multiple sensors (thermal, light, humidity, etc.), which can be costly. ### RF signal based occupancy counting Different electromagnetic spectrum bands with different properties are usually used for RF-based occupancy counting. Works in (Dong et al., 2017), (Dong et al., 2017), (Dong et al., 2017) try to measure the occupancy based on the devices carried by the users. Several other works used device-free schemes to estimate occupancy from WiFi signals, using either RSSI information (Dong et al., 2018) or CSI information (Dong et al., 2019), (Dong et al., 2019). However, these methods are not well suited to different environments and are easily affected by environmental factors. Another popular method for counting people is to use IR-UWB radar ((Dong et al., 2017), (Dong et al., 2019), (Dong et al., 2019)). This kind of system usually consists of one or more transmitter-receiver pairs. The transmitter sends an impulse, and the antenna receives the reflected signal. The same signal gets reflected from different objects at different distances. The difference between two successive frames indicates movement, which can be indicative of human presence. However, since the approach relies on detecting movement, any other source of motion can interfere with it, which complicates the detection and counting of humans. ### Sound-wave/audio-based occupancy estimation The usual audio-based occupancy estimation method utilizes speaker-diarization techniques ((Dong et al., 2017), (Dong et al., 2019)) to distinguish individual speakers and thereby estimate the occupancy. However, the downside of such approaches is the assumption that everyone present in the room is talking, which might not be true in all settings, for example in a waiting room, at a bus stop, or in general in any public place where there is no social connection between the different people occupying the area. Also, these techniques are privacy-invasive when the data needs to be sent to a backend for further processing.
Another active sound-based method for counting occupancy uses ultrasonic chirps to detect motion from people ((Dong et al., 2017), (Dong et al., 2017)). In this approach, a transmitter outputs a chirp, and a microphone listens for the reverberation. This technique is similar to IR-UWB-based RF methods and, like them, also suffers from difficulty in distinguishing real humans from other motion sources like air conditioning or exhaust fans. ### Video-based method Video-based occupancy measurement systems use cameras to count and track the number of people in indoor settings. These methods utilize either RGB video streams (Dong et al., 2017), (Dong et al., 2017), (Dong et al., 2017), (Dong et al., 2017) or thermal camera based methods (Dong et al., 2019), (Dong et al., 2019). There is also another group of methods that measure occupancy by counting and tracking the people that cross the entrances and exits of indoor locations ((34), (46), (41), (5), (23)). All vision-based methods are highly accurate when the camera field of view covers the whole location; however, they become unreliable when the location cannot be monitored with a single camera. In those settings, multiple cameras and algorithms to merge the various video streams are needed to count the total number of people, so it becomes increasingly complicated and costly to monitor a space with compartments such as cubicles or walls. Also, a place might have multiple exit and entrance locations, and monitoring each entrance and exit requires multiple camera streams. Besides that, camera-based occupancy counting is highly undesirable from a privacy perspective. Video contains numerous unnecessary pieces of information that can be used for action recognition or person identification. As the state-of-the-art methods for people counting from video require computation-heavy deep learning models, the data usually needs to go to a backend to effectively merge the video streams and compute the count of people. Indeed, because of the above-mentioned concerns, there has been a movement to ban video-based surveillance (2). ### Multimodal fusion-based method In addition to using a single sensing modality, many works focus on fusing multiple sensing modalities. These approaches include the fusion of environmental sensors such as (16; 47), and the inclusion of image and audio modalities (53). These methods have limitations similar to those associated with each sensing modality as discussed above, and also incur higher overhead due to increased points of failure and the need for synchronization. ### Approaches for privacy preservation In addition to utilizing less intrusive sensing modalities such as PIR sensors (52), approaches such as federated learning have been used to ensure privacy for crowd estimation tasks (32). Other approaches include perturbing audio data to prevent identification of speech and speaker (60). However, these approaches have several limitations, such as the use of simulated data that may not capture the challenging conditions associated with realistic data, and they may fail to preserve privacy aspects such as speaker identity. ## 3. Dataset ### Data collection platform We collected data for our non-speech dataset from an on-the-edge device consisting of a thermal camera, a microphone array and a neural compute stick attached to a Raspberry Pi platform.
The following is a short description of the components of the platform, illustrated in Figure 1: * Microphone array: We used a ReSpeaker microphone array (44) to collect audio data. The ReSpeaker array has four microphones that collect audio streams. It then produces a cleaner single-channel audio stream using on-device audio beamforming. * Thermal camera: A SeekCompact Pro (4) thermal camera was used to take thermal images. It has a resolution of 320x240 pixels. * Neural Compute Stick: An Intel Neural Compute Stick (3) was used to accelerate deep learning computations on the device. * Raspberry Pi: A Raspberry Pi was used to manage all the other sensors and store the data on a hard drive. We have an on-device speech recognition model to eliminate any speech data from the audio stream. We built our speech recognizer using the TIMIT dataset (24) as a speech data source. This dataset contains speech from 630 American English speakers covering all eight major dialect regions of the USA, with each speaker reading ten sentences. We also utilized a custom-labeled Google AudioSet (25) as a source of negative (non-speech) examples. We use both of these datasets to develop a custom CNN-based speech-detection model. If our speech recognizer model determines that an audio snippet has a probability of being speech content greater than 0.5, we do not store that snippet. We only retain audio snippets with a speech probability less than 0.5, and we encrypt the data before storing it. All of our collected data were stored on a local hard drive after a two-stage encryption scheme. In the first stage, the original data is encrypted with a randomly generated AES key. In the second stage, that randomly generated AES key is itself encrypted with a public key and stored. It is only possible to decrypt the stored AES key using a private key that is available only to the researchers. ### Deployment Figure 1. Front view and the components of the sensor platform. Figure 2. Waiting rooms and the placement of our sensor system We deployed our device to collect data in a large public university's University Health Services (UHS) clinic building. The university has a student, faculty, and staff size of more than 30,000 people, and the UHS facility serves the basic health needs of all of the people in that university. We deployed our device in the main waiting room of that university health services building. We started deploying our device on December 10, 2018 and finished our deployment on July 12, 2019. In the end, we collected around 4 million seconds of audio data from our deployment. ## 4. Audio Processing and Modeling ### Multi-Channel Audio Processing We used single-channel audio produced by the ReSpeaker microphone array via on-device beamforming (Shen et al., 2018). This single-channel audio stream was produced by applying the 4-channel Generalized Cross-Correlation Phase Transform (GCC-PHAT) algorithm (Zhou et al., 2018). In this algorithm, the time difference of arrival (TDOA) is first estimated by computing cross-correlations between the multi-channel audio streams captured by the multiple microphones. With this information, audio from the different channels is beamformed to produce a single-channel audio stream. ### Handling Missing Audio Segments due to Presence of Speech Our system explored two different schemes for audio processing, as illustrated in Figure 3. #### 4.2.1. Scheme 1 In this scheme, the transformer model processes the audio stream data in 1-second chunks continuously.
To remove speech-related information, a speech detection model is employed. Chunks identified as containing speech (with a probability greater than 0.5) are discarded. Non-speech audio chunks are accumulated until the count reaches 60. Afterwards, these 60 non-speech data points are fed into the transformer model as a sequence, enabling occupancy estimation. #### 4.2.2. Scheme 2 In this scheme, a different approach was used for speech and non-speech chunks. For every second of audio data, if the chunk is identified as speech, the input spectrogram is replaced with all-zero values. In that case, only the speech probability obtained from the speech activity detection model is utilized as an input. On the other hand, for non-speech chunks, both the input spectrogram and the corresponding speech probability are used as inputs to the transformer model. For example: Consider an audio stream from a hospital waiting room area with the following sequence: [n, n, s, n, s, n, n, n], where n denotes non-speech chunks, and s denotes audio chunks with speech. Using Scheme 1, the chunks identified as speech (with probabilities above 0.5) are discarded, resulting in the following sequence of non-speech chunks: [n, n, n, n, n, n]. In comparison, using Scheme 2, this sequence will be converted to [n, n, z, n, z, n, n, n], where z represents an all-zero spectrogram. Besides these spectrograms, the speech probabilities are also fed into the model under this scheme. Figure 3. Schemes for dealing with missing audio due to speech segments ### Audio Embedding To preprocess spectrograms for the transformer models, we investigated two CNN encoder models: TRILL (Shen et al., 2018) and VGGish (Vogish, 2018). **TRILL**: TRILL is a deep-learning model that can be used for extracting representations from non-semantic audio. Before feeding our data to the transformer model, for each second of audio, we extract a 512-dimensional embedding for the audio snippet. This embedding is used as a representation of the non-speech audio snippet. **VGGish**: VGGish is another deep-learning model that can be used to extract a 128-dimensional representation from audio. However, unlike the TRILL model, we attach the VGGish network to the input layers of our transformer and train the whole model in an end-to-end fashion. This particular method ensures that, besides optimizing the transformer model, we also optimize the VGGish model for generating suitable representations of the audio. In both cases, when the speech probabilities are used as input to the models (4.2.2), we append the speech probabilities at the end of these 512-dimensional (for TRILL) and 128-dimensional (for VGGish) representations before feeding the embedding sequence to the transformer model. ### Transformer model A transformer (Wang et al., 2019) is a time-series model that consists of several multi-head self-attention layers. A single head in a layer takes a sequence of \(T\) embeddings of dimension \(d_{emb}\) (in our case \(d_{emb}=128\)) and projects each of the elements in the sequence to query (\(Q\)), key (\(K\)), and value (\(V\)) embeddings. For a single embedding \(e\), the query, key, and value can be computed as follows: \(Q=W_{Q}e\), \(K=W_{K}e\), and \(V=W_{V}e\). Here, \(W_{Q}\), \(W_{K}\), and \(W_{V}\) are projection matrices. For our particular setting, we project 128-dimensional embeddings to a 16-dimensional space.
We calculate the values (\(V\)) by aggregating them using the attention scores (computed using queries and keys) as defined in the following equation: \[\overline{V}_{i}=\sum_{j=1}^{T}\left(\frac{\exp(q_{i}^{T}k_{j})}{\sum_{t=1}^{T}\exp(q_{i}^{T}k_{t})}\right)V_{j}\] From each head, for each embedding position, we produce these 16-dimensional embeddings. Finally, we concatenate the embeddings from the 8 different heads, resulting in a 128-dimensional embedding output. The original 128-dimensional embedding values are added to this output via a residual connection [(27)]. The subsequent encoder layer takes this output after a ReLU activation function is applied to it. For our models, we used four such encoder layers. The output from the final layer goes through a linear layer to produce \(T\) outputs, which are our prediction values for occupancy in each second. Intuitively, each head of a transformer encoder layer finds relationships between the input sequence elements passed from the previous layer. The value (\(V\)) represents an extracted feature from each of the elements of the sequence. These features are aggregated according to the self-attention scores to create a richer set of representations for the sequence. With multiple heads, different sets of features are extracted. The final concatenated embeddings represent the aggregated version of all the features for that element in the sequence. Transformer-based models can take variable-length sequence input, similar to RNNs and LSTMs, and have several key advantages such as faster training and inference in comparison [(57)]. Other advantages, such as the use of residual connections and not requiring backpropagation through time, make transformer models more efficient. Figure 4. Overview of transformer model: (a) Multi-head attention encoder (b) Our four-layer transformer model ### Model Training #### 4.5.1. Ground Truth Collection In this section, we outline the process of ground truth data collection for our study. We extract ground truth data from the entry-exit data of the hospital waiting room, obtained on a daily basis. To accurately determine the number of individuals present in the waiting room at any given time, we created a people-counting time series dataset for each day. The time series dataset was constructed by initializing the patient count to 0 at the beginning of each day. The count was incremented as individuals entered the waiting room and reduced when they left. This approach provided a reliable ground truth dataset, allowing accurate assessment of the occupancy in the hospital waiting room at any given time. #### 4.5.2. Model Setup and Training Our model is based on a 4-layer transformer encoder. We explore two different approaches for embedding: a TRILL-based and a VGGish-based model. For the TRILL-based model, we extract a 128-dimensional embedding and utilize it as input for the transformer model. On the other hand, for the VGGish-based model, we incorporate the VGGish model as the input for the transformer embeddings. We train this model in conjunction with our transformer model. At each layer of the transformer, we employ 8 transformer heads to process the time series data. This allows for effective information processing at each layer. In the final layer, a linear layer converts the time series into a series of single numbers. As a result, for a 60-second audio input, we obtain an output of size \(60\times 1\). We define this output as representing the occupancy at any given time within the 60-second input.
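For concreteness, the following is a minimal PyTorch sketch of the model as we read the description above: per-second 128-dimensional embeddings, four encoder layers with 8 attention heads each, and a final linear layer mapping every time step to an occupancy estimate. Unstated details (feed-forward width, dropout, exact placement of the residual connections) are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class OccupancyTransformer(nn.Module):
    """Per-second occupancy regression from a sequence of audio embeddings.

    Input:  (batch, T, 128) -- one embedding per second (e.g. T = 60).
    Output: (batch, T)      -- one occupancy estimate per second.
    """

    def __init__(self, d_model=128, n_heads=8, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=256,          # assumed; not specified in the text
            activation="relu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # per-time-step occupancy estimate

    def forward(self, x):                  # x: (batch, T, d_model)
        h = self.encoder(x)
        return self.head(h).squeeze(-1)    # (batch, T)

model = OccupancyTransformer()
dummy = torch.randn(2, 60, 128)            # two one-minute windows of embeddings
print(model(dummy).shape)                  # torch.Size([2, 60])
```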
To evaluate the performance of our model, we employ the Mean Squared Error (MSE) loss function, which measures the discrepancy between the ground truth and the output. During the training phase, we use the Adam optimizer with a learning rate of 0.001. We train the model for 30 epochs, allowing it to learn and optimize its performance over time.

## 5. Results

We develop a baseline model which predicts the mean occupancy, and train models using the other data modalities, thermal image and speech probability (no speech audio used), for every minute. We train the models with our collected waiting room data and leave out the last ten days of data for testing. We also perform cross-validation by leaving out the data for entire months.

### Performance using different data modalities

Table 1 shows the performance of different minute-level waiting room occupancy prediction models trained with different modalities, including thermal camera, speech, and non-speech. The first key takeaway is that the non-speech audio-based models outperform the thermal-based and speech-probability-based models. While the thermal camera-based Faster RCNN model (Wang et al., 2017) achieves a Pearson correlation coefficient (\(\rho\)) of 0.11 and a Mean Absolute Error (MAE) of 3.90, the non-speech audio-based Transformer model with Trill embedding (Scheme 1) achieves significantly better results with a \(\rho\) of 0.74 and MAE of 2.18. Similarly, the non-speech audio-based Transformer model with Trill embedding outperforms the model that leverages speech probability information by a significant margin (i.e., \(\rho\) of 0.34 for speech probabilities vs. \(\rho\) of 0.74 for non-speech). Overall, these results clearly highlight that non-speech audio contains information about waiting room occupancy which can be used to develop a privacy-aware occupancy system that relies solely on non-speech audio (and avoids analyzing any speech audio). This type of waiting room occupancy estimation system is particularly well-suited for clinical/hospital environments, as well as other public spaces that have a higher level of privacy expectation. We experimented with different schemes (Sections 4.2.1, 4.2.2) and embeddings as we developed our non-speech audio-based model. As can be seen in Table 1, Scheme 1, which uses 60 seconds of non-speech audio, performs better than Scheme 2, which leverages the extra information of speech probability (note that only the speech probability was used and no speech audio signal was used). For example, Model 4 with non-speech audio data achieves a \(\rho\) of 0.74 and MAE of 2.18. Compared to this, Model 7, which uses both speech probability and non-speech audio as input to the same model architecture as Model 4, achieves a slightly lower performance with \(\rho\) of 0.69 and MAE of 2.46. Overall, with the addition of speech probability in Scheme 2, we get slightly worse performance. This shows that the results of the two schemes are comparable and that, once non-speech sound is used, adding speech probabilities does not contribute much additional information.
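The evaluation metrics reported below (MAE, RMSE, and the Pearson correlation coefficient \(\rho\)) can be computed as in the following short sketch; the arrays are illustrative placeholders, not the study's data.

```python
# Sketch of the reported metrics: MAE, RMSE and Pearson correlation.
import numpy as np

def evaluate(y_true, y_pred):
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    rho = np.corrcoef(y_true, y_pred)[0, 1]   # Pearson correlation coefficient
    return mae, rmse, rho

y_true = np.array([3, 4, 6, 5, 2, 1], dtype=float)   # minute-level ground-truth occupancy (example)
y_pred = np.array([2, 5, 5, 6, 2, 2], dtype=float)   # model predictions (example)
print(evaluate(y_true, y_pred))
```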
\begin{table} \begin{tabular}{|c|c|c||c|c|c|} \hline Model & Modality & Model Description & MAE & RMSE & \(\rho\) \\ \hline 1 & Baseline (Average prediction) & Predict mean & 3.73 & 4.54 & 0.0 \\ \hline 2 & Thermal image & Faster RCNN & 3.90 & 4.26 & 0.11 \\ \hline 3 & Speech probabilities & Random Forest & 3.23 & 4.16 & 0.34 \\ \hline 4 & 1 channel (PHAT) Non-speech audio & Trill-embedding + Transformer & 2.18 & 3.00 & 0.74 \\ \hline 5 & 1 channel (PHAT) Non-speech audio & VGGish-CNN + Transformer & 2.23 & 3.00 & 0.73 \\ \hline 6 & Speech probability + 1 channel (PHAT) Non-speech audio & VGGish-CNN + Transformer & 2.46 & 3.21 & 0.69 \\ \hline 7 & Speech probability + 1 channel (PHAT) Non-speech audio & Trill + Transformer & 2.63 & 3.21 & 0.63 \\ \hline \end{tabular} \end{table} Table 1. Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Correlation Coefficient (\(\rho\)) between the occupancy prediction and ground truth for different models

Figure 5. (a) Plot showing average RMSE by occupancy (b) Total samples (in seconds) in the dataset by occupancy

In Figure 6, we can observe that the majority of the collected real-life data consists of non-speech snippets for the entire minute. Having a substantial amount of non-speech data available enhances the performance of our models. Conversely, when the amount of non-speech snippets is lower, the performance remains relatively unchanged and does not drop. In Figure 5, we observe that we have more data for lower occupancy settings, and our model has the lowest error for such scenarios. The performance of our model decreases slightly, by a negligible amount, for higher occupancy settings.

### Time scale aggregation

We aggregate the predictions from our minute-level occupancy models and compare the correlation coefficient with the ground truth data in Table 3. It can be seen that \(\rho\) improves if a longer aggregation window is used for all modalities. However, the key takeaway is that the non-speech audio-based model outperforms the models using other modalities by substantial margins for time-aggregated results (\(\rho\geq 0.90\) for non-speech audio compared to \(\rho<0.25\) using thermal image and \(\rho<0.50\) using speech probability).

### Cross validation

We also conducted leave-one-month-out cross-validation, where the entire data from a specific month was used as test data. The results are shown in Table 2. Consistent with the results in the previous section, the cross-validation results demonstrate superior performance using the non-speech modality. Using Scheme 1, non-speech audio with the TRILL embedding obtains the best performance for all three months compared to the other modalities and audio processing schemes in terms of MAE, RMSE, and \(\rho\). Using Scheme 2, we also obtain better performance than with the other modalities.

## 6. Privacy Preserving Inference

### Information linkage attacks and data privacy

Linkage attacks, exemplified by the case of Netflix (Bahdan et al., 2015), pose a significant security threat by linking new data with existing public data, potentially compromising individuals' privacy. To combat this risk, differential privacy has emerged as a powerful framework. It provides a rigorous mathematical guarantee to protect individual privacy while enabling meaningful analysis of sensitive data.
By injecting controlled noise into data analysis, differential privacy prevents specific information extraction and safeguards privacy against linkage attacks. It strikes a balance between privacy preservation and data utility, allowing valuable insights while upholding individuals' privacy rights. Adopting differential privacy techniques is crucial to address privacy concerns and ensure responsible use of data in the face of evolving security challenges. As our system processes non-speech audio data, we can implement differential privacy by adding noise to the embeddings generated by our CNN model (which are then fed into the transformer model). As the transformer model can be computationally heavy, while the CNN features can be computed in real time on-device, this methodology ensures that we only send differentially private information to the backend for further processing.

\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{**MAE**} & \multicolumn{3}{c}{**RMSE**} & \multicolumn{3}{c}{\(\rho\)} \\ \cline{3-11} & **Month** & February & March & April & February & March & April & February & March & April \\ **Model** & **Modality** & & & & & & & & & \\ \hline 1 & Baseline & 3.18 & 2.68 & 3.93 & 4.02 & 3.21 & 4.74 & 0.00 & 0.00 & 0.00 \\ 2 & Thermal image & 3.34 & 2.94 & 3.57 & 4.39 & 3.44 & 4.49 & 0.14 & 0.19 & 0.14 \\ 3 & Speech probability & 2.62 & 2.81 & 3.61 & 3.62 & 3.41 & 4.75 & 0.50 & 0.32 & 0.32 \\ - & Speech probability + loudness & 2.50 & 2.32 & 3.58 & 3.50 & 3.35 & 4.71 & 0.53 & 0.34 & 0.41 \\ 4 & Non-speech Audio (TRILL) & 1.82 & 2.00 & 2.65 & 2.65 & 2.70 & 3.48 & 0.72 & 0.61 & 0.69 \\ 5 & Non-speech Audio (VGGish pretrained) & 1.86 & 2.21 & 2.87 & 3.01 & 2.86 & 3.80 & 0.61 & 0.56 & 0.61 \\ 6 & Speech probability + VGGish & 1.95 & 2.13 & 2.82 & 2.90 & 2.77 & 3.88 & 0.70 & 0.63 & 0.68 \\ 7 & Speech probability + Trill & 1.95 & 2.14 & 2.96 & 2.90 & 2.80 & 3.89 & 0.70 & 0.54 & 0.61 \\ \hline \hline \end{tabular} \end{table} Table 2. Results for different cross-validation months in terms of MAE (Mean Absolute Error), RMSE (Root Mean Squared Error), and Correlation coefficient (\(\rho\))

\begin{table} \begin{tabular}{c c c c} \hline \hline Data & 30 min \(\rho\) & 1 hour \(\rho\) & 2 hour \(\rho\) \\ \hline Thermal image & 0.23 & 0.24 & 0.24 \\ Speech probability & 0.44 & 0.47 & 0.48 \\ Non-speech Audio & 0.90 & 0.91 & 0.93 \\ \hline \hline \end{tabular} \end{table} Table 3. Correlation coefficient of average aggregation results for different modalities for different timescales

Figure 7. (a) Plot showing distribution of visits by patients (b) Plot showing distribution of total duration of visits by patients

### Implementation of Differential Privacy guarantees for our system

To ensure our system outputs data with differential privacy requirements, we use the Laplace mechanism to add noise to the output of our CNN-based audio embedding generation system (Kang et al., 2017). To do that, we first define the sensitivity of any function \(f\) as \[\Delta f=\max_{\begin{subarray}{c}x,y\in\mathbb{N}^{|X|}\\ \|x-y\|_{1}=1\end{subarray}}\|f(x)-f(y)\|_{1}\] where \(f:\mathbb{N}^{|X|}\rightarrow\mathbb{R}^{k}\).
With this definition, one can prove that the Laplace mechanism on the output of \(f(x)\), defined as \[\mathcal{M}_{L}(f(x))=f(x)+(Y_{1},Y_{2},\dots,Y_{k}),\] where the \(Y_{i}\) are i.i.d. variables drawn from the Laplace distribution \(L(\frac{\Delta f}{\epsilon})\) with density \(\frac{\epsilon}{2\Delta f}\exp\left(-\frac{\epsilon|y|}{\Delta f}\right)\), is an \((\epsilon,0)\)-differentially private algorithm. Based on this analysis, for our modeling scenario, if we extract \(d\)-dimensional embeddings from \(t\) timesteps (from \(t\) seconds of audio clips), our embedding generation function will produce a \(t\times d\)-dimensional output. If we clip the output embedding for each \(i\)th second as \[\hat{e}_{i}=\frac{e_{i}}{\max(1,\frac{\|e_{i}\|_{1}}{C})},\] where \(C\) is the clipping parameter (a hyperparameter), then from this formulation we can conclude that \[\|\hat{e}_{i}\|_{1}\leq C.\] So, consider two \(t\times d\) embedding sets: one, say \(e_{-i}\), in which the embedding for some particular second \(i\) is hidden (that particular second of audio is not used and its positions are set to zero), and another, \(e\), in which the embedding from the \(i\)th second is present. We can express the relation as \(e=e_{-i}+e_{i}\), where \(e_{i}\) is a vector of size \(t\times d\) that is zero except in positions \((i-1)d+1\) to \(id\), where it contains the clipped embedding from the \(i\)th second. So from the above we can conclude \[\|e-e_{-i}\|_{1}=\|e_{i}\|_{1}\leq C.\] Thus the embedding space has a sensitivity of \(C\). From that, we can conclude that for per-clip \((\epsilon,0)\) differential privacy, all we need to do is add Laplace noise sampled from \(L(\frac{C}{\epsilon})\) to our embedding. This procedure ensures that adding each second of audio and sending it to a remote backend (where the transformer-based occupancy detection model might run) has an at most \(1+\epsilon\) diminishing-returns effect on any future utility (Kang et al., 2017). In this procedure, if a user decides to stay in the place for \(T\) seconds, the total differential privacy loss will be \(\epsilon T\), by the composition theorem of differential privacy (Kang et al., 2017).

### Experimental Results for our differential privacy formulation

To make our models compatible with differential privacy, we first trained our model with the CNN embeddings clipped as described in Section 6.2. We also added noise according to the Laplace mechanism during training to make the model predictions more robust to differential privacy noise. Table 4 shows the effect on the occupancy estimation algorithm for different \(\epsilon\) differential privacy targets. From Table 4, we can see that our algorithm performs well even for \(\epsilon\) values as small as 1. For the other values \(>0.5\), we can see that the MAE remains consistently low compared to the baseline. With these small \(\epsilon\) values, we can ensure that using our methodology reduces the possibility of future linkage attacks for each additional audio snippet we use for occupancy detection with our model.

## 7. Discussion and Conclusion

In this work, we demonstrate the feasibility of estimating occupancy using non-speech audio, presenting superior results compared to other modalities, including thermal cameras. Compared to techniques that aim to preserve privacy while still using speech data, our results show that non-speech audio contains sufficient information to estimate occupancy reliably.
Our non-speech-based method not only protects speech content but also renders speaker identification extremely difficult. Our analysis with differential privacy techniques shows that we can provide additional privacy guarantees on top of the privacy-preserving data modality that we use. Our approach incorporates a transformer-based model, which has proven to be effective across various tasks and domains. By leveraging this non-speech-based method, we enable the application of such models even in situations where deploying them on edge devices might not be feasible. One notable advantage of employing non-speech audio lies in the richness of the information it contains, resulting in more reliable occupancy estimates. While alternative modalities, such as \(CO_{2}\) sensors and PIR sensors, may be effective in detecting the presence of humans, they often fall short in providing accurate occupancy information, as evident from the existing literature. Furthermore, our analysis incorporates differential privacy techniques, adding an additional layer of privacy guarantees to the already privacy-preserving data modality we use. By incorporating differential privacy safeguards, we have not only ensured the present security of our system but also fortified it against potential future linkage attacks. This robust approach guarantees the protection of sensitive data, reinforcing the trustworthiness of our system for long-term use.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Mechanism & \(\epsilon\) & MAE & RMSE & \(\rho\) \\ \hline Laplace & 5 & 2.74 & 3.80 & 0.69 \\ \hline Laplace & 2 & 2.87 & 3.97 & 0.66 \\ \hline Laplace & 1 & 3.03 & 4.18 & 0.62 \\ \hline Laplace & 0.5 & 3.18 & 4.34 & 0.55 \\ \hline Laplace & 0.25 & 3.60 & 4.80 & 0.41 \\ \hline Laplace & 0.10 & 4.12 & 5.31 & 0.19 \\ \hline \end{tabular} \end{table} Table 4. Results for epsilon-delta differential privacy settings

One of the limitations of our current approach is that we have only deployed our sensors in a hospital waiting room. To confirm the general applicability of our method, further investigations and evaluations in other scenarios may be required. We argue that our method is highly suitable for similar indoor environments with higher privacy expectations. Our findings have a significant impact on the design and implementation of smart technology for homes, public spaces, and workplaces. Our approach could help to instill public trust by providing privacy protection while providing valuable insight beyond occupancy, such as syndromic surveillance [6]. To conclude, our study demonstrates the feasibility and advantages of estimating occupancy using non-speech audio. Our approach surpasses other modalities in terms of privacy preservation and accuracy of occupancy estimation.
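For completeness, the clipping and Laplace-noise step of Section 6.2 can be sketched as follows; this is a minimal illustration, and the clipping bound \(C\), the privacy parameter \(\epsilon\), and the embedding shapes are placeholders rather than the deployed settings.

```python
# Illustrative sketch of the per-second clipping + Laplace-noise mechanism (Section 6.2).
import numpy as np

def privatize_embeddings(E, C=1.0, epsilon=1.0, seed=0):
    """E: (t, d) array of per-second embeddings. L1-clip each row to C, add Laplace(C/epsilon) noise."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(E, ord=1, axis=1, keepdims=True)
    E_clipped = E / np.maximum(1.0, norms / C)                      # ensures ||e_i||_1 <= C
    noise = rng.laplace(loc=0.0, scale=C / epsilon, size=E.shape)   # Laplace mechanism, scale C/epsilon
    return E_clipped + noise

E = np.random.randn(60, 128)            # one minute of 128-dimensional embeddings (placeholder)
print(privatize_embeddings(E).shape)    # (60, 128)
```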
2309.00093
Boundary Control and Observer Design Via Backstepping for a Coupled Parabolic-Elliptic System
Stabilization of a coupled system consisting of a parabolic partial differential equation and an elliptic partial differential equation is considered. Even in the situation when the parabolic equation is exponentially stable on its own, the coupling between the two equations can cause instability in the overall system. A backstepping approach is used to derive a boundary control input that stabilizes the coupled system. The result is an explicit expression for the stabilizing control law. The second part of the paper involves the design of exponentially convergent observers to estimate the state of the coupled system, given some partial boundary measurements. The observation error system is shown to be exponentially stable, again by employing a backstepping method. This leads to the design of observer gains in closed-form. Finally, we address the output-feedback problem by combining the observers with the state feedback boundary control. The theoretical results are demonstrated with numerical simulations.
Ala' Alalabi, Kirsten Morris
2023-08-31T19:24:06Z
http://arxiv.org/abs/2309.00093v1
# Boundary Control and Observer Design Via Backstepping for a Coupled Parabolic-Elliptic System

###### Abstract

Stabilization of a coupled system consisting of a parabolic partial differential equation and an elliptic partial differential equation is considered. Even in the situation when the parabolic equation is exponentially stable on its own, the coupling between the two equations can cause instability in the overall system. A backstepping approach is used to derive a boundary control input that stabilizes the coupled system. The result is an explicit expression for the stabilizing control law. The second part of the paper involves the design of exponentially convergent observers to estimate the state of the coupled system, given some partial boundary measurements. The observation error system is shown to be exponentially stable, again by employing a backstepping method. This leads to the design of observer gains in closed-form. Finally, we address the output-feedback problem by combining the observers with the state feedback boundary control. The theoretical results are demonstrated with numerical simulations.

Ala' Alalabi and Kirsten Morris. This work was supported by NSERC and the Faculty of Mathematics, University of Waterloo.

Keywords: boundary control, observer design, parabolic-elliptic systems, backstepping approach, exponential stability.

## 1 Introduction

The coupling of parabolic partial differential equations with elliptic partial differential equations appears in a number of physical applications, including electrochemical models of lithium-ion cells [34, 9], biological transport networks [17], chemotaxis phenomena [22, 2] and the thermistor [19]. Parabolic-elliptic systems are thus an important class of partial differential-algebraic equations (PDAEs). Well-posedness of linear PDAEs has been addressed in the literature [26, 28, 30, 12]. There have also been research efforts dedicated to investigating the well-posedness of particular linear parabolic-elliptic systems [2, 7]. In addition, the well-posedness of quasilinear and nonlinear parabolic-elliptic systems has received recent attention; see [17, 18, 22, 33, 8]. Parabolic-elliptic systems can exhibit instability, leading to several complications in real-life applications. One example can be found in the context of chemotaxis phenomena. In this case the parabolic equation describes the diffusion of cells while the elliptic equation models the concentration of a chemical attractant. In [27] the authors showed that the aforementioned model can manifest unstable dynamics, leading to spatially complex patterns in cell density as well as in the concentration of the chemical; see [24]. These patterns will in turn impact many biological processes including tissue development, tumor growth and wound healing. A more recent illustration of the instability in coupled parabolic-elliptic systems can be found in [23]. The findings in that paper demonstrated that even when the parabolic equation is exponentially stable, coupling with the elliptic equation can lead to an unstable system. Stabilization through boundary control for coupled linear parabolic partial differential equations has been studied in the literature. In [6], the backstepping approach was used to stabilize the dynamics of a linear coupled reaction-diffusion system with constant coefficients. An extension of this work to systems with variable coefficients was presented in [31]. Koga et al.
[14] described boundary control of the one-phase Stefan problem, modeled by a diffusion equation coupled with an ordinary differential equation. Feedback stabilization of a PDE-ODE system was also studied in [11]. There is less research tackling the stabilization of coupled parabolic-elliptic systems [15, 16, 23]. In [16, chap. 10], stabilization of parabolic-elliptic systems arose in the context of stabilizing boundary control of linearized Kuramoto-Sivashinsky and Korteweg-de Vries equations. The controller required the presence of two Dirichlet control inputs. A similar approach was also followed in [15], where a hyperbolic-elliptic system arose in the control of a Timoshenko beam. More recently, Parada et al. [23] considered the boundary control of an unstable parabolic-elliptic system through Dirichlet control with input delay. The first main contribution of this paper is the design of a single feedback Neumann control input that exponentially stabilizes the dynamics of the two coupled equations. The control input is designed directly on the system of partial differential equations, without approximation by finite-dimensional systems. This will be done by a backstepping approach [16]. Backstepping is one of the few methods that yields an explicit control law for PDEs without first approximating the PDE. The backstepping transformation is generally formulated as a Volterra operator, which guarantees, under weak conditions, the invertibility of the transformation. When using backstepping, it is typical to identify the destabilizing terms in the system, find a suitable exponentially stable target system in which the destabilizing terms are eliminated by the state transformation and feedback control, and then look for an invertible state transformation of the original system into the exponentially stable target system. This requires finding a kernel of the Volterra operator and also showing that the kernel is well-defined as the solution of an auxiliary PDE. One possible approach to the stabilization of a parabolic-elliptic system is to convert the coupled system into one equation in terms of the parabolic state. However, this results in the presence of a Fredholm operator that makes it difficult to establish a suitable kernel for the backstepping transformation. Another approach would be a vector-valued transformation for both the parabolic and the elliptic states, but this is quite complex. A different approach is taken here. We use a single transformation previously used for a parabolic equation [16]. Properties of the kernel of this transformation have already been established. This leads to an unusual target system in a parabolic-elliptic form. As in other backstepping designs, an explicit expression for the controller is obtained as a byproduct of the transformation. The problem is then to establish the stability of the target system obtained from the transformation, which implies the stability of the original coupled system via the invertible transformation. Explicit calculation of the eigenfunctions is not required. In many dynamical systems, the full state is not available. This issue motivates constructing an estimate of the state by designing an observer. The literature on observer design for coupled systems appears to focus on the observer synthesis of systems governed by coupled hyperbolic PDEs or coupled parabolic PDEs; see for instance [1, 5, 21, 32].
There have been few papers addressing the observer design problem for partial differential equations coupled with an elliptic equation [13, 15]. In [13] the authors designed a state observer for a coupled parabolic-elliptic system by requiring a two-sided boundary input for the observer. Also, in the same work mentioned earlier for the Timoshenko beam [15], the authors studied its observer design by using a hyperbolic-elliptic system and requiring two control inputs. In [25] an observer is designed for a parabolic equation that includes a Volterra term. We design an observer using two measurements, and also several observers that require only one measurement. The exponential stability of the observation error dynamics is achieved by means of designing suitable output injections, also known as filters or observer gains. In parallel to our approach for the boundary controller design, we derive observer gains by using backstepping transformations that are well-established in the literature. The transformations result in target error systems whose stability is studied. We finally combine the state feedback and observer designs to obtain an output feedback controller for the coupled parabolic-elliptic system. Numerical simulations are presented that illustrate the theoretical findings for each of the objectives. To our knowledge, this paper is the first to design a state estimator for a coupled parabolic-elliptic system with partial boundary measurements, and only one control input. This paper is structured as follows. Section 2 presents the well-posedness of the parabolic-elliptic systems under consideration. Stability analysis for the uncontrolled system is also described. Section 3 includes the first main result, which is the use of a backstepping method to design a boundary controller for the coupled system. A preliminary version of Section 3, on stabilization, is included in the proceedings of the 2023 Conference on Decision and Control [3]. The design of a state observer for the coupled system is given in Section 4. Different designs are proposed based on the available measurements. The output feedback problem is described in Section 5. Conclusions and some discussion of possible extensions are given in Section 6. ## 2 Well-posedness and stability of system We study parabolic-elliptic systems of the form \[w_{t}(x,t)= w_{xx}(x,t)-\rho w(x,t)+\alpha v(x,t), \tag{1}\] \[0= v_{xx}(x,t)-\gamma v(x,t)+\beta w(x,t),\] (2) \[w_{x}(0,t)= 0,\quad w_{x}(1,t)=u(t),\] (3) \[v_{x}(0,t)= 0,\quad v_{x}(1,t)=0, \tag{4}\] where \(x\in[0,1]\) and \(t\geq 0\). The parameters \(\rho,\ \alpha,\ \beta,\ \gamma\) are all real, with \(\alpha\), \(\beta\) both nonzero. With the notation \(\Delta v(x)=\frac{d^{2}v}{dx^{2}}\), define the operator \(A^{\gamma}:D(A^{\gamma})\to L^{2}(0,1)\) \[A^{\gamma}v(x)=(\gamma I-\Delta)v(x)=w(x),\] \[D(A^{\gamma})=\{v\in H^{2}(0,1),\ v^{\prime}(0)=v^{\prime}(1)=0\}.\] For values of \(\gamma\) that are not eigenvalues of \(\Delta\) with these Neumann boundary conditions, that is \(\gamma\neq-(n\pi)^{2}\) with \(n=0,\dots\), the inverse operator \((A^{\gamma})^{-1}:L^{2}(0,1)\to D(A^{\gamma})\) exists. In this situation, the uncontrolled system (1)-(4) is well-posed [12].
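For readers who wish to experiment numerically, the action of \((A^{\gamma})^{-1}\) can be approximated with a standard finite-difference discretization of the Neumann Laplacian. The sketch below is illustrative only (the paper's own simulations use a finite-element approximation); the function name and grid size are arbitrary, and the parameter values \(\gamma=\frac{1}{4}\), \(\beta=\frac{1}{2}\) are those used later in the numerical simulations.

```python
# Sketch: solve (gamma*I - Delta) v = beta*w with v'(0) = v'(1) = 0 by second-order finite differences.
import numpy as np

def solve_elliptic(w, gamma, beta, L=1.0):
    n = w.size
    h = L / (n - 1)
    A = np.diag(np.full(n, gamma + 2.0 / h**2)) \
        + np.diag(np.full(n - 1, -1.0 / h**2), 1) \
        + np.diag(np.full(n - 1, -1.0 / h**2), -1)
    A[0, 1] = -2.0 / h**2      # ghost-point treatment of v'(0) = 0
    A[-1, -2] = -2.0 / h**2    # ghost-point treatment of v'(1) = 0
    return np.linalg.solve(A, beta * w)

x = np.linspace(0, 1, 201)
v = solve_elliptic(np.cos(np.pi * x), gamma=0.25, beta=0.5)
# For w = cos(pi x) the exact solution is v = beta/(gamma + pi^2) cos(pi x); the error should be small.
print(np.max(np.abs(v - 0.5 / (0.25 + np.pi**2) * np.cos(np.pi * x))))
```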
Alternatively, define \[A=\Delta-\rho I+\alpha\beta(\gamma I-\Delta)^{-1}, \tag{5}\] \[D(A)=\{w\in H^{2}(0,1),\ w^{\prime}(0)=w^{\prime}(1)=0\}.\] **Theorem 1**: _If \(\gamma\neq-(n\pi)^{2}\) the operator \(A\) generates a \(C_{0}\)-semigroup and the control system (1)-(4) with observation \(w(0,t)\) is well-posed on the state-space \(L^{2}(0,1)\). It is similarly well-posed with control instead at \(x=0\), and/or observation at \(x=1\)._ **Proof.** With \(\alpha=0\) the control system is the heat equation with Neumann boundary control. This control system is well-known to be well-posed on \(L^{2}(0,1)\). Since the operator \(A\) is a bounded perturbation of \(\Delta\), the conclusion of the theorem follows. \(\Box\) It will henceforth be assumed that \(\gamma\neq-(n\pi)^{2}\). **Theorem 2**: _Let \(u(t)\equiv 0\). The eigenvalues of the operator \(A\) of system (1)-(4) are_ \[\lambda_{n}=-\rho+\frac{\alpha\beta}{\gamma+(n\pi)^{2}}-(n\pi)^{2},\ \ n=0,\ 1,\ \dots \tag{6}\] **Proof.** The analysis is standard but given for completeness. Let \(\{\phi_{j}\}_{j\geq 0}\subset{\cal C}^{4}(0,1)\) be the eigenfunctions of the operator \(A\) corresponding to the eigenvalues \(\lambda_{j}\); then, setting \(\beta(\gamma I-\partial_{xx})^{-1}\phi_{j}=e_{j}\), \[\lambda_{j}\phi_{j}(x)=\phi_{j}^{{}^{\prime\prime}}(x)-\rho\phi_{ j}(x)+\alpha e_{j}(x) \tag{7}\] \[0=e_{j}^{{}^{\prime\prime}}(x)-\gamma e_{j}(x)+\beta\phi_{j}(x)\] (8) \[\phi_{j}^{{}^{\prime}}(0)=\phi_{j}^{{}^{\prime}}(1)=0\] (9) \[e_{j}^{{}^{\prime}}(0)=e_{j}^{{}^{\prime}}(1)=0. \tag{10}\] Solving (7) for \(e_{j}(x)\) \[e_{j}(x)=\frac{\rho+\lambda_{j}}{\alpha}\phi_{j}(x)-\frac{1}{\alpha}\phi_{j}^{{}^{\prime\prime}}(x). \tag{11}\] Substituting for \(e_{j}(x)\) in (8), we obtain the fourth-order differential equation \[\phi_{j}^{{}^{\prime\prime\prime\prime}}(x)-(\lambda_{j}+\rho+\gamma) \phi_{j}^{{}^{\prime\prime}}(x)\] \[+(\gamma(\lambda_{j}+\rho)-\alpha\beta)\phi_{j}(x)=0, \tag{12}\] with the boundary conditions \[\phi_{j}^{{}^{\prime}}(0)=\phi_{j}^{{}^{\prime}}(1)=\phi_{j}^{{}^{\prime \prime\prime}}(0)=\phi_{j}^{{}^{\prime\prime\prime}}(1)=0. \tag{13}\] Solving system (12)-(13) for \(\phi_{j}\) yields that \(\phi_{j}=\cos(j\pi x)\) for \(j=0,\ 1,\dots\). Subbing \(\phi_{j}\) in (12) and solving for \(\lambda_{j}\) leads to (6). \(\Box\) **Corollary 3**: _System (1)-(4) is exponentially stable if and only if_ \[\rho>\frac{\alpha\beta}{\gamma}, \tag{14}\] _and the decay rate in that case is bounded by \(\rho-\frac{\alpha\beta}{\gamma}\), the magnitude of the largest eigenvalue._ **Proof.** Since \(\Delta\) with domain \(D(A)\) is a Riesz-spectral operator and \(A\) is a bounded perturbation of it, \(A\) is also a spectral operator. Alternatively, we note that \(A\) is a self-adjoint operator with a compact inverse and hence it is Riesz-spectral [10, section 3]. Thus, \(A\) generates a \(C_{0}\)-semigroup with growth bound determined by the eigenvalues. \(\Box\) Thus, even in the case when the parabolic equation is exponentially stable, coupling with the elliptic system can cause the uncontrolled system to be unstable. ## 3 Stabilization To design a stabilizing control input, a backstepping approach will be used. Unlike the work done in [16, Chap. 10], where two control inputs are used, we stabilize the dynamics of the coupled parabolic-elliptic equations (1)-(4) using a single control signal. The following lemma will be used. **Lemma 4**: _[_16_, chap.
4]_ _For any \(c_{2}>0\) the hyperbolic partial differential equation_ \[k_{yy}^{a}(x,y)-k_{xx}^{a}(x,y)+c_{2}k^{a}(x,y)=0,\quad 0<y<x<1 \tag{15a}\] \[k_{y}^{a}(x,0)=0,\quad k^{a}(x,x)=-\frac{1}{2}c_{2}x, \tag{15b}\] _has a continuous unique solution_ \[k^{a}(x,y)=-c_{2}x\frac{I_{1}\left(\sqrt{c_{2}(x^{2}-y^{2})}\right)}{\sqrt{c_{ 2}(x^{2}-y^{2})}}, \tag{16}\] _where \(I_{1}(\cdot)\) is the modified Bessel function of first order defined as_ \[I_{1}(x)=\sum_{m=0}^{\infty}\frac{(x/2)^{2m+1}}{m!(m+1)!}.\] We apply the invertible state transformation \[\tilde{w}(x,t)= w(x,t)-\int_{0}^{x}k^{a}(x,y)w(y,t)dy, \tag{17}\] on the parabolic state \(w(x,t)\), while the elliptic state \(v(x,t)\) is unchanged. Here the kernel of the transformation \(k^{a}(x,y)\) is given by (16). The inverse transformation of (17) was given in [16]. **Lemma 5**: _[_16_, chap. 4]_ _The inverse transformation of (17) is_ \[w(x,t)= \tilde{w}(x,t)+\int_{0}^{x}\ell^{a}(x,y)\tilde{w}(y,t)dy, \tag{18}\] _where \(l(x,y)\) is the solution of the system_ \[\ell^{a}_{xx}(x,y)-\ell^{a}_{yy}(x,y)+c_{2}\ell^{a}(x,y)=0, \tag{19a}\] \[\ell^{a}_{y}(x,0)=0,\quad\ell^{a}(x,x)=-\frac{1}{2}c_{2}x, \tag{19b}\] _that is_ \[\ell^{a}(x,y)=-c_{2}x\frac{J_{1}\left(\sqrt{c_{2}(x^{2}-y^{2})} \right)}{\sqrt{c_{2}(x^{2}-y^{2})}}, \tag{20}\] _where \(J_{1}(\cdot)\) is the Bessel function of first order defined as_ \[J_{1}(x)=\sum_{m=0}^{\infty}(-1)^{m}\frac{(x/2)^{2m+1}}{m!(m+1)!}.\] In what follows we set \(c_{2}=c_{1}-\rho\) with \(c_{2}>0\). **Theorem 6**: _If the control signal \(u(t)\) is given by_ \[u(t)= \int_{0}^{1}k_{x}^{a}(1,y)w(y,t)dy+k^{a}(1,1)w(1,t), \tag{21}\] _then transformation (17), with \(k^{a}(x,y)\) given by system (15), converts the parabolic-elliptic system (1)-(4) into the target system_ \[\tilde{w}_{t}(x,t)= \tilde{w}_{xx}(x,t)-(c_{2}+\rho)\tilde{w}(x,t)+\alpha v(x,t)\] \[-\alpha\int_{0}^{x}k^{a}(x,y)v(y,t)dy, \tag{22}\] \[0= v_{xx}(x,t)-\gamma v(x,t)+\beta\tilde{w}(x,t)\] \[+\beta\int_{0}^{x}\ell^{a}(x,y)\tilde{w}(y,t)dy,\] (23) \[\tilde{w}_{x}(0,t)= 0,\quad\tilde{w}_{x}(1,t)=0,\] (24) \[v_{x}(0,t)= 0,\quad v_{x}(1,t)=0. \tag{25}\] **Proof.** It will prove useful to rewrite (17) as \[w(x,t)= \tilde{w}(x,t)+\int_{0}^{x}k^{a}(x,y)w(y,t)dy. \tag{26}\] We differentiate (26) with respect to \(x\) twice \[w_{xx}(x,t)=\tilde{w}_{xx}(x,t)+\int_{0}^{x}k_{xx}^{a}(x,y)w(y,t)dy\] \[+k_{x}^{a}(x,x)w(x,t)+\frac{d}{dx}k^{a}(x,x)w(x,t)\] \[+k^{a}(x,x)w_{x}(x,t), \tag{27}\] and with respect to \(t\) \[w_{t}(x,t)=\tilde{w}_{t}(x,t)+\int_{0}^{x}k^{a}(x,y)w_{t}(y,t)dy\] \[=\tilde{w}_{t}(x,t)+k^{a}(x,x)w_{x}(x,t)-\int_{0}^{x}k_{y}^{a}(x, y)w_{y}(y,t)dy\] \[-\rho\int_{0}^{x}k^{a}(x,y)w(y,t)dy+\alpha\int_{0}^{x}k^{a}(x,y) v(y,t)dy\] \[=\tilde{w}_{t}(x,t)+k^{a}(x,x)w_{x}(x,t)-k_{y}^{a}(x,x)w(x,t)\] \[+k_{y}^{a}(x,0)w(0,t)+\int_{0}^{x}k_{yy}^{a}(x,y)w(y,t)dy\] \[-\rho\int_{0}^{x}k^{a}(x,y)w(y,t)dy+\alpha\int_{0}^{x}k^{a}(x,y) v(y,t)dy.\] Here \[k_{x}^{a}(x,x)=\frac{\partial}{\partial x}k^{a}(x,y)|_{x=y},\ k _{y}^{a}(x,x)=\frac{\partial}{\partial y}k^{a}(x,y)|_{x=y},\] \[\frac{d}{dx}k^{a}(x,x)=k_{x}^{a}(x,x)+k_{y}^{a}(x,x).\] Substituting (27) and (28) in (1), \[\tilde{w}_{t}(x,t)+k^{a}(x,x)w_{x}(x,t)-k_{y}^{a}(x,x)w(x,t)\] \[+k_{y}^{a}(x,0)w(0,t)+\int_{0}^{x}k_{yy}^{a}(x,y)w(y,t)dy\] \[-\rho\int_{0}^{x}k^{a}(x,y)w(y,t)dy+\alpha\int_{0}^{x}k^{a}(x,y)v(y,t)dy\] \[=\tilde{w}_{xx}(x,t)+\int_{0}^{x}k_{xx}^{a}(x,y)w(y,t)dy+k_{x}^{a} (x,x)w(x,t)\] \[+\frac{d}{dx}k^{a}(x,x)w(x,t)+k^{a}(x,x)w_{x}(x,t)-\rho w(x,t)\] \[+\alpha v(x,t). 
\tag{29}\] Since \(k_{y}^{a}(x,0)=0\), adding and subtracting \((c_{2}+\rho)\tilde{w}(x,t)\) on the right-hand side of (29) gives \[\tilde{w}_{t}(x,t)=\tilde{w}_{xx}(x,t)-(c_{2}+\rho)\tilde{w}(x,t)+ \alpha v(x,t)\] \[-\alpha\int_{0}^{x}k^{a}(x,y)v(y,t)dy+(2\frac{d}{dx}k^{a}(x,x)+c_{ 2})w(x,t)\] \[+\int_{0}^{x}[k^{a}_{xx}(x,y)-k^{a}_{yy}(x,y)-c_{2}k^{a}(x,y)]w(y, t)dy.\] Since \(k^{a}(x,y)\) is given by (15), the previous equation reduces to (22). Also, \[\tilde{w}_{x}(0,t)=w_{x}(0,t)-k^{a}(0,0)w(0,t)=0,\] and the other boundary condition on \(\tilde{w}(x,t)\) holds by using (21). Equation (23) can be obtained by referring to (18). \(\Box\) Next, we provide conditions that ensure the exponential stability of the target system. First, we need the following lemma, which provides bounds on the induced \(L^{2}\)-norms of the kernel functions \(k^{a}(x,y)\) and \(\ell^{a}(x,y)\). **Lemma 7**: _The \(L^{2}\)-norms of \(k^{a}(x,y)\) and \(\ell^{a}(x,y)\) are bounded by_ \[\|k^{a}\|\leq \sqrt{\frac{c_{2}\pi}{8}}\;\left(erfi(\sqrt{\frac{c_{2}}{2}})erf( \sqrt{\frac{c_{2}}{2}})\right)^{\frac{1}{2}}, \tag{30}\] \[\|\ell^{a}\|\leq \sqrt{\frac{c_{2}\pi}{8}}\;\left(erfi(\sqrt{\frac{c_{2}}{2}})erf( \sqrt{\frac{c_{2}}{2}})\right)^{\frac{1}{2}}, \tag{31}\] _where \(erfi(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{\xi^{2}}d\xi\), \(erf(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-\xi^{2}}d\xi\)._ **Proof.** To prove relation (30), we recall the expression for the kernel \(k^{a}(x,y)\) given in (16). We set \(z=\sqrt{c_{2}(x^{2}-y^{2})}\), then \[|k^{a}(x,y)|= \frac{c_{2}}{z}x\sum_{m=0}^{\infty}\left(\frac{z}{2}\right)^{2m+1}\frac{1}{m!\,(m+1)!}\] \[= \frac{c_{2}}{2}x\sum_{m=0}^{\infty}\frac{(z^{2}/4)^{m}}{m!}\frac{1}{(m+1)!}\] \[\leq \frac{c_{2}}{2}x\sum_{m=0}^{\infty}\frac{(z^{2}/4)^{m}}{m!}.\] Thus the induced \(L_{2}\)-norm of \(k^{a}(x,y)\) is bounded by \[\|k^{a}(x,y)\|\leq\frac{c_{2}}{2}\|x\|\,\|e^{\frac{c_{2}x^{2}}{4}}\|\,\|e^{\frac{-c_{2}y^{2}}{4}}\|\] \[\leq\sqrt{\frac{c_{2}\pi}{8}}\;\left(erfi(\sqrt{\frac{c_{2}}{2}}) erf(\sqrt{\frac{c_{2}}{2}})\right)^{\frac{1}{2}}.\] Similarly, one can prove (31) by referring back to (20). \[|\ell^{a}(x,y)| =\left|\frac{-c_{2}}{z}x\sum_{m=0}^{\infty}(-1)^{m}\left(\frac{z}{2} \right)^{2m+1}\frac{1}{m!\,(m+1)!}\right|\] \[\leq\frac{c_{2}}{z}x\sum_{m=0}^{\infty}\left(\frac{z}{2}\right)^{2m+1}\frac{1}{m!\,(m+1)!},\] and the \(L_{2}\)-norm of \(\ell^{a}(x,y)\) is bounded by \[\|\ell^{a}(x,y)\|\leq\sqrt{\frac{c_{2}\pi}{8}}\;\left(erfi(\sqrt{\frac{c_{2}}{2 }})erf(\sqrt{\frac{c_{2}}{2}})\right)^{\frac{1}{2}}.\] \(\Box\) The following lemma will be needed to show stability of the target system. **Lemma 8**: _Let \(\gamma>0\). The states of the target system (22)-(25) satisfy_ \[\|v(x,t)\|\leq\frac{|\beta|}{\gamma}(1+\|\ell^{a}\|)\|\tilde{w}\|. \tag{32}\] **Proof.** Multiply equation (23) with \(v(x,t)\) and integrate from \(0\) to \(1\), \[0 =\int_{0}^{1}v_{xx}(x,t)v(x,t)dx-\gamma\int_{0}^{1}v^{2}(x,t)dx\] \[+\beta\int_{0}^{1}\tilde{w}(x,t)v(x,t)dx+\beta\int_{0}^{1}v(x,t) \int_{0}^{x}\ell^{a}(x,y)\] \[\times\tilde{w}(y,t)dydx.\] Thus \[\gamma\int_{0}^{1}v^{2}(x,t)dx\leq\beta\int_{0}^{1}\tilde{w}(x,t)v( x,t)dx\] \[+\beta\int_{0}^{1}v(x,t)\int_{0}^{x}\ell^{a}(x,y)\tilde{w}(y,t)dydx. \tag{33}\] Bounding the terms on the right-hand side of inequality (33) using the Cauchy-Schwartz inequality leads to (32). \(\Box\) **Theorem 9**: _The target system (22)-(25) is exponentially stable if_ \[c_{2}+\rho> \frac{|\alpha\beta|}{\gamma}(1+\|\ell^{a}\|)(1+\|k^{a}\|).
\tag{34}\] **Proof.** Define the Lyapunov function candidate, \[V(t)= \frac{1}{2}\int_{0}^{1}\tilde{w}^{2}(x,t)dx=\frac{1}{2}\|\tilde{w}(x,t) \|^{2}.\] Taking the time derivative of \(V(t)\), \[\dot{V}(t)=\int_{0}^{1}\tilde{w}(x,t)\tilde{w}_{t}(x,t)dx\] \[\leq-(c_{2}+\rho)\int_{0}^{1}\tilde{w}^{2}(x,t)dx+\alpha\int_{0}^{1 }\tilde{w}(x,t)v(x,t)dx\] \[-\alpha\int_{0}^{1}\tilde{w}(x,t)\int_{0}^{x}k^{a}(x,y)v(y,t)dydx. \tag{35}\] Using the Cauchy-Schwartz inequality, we estimate the terms on the right-hand side of inequality (35) as follows. \[\alpha\int_{0}^{1}\tilde{w}(x,t)v(x,t)dx\leq |\alpha|\,\|\tilde{w}\|\,\|v\|\] \[\leq \frac{|\alpha||\beta|}{\gamma}(1+\|\ell^{a}\|)\|\tilde{w}\|^{2}, \tag{36}\] and \[-\alpha\int_{0}^{1}\tilde{w}(x,t)\int_{0}^{x}k^{a}(x,y)v(y,t)dydx\] \[\leq|\alpha|\|k^{a}\|\|\tilde{w}\|\|v\|\] \[\leq\frac{|\alpha\beta|}{\gamma}\|k^{a}\|(1+\|\ell^{a}\|)\|\tilde {w}\|^{2}. \tag{37}\] Subbing (36) and (37) in (35), \[\dot{V}(t)\leq -\left((c_{2}+\rho)-\frac{|\alpha\beta|}{\gamma}(1+\|\ell^{a}\|)( 1+\|k^{a}\|)\right)\|\tilde{w}\|^{2}. \tag{38}\] Setting \[c_{3}= (c_{2}+\rho)-\frac{|\alpha\beta|}{\gamma}(1+\|\ell^{a}\|)(1+\|k^{ a}\|), \tag{39}\] then inequality (38) implies that \(V(t)\leq e^{-2c_{3}t}V(0)\). If the parameter \(c_{2}\) is chosen such that (34) is satisfied, then \(V(t)\) decays exponentially as \(t\rightarrow\infty\), and so does \(\|\tilde{w}(x,t)\|\). By means of Lemma 8, the state \(v(x,t)\) is asymptotically stable. Recalling that the operator \((\partial_{xx}-\gamma I)\) is boundedly invertible, then the elliptic equation (23) implies that \[v(x,t)\] \[=(\gamma I-\partial_{xx})^{-1}\left(\beta\tilde{w}(x,t)+\beta\int _{0}^{x}\ell^{a}(x,y)\tilde{w}(y,t)dy\right).\] Substituting for \(v(x,t)\) in the parabolic equation (22) leads to a system described by \(\tilde{w}(x,t)\) only. Hence, the exponential stability of the coupled system follows from the exponential stability of the state \(\tilde{w}(x,t)\). \(\Box\) The decay rate of the target system is bounded by (39). The following theorem is now immediate. **Theorem 10**: _System (1)-(4) is exponentially stable if the control signal is_ \[u(t)= \int_{0}^{1}k_{x}^{a}(1,y)w(y,t)dy+k^{a}(1,1)w(1,t), \tag{40}\] _with \(k^{a}(x,y)\) as in Lemma 4, and parameter \(c_{2}\) satisfies_ \[c_{2}+\rho\] \[>\frac{|\alpha\beta|}{\gamma}\left[1+\sqrt{\frac{c_{2}\pi}{8}} \;(erfi(\sqrt{\frac{c_{2}}{2}}))^{\frac{1}{2}}(erf(\sqrt{\frac{c_{2}}{2}}))^{ \frac{1}{2}}\right]^{2}. \tag{41}\] **Proof.** Since \(c_{2}\) is given by (41), it follows from Theorem 9 and Lemma 7 that the target system (22)-(25) is exponentially stable. It follows from Theorem 6 that with \(u(t)\) given as in (40), there is an invertible state transformation between system (1)-(4) and the exponentially stable target system (22)-(25). The conclusion is now immediate. \(\Box\) Figure 1 illustrates the restrictiveness of the criterion (41). This figure gives a comparison between the right-hand side of inequality (41) and different straight lines \(c_{2}+\rho\) for various values of \(\rho\), while setting \(\gamma=\beta=1\) and \(\alpha=0.5\). The dashed line describes the right-hand side of inequality (41), whereas the solid lines represent \(c_{2}+\rho\) for different values of \(\rho\). For some \(\rho\), if values of \(c_{2}\) are such that the dashed line (- - -) is below the straight line \(c_{2}+\rho\), bound (41) is fulfilled, and hence stability of the target system (22)-(25) follows.
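The open-loop eigenvalues (6), the kernel (16), the sufficient condition (41), and the feedback law (40) are all straightforward to evaluate numerically. The sketch below is not taken from the paper; it uses the parameter values employed in the simulations that follow, and the grid, the test profile \(w\), and the finite-difference approximation of \(k_{x}^{a}(1,y)\) are illustrative choices.

```python
# Minimal numerical sketch: eigenvalues (6), kernel (16), condition (41), feedback law (40).
import numpy as np
from scipy.special import iv, erf, erfi

rho, gamma, alpha, beta = 1/3, 1/4, 1/4, 1/2
c2 = 1.2 - rho                                   # gain used in the simulations below

# Open-loop eigenvalues (6): lambda_0 = -rho + alpha*beta/gamma = 1/6 > 0, so u = 0 is unstable.
lam = [-rho + alpha * beta / (gamma + (n * np.pi) ** 2) - (n * np.pi) ** 2 for n in range(4)]
print(lam[0])

def k_a(x, y, c2=c2):
    """Backstepping kernel (16); the limit -c2*x/2 is used on the diagonal y = x."""
    z = np.sqrt(np.maximum(c2 * (x ** 2 - y ** 2), 0.0))
    safe = np.where(z > 1e-12, z, 1.0)
    return np.where(z > 1e-12, -c2 * x * iv(1, z) / safe, -c2 * x / 2.0)

def condition_41(c2):
    s = np.sqrt(c2 * np.pi / 8) * np.sqrt(erfi(np.sqrt(c2 / 2)) * erf(np.sqrt(c2 / 2)))
    return c2 + rho > abs(alpha * beta) / gamma * (1 + s) ** 2

print(condition_41(c2))                          # True: (41) holds for c2 = 1.2 - rho

# Feedback law (40): u = int_0^1 k_x^a(1,y) w(y) dy + k^a(1,1) w(1),
# with k_x^a(1,y) approximated by a one-sided finite difference in x.
y = np.linspace(0.0, 1.0, 401)
w = np.cos(np.pi * y)                            # an illustrative state profile
dx = 1e-6
kx_1y = (k_a(1.0 + dx, y) - k_a(1.0, y)) / dx
u = np.trapz(kx_1y * w, y) + k_a(1.0, 1.0) * w[-1]
print(float(u))
```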
**Illustration of the restriction (41) on \(c_{2}\)** Figure 1: A comparison between the right-hand side of (41) as a function of \(c_{2}\) against several straight lines \(c_{2}+\rho\) for different values of \(\rho\), where the other parameters are fixed as \(\beta=\gamma=1\), \(\alpha=0.5\). The right-hand side of (41) is described using a dashed line (- - -). Target system (22)-(25) is exponentially stable for values of \(c_{2}\) at which the straight line \(c_{2}+\rho\), for some \(\rho\), is above the dashed line (- - -). The figure showcases the restrictive nature associated with condition (41). The parameter \(\rho\) has to be large or the coupling factor \(\alpha\beta\) has to be small in order for inequality (41) to be fulfilled. ### Numerical simulations The solutions of system (1)-(4), both controlled and uncontrolled, were simulated numerically using a finite-element approximation in COMSOL Multiphysics software. The finite-element method (FEM) with linear splines was used to approximate the coupled equations by a system of DAEs. The spatial interval was divided into 27 subintervals. Also, time was discretized by a time-stepping algorithm called generalized alpha with a time step of \(0.2\). We set \(\gamma=\frac{1}{4}\), \(\rho=\frac{1}{3}\), \(\alpha=\frac{1}{4}\) and \(\beta=\frac{1}{2}\). For these parameter values, the system is unstable. Figure 2 presents the dynamics of the states \(w(x,t)\) and \(v(x,t)\) in the absence of the control with initial condition \(w_{0}=\sin(\pi x)\). The system was first controlled with the controller resulting from the choice of parameter \(c_{2}=1.2-\rho\), which satisfies inequality (41), and thus stability of the controlled system is guaranteed. This is illustrated in Figure 2. As predicted by the theory, the dynamics of the system decay to zero with time. A comparison between the \(L_{2}\)-norm of both states \(w(x,t)\) and \(v(x,t)\) before and after applying the control input is given in Figure 3. ## 4 Observer design In the previous section, the control input was designed based on the assumption that the state of system (1)-(4) is known. The focus of this section is to design an observer that estimates the state of the parabolic-elliptic system. We first design an observer using two measurements, \(v(1,t)\) and \(w(1,t)\). As for the situation when two controls can be used, the design is fairly straightforward. The situation becomes more intricate when only one measurement is available. Then, as for the single control situation covered in the previous section, a bound must be satisfied. ### Both \(w(1,t)\) and \(v(1,t)\) are available The objective is to design an observer when the available measurements of system (1)-(4) are \(w(1,t)\) and \(v(1,t)\). We propose the following observer for system (1)-(4) \[\hat{w}_{t}(x,t)= \hat{w}_{xx}(x,t)-\rho\hat{w}(x,t)+\alpha\hat{v}(x,t)\] \[+\eta_{1}(x)[w(1,t)-\hat{w}(1,t)], \tag{42a}\] \[0= \hat{v}_{xx}(x,t)-\gamma\hat{v}(x,t)+\beta\hat{w}(x,t)\] \[+\eta_{2}(x)[v(1,t)-\hat{v}(1,t)],\] (42b) \[\hat{w}_{x}(0,t)= 0,\quad\hat{w}_{x}(1,t)=u(t)+\eta_{3}[w(1,t)-\hat{w}(1,t)],\] (42c) \[\hat{v}_{x}(0,t)= 0,\quad\hat{v}_{x}(1,t)=\eta_{4}[v(1,t)-\hat{v}(1,t)]. \tag{42d}\] Two in-domain output injection functions \(\eta_{1}(x)\) and \(\eta_{2}(x)\), and two boundary injection values \(\eta_{3}\) and \(\eta_{4}\), are to be designed. We will need the following lemma.
Figure 3: A comparison between the \(L_{2}\)-norm of the solutions \(w(x,t)\) and \(v(x,t)\) for the uncontrolled and controlled systems with initial condition \(w_{0}=\sin(\pi x)\). The system parameters are \(\gamma=\frac{1}{4}\), \(\rho=\frac{1}{3}\), \(\alpha=\frac{1}{4}\), \(\beta=\frac{1}{2}\) so the uncontrolled system is unstable. In the absence of control, the \(L_{2}\)-norm of the states grows. The backstepping state-feedback control gain is given by \(c_{2}=1.2-\rho\); stability condition (41) is satisfied. The \(L_{2}\)-norm of the solution of the controlled system decays to zero. Figure 2: A 3D landscape of the dynamics of a coupled parabolic-elliptic system (1)-(4) with initial condition \(w_{0}=\sin(\pi x)\), without and with control. The parameters of the system are \(\gamma=\frac{1}{4}\), \(\rho=\frac{1}{3}\), \(\alpha=\frac{1}{4}\), \(\beta=\frac{1}{2}\). The uncontrolled system is unstable with this choice of parameters. The backstepping state-feedback control gain is \(c_{2}=1.2-\rho\) which meets the stability condition (41). The control causes the solutions of the system to decay to the steady-state solution as \(t\to\infty\). **Lemma 11**: _[_25_]_ _The hyperbolic partial differential equation_ \[k_{xx}^{b}(x,y)-k_{yy}^{b}(x,y)+o_{2}k^{b}(x,y)=0,\quad 0<x<y<1 \tag{43a}\] \[k_{x}^{b}(0,y)=0,\quad k^{b}(x,x)=-\frac{1}{2}o_{2}x, \tag{43b}\] _has a continuous unique solution. Here \(o_{2}=o_{1}-\rho\), and \(o_{2}>0\)._ Define the error states \[e^{w}(x,t)=w(x,t)-\hat{w}(x,t), \tag{44a}\] \[e^{v}(x,t)=v(x,t)-\hat{v}(x,t). \tag{44b}\] The observer error dynamics satisfy \[e_{t}^{w}(x,t)= e_{xx}^{w}(x,t)-\rho e^{w}(x,t)+\alpha e^{v}(x,t)\] \[-\eta_{1}(x)e^{w}(1,t), \tag{45a}\] \[0= e_{xx}^{v}(x,t)-\gamma e^{v}(x,t)+\beta e^{w}(x,t)\] \[-\eta_{2}(x)e^{v}(1,t),\] (45b) \[e_{x}^{w}(0,t)= 0,\quad e_{x}^{w}(1,t)=-\eta_{3}e^{w}(1,t),\] (45c) \[e_{x}^{v}(0,t)= 0,\quad e_{x}^{v}(1,t)=-\eta_{4}e^{v}(1,t), \tag{45d}\] A backstepping approach is used to select \(\eta_{1}(x),\ \eta_{2}(x),\)\(\eta_{3},\ \eta_{4}\) so that the error system (45d) is exponentially stable. We introduce the target system \[e_{t}^{\tilde{w}}(x,t)= e_{xx}^{\tilde{w}}(x,t)-(o_{2}+\rho)e^{\tilde{w}}(x,t)+\alpha e^{ \tilde{v}}(x,t), \tag{46a}\] \[0= e_{xx}^{\tilde{v}}(x,t)-(o_{2}+\gamma)e^{\tilde{v}}(x,t)+\beta e^{ \tilde{w}}(x,t),\] (46b) \[e_{x}^{\tilde{w}}(0,t)= 0,\qquad e_{x}^{\tilde{w}}(1,t)=0, \tag{46c}\] A pair of state transformations \[e^{w}(x,t)= e^{\tilde{w}}(x,t)-\int_{x}^{1}k_{1}(x,y)e^{\tilde{w}}(y,t)dy, \tag{47a}\] \[e^{v}(x,t)= e^{\tilde{v}}(x,t)-\int_{x}^{1}k_{2}(x,y)e^{\tilde{v}}(y,t)dy, \tag{47b}\] that transform the target system (46) into (45) are needed. **Theorem 12**: _If \(k_{1}(x,y)=k_{2}(x,y)=k^{b}(x,y)\) where \(k^{b}(x,y)\) satisfies (43), and if the output injections are_ \[\eta_{1}(x)=\eta_{2}(x)=-k_{y}^{b}(x,1), \tag{48}\] \[\eta_{3}=\eta_{4}=-k^{b}(1,1), \tag{49}\] _then transformations (47a) and (47b) convert the target system (46) into the original error dynamics (45)._ **Proof.** We first take the spatial derivatives of (47a). \[e_{x}^{w}(x,t)=e_{x}^{\tilde{w}}(x,t)-\int_{x}^{1}k_{1x}(x,y)e^ {\tilde{w}}(y,t)dy\] \[+k_{1}(x,x)e^{\tilde{w}}(x,t) \tag{50}\] \[e_{xx}^{w}(x,t)=e_{xx}^{\tilde{w}}(x,t)-\int_{x}^{1}k_{1xx}(x,y) e^{\tilde{w}}(y,t)dy\] \[+k_{1x}(x,x)e^{\tilde{w}}(x,t)+\frac{d}{dx}k_{1}(x,x)e^{\tilde{w} }(x,t)\] \[+k_{1}(x,x)e^{\tilde{w}}(x,t). 
\tag{51}\] Taking the time derivative of (47a) and integrating by parts \[e_{t}^{w}(x,t)=e_{t}^{\tilde{w}}(x,t)-\int_{x}^{1}k_{1}(x,y)e_{t }^{\tilde{w}}(y,t)dy\] \[=e_{t}^{\tilde{w}}(x,t)+(o_{2}+\rho)\int_{x}^{1}k_{1}(x,y)e^{ \tilde{w}}(y,t)dy-k_{1}(x,1)\] \[\times e_{x}^{\tilde{w}}(1,t)-\alpha\int_{x}^{1}k_{1}(x,y)e^{ \tilde{v}}(y,t)dy+k_{1}(x,x)e_{x}^{\tilde{w}}(x,t)\] \[+k_{1y}(x,1)e^{\tilde{w}}(1,t)-k_{1y}(x,x)e^{\tilde{w}}(x,t)\] \[-\int_{x}^{1}k_{1yy}(x,y)e^{\tilde{w}}(y,t)dy. \tag{52}\] We rewrite the right-hand-side of the parabolic equation (45a) of the error dynamics as \[e_{t}^{w}(x,t)-e_{xx}^{w}(x,t)+\rho e^{w}(x,t)-\alpha e^{v}(x,t)\] \[+\eta_{1}(x)e^{w}(1,t)=0. \tag{53}\] Substituting (51) and (52) in (53), then the left-hand-side of (53) is \[\left(\text{L.H.S}\right)_{1}=e_{t}^{\tilde{w}}(x,t)-e_{xx}^{\tilde {w}}(x,t)-\int_{x}^{1}k_{1yy}(x,y)\] \[\times e^{\tilde{w}}(y,t)dy+(o_{2}+\rho)\int_{x}^{1}k_{1}(x,y)e^{ \tilde{w}}(y,t)dy\] \[+\int_{x}^{1}k_{1xx}(x,y)e^{\tilde{w}}(y,t)dy-k_{1y}(x,x)e^{\tilde {w}}(x,t)\] \[-k_{1x}(x,x)e^{\tilde{w}}(x,t)-\frac{d}{dx}k_{1}(x,x)e^{\tilde{w}}( x,t)-k_{1}(x,x)\] \[\times e_{x}^{\tilde{w}}(x,t)+k_{1}(x,x)e_{x}^{\tilde{w}}(x,t)+\rho e ^{\tilde{w}}(x,t)-\alpha e^{\tilde{w}}(x,t)\] \[-\rho\int_{x}^{1}k_{1}(x,y)e^{\tilde{w}}(y)dy+\eta_{1}(x)e^{\tilde {w}}(1,t)+k_{1y}(x,1)\] \[\times e^{\tilde{w}}(1,t)-k_{1}(x,1)e_{x}^{\tilde{w}}(1,t)-\alpha \int_{x}^{1}k_{1}(x,y)e^{\tilde{w}}(y,t)dy\] \[+\alpha\int_{x}^{1}k_{2}(x,y)e^{\tilde{w}}(y,t)dy, \tag{54}\] \[\left(\text{R.H.S}\right)_{1}=0. \tag{55}\] Adding and subtracting the term \((o_{2}+\rho)e^{\tilde{w}}(x,t)\) to the right-hand-side of (54) \[\left({\rm L.H.S}\right)_{1}=e_{t}^{\tilde{w}}(x,t)-e_{xx}^{\tilde{w }}(x,t)+o_{2}e^{\tilde{w}}(x,t)-\alpha e^{\tilde{v}}(x,t)\] \[-\int_{x}^{1}[-o_{2}k_{1}(x,y)-k_{1xx}(x,y)+k_{1yy}(x,y)]e^{\tilde{ w}}(y,t)dy\] \[-k_{1}(x,1)e_{x}^{\tilde{w}}(1,t)-(2\frac{d}{dx}k(x,x)+o_{2})e^{ \tilde{w}}(x,t)\] \[+(\eta_{1}(x)+k_{1y}(x,1))e^{\tilde{w}}(1,t)-\alpha\int_{x}^{1}k _{1}(x,y)e^{\tilde{v}}(y,t)dy\] \[+\alpha\int_{x}^{1}k_{2}(x,y)e^{\tilde{v}}(y,t)dy. \tag{56}\] Using the boundary condition \(e_{x}^{\tilde{w}}(1,t)=0,\) if \(k_{1}(x,y)=k_{2}(x,y)=k^{b}(x,y)\) then equation (56) reduces to \[\left({\rm L.H.S}\right)_{1}=e_{t}^{\tilde{w}}(x,t)-e_{xx}^{\tilde {w}}(x,t)+(o_{2}+\gamma)e^{\tilde{w}}(x,t)\] \[-\alpha e^{\tilde{v}}(x,t)-\int_{x}^{1}[-o_{2}k^{b}(x,y)-k_{xx}^{ b}(x,y)+k_{yy}^{b}(x,y)]\] \[\times e^{\tilde{w}}(y,t)dy-(2\frac{d}{dx}k^{b}(x,x)+o_{2})e^{ \tilde{w}}(x,t)\] \[+(\eta_{1}(x)+k_{y}^{b}(x,1))e^{\tilde{w}}(1,t). \tag{57}\] If \(k^{b}(x,y)\) satisfies (43) and \(\eta_{1}(x)=k_{y}^{b}(x,1),\) then (57) becomes \[\left({\rm L.H.S}\right)_{1}=e_{t}^{\tilde{w}}(x,t)-e_{xx}^{\tilde {w}}(x,t)+(o_{2}+\gamma)e^{\tilde{w}}(x,t)\] \[-\alpha e^{\tilde{v}}(x,t).\] Referring to (55) and (46a), \[\left({\rm L.H.S}\right)_{1}=0=\left({\rm R.H.S}\right)_{1}.\] Hence the state transformation (47) transforms the parabolic equation (46a) into (45a). 
Referring to (50), we apply transformation (47a) to the boundary conditions (46c) \[e_{x}^{w}(0,t)= e_{x}^{\tilde{w}}(0,t)-\int_{0}^{1}k_{x}^{b}(0,y)e^{\tilde{w}}(y,t )dy\] \[+k^{b}(0,0)e^{\tilde{w}}(0,t)\] \[= 0,\] where the previous step was obtained by using (43b) and \(e^{\tilde{w}}(0,t)=e^{w}(0,t).\) Thus we obtain the boundary condition in (45c) at \(x=0.\) Similarly, \[e_{x}^{w}(1,t)= e_{x}^{\tilde{w}}(1,t)-\int_{1}^{1}k_{x}^{b}(1,y)e^{\tilde{w}}(y,t )dy\] \[+k^{b}(1,1)e^{\tilde{w}}(1,t)\] \[= k^{b}(1,1)e^{\tilde{w}}(1,t)=k^{b}(1,1)e^{w}(1,t).\] If \(\eta_{3}=-k^{b}(1,1)\) then we obtain the boundary condition in (45c) at \(x=1.\) We perform similar calculations on the elliptic equation (46c). First, we take the spatial derivative of (47b), \[e_{xx}^{v}(x,t)=e_{xx}^{\tilde{v}}(x,t)-\int_{x}^{1}k_{2xx}(x,y)e ^{\tilde{v}}(y,t)dy\] \[+k_{2x}(x,x)e^{\tilde{v}}(x,t)+\frac{d}{dx}k_{2}(x,x)e^{\tilde{v} }(x,t)\] \[+k_{2}(x,x)e^{\tilde{v}}(x,t). \tag{58}\] Subbing (58) in the right-hand-side of elliptic equation (45b), \[\left({\rm R.H.S}\right)_{2}=e_{xx}^{\tilde{v}}(x,t)-\int_{x}^{1}k _{2xx}(x,y)e^{\tilde{v}}(y,t)dy\] \[+k_{2x}(x,x)e^{\tilde{v}}(x,t)+\frac{d}{dx}k_{2}(x,x)e^{\tilde{v} }(x,t)+k_{2}(x,x)\] \[\times e_{x}^{\tilde{v}}(x,t)-\gamma e^{\tilde{v}}(x,t)+\gamma \int_{x}^{1}k_{2}(x,y)e^{\tilde{v}}(y,t)dy\] \[+\beta e^{\tilde{w}}(x,t)-\beta\int_{x}^{1}k_{1}(x,y)e^{\tilde{w} }(y,t)dy-\eta_{2}(x)e^{\tilde{v}}(1,t) \tag{59}\] \[\left({\rm L.H.S}\right)_{2}=0. \tag{60}\] Rewriting the last term of (59) as follows \[\beta\int_{x}^{1}k_{1}(x,y)e^{\tilde{w}}(y,t)dy=-\int_{x}^{1}k_{1} (x,y)e_{yy}^{\tilde{v}}(y,t)dy\] \[+(o_{2}+\gamma)\int_{x}^{1}k_{1}(x,y)e^{\tilde{v}}(y,t)dy,\] which can be obtained by referring to the elliptic equation of (46b), then (59) gives \[\left({\rm R.H.S}\right)_{2}=e_{xx}^{\tilde{v}}(x,t)-\int_{x}^{1} k_{2xx}(x,y)e^{\tilde{v}}(y,t)dy\] \[+k_{2x}(x,x)e^{\tilde{v}}(x,t)+\frac{d}{dx}k_{2}(x,x)e^{\tilde{v}}(x,t)+k_{2}(x,x)\] \[\times e_{x}^{\tilde{v}}(x,t)-\gamma e^{\tilde{v}}(x,t)+\gamma \int_{x}^{1}k_{2}(x,y)e^{\tilde{v}}(y,t)dy\] \[+\beta e^{\tilde{w}}(x,t)+\int_{x}^{1}k_{1}(x,y)e_{yy}^{\tilde{v} }(y,t)dy-\eta_{2}(x)e^{\tilde{v}}(1,t)\] \[-(o_{2}+\gamma)\int_{x}^{1}k_{1}(x,y)e^{\tilde{v}}(y,t)dy. \tag{61}\] Since \(k_{1}(x,y)=k_{2}(x,y)=k^{b}(x,y)\), then (61) leads to \[\left(\mbox{R.H.S}\right)_{2}=e_{xx}^{\tilde{v}}(x,t)-\int_{x}^{1}k _{xx}^{b}(x,y)e^{\tilde{v}}(y,t)dy\] \[+k_{x}^{b}(x,x)e^{\tilde{v}}(x,t)+\frac{d}{dx}k^{b}(x,x)e^{\tilde{v }}(x,t)+k^{b}(x,x)\] \[\times e_{x}^{\tilde{v}}(x,t)-\gamma e^{\tilde{v}}(x,t)+\gamma\int _{x}^{1}k^{b}(x,y)e^{\tilde{v}}(y,t)dy\] \[+\beta e^{\tilde{w}}(x,t)-(o_{2}+\gamma)\int_{x}^{1}k^{b}(x,y)e^{ \tilde{v}}(y,t)dy-\eta_{2}(x)\] \[\times e^{\tilde{v}}(1,t)+e_{x}^{\tilde{v}}(1,t)k^{b}(x,1)-e_{x} ^{\tilde{v}}(x,t)k^{b}(x,x)-e^{\tilde{v}}(1,t)\] \[\times k_{y}^{b}(x,1)+e^{\tilde{v}}(x,t)k_{y}^{b}(x,x)+\int_{x}^{ 1}k_{yy}^{b}(x,y)e^{\tilde{v}}(y,t)dy.\] Adding and subtracting the term \(o_{2}e^{\tilde{v}}(x,t)\) and incorporating \(e_{x}^{\tilde{v}}(1,t)=0\), \[\left(\mbox{R.H.S}\right)_{2}=e_{xx}^{\tilde{v}}(x,t)-(o_{2}+ \gamma)e^{\tilde{v}}(x,t)+\beta e^{\tilde{v}}(x,t)\] \[+\int_{x}^{1}[-k_{xx}(x,y)+k_{yy}(x,y)-o_{2}k(x,y)]e^{\tilde{v}}(y,t)dy\] \[+(2\frac{d}{dx}k^{b}(x,x)+o_{2})e^{\tilde{v}}(x,t)-(k_{y}^{b}(x,1 )+\eta_{2}(x))\] \[\times e^{\tilde{v}}(1,t). 
\tag{62}\] Since \(k^{b}(x,y)\) is given by (43) and \(\eta_{2}(x)=-k_{y}^{b}(x,1)\), then referring to (46b) and (60) \[\left(\mbox{L.H.S}\right)_{2}=0=\left(\mbox{R.H.S}\right)_{2}.\] Thus the state transformation (47) transforms the elliptic equation (46b) into (45b). We apply the transformation to the boundary conditions (46d), \[e_{x}^{v}(0,t)= e_{x}^{\tilde{v}}(0,t)-\int_{0}^{1}k_{x}^{b}(0,y)e^{\tilde{v}}(y,t)dy\] \[+k^{b}(0,0)e^{\tilde{v}}(0,t)=0,\] by means of using (43b) and that \(e_{x}^{\tilde{v}}(0,t)=0\). We obtain the boundary condition at \(x=0\) in (45d). Similarly, \[e_{x}^{v}(1,t)= e_{x}^{\tilde{v}}(1,t)-\int_{1}^{1}k_{x}^{b}(1,y)e^{\tilde{v}}(y,t)dy+k^{b}(1,1)\] \[e^{\tilde{v}}(1,t)\] \[= k^{b}(1,1)e^{\tilde{v}}(1,t)=k^{b}(1,1)e^{v}(1,t),\] where the previous step was obtained by noting that \(e^{\tilde{v}}(0,t)=e^{v}(0,t)\). If \(\eta_{4}=-k^{b}(1,1)\) then we obtain the second boundary condition in (45d) at \(x=1\). The conclusion of the theorem follows. \(\Box\) The next theorem follows from Theorem 12. **Theorem 13**: _Let \(k(x,y)\) be the solution of system (43). The error dynamics (45) with output injections \(\eta_{j},\ j=1,\ldots,4\) defined as given in (48)-(49) are exponentially stable if and only if the parameter \(o_{2}\) satisfies_ \[(o_{2}+\rho)(o_{2}+\gamma)>\alpha\beta, \tag{63}\] _and \(o_{2}+\gamma\neq-(n\pi)^{2}\)._ **Proof.** If \(o_{2}\) is given by (63) such that \(o_{2}+\gamma\neq-(n\pi)^{2}\), the target system (46) has a unique solution and is exponentially stable due to the criteria for stability of parabolic-elliptic systems established previously in Corollary 2 (of a previous draft). Finally, the exponential stability of the error dynamics (45) follows by referring to Theorem 12 and using the invertiblity of transformation (47). This concludes the proof. \(\Box\) ### Only \(w(1,t)\) is available The objective of this subsection is to design an exponentially convergent observer for (1)-(4) given only a single measurement \(w(1,t)\). We propose the following observer \[\hat{w}_{t}(x,t)= \hat{w}_{xx}(x,t)-\rho\hat{w}(x,t)+\alpha\hat{v}(x,t)\] \[+\eta_{1}(x)[w(1,t)-\hat{w}(1,t)], \tag{64a}\] \[0= \hat{v}_{xx}(x,t)-\gamma\hat{v}(x,t)+\beta\hat{w}(x,t),\] (64b) \[\hat{w}_{x}(0,t)= 0,\quad\hat{w}_{x}(1,t)=u(t)+\eta_{2}[w(1,t)-\hat{w}(1,t)],\] (64c) \[\hat{v}_{x}(0,t)= 0,\quad\hat{v}_{x}(1,t)=0, \tag{64d}\] where \(\eta_{1}(x)\) and \(\eta_{2}\) are output injections to be designed. Defining the states of the error dynamics as in (44), the system describing the observation error satisfies \[e_{t}^{w}(x,t)= e_{xx}^{w}(x,t)-\rho e^{w}(x,t)+\alpha e^{v}(x,t)\] \[-\eta_{1}(x)e^{w}(1,t), \tag{65a}\] \[0= e_{xx}^{v}(x,t)-\gamma e^{v}(x,t)+\beta e^{w}(x,t),\] (65b) \[e_{x}^{w}(0,t)= 0,\quad e_{x}^{w}(1,t)=-\eta_{2}e^{w}(1,t),\] (65c) \[e_{x}^{v}(0,t)= 0,\quad e_{x}^{v}(1,t)=0. \tag{65d}\] Both of \(\eta_{1}(x)\) and \(\eta_{2}\) have to be chosen so that exponential stability of error dynamics is achieved. Following a backstepping approach, we define the transformation \[e^{\tilde{v}}(x,t)=e^{w}(x,t)-\int_{0}^{x}k^{a}(x,y)e^{w}(y,t)dy, \tag{66}\] where \(k^{a}(x,y)\) is given by (16) with \(c_{2}\) replaced by \(o_{2}\). The inverse transformation [16, Chap. 4, section 5] is \[e^{w}(x,t)=e^{\tilde{w}}(x,t)+\int_{0}^{x}\ell^{a}(x,y)e^{\tilde{w}}(y,t)dy, \tag{67}\] where \(\ell^{a}\) satisfies system (19). 
**Theorem 14**: _If the output injections are_ \[\eta_{1}(x)=0, \tag{68}\] \[\eta_{2}=-k^{a}(1,1), \tag{69}\] _where \(k^{a}(x,y)\) is given in (16) with \(c_{2}\) being replaced by \(o_{2}\) then transformation (71) converts the error dynamics (65) into the target system_ \[e_{t}^{\tilde{w}}(x,t)=e_{xx}^{\tilde{w}}(x,t)-(o_{2}+\rho)e^{ \tilde{w}}(x,t)+\alpha e^{v}(x,t)\] \[-\alpha\int_{0}^{x}k^{a}(x,y)e^{v}(y,t)dy, \tag{70a}\] \[0=e_{xx}^{v}(x,t)-\gamma e^{v}(x,t)+\beta e^{\tilde{w}}(x,t)\] \[+\beta\int_{0}^{x}\ell^{a}(x,y)e^{\tilde{w}}(y,t)dy,\] (70b) \[e_{x}^{\tilde{w}}(1,t)=-\int_{0}^{1}k_{x}^{a}(1,y)e^{\tilde{w}}( y,t)dy\] \[-\int_{0}^{1}k_{x}^{a}(1,y)\int_{0}^{y}\ell^{a}(y,z)e^{\tilde{w}} (z,t)dzdy,\] (70c) \[e_{x}^{\tilde{w}}(0,t)=0,\quad e_{x}^{v}(0,t)=0,\quad e_{x}^{v}( 1,t)=0. \tag{70d}\] **Proof.** It will be useful to rewrite (66) as \[e^{w}(x,t)=e^{\tilde{w}}(x,t)+\int_{0}^{x}k^{a}(x,y)e^{w}(y,t)dy. \tag{71}\] We take the spatial and the time derivatives of (71) we have \[e_{xx}^{w}(x,t)=e_{xx}^{\tilde{w}}(x,t)+\int_{0}^{x}k_{xx}^{a}(x, y)e^{w}(y,t)dy\] \[+k_{x}^{a}(x,x)e^{w}(x,t)+\frac{d}{dx}k^{a}(x,x)e^{w}(x,t)\] \[+k^{a}(x,x)e_{x}^{w}(x,t), \tag{72}\] \[e_{t}^{w}(x,t)=e_{t}^{\tilde{w}}(x,t)+\int_{0}^{x}k^{a}(x,y)e_{t} ^{w}(y,t)dy\] \[=e_{t}^{\tilde{w}}(x,t)-\rho\int_{0}^{x}k^{a}(x,y)e^{w}(y,t)dy+ \alpha\int_{0}^{x}k^{a}(x,y)\] \[\times e^{v}(y,t)dy+k^{a}(x,x)e_{x}^{w}(x,t)-k^{a}(x,0)e_{x}^{w}( 0,t)\] \[-k_{y}^{a}(x,x)e^{w}(x,t)+k_{y}^{a}(x,0)e^{w}(0,t)+\int_{0}^{x}k_{ yy}^{a}(x,y)\] \[\times e^{w}(y,t)dy-e^{w}(1,t)\int_{0}^{x}k^{a}(x,y)\eta_{1}(y)dy. \tag{73}\] Substituting (72) and (73) in the parabolic equation (65a), and using \(e_{x}^{w}(0,t)=0\) and \(k_{y}^{a}(x,0)=0\), \[e_{t}^{w}(x,t)=e_{xx}^{w}(x,t)+\alpha e^{v}(x,t)+\left(k_{y}^{a} (x,x)+k_{x}^{a}(x,x)\right)\] \[\times e^{w}(x,t)+\frac{d}{dx}k^{a}(x,x)e^{w}(x,t)-\rho e^{w}(x, t)+k^{a}(x,x)\] \[\times e_{x}^{w}(x,t)-k^{a}(x,x)e_{x}^{w}(x,t)+\int_{0}^{x}[k_{ xx}^{a}(x,y)\] \[-k_{yy}^{a}(x,y)+\rho k^{a}(x,y)]e^{w}(y,t)dy-\alpha\int_{0}^{x}k^ {a}(x,y)\] \[\times e^{v}(y,t)dy+e^{w}(1,t)\int_{0}^{x}k^{a}(x,y)\eta_{1}(y)dy\] \[-\eta_{1}(x)e^{w}(1,t). \tag{74}\] Adding and subtracting the term \((o_{2}+\rho)e^{w}(x,t)\) to the right-hand-side of equation (6.2), and noting that \(k^{a}(x,y)\) is given by (15), \[e_{t}^{w}(x,t)=e_{xx}^{w}(x,t)-\rho e^{w}(x,t)+\alpha e^{v}(x,t)\] \[-\alpha\int_{0}^{x}k^{a}(x,y)e^{v}(y,t)dy-e^{w}(1,t)\int_{0}^{x} k^{a}(x,y)\] \[\times\eta_{1}(y)dy-\eta_{1}(x)e^{w}(1,t).\] If \(\eta_{1}(x)=0\), we obtain the parabolic equation (70a). We now apply transformation (71) on the boundary conditions (65c), using lemma 4 \[e_{x}^{\tilde{w}}(0,t) =e_{x}^{w}(0,t)-\int_{0}^{0}k_{x}^{a}(1,y)w(y)dy\] \[-k^{a}(0,0)w(1,t)=0,\] by virtue of referring to the boundary conditions of system (15). \[e_{x}^{\tilde{w}}(1,t)=e_{x}^{w}(1,t)-\int_{0}^{1}k_{x}^{a}(1,y )e^{w}(y,t)dy\] \[-k^{a}(1,1)e^{w}(1,t)\] \[=-\left(\eta_{2}+k^{a}(1,1)\right)e^{w}(1,t)-\int_{0}^{1}k_{x}^{a} (1,y)e^{w}(y,t)dy\] \[=-\int_{0}^{1}k_{x}^{a}(1,y)e^{w}(y,t)dy.\] \[=-\int_{0}^{1}k_{x}^{a}(1,y)e^{\tilde{w}}(y,t)dy-\int_{0}^{1}k_{x} ^{a}(1,y)\int_{0}^{y}\ell^{a}(y,z)\] \[\quad\times e^{\tilde{w}}(z,t)dzdy.\] The previous equation holds true via using (69) and using the inverse transformation (67). The elliptic equation (70b) can be obtained via using the inverse transformation (67). 
\(\Box\) **Theorem 15**: _The target system (70) is exponentially stable if \(o_{2}\) is chosen such that_ \[o_{2}+\rho >\frac{|\alpha||\beta|}{\gamma}(1+\|k^{a}\|)(1+\|\ell^{a}\|)\] \[+\frac{(1+\|\ell^{a}\|)^{2}\|k_{x}^{a}(1,y)\|^{2}+2}{2}. \tag{75}\] **Proof.** We define the Lyapunov function candidate, \[V(t)= \frac{1}{2}\int_{0}^{1}(e^{\tilde{w}}(x,t))^{2}dx=\frac{1}{2}\|e^{ \tilde{w}}(x,t)\|^{2}.\] Taking the time derivative of \(V(t)\), \[\dot{V}(t)=\int_{0}^{1}e^{\tilde{w}}(x,t)e_{t}^{\tilde{w}}(x,t)dx\] \[= \int_{0}^{1}e^{\tilde{w}}(x,t)e_{xx}^{\tilde{w}}(x,t)dx-(o_{2}+ \rho)\int_{0}^{1}(e^{\tilde{w}}(x,t))^{2}dx\] \[+\alpha\int_{0}^{1}e^{\tilde{w}}(x,t)e^{v}(x,t)dx-\alpha\int_{0}^ {1}e^{\tilde{w}}(x,t)\int_{0}^{x}k^{a}(x,y)\] \[\times e^{v}(y,t)dydx. \tag{76}\] Integrating the term \(\int_{0}^{1}e^{\tilde{w}}(x,t)e_{xx}^{\tilde{w}}(x,t)dx\) by parts, and using the boundary conditions (70c)-(70d) \[\int_{0}^{1}e^{\tilde{w}}(x,t)e_{xx}^{\tilde{w}}(x,t)dx=e^{ \tilde{w}}(1,t)e_{x}^{\tilde{w}}(1,t)\] \[-e^{\tilde{w}}(0,t)e_{x}^{\tilde{w}}(0,t)-\|e_{x}^{\tilde{w}}\|^ {2}\] \[=e^{\tilde{w}}(1,t)e_{x}^{\tilde{w}}(1,t)-\|e_{x}^{\tilde{w}}\|^ {2}\] \[=-\int_{0}^{1}k_{x}^{a}(1,y)e^{w}(y,t)dy\,e^{\tilde{w}}(1,t)-\|e_ {x}^{\tilde{w}}\|^{2}. \tag{77}\] To bound the term \(-\int_{0}^{1}k_{x}^{a}(1,y)e^{w}(y,t)dy\,e^{\tilde{w}}(1,t)\) in (77), we use Cauchy-Schwartz, \[-\int_{0}^{1}k_{x}^{a}(1,y)e^{w}(y,t)dye^{\tilde{w}}(1,t)\] \[\leq\|\int_{0}^{1}k_{x}^{a}(1,y)e^{w}(y,t)dy\|\max_{x\in[0,1]}e^ {\tilde{w}}(x,t)\] \[\leq\|k_{x}^{a}(1,y)\|\|e^{w}\|\|e^{\tilde{w}}\|_{\infty}.\] Invoking Agmon's inequality [29] on the right-hand-side of the previous inequality leads to \[-\int_{0}^{1}k_{x}^{a}(1,y)e^{w}(y,t)dye^{\tilde{w}}(1,t)\] \[\leq\|k_{x}^{a}(1,y)\|\|e^{w}\|\|e^{\tilde{w}}\|^{1/2}\|e^{\tilde{ w}}\|_{H^{1}}^{1/2}.\] Using Young's inequality on the right-hand-side of the previous inequality, \[-\int_{0}^{1}k_{x}^{a}(1,y)e^{w}(y,t)dye^{\tilde{w}}(1,t)\] \[\leq\frac{\|k_{x}^{a}(1,y)\|^{2}}{2}\|e^{w}\|^{2}+\frac{1}{2}(\| e^{\tilde{w}}\|\|e^{\tilde{w}}\|_{H^{1}})\] \[\leq\frac{\|k_{x}^{a}(1,y)\|^{2}}{2}\|e^{w}\|^{2}+\frac{1}{2}\|e ^{\tilde{w}}\|^{2}+\frac{1}{2}\|e^{\tilde{w}}\|\|e_{x}^{\tilde{w}}\|.\] Estimating the term \(\frac{1}{2}\|e^{\tilde{w}}\|\|e_{x}^{\tilde{w}}\|\) on the right-hand-side of the previous inequality using Young's inequality, then \[-\int_{0}^{1}k_{x}^{a}(1,y)e^{w}(y,t)dye^{\tilde{w}}(1,t)\leq \frac{\|k_{x}^{a}(1,y)\|^{2}}{2}\|e^{w}\|^{2}\] \[+\|e^{\tilde{w}}\|^{2}+\frac{1}{4}\|e_{x}^{\tilde{w}}\|^{2}. \tag{78}\] Referring to the inverse transformation (67), \[-\int_{0}^{1}k_{x}^{a}(1,y)e^{w}(y,t)dye^{\tilde{w}}(1,t)\] \[\leq\frac{(1+\|\ell^{a}\|)^{2}\|k_{x}^{a}(1,y)\|^{2}+2}{2}\|e^{ \tilde{w}}\|^{2}+\frac{1}{4}\|e_{x}^{\tilde{w}}\|^{2}. \tag{79}\] Combining (79) and (77), we get \[\int_{0}^{1}e^{\tilde{w}}(x,t)e_{xx}^{\tilde{w}}(x,t)dx\] \[\leq\frac{(1+\|\ell^{a}\|)^{2}\|k_{x}^{a}(1,y)\|^{2}+2}{2}\|e^{ \tilde{w}}\|^{2}-(1-\frac{1}{2})\|e_{x}^{\tilde{w}}\|^{2}\] \[\leq\frac{(1+\|\ell^{a}\|)^{2}\|k_{x}^{a}(1,y)\|^{2}+2}{2}\|e^{ \tilde{w}}\|^{2}. \tag{80}\] Bounding the terms on the right-hand side of (76) using (80), Cauchy-Schwartz inequality and lemma 8, we arrive to \[\dot{V}(t)\leq -(o_{2}+\rho-\frac{|\alpha||\beta|}{\gamma}(1+\|k^{a}\|)(1+\|\ell^ {a}\|)\] \[-\frac{(1+\|\ell^{a}\|)^{2}\|k_{x}^{a}(1,y)\|^{2}+2}{2})\|e^{ \tilde{w}}\|^{2}. 
\tag{81}\] Setting \[o_{3}= o_{2}+\rho-\frac{|\alpha||\beta|}{\gamma}(1+\|k^{a}\|)(1+\|\ell^{a}\|)\] \[-\frac{(1+\|\ell^{a}\|)^{2}\|k_{x}^{a}(1,y)\|^{2}+2}{2},\] then inequality (81) implies that \(V(t)\leq e^{-2o_{3}t}V(0)\). If the parameter \(o_{2}\) is chosen such that (75) is satisfied, then \(V(t)\) decays exponentially as \(t\to\infty\). Thus, \(\|e^{\tilde{w}}(x,t)\|\) decays exponentially. Referring to the elliptic equation of system (65) and recalling lemma 8, the state \(e^{v}(x,t)\) is asymptotically stable. Exponential stability of system (70) follows by noting that \(\partial_{xx}-\gamma I\) is boundedly invertible, and using a similar argument as the one given in the last part in the proof of Theorem 9. \(\Box\) The following lemma, which establishes a bound on \(\|k_{x}^{a}(1,y)\|\), will be needed. **Lemma 16**: _Consider system (15) with \(c_{2}\) being replaced by \(o_{2}\). The \(L^{2}\)-norm of \(k_{x}^{a}(1,y)\) is bounded by_ \[\|k_{x}^{a}(1,y)\|\leq\frac{o_{2}}{2}(1+\frac{o_{2}}{2})e^{\frac{ \alpha_{2}}{2}}\left(\sqrt{\frac{\pi}{2o_{2}}}erf(\sqrt{\frac{o_{2}}{2}}) \right)^{\frac{1}{2}}. \tag{82}\] **Proof.** The relation (82) can be shown by noting that the solution of system (15) is \[k^{a}(x,y)=-o_{2}x\frac{I_{1}(\sqrt{o_{2}(x^{2}-y^{2})})}{\sqrt{o_{2}(x^{2}-y^ {2})}}.\] After straightforward mathematical steps, we arrive to \[k_{x}^{a}(x,y)= -o_{2}\frac{I_{1}(\sqrt{o_{2}(x^{2}-y^{2})})}{\sqrt{o_{2}(x^{2}- y^{2})}}\] \[-o_{2}x\frac{I_{2}(\sqrt{o_{2}(x^{2}-y^{2})}}{(x^{2}-y^{2})}, \tag{83}\] where we have used that \(\frac{d}{dx}I_{1}(x)=\frac{I_{1}(x)}{x}+I_{2}(x)\). Setting \(z=\sqrt{o_{2}(x^{2}-y^{2})}\) and using the definition of Bessel function, (83) can be written as \[k_{x}^{a}(x,y)=-o_{2}\frac{I_{1}(z)}{z}-o_{2}^{2}x\frac{I_{2}(z) }{z^{2}}\] \[=-\frac{o_{2}}{z}\sum_{m=0}^{\infty}\left(\frac{z}{2}\right)^{2m +1}\frac{1}{m!m+1!}-\frac{o_{2}^{2}}{z^{2}}x\sum_{m=0}^{\infty}\left(\frac{z} {2}\right)^{2m+2}\] \[\times\frac{1}{m!m+2!}. \tag{84}\] To find a bound on the induced \(L_{2}\)- norm of \(k_{x}^{a}(1,y)\), with \(x=1\) the variable \(z\) becomes \(z=\sqrt{o_{2}(1-y^{2})}\) where \(0<y<1\). Equation (84) leads to \[\|k_{x}^{a}(1,y)\|\leq\frac{o_{2}}{2}\sum_{m=0}^{\infty}\frac{(z^ {2}/4)^{m}}{m!}+\frac{o_{2}^{2}}{4}x\sum_{m=0}^{\infty}\frac{(z^{2}/4)^{m}}{ m!}\] \[\leq\frac{o_{2}}{2}\|e^{\frac{x^{2}}{4}}\|+\frac{o_{2}^{2}}{4}\| e^{\frac{x^{2}}{4}}\|=\frac{o_{2}}{2}(1+\frac{o_{2}}{2})\|e^{\frac{x^{2}}{4}}\|\] \[\leq\frac{o_{2}}{2}(1+\frac{o_{2}}{2})e^{\frac{\alpha_{2}}{4}}\| e^{-\frac{\alpha_{2}\beta^{2}}{4}}\|. \tag{85}\] Since \(erf(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-\xi^{2}}d\xi\), inequality (85) leads to (82). \(\Box\) Using lemma 7 and lemma 16, the next corollary to Theorem 15 is now immediate. **Corollary 17**: _The observation error dynamics (70) is exponentially stable if_ \[o_{2}+\rho>\left(\frac{|\alpha||\beta|}{\gamma}+\frac{\|k_{x}^{ a}(1,y)\|^{2}}{2}\right)\] \[\times\left[1+\sqrt{\frac{o_{2}\pi}{2}}\ \left(erfi(\sqrt{\frac{o_{2}}{2}})\right)^{\frac{1}{2}}\left( erf(\sqrt{\frac{o_{2}}{2}})\right)^{\frac{1}{2}}\right]^{2}+1. \tag{86}\] The following result now follows from Theorem 14 and Theorem 15. **Theorem 18**: _Let \(k^{a}(x,y)\) be the solution of system (15). 
The error dynamics (65) with output injection \(\eta_{2}=\frac{1}{2}o_{2}\), defined as in (69), are exponentially stable if \(o_{2}\) satisfies (75)._ Condition (86) for stability of the observation error dynamics imposes restrictions on the permissible choices of system parameters. This is demonstrated in Figure 4, where we compare the right-hand-side of inequality (86), as a function of \(o_{2}\), against several straight lines \(o_{2}+\rho\) for different values of \(\rho\). The other parameters are fixed as \(\beta=\gamma=1,\ \alpha=0.5\). The dashed line in Figure 4 represents the right-hand-side of (86). The observation error dynamics (65) are exponentially stable for those values of \(o_{2}\) at which the dashed line in Figure 4 lies beneath the straight lines, for the different \(\rho\). **Illustration of the restriction (86) on \(o_{2}\)** Figure 4: A comparison between the right-hand-side of inequality (86) as a function of \(o_{2}\) against several straight lines \(o_{2}+\rho\) for different values of \(\rho\), while the other parameters are fixed as \(\beta=\gamma=1\), \(\alpha=0.5\). The right-hand-side of (86) is drawn as a dashed line (- - -). For any \(\rho\), the error dynamics (70) are exponentially stable if \(o_{2}\) is such that the dashed line lies beneath the straight line \(o_{2}+\rho\). This illustrates the constraints associated with condition (86). ### Numerical simulations We conducted numerical simulations of the dynamics of both the coupled system (1)-(4) and the state observer (42) in the situation where two measurements, \(w(1,t)\) and \(v(1,t)\), are available. The simulations were performed using the COMSOL Multiphysics software, using linear splines to approximate the coupled equations by a system of DAEs. The spatial interval was divided into 27 subintervals. Time was discretized by a time-stepping algorithm called generalized alpha, with a time step of \(0.1\). Observer designs were done for system (1)-(4) with \(u(t)\equiv 0\). The chosen parameters were \(\gamma=1\), \(\rho=0.5\), \(\alpha=1\) and \(\beta=1\). With these parameters, the system is unstable. With \(o_{2}=5\), the sufficient condition (63) for the error dynamics to be exponentially stable is satisfied. In Figure 5 the true and estimated states at \(x=0.56\) are shown. Figure 6 illustrates the \(L_{2}\)-norm of the error dynamics, which converges to zero as predicted by theory. **True and estimated states at \(x=0.56\) using observer (42)** **\(L_{2}\)-norm of the error dynamics (45) using two measurements \(w(1,t),\ v(1,t)\)** Numerical simulations were also conducted to study the observer (64) when only a single measurement \(w(1,t)\) is available. The simulations were carried out using the parameter values \(\gamma=1\), \(\rho=1\), \(\alpha=0.5\), \(\beta=0.5\). With these parameter values, the system is stable. Also, we set \(o_{2}=0.5\) so that the stability condition for the observation error dynamics (i.e. (86)) is satisfied. The control input \(u(t)\) was set as stated in (40), with control gain \(c_{2}=0.5\). The initial conditions were \(w_{0}=\sin(\pi x)\) and \(\hat{w}_{0}=\sin(2\pi x)\). The true and estimated states at \(x=0.56\) are given in Figure 7. The \(L_{2}\)-norms of the error dynamics are presented in Figure 8. **True and estimated states at \(x=0.56\) using observer (64)** **\(L_{2}\)-norm of the error dynamics (65), using one measurement \(w(1,t)\)** ## 5 Output feedback In general, the full state is not available for control.
Output feedback is based on using only the available measurements to stabilize the system. A common approach to output feedback is to combine a stabilizing state feedback with an observer. The estimated state from the observer is used to replace the state feedback \(Kz\) by \(K\hat{z}\), where \(z\) is the true state and \(\hat{z}\) the estimated state. Figure 5: A comparison between the states of the coupled system (1)-(4) versus the estimated states using observer (42) at \(x=0.56\). System parameters are \(\gamma=1\), \(\rho=0.5\), \(\alpha=1\), \(\beta=1\), \(o_{2}=5\) with initial conditions \(w_{0}=\sin(\pi x)\) and \(\hat{w}_{0}=\cos(\pi x)\). Figure 6: \(L_{2}\)-norms of the error dynamics of the estimates of the parabolic and elliptic states, using two measurements \(w(1,t),\ v(1,t)\). System parameters are \(\gamma=1\), \(\rho=0.5\), \(\alpha=1\), \(\beta=1\), \(o_{2}=5\) with initial conditions \(w_{0}=\sin(\pi x)\) and \(\hat{w}_{0}=\cos(\pi x)\). The estimation error tends to \(0\) as \(t\to\infty\). Figure 7: A comparison between the states of the coupled system (1)-(4) versus the estimated states using observer (64) at \(x=0.56\). Here \(\gamma=1\), \(\rho=1\), \(\alpha=0.5\), \(\beta=0.5\), \(o_{2}=c_{2}=0.5\) with \(w_{0}=\sin(\pi x)\) and \(\hat{w}_{0}=\sin(2\pi x)\). In the situation considered here, if there are two measurements, this leads to the output feedback controller consisting of the observer (42) combined with the state feedback \[u(t)=\int_{0}^{1}k_{x}^{a}(1,y)\hat{w}(y,t)dy+k^{a}(1,1)\hat{w}(1,t) \tag{87}\] where \(k^{a}(x,y)\) is the solution of system (15) with \(c_{2}\) satisfying the bound (41). Since the original system (1)-(4) is a well-posed control system (Theorem 1) and the observer combined with the state feedback is also a well-posed system, the following result follows immediately from the results in [20, section 3]. **Theorem 19**: _The closed-loop system consisting of (1)-(4), together with the observer dynamics (42) and control input (87), is well-posed and exponentially stable._ Note that when using output feedback, the parameters have to satisfy both the stability condition (41) associated with the control problem and the stability bound (63) for the observation error. ### Numerical simulations Numerical simulations, again using a finite-element approximation in the COMSOL Multiphysics software, were performed to study the solutions of system (1)-(4) with output feedback (87). The parameter values were \(\gamma=\frac{1}{4}\), \(\rho=\frac{1}{3}\), \(\alpha=\frac{1}{4}\), and \(\beta=\frac{1}{2}\). The system's initial condition was \(w_{0}=\sin(\pi x)\). With these parameters, the uncontrolled system is unstable. To achieve stability, the control gain was set to \(c_{2}=1.2-\rho\), ensuring that the stability condition (41) is satisfied. Additionally, to ensure exponential stability of the error dynamics, we set \(o_{2}=3\), which satisfies inequality (63). After applying the control, the state of the coupled system converges to the steady-state solution; see Figure 9. This convergence is clearly depicted in the comparison shown in Figure 10, where we compare the \(L_{2}\)-norms of the controlled and uncontrolled states. **Dynamics of closed-loop system with output feedback** **Comparison of open-loop and closed-loop \(L_{2}\)-norms with output feedback** ## 6 Conclusion Stabilization of systems composed of coupled parabolic and elliptic equations presents considerable challenges.
The first part of this paper considers the boundary stabilization of a linear coupled parabolic-elliptic system. Previous literature has shown that coupling between the equations can result in an unstable system. Stabilization via two boundary control inputs was considered in previous work [16]. In this paper we used a single control input to stabilize both equations. Deriving one control law that stabilizes the system posed several challenges. One approach is to rewrite the coupled system as a single equation in terms of the parabolic state, but the appearance of a Fredholm operator makes it difficult to establish a suitable kernel for the backstepping transformation. Using separate transformations for each of the parabolic and elliptic states would be quite complicated. In this paper we transformed only the parabolic part of the system, which simplified the calculations and enabled reuse of a previously calculated transformation. Figure 9: A 3D landscape of the dynamics of the controlled coupled parabolic-elliptic system (1)-(4) after applying the output feedback control input (87). The uncontrolled system is unstable, but the use of output feedback leads to an exponentially stable closed-loop system. Figure 10: A comparison between the \(L_{2}\)-norms of \(w(x,t)\) and \(v(x,t)\) for the uncontrolled system and the system controlled with output feedback. Without control, the norms of both states grow, due to instability. However, with output feedback control, both states decay to zero. The price, however, is that this transformation mapped the original coupled system into a complicated target system, which in turn complicated showing stability of the target system. Lyapunov theory was used to obtain a sufficient condition for stability of the target system. The second part of the paper focused on the observer design problem. Several observer syntheses are proposed, depending on the available measurements. Output injections were chosen so that exponential stability of the observation error dynamics is ensured. Again, instead of looking for a new state transformation that maps the original error dynamics into an exponentially stable target system, well-known transformations from the literature are employed. Then, the exponential stability of the original error dynamics is shown by establishing suitable sufficient conditions for stability. The key to obtaining a stability condition was again the use of Lyapunov theory. As for controller design, the technical conditions for observer design depend on the number of available measurements. When measurements of both states were provided, two transformations were applied, to the parabolic and elliptic states of the error dynamics respectively. A total of four filters, two throughout the domain and two at the boundary, were needed. On the other hand, when a single measurement of the parabolic state is given, one boundary filter was designed for the parabolic equation. However, in the latter case, a more restrictive condition for stability of the error dynamics was obtained. Observer design with a single sensor parallels, to a great extent, that of stabilization via one control signal. Showing exponential stability of the target error dynamics when only one measurement is provided results in a very restrictive constraint on the parameters of the coupled system. In a final section, control and observation were combined to obtain an output feedback controller. The results were again illustrated with simulations.
In this paper, the control and observer gains were designed at \(x=1\). The situation is similar when they are placed at \(x=0\), although different backstepping transformations are needed; this is covered in detail in [4]. Open questions include finding weaker sufficient conditions for stability of both the controller and the observer. Future work is aimed at improving the conditions on the control parameter \(c_{2}\) for the boundary stabilization problem and, similarly, at relaxing the stability condition on the parameters for the observer design problem with a single measurement. In many problems the equations are nonlinear, and the design of boundary controls for nonlinear coupled parabolic-elliptic equations will also be studied.
2309.09909
Radio multifrequency observations of Abell~781 with the WSRT
The `Main' galaxy cluster in the Abell 781 system is undergoing a significant merger and accretion process with peripheral emission to the north and southeastern flanks of the merging structure. Here we present a full polarimetric study of this field, using radio interferometric data taken at 21 and 92 cm with the Westerbork Synthesis Radio Telescope, to a sensitivity better than any 21 cm (L-band) observation to date. We detect evidence of extended low-level emission of 1.9 mJy associated with the Main cluster at 21 cm, although this detection necessitates further follow-up by modern instruments due to the limited resolution of the Westerbork Synthesis Radio Telescope. Our polarimetric study indicates that, most likely, the peripheral emission associated with this cluster is not a radio relic.
B. V. Hugo, G. Bernardi, O. M. Smirnov, D. Dallacasa, T. Venturi, M. Murgia, R. F. Pizzo
2023-09-18T16:18:34Z
http://arxiv.org/abs/2309.09909v1
# Radio multifrequency observations of Abell 781 with the WSRT ###### Abstract The 'Main' galaxy cluster in the Abell 781 system is undergoing a significant merger and accretion process with peripheral emission to the north and southeastern flanks of the merging structure. Here we present a full polarimetric study of this field, using radio interferometric data taken at 21 and 92 cm with the Westerbork Synthesis Radio Telescope, to a sensitivity better than any 21 cm (\(L\)-band) observation to date. We detect evidence of extended low-level emission of 1.9 mJy associated with the Main cluster at 21 cm, although this detection necessitates further follow-up by modern instruments due to the limited resolution of the Westerbork Synthesis Radio Telescope. Our polarimetric study indicates that, most likely, the peripheral emission associated with this cluster is not a radio relic. keywords: galaxies: clusters - galaxies: haloes - galaxies: evolution - radio continuum: galaxies - techniques: polarimetric ## 1 Introduction Galaxy clusters are some of the largest-scale structures in the Universe, typically spanning a few Mpc. They have masses ranging from \(\sim 10^{14}\) up to \(\sim 10^{15}\) M\({}_{\sun}\)(Van Weeren et al., 2019, and references therein), of which only \(\sim 3-5\%\) can be associated with luminous matter in constituent galaxies, while \(\sim 15-17\%\) in the form of hot ionized gas is detectable through thermal Bremsstrahlung emission in the X-ray regime. The majority (\(\sim 80\%\)) takes the form of dark matter (Feretti et al., 2012, and references therein). The shape and curvature in the jets and lobes of Active Galactic Nuclei (AGN) found among galaxy members can be used to infer the motion of constituent galaxies within the cluster, while the study of the Intra-Cluster Medium (ICM) provides insight into the large-scale magnetic fields and physical forces at play during mergers. It is typical to find constituent AGN displaying head-tail, wide- and narrow-angle tail jet morphology. Enabled by multi-wavelength observations, our understanding of cluster evolution has increased dramatically in the past few decades. For instance, radio observations have shown that often there is a significant non-thermal diffuse emission component in merging cluster systems that are sufficiently heated, (therefore detectable in X-rays with integrated energy releases of \(10^{63}\) to \(10^{64}\)erg, Venturi et al., 2011). Such emission has a very low surface brightness between \(\sim 1\) to 0.1 \(\mu\)Jy arcsec\({}^{-2}\) at 1.4 GHz (Feretti et al., 2012), and takes the form of broad diffuse emission on scales spanning 100s kpc up to \(\sim 1-2\) Mpc (Feretti et al., 2012, and references therein). It also has a strong morphological correspondence to the emission detectable in X-rays: round in shape and roughly centred at the peak of X-ray luminosity. These are referred to as _radio haloes_. Such radio haloes are detected in roughly 30% of clusters with integrated X-ray luminosity of \(L_{x}>5\times 10^{44}\) erg s\({}^{-1}\)(Feretti et al., 2012) and are mostly associated to clusters with merger activity. Smaller _mini haloes_ can also be found in the less energetic environments of cool-core clusters, closely related to the core region of such clusters and typically have sizes less than 0.5 Mpc (Van Weeren et al., 2019, and references therein). The existence of radio emission on such scales is puzzling. 
The integrated spectra1 of radio haloes are in the range \(\alpha=-1.2\) to \(-1.7\) in the 0.3 to 1.4 GHz range (Feretti et al., 2012)2. Current estimates on the radiative lifetime of relativistic electrons due to synchrotron and Inverse Compton (IC) energy losses are on the order of \(10^{8}\) years at most (Sarazin, 1999). This is roughly up to 2 orders of magnitude lower than the expected electron diffusion time, assuming an electron diffusion velocity of \(\approx 100\) km s\({}^{-1}\)(Feretti et al., 2012). The prevailing theory to their origin suggests the presence of local re-acceleration mechanisms within the ICM through both first and second-order Fermi processes. First-order processes refer to shock acceleration created in disturbed cluster environments, driving diffuse particle scatter from heterogeneous magnetic fields in both the shock upstream and downstream regions. In contrast, second-order processes refer to energy gains from turbulence in the ICM. The physical extent over which the diffuse emission is located also precludes that their origin is based in individual galaxy processes. Detailed gamma-ray studies of the Coma cluster suggest that hadronic interactions with Cosmic Ray (CR) protons in the ICM are not the main origin of such diffuse emission -- at least not in the case of the giant haloes seen in strong merging clusters. In general, the radio emission from radio haloes does not show significant polarization. Footnote 1: Throughout this paper it is assumed that flux density follows a power law of the form \(S(\nu)\propto\nu^{\alpha}\) Footnote 2: We note the spectral index convention we follow is negated with respect to Feretti et al. (2012) Radio haloes are not the only large-scale diffuse emission that can be associated with merging clusters. _Radio relics_ are diffuse sources often seen on the outskirts of clusters (again in a merging state), and they can be found more than \(\sim 1\) Mpc away from cluster centres (Feretti et al., 2012). In general, radio relics have elongated morphologies, and they can themselves be 100s kpc to \(>1\)Mpc in size (Van Weeren et al., 2019). These sources do not have any direct optical or emitting X-ray counterparts but are often discovered in discontinuities in the X-ray brightness (Feretti et al., 2012). They provide perhaps the best evidence for the presence of relativistic particles and strong magnetic fields in very low-density ICM environments, where X-ray sensitivity often precludes a detailed direct study of thermal gas dynamics. Relics provide evidence of radiative ageing; their radio spectra show clear steepening in the direction of the cluster centre. An excellent example is the 2 Mpc elongated relic on the outskirts of CIZA J2242.8+5301, ranging from \(\alpha\approx-0.6\) to \(-2.0\). There is also a high degree of polarization across the relic (\(50-60\%\)) with magnetic field vectors aligned with the relic edge, which is evidence of a well-ordered magnetic field (Van Weeren et al., 2010). This suggests that relics may be driven by shocks and turbulence from merger events, where the shock front compresses the ICM, ordering/amplifying the magnetic field and accelerating relativistic particles. These elongated relics typically have integrated spectra in the range \(\alpha=-1.0\) to \(-1.6\) due to low Mach numbers; in agreement with the relic shock model (Feretti et al., 2012). The reader is referred to Van Weeren et al. (2019) and Feretti et al. (2012) for reviews on the topic. 
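The lifetime-versus-diffusion argument for haloes sketched above can be made explicit with a back-of-the-envelope check (illustrative only, using rounded constants and the assumed diffusion speed):

```python
MPC_KM = 3.086e19      # kilometres per megaparsec
YEAR_S = 3.156e7       # seconds per year

v_diff_km_s = 100.0    # assumed electron diffusion velocity
halo_size_mpc = 1.0    # typical giant-halo extent

t_diff_yr = halo_size_mpc * MPC_KM / v_diff_km_s / YEAR_S
print(f"diffusion time ~ {t_diff_yr:.1e} yr")  # ~1e10 yr, i.e. ~100x the ~1e8 yr radiative lifetime
```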
## 2 The curious case of a781 Abell 781 consists of 4 clusters visible in the X-rays, at least one of them shows clear merger activity. The 'Middle' and 'Main' clusters, as indicated in Fig. 1, represent an interacting pair, the former showing signs of interaction with smaller structures. Conversely, the 'East' and 'West' clusters are at different redshifts and might themselves be a pair of objects in a possible long-range interaction. In this work we are focusing on the merging 'Main' cluster system (see Fig. 2). The mass of the Main cluster is \(M_{500}=(6.1\pm 0.5)\times 10^{14}M_{\odot}\)(Ade et al., 2016). Various previous works targeted this Main cluster in Abell 781 in the radio and X-ray. The compact cluster AGN are well studied. Arcesecond scale images at 1.4 GHz of the brightest radio galaxies are presented in Govoni et al. (2011). There is a diffuse source in the southeastern part of the Main cluster, as well as two more diffuse sources whose origin is not yet well established. The former has two possible interpretations, namely being either a relic source (e.g., Venturi et al., 2011) or a head-tail radio source (Botteon et al., 2019). The presence of a giant radio halo has been reported by Govoni et al. (2011) in their analysis of low-resolution JVLA data at 1.4 GHz. However, deep LOw Frequency AR-ray (LOFAR) and (upgraded) Giant Metrewave Radio Telescope (uGMRT) studies at lower frequencies presented by Botteon et al. (2019) and Venturi et al. (2011) point to Figure 1: _XMM-Newton_ MOS1+MOS2 X-ray image. Observation id 0401170101. Here the same labels for the 4 clusters and the merging clusters are used as in previous literature (e.g., Govoni et al., 2011). The image was convolved with a normalized circular Gaussian of \(\sigma=8\) arcsec (2 Skypixels). The contours starts from 7 cts Skypixel\({}^{-1}\) in steps of a factor of \(\sqrt{2}\). Scale bar drawn the cluster redshift of \(z=0.3004\) of the ‘Main’ cluster — the cluster of interest in this work. the contrary. Botteon et al. (2019) place a 50 mJy upper bound on the halo flux density at 143 MHz, while Venturi et al. (2011) place an upper bound of \(S_{\rm 325~{}MHz}<40\)mJy. These bounds indicate that Abell 781 is an example of one of the high-mass disturbed cluster environments that lack extended radio halo emission -- at least when compared to typical haloes discussed in the literature. The X-ray luminosity of the cluster is \(L_{\rm 0.1-2.4keV}=1.722\times 10^{45}\) erg s\({}^{-1}\)(Ebeling et al., 1998). A detailed study of the X-ray emission and its discontinuities is presented by Botteon et al. (2019). The study suggests that the Main A781 cluster is undergoing a merger between three smaller clumps; two in the north-south axis and one responsible for the western bulge in the hot X-ray emission. Their analysis also shows strong evidence of cold fronts at both the south and north edges of the hot X-ray emission. The presence of shock-driven re-acceleration of electrons is still up for debate: the previous analysis only found evidence for a weak shock with a Mach number of \(\mathcal{M}<1.4\)(Botteon et al., 2019). In this paper, we present sensitive observations carried out with the Westerbork Synthesis Radio Telescope (WSRT) at 21 and 92 cm targeting Abell 781, aiming at characterizing the radio emission of the cluster and its members. Throughout this paper, we will assume \(\Lambda\)CDM cosmology with \(H_{0}=69.6\)km s\({}^{-1}\)Mpc\({}^{-1}\), \(\Omega_{m}=0.286\), \(\Omega_{\Lambda}=0.714\). 
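For reference, the angular scale and luminosity distance used throughout follow from this cosmology via the standard flat-\(\Lambda\)CDM distance integrals; a short, self-contained numerical sketch (not part of the data reduction) reproduces the values quoted just below:

```python
import numpy as np
from scipy.integrate import quad

# Flat LambdaCDM distances for the adopted cosmology; a standalone numerical sketch.
C_KM_S, H0, OM, OL = 299792.458, 69.6, 0.286, 0.714
z = 0.3004

E = lambda zp: np.sqrt(OM * (1.0 + zp)**3 + OL)
d_c = C_KM_S / H0 * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]  # comoving distance [Mpc]
d_l = (1.0 + z) * d_c                                        # luminosity distance [Mpc]
d_a = d_c / (1.0 + z)                                        # angular-diameter distance [Mpc]
kpc_per_arcsec = d_a * 1e3 * np.pi / (180.0 * 3600.0)

print(f"D_L ~ {d_l:.0f} Mpc, scale ~ {kpc_per_arcsec:.2f} kpc/arcsec")  # ~1570 Mpc, ~4.5 kpc/arcsec
```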
At the cluster redshift of \(z=0.3004\), 1 arcsec corresponds to 4.5 kpc. At this redshift, the radio luminosity distance, \(D_{\rm L}\), is \(\sim 1570\) Mpc. ## 3 Observation and Data Reduction ### WSRT 21 and 92 cm data The Westerbork Synthesis Radio Telescope is an east-west interferometer consisting of 10 fixed-position antennas and 4 re-configurable antennas, each 25 m in diameter, prime focus and equatorial-mounted. The 10 fixed-position antennas are regularly separated by 144m, with a minimum distance of 36m between the last fixed antenna and the first re-configurable antenna. The maximum spacing is 2.7 km. The field was observed prior to wide-field phased-array receiver upgrades (see e.g. Verheijen et al., 2008) and used the old 21 cm (1321-1460 MHz) and 92 cm (320-381 MHz) receivers. The 21 and 92 cm correlators have frequency resolutions of 312.50 and 78.125 kHz respectively. Both 21 cm and 92 cm receivers employ a linearly-polarized feed system. Observation details are summarized in Table 1. The Westerbork Synthesis Radio Telescope uses a programmable temperature-stabilized noise diode to correct for the time-variable electronic gains. See for instance Casse & Muller (1974) and Bos et al. (1981) for a brief description. The frequency-dependent response of the system is calibrated with a strong celestial source. Delays and phases on crossbands are corrected with a strongly polarized celestial source before leakages are corrected using an unpolarized source. The first-order on-axis linear feed calibration strategy we followed is discussed in more detail in Hales (2017). Parallactic angle corrections are not required, because the equatorial mounts of the WSRT imply that the sky does not rotate with respect to the receiver as a function of the hour angle. This further implies that we have to rely on polarized sources with known polarization angles to calibrate cross-band phases and correct for the system-induced ellipticity and its interplay with the linear polarization angle. The 21 cm band data are calibrated for the complex bandpass response of the system using 3C48, which has limited linear polarization of 0.5 Jy, assuming the following model (Perley & Butler, 2013): \[\log S=1.3324-0.7690\log\nu_{\rm G}-0.1950\log^{2}\nu_{\rm G}+0.059\log^{3} \nu_{\rm G}.\] Here \(\nu_{\rm G}\) is given in GHz and the flux density, \(S\), in Jy. The system ellipticity (crosshand phase) calibration is performed using the strongly linearly polarized source 3C286. The source is assumed to have a constant polarization angle of \(\approx 33^{\circ}\) across the passband. First-order leakages are corrected using 3C48. We estimate that after correction the total quadrature sum of Stokes Q, U and V to I of this marginally polarized source ranges between 0.025 % and 0.006 % across the passband. The 92 cm data are calibrated for the complex bandpass using the unpolarized source 3C147, assuming the frequency response (Perley & Butler, 2013): \[\log S=1.4616-0.7187\log\nu_{\rm G}-0.2424\log^{2}\nu_{\rm G}+0.079\log^{3} \nu_{\rm G}.\] DA240 is known to be highly polarized -- parts of its western lobe are above 60% polarized, see Tsien (1982). However, the source can be resolved by the WSRT at 92 cm. Subsequently, only leakages (off-diagonal terms) can be corrected, for which the unpolarized (at 92 cm) 3C138 is used. After correction, we estimate the quadrature leakages to vary between 0.19 % and -0.05 % across the passband. 
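The polynomial flux-density models above are straightforward to evaluate; as a sanity check on the adopted scale (a sketch, not the calibration pipeline itself), the 3C48 and 3C147 expressions can be computed near the respective band centres:

```python
import numpy as np

def poly_flux(nu_ghz, coeffs):
    """Flux density in Jy from log10 S = a0 + a1*log10(nu) + a2*log10(nu)^2 + a3*log10(nu)^3 (nu in GHz)."""
    return 10.0 ** np.polyval(coeffs[::-1], np.log10(nu_ghz))

c_3c48 = [1.3324, -0.7690, -0.1950, 0.059]   # 21 cm bandpass calibrator (Perley & Butler 2013)
c_3c147 = [1.4616, -0.7187, -0.2424, 0.079]  # 92 cm bandpass calibrator (Perley & Butler 2013)

print(poly_flux(1.40, c_3c48))    # ~16 Jy for 3C48 near the centre of the 21 cm band
print(poly_flux(0.35, c_3c147))   # ~54 Jy for 3C147 near the centre of the 92 cm band
```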
Both the 92 cm and 21 cm data reductions were performed using the containerised astronomy workflow management framework Stimela v0.3.13(Makhatniti, 2018). Calibration was performed using the Common Astronomy Software Applications (CASA) v4.7(McMullin et al., 2007). We first identified and flagged Radio Frequency Interference using the AOFlagger package (Offringa, 2010). Footnote 1: Available from [https://github.com/SpheMakh/stimela.Stíncia](https://github.com/SpheMakh/stimela.Stíncia) is a pipelining framework for radio astronomy which wraps tasks from a wide variety of heterogeneous packages into a common Python(Van Rossum & Drake, 2009) interface. The often-complicated compilation and software dependencies of these packages are isolated through containerisation platforms, such as Docker(Merkel, 2014) and Singularity(Kurtzer et al., 2017), allowing a heterogenous set of often-conflicting packages to be accessible through a single workflow interface. After applying the complex bandpass, \(5^{\circ}12^{\prime}\) wide images were generated using the WSClean package (Offringa et al., 2014) with uniform weights. Both the 92 and 21 cm data were imaged at the same resolution and size to simplify the derivation of spectral index maps. All the 144 m redundant spacings were included in the synthesis in order to improve sensitivity. To account for the apparent spectral variation over the observation bandwidth due to the antenna primary beam we enabled the multi-frequency deconvolution algorithm (Offringa et al., 2014). We used the imager's auto-thresholding deconvolution (set to \(1\sigma\)) criterion which stops deconvolution based on the median absolute deviation of the residual map. A model of the field was derived as a list of fitted Gaussian components brighter than 5 \(\sigma\) using the pYBDSF source extractor (Mohan & Rafferty, 2015). After predicting model visibilities using Meqtrees (Noordam & Smirnov, 2010), self-calibration was performed by solving for phases with 180 s and 60 s solution intervals for 21 cm and 92 cm data respectively. The datasets were then re-imaged to construct an improved model for a second round of phase calibration, again fitting Gaussians above 5\(\sigma\) and using solution intervals of 30 s and 10 s respectively. Finally, residual amplitude and phase self-calibration was performed using a 7 minute solution interval. The final 21 cm and 92 cm uniform-weighted images are shown in Fig. 2. The 92 cm synthesized beam is too large, even at uniform weighting, to accurately model and subtract bright AGN cluster members at this frequency. The 92 cm data is, however, useful in estimating the spectral profiles of the compact emission. On the other hand, the resolution of the 21 cm data enables us to produce an image at an intermediate resolution of \(25.8\times 12.1\) arcsec using Briggs (Briggs, 1995) weights of -0.25 (compact sources brighter than 2\(\sigma\) were subtracted from the visibilities from modelling at highest possible resolution prior to imaging). This image has better sensitivity to extended structure and was used to assess the presence of diffuse emission. The 21 cm and 92 cm Briggs -2.0 maps have synthesized beams of \(23.2\times 10.4\) arcsec and \(83\times 74\) arcsec respectively at uniform weighting4. 
The area of the synthesized beam is approximately given by an elliptical Gaussian and is \(\Omega_{21\rm cm}=20.58\) px and \(\Omega_{92\rm cm}=523.97\) px respectively5, where \(\theta_{l}\) and \(\theta_{m}\) are the fitted BMAJ and BMIN at full width half maximum of the synthesized beam in pixels: Footnote 4: It is worth noting this is somewhat worse than quoted by the WEsterbork Northern Sky Survey (WENSS) (Rengelink et al., 1997), we applied a circular Gaussian taper to improve the synthesized beam shape. This resulted in a decrease in resolution by roughly a factor of 2 Footnote 5: The sampling here is kept to 3.64 arcsec — consistent with the 21cm maps to simplify computing Spectral Index (SPI) maps in the analysis. \[\Omega_{b}=\int\theta_{b}dldm\approx\frac{\pi\theta_{l}\theta_{m}}{4\ln 2}.\] \begin{table} \begin{tabular}{c c c c c c c} \hline Band & Config & Obs ID & Target & Span (UTC) & J2000 RA & J2000 DECL \\ \hline 92cm & 36m & 11200314 & 3C147 & 2012 Jan 17 18:30:50 – 18:45:50 & 05b42m36.135 [FOOTNOTE:]Footnote 5: footnotemark: [ENDFOOTNOTE] & \(+\)49\({}^{\circ}\)51\({}^{\prime}\)07\({}^{\circ}\) \\ & & 11200315 & DA240 & 2012 Jan 17 18:49:30 – 19:04:30 & 07b42m48.017* & \(+\)55\({}^{\circ}\)54\({}^{\prime}\)22\({}^{\circ}\) \\ & & 11200316 & A781 & 2012 Jan 17 19:07:50 – Jan 18 0:76:50 & 09b20m25.401* & \(+\)30\({}^{\circ}\)30\({}^{\prime}\)07\({}^{\circ}\) \\ & & 11200317 & 3C295 & 2012 Jan 18 07:12:40 – 07:27:40 & 14b11m20.652* & \(+\)52\({}^{\circ}\)12\({}^{\circ}\)09\({}^{\circ}\) \\ & & 11200407 & 3C147 & 2012 Jan 23 18:07:10 – 18:22:10 & \\ & & 11200408 & DA240 & 2012 Jan 23 18:25:50 – 18:40:50 & \\ & & 11200409 & A781 & 2012 Jan 23 18:44:10 – Jan 24 06:43:10 & \\ & & 11200410 & 3C295 & 2012 Jan 24 06:49:00 – 07:04:00 & \\ 60m & & 11200434 & 3C147 & 2012 Jan 24 18:03:02 – 18:18:20 & \\ & & 11200435 & DA240 & 2012 Jan 24 18:22:00 – 18:37:00 & \\ & & 11200436 & A781 & 2012 Jan 24 18:02:0 – 22 04:22:00 & \\ & & 11200439 & A781 & 2012 Jan 24 18:02:0 – 0 – 06:39:20 & \\ & & 11200440 & 3C295 & 2012 Jan 25 06:45:0 – 07:00:10 & \\ 96m & & 11201063 & 3C147 & 2012 Feb 13 16:44:40 – 16:59:40 & \\ & & 112010614 & DA240 & 2012 Feb 13 17:03:20 – 17:18:20 & \\ & & 11201065 & A781 & 2012 Feb 13 17:02:40 – 16:05:20:40 & \\ & & 11201066 & A781 & 2012 Feb 13 17:02:40 – 16:05:41:30 & \\ 84m & & 11201079 & 3C147 & 2012 Feb 14 16:40:40 – 16:55:40 & \\ & & 11201080 & DA240 & 2012 Feb 14 17:17:40 – 16:55:40 & \\ & & 11201081 & A781 & 2012 Feb 13 17:17:40 – 16:55:40 & \\ & & 11201082 & 3C295 & 2012 Feb 13 17:03:20 – 05:37:30 & \\ & & 11202096 & 3C147 & 2012 Mar 31 13:39:50 – 13:54:50 & \\ & & 11202097 & DA240 & 2012 Mar 31 13:58:30 – 14:13:30 & \\ & & 11202098 & A781 & 2012 Mar 31 14:16:50 – Apr 01 0:21:550 & \\ & & 11202099 & 3C2012 & 2012 Apr 01 02:21:40 – 02:36:40 & \\ 21cm & 36m & 11200302 & 3C48 & 2012 Jan 16 18:48:30 – 19:03:30 & 01b37m41.30* & \(+\)33\({}^{\circ}\)09\({}^{\circ}\)35” \\ & & 11200303 & A781 & 2012 Jan 16 19:11:40 – 23:56:10 & \\ & & 11200305 & 3C286 & 2012 Jan 17 07:37:20 – 07:52:20 & 13b31m08.29* & \(+\)30\({}^{\circ}\)30\({}^{\prime}\)33” \\ & & & 11202012 & 3C48 & 2012 Mar 28 14:05:20 – 14:20:20 & \\ & & & 11202103 & A781 & 2012 Mar 28 14:28:40 – 27:29 02:27:40 & \\ & & & 11202014 & 3C286 & 2012 Mar 29 02:33:00 – 02:48:00 & \\ 72m & & 11202119 & 3C48 & 2012 Apr 01 13:49:40 – 14:04:40 & \\ & & & 11202120 & A781 & 2012 Apr 01 14:12:50 – Apr 02 02:11:50 & \\ & & & 11202121 & 3C286 & 2012 Apr 02 02:17:10 – 02:32:10 & \\ \hline \end{tabular} \end{table} Table 1: Summary 
of relevant observations of A781 and celestial calibrator fields taken with WSRT in various configurations at 21 and 92 cm in 2012. ) Figure 2: 1\({}^{\rm o}\)-wide 21 cm (top) and 92 cm (bottom) images cropped and centred on the cluster. The 21 cm synthesized beam is \(23.2\times 10.4\) arcsec and the noise rms 12 \(\mu\)Jy beam\({}^{-1}\). The 92 cm synthesized beam is \(83\times 74\) arcsec and the noise rms is 1.5 mJy beam\({}^{-1}\) (calculated in an empty patch where the main contribution are direction-dependent calibration artefacts from a bright source to the top left of the zoomed-in region). Both images shown here are not corrected for primary beam attenuation. Since the 92 cm data is only useful to estimate a spectral index map and not used for the detection of faint diffuse emission we did not perform further direction-dependent calibration. We over plot the contours derived for fig. 1 in faint blue for reference — the _XMM-Newton_ exposure used only extends over the central region of the radio map. ### WSRT / NVSS flux scale comparison In order to quantify the error on the absolute flux scale of our calibration we cross-match the primary-beam-corrected population of compact sources with that of the National Radio Astronomy Observatory (NRAO) Very Large Array (JVLA or VLA interchangeably) Sky Survey (NVSS, Condon et al., 1998), integrated to the lower resolution of the NVSS in the case of our 21 cm data. The source catalogue is obtained by running pvBDSF (Mohan & Rafferty, 2015) to extract the population above 20 sigma using adaptive thresholding. The WSRT power beam attenuation was corrected according to the following analytical model, prior to catalogue fitting: \[B(\theta,\nu_{\rm G})=\cos^{6}\left(65.0\nu_{\rm G}\theta\right) \tag{1}\] Here \(\nu_{\rm G}\) is the frequency in gigahertz and theta the evaluated angular separation from the pointing centre. The flux cross-match is shown in Fig. 3. We obtain a match in flux scale with an average absolute error of 7.13%. This cross-match error on the absolute flux scale of the NVSS and WSRT 21 cm maps corresponds well to the second measure of absolute flux scale error we estimate by transfer calibration from 3C48 (observed prior to target) onto 3C286 (observed after target). The absolute error in transfer scale to the scale stated by Perley & Butler (2013) is 6.20% 6. We similarly quantify the error on the flux scale of the 92 cm data, which was calibrated using 3C147 (observed prior to target), and transferred onto 3C295 (observed after the target). The absolute error to the scale of Perley & Butler (2013) was found to be 2.39% on average across the passband. We will assume a 10% error margin of the Perley & Butler (2013) scale, which is used throughout as an upper bound to the errors when computing powers and spectral indices. Footnote 6: This moderate error could be the result of not having access to a gain calibrator for our observations to monitor for amplitude stability on the system, however out of band linearity limitations are known, for instance, with the MeerKAT system when working at \(L\)-band (856–1712 MHz) which is dominated by Global Navigation Satellite System transmitters. The error quoted here could be a combination of both. We further compare the positional accuracy by cross-matching to the NVSS. To minimize the positional uncertainties brought about by extended sources we select only compact sources to cross-match. 
We define a 'compactness' criterion by measuring the ratio between integrated flux to peak flux using a pvBDSF-fitted catalogue. A ratio close to unity for a high SNR source indicates that the source is compact. We select such sources that are within \(\pm 30\%\) of unity to measure positional accuracy, shown in Fig. 4. ### Archival VLA 21 cm A, C and D configuration data The 21 cm WSRT data lacks the necessary resolution to show the structure of the AGNs associated with this cluster. As a possible complementary source of information, we reduced the same archival JVLA \(L\)-band7 data used in the Figure 4: Positional cross-match to the NVSS within the resolution of the NVSS. Here we use the fitted positions and errors given by pvBDSF. In both cases of 21 and 92 cm the positional offset is a fraction of the instrument resolution. The dashed lines indicate the median offset for the selection of compact sources. Figure 3: cross-matched fluxes between WSRT 21 cm and the NVSS, integrated to the lower resolution of the NVSS. Error bars indicate 1\(\sigma\) fit errors taken from the pvBDSF catalog fitting routine. Only sources above 20\(\sigma\) are shown. The WSRT 21 cm population was scaled to the NVSS frequency assuming a population spectral index of -0.7, typical to AGN at this frequency. analysis by Govoni et al. (2011). Details of which are summarized in Table 2. The data were flagged and calibrated with the Common Astronomy Software Applications (CASA) v4.7(McMullin et al., 2007) through Stimela v0.3.1(Makhathini, 2018). We applied flags to instances of equipment failure and shadowing. Throughout we used 3C286 to set the flux scale of the observation (Perley & Butler, 2013) and to calibrate the frequency response of the system. Since the JVLA has circular _L_-band feeds it is not strictly necessary to take a polarization model into account8. Although the data is taken at multiple epochs, 3C286 is known to be a very stable calibrator -- within 1% over the duration of 30 years (Perley & Butler, 2013) -- and thus the following model can be assumed for all three datasets: Footnote 8: In the circular basis the diagonal correlations LL and RR measure \(I\pm V\)(e.g., Smirnov, 2011) and is insensitive to the linearly polarized flux of the celestial calibrator 3C286 \[\log S=1.2515-0.4605\log\nu_{\rm G}-0.1715\log^{2}\nu_{\rm G}+0.0336\log^{3} \nu_{\rm G}\] The time-variable gain on 3C286 was calibrated by correcting for gains at 30 s intervals, before computing a single normalized bandpass correction for the entire observation. We did not correct the data for polarization leakages, since we are primarily interested in supplementing our analysis with source morphological information. Unlike the WSRT observations, the time-variability of the electronics, especially its phase, needs to be calibrated with a celestial calibrator. We used 0842+185, 1438+621 and 0851+202 for D, A and C configuration data respectively. Again, we used the multi-frequency deconvolution algorithm implemented in WSClean(Offringa et al., 2014). We enabled widefield corrections at default setting, synthesized a 70.0\({}^{\prime}\) image and deconvolved to an auto-threshold of 1\(\sigma\), using Briggs Briggs (1995) weighting with robustness 0.0. The CLEAN residuals have an rms noise of 55 \(\mu\)Jy beam\({}^{-1}\). ## 4 Results and Discussion In this section, we discuss the Main cluster as a whole, a study of the field source polarimetry, the preliminary detection of a candidate halo and lack of detection of relic emission. 
Fig. 2 shows the 1.4 GHz image covering the whole WSRT primary beam and the corresponding field at 346 MHz. A zoom into the central field overlaid on the Sloan Digitized Sky Survey 12\({}^{\rm th}\) release (Alam et al., 2015) is shown in Fig. 5. Bright radio sources are labelled and listed, along with their optical counterparts (where available), in Table 3. 9. Compact sources S1-S6 seen at 325 and 610 MHz by Venturi et al. (2011) are visible in our image too. These bright sources (S1-S6) have corresponding optical counterparts, apart from S2. We have labelled those fainter sources with clear corresponding optical counterparts as SU1 through SU7. Sources S3, S4, S5, S6, SU5, SU7, SU2 are at a similar redshift to the cluster within the quoted (Beck et al., 2016) error bars of the photometric redshifts (\(3\sigma=0.06\)), as shown in Table 3. The spectroscopic redshift for S1 is substantially different to the confirmed cluster members and indicates that the source is not part of the Main cluster, but a background AGN. Based on the lack of corresponding optical counterparts and their radio morphology two candidate relics are labelled as CR1 and CR2. Both candidate relics are on the outskirts of the heated cluster medium, while the bright AGN S6 is clearly offset from the peak X-ray emission. The JVLA high-resolution tiles in Fig. 5 highlight the extended nature of the AGN S6, which is barely resolved by the WSRT. Footnote 9: The radio sources at 1.4 GHz follow the same naming convention used in Venturi et al. (2011) The redshifts of the previously identified radio sources S1, S3, S4, S5 and S6 are listed in Table 3. S2 does not have any apparent optical counterpart and is, therefore, not included in the table. Sources S3-S6, SU1, SU2, SU5 and SU7 are very likely all cluster members. In studies done to date, the orientation of, and the emission mechanisms behind the CR1 complex are still not clearly established. The complex spans about 540 kpc and its morphology is very peculiar, neither matching a head-tail AGN nor a shock-driven relic very well. For this reason, our study also includes polarimetric measurements of the A781 Main cluster. Rotation Measure analysis of the magnetic field depth is one way to probe the medium through which radio emission propagates. To this end we calibrated for the ellipticity of the telescope feeds and leakages stemming from the non-orthogonality of the feeds and synthesized images for the central cluster region using Briggs -0.5 weighting. We made a frequency cube at the native resolution of _L_-band and performed Rotation Measure (RM) synthesis to recover the intrinsic polarization of the cluster members. We follow the definition and conventions defined by Burn (1966) and Faraday Depth synthesis derivation of Brentjens & De Bruyn (2005). The Full Width at Half Maximum (FWHM) of the Rotation Measure Transfer Function (RMTF) is given for the Westerbork _L_-band correlator10 as: Footnote 10: Digitized bandwidth coverage 1.301 to 1.460 GHz \[{\rm RMTF}_{\rm FWHM}\approx\frac{2\pi}{\lambda_{\rm max}^{2}}=118.38\ {\rm rad\ m^{-2}},\] with a maximum function support for the channelizer given by \[\phi_{\rm sup}\approx\frac{\sqrt{3}}{\min{(\Delta\lambda^{2})}}=4269.80\ {\rm rad\ m^{-2}}.\] We additionally deconvolve the Faraday Depth spectrum at each spatial pixel in the map using a variant of the CLEAN algorithm applied in Faraday Depth space to obtain the peak RM and peak-to-noise (PNR) on a pixel-by-pixel basis along the plane of the sky. 
This analysis was performed on the high-resolution 21 cm data due to the resolution limitations of the 92 cm data. It is also important to note here that the (linear) Electric Vector Polarization Angle (EVPA) calibration procedure discussed does not correct the ionospheric-induced RM on the EVPA, nor does it correct the absolute angle of the Figure 5: _Top left_: Radio contours from Fig. 2 overlaid over Digitized Sky Survey II POSS2 optical red plate. Contours are drawn starting at 60 \(\mu\)Jy beam\({}^{-1}\) in steps of \(\sqrt{2}\). _Bottom left_: Radio contours from Fig. 2 are overlaid over _XMM-Newton_ X-ray from Fig. 1. _Right_: Higher resolution (synthesized beam of \(1.935\times 1.454\) arcsec) tiles of (top to bottom) S1, S2, S4 & S6, as observed with the JVLA A+C+D configurations between 1.354 and 1.515 GHz, rms noise 54.68 \(\mu\)Jy beam\({}^{-1}\). system receivers11. Referring back to the widely-used assumption that 3C286 has a frequency constant EVPA of 33\({}^{\circ}\) (with RM therefore very close to 0 rad m\({}^{-2}\)) we estimate the ionospheric RM to be \(+4.610\pm 0.922\) rad m\({}^{-2}\). This gives a reasonably small offset at the centre of the narrow WSRT band of around 7\({}^{\circ}\) from the assumed model. The angles and the quoted RM have been corrected for this contribution. The apparent recovered EVPA and fractional linear polarization are shown in Fig. 8 for a cropped region around the cluster centre. We note that the linear polarization vectors shown for CR1 and CR2, the RM peaks in Fig. 6 and the associated statistics in Table 4 are corrected for both the approximate ionospheric contribution to the Faraday rotation, as well as the approximated \(16\pm 5\) rad m\({}^{-2}\) Galactic foreground contribution (Oppermann et al., 2015). Footnote 11: See discussion in Hales (2017) The synthesized global RM map and associated peak-to-noise estimates are given in Fig. 6. There is clear evidence for compressed polarized emission along the bright eastern spine and along the northwestern edge of the CR1 complex (see Fig. 8). The compact sources at cluster redshift, S4 and S6, have a median peak Faraday depth along the line of sight in the \begin{table} \begin{tabular}{c c c c c c} \hline Conf. 
& Bandwidth (MHz) & Obs ID & Target & Span (UTC) & J2000 RA & J2000 DECL \\ \hline D & 1355.525-1377.4, & AM469 & 1331+305 (3C286) & 1995 Mar 15 05:54:40–05:59:50 & 13\({}^{\rm h}\)31\({}^{\rm m}\)08\({}^{\rm s}\) & +30\({}^{\circ}\)30\({}^{\prime}\)32\({}^{\prime\prime}\) \\ & 1425.725–1447.6 & & & 08:00:30–08:05:50 & \\ & & & & 12:35:10–12:41:00 & 09\({}^{\rm h}\)20\({}^{\rm m}\)23\({}^{\rm s}\) & +30\({}^{\circ}\)31\({}^{\prime}\)09\({}^{\prime\prime}\) \\ & & & 0842+185 & 1995 Mar 15 03:03:10–03:06:20 & 08\({}^{\rm h}\)42\({}^{\rm m}\)05\({}^{\rm s}\) & +18\({}^{\circ}\)35\({}^{\prime}\)40\({}^{\prime\prime}\) \\ & & & & 04:12:10–04:13:20 & \\ A & 1355.525–1377.4, & AB699 & A0781 & 1994 Apr 20 00:00–00:00:31:40 & 09\({}^{\rm h}\)20\({}^{\rm m}\)23.7\({}^{\rm s}\) & +30\({}^{\circ}\)31\({}^{\prime}\)09\({}^{\prime\prime}\) \\ & 1425.725–1447.6 & & 1331+305 (3C286) & 1994 Apr 29 01:30:01–05:10 & 13\({}^{\rm h}\)31\({}^{\rm m}\)08.2873\({}^{\rm s}\) & +30\({}^{\circ}\)30\({}^{\prime}\)32.9590\({}^{\prime\prime}\) \\ & & & & 11:37:50–11:42:40 & \\ & & & & 1994 Apr 20 11:14:40–11:18:20 & \\ & & & & 04:58:0–05:01:50 & \\ & & & & 1438+621 & 1994 Apr 20 00:0–01:41:40 & 14\({}^{\rm h}\)38\({}^{\rm m}\)44.7873\({}^{\rm s}\) & +62\({}^{\circ}\)11\({}^{\prime}\)54.397\({}^{\prime\prime}\) \\ & & & & 10:42:50–10:44:40 & 10\({}^{\rm h}\)38\({}^{\rm m}\)44.7873\({}^{\rm s}\) & +62\({}^{\circ}\)11\({}^{\prime}\)54.397\({}^{\prime\prime}\) \\ & & & & 10:42:50–10:44:40 & 10\({}^{\rm h}\)38\({}^{\rm m}\)44.7873\({}^{\rm s}\) & +62\({}^{\circ}\)11\({}^{\prime}\)54.397\({}^{\prime\prime}\) \\ & & & & 11:15:00–11:16:20 & \\ & & & & 11:35:30–11:36:40 & \\ & & & & 1994 Apr 20 08:24:10–08:25:20 & \\ & & & & 09:27:40–09:28:50 & \\ \hline Config & Bandwidth (MHz) & Obs ID & Target & Span (UTC) & B1950 RA & B1950 DECL \\ \hline C & 1452.4–1477.4, & AO048 & 0781AB & 1984 May 05 02:49:00–05:25:30 & 09\({}^{\rm h}\)17\({}^{\rm m}\)23.30\({}^{\rm s}\) & +30\({}^{\circ}\)44\({}^{\prime}\)05\({}^{\prime\prime}\) \\ & 1502.4–1527.4 & & 1328+307 (3C286) & 1984 May 05 08:32:00–08:35:00 & 13\({}^{\rm h}\)28\({}^{\rm m}\)49.657\({}^{\rm s}\) & +30\({}^{\circ}\)45\({}^{\prime}\)58.64\({}^{\prime\prime}\) \\ & & & & 08:52:00–08:55:00 & \\ & & & & 09:16:00–09:18:00 & \\ & & & & 11:15:30–11:17:30 & \\ & & & & 11:33:30–11:35:30 & \\ & & & & 11:56:30–11:58:30 & \\ & & & & 0851+202 & 1984 May 05 02:41:30–24:30:00 & 08\({}^{\rm h}\)51\({}^{\rm m}\)57.253\({}^{\rm s}\) & +20\({}^{\circ}\)17\({}^{\prime}\)58.44\({}^{\prime\prime}\) \\ & & & & 02:59:00–03:02:00 & \\ & & & & 05:06:00–05:15:30 & \\ & & & & 05:31:30–05:33:30 & \\ \hline \end{tabular} \end{table} Table 2: Archival JVLA observation details. Here only the details of the relevant calibrators and targets field are shown. 
\begin{table} \begin{tabular}{c c c c c} \hline Source ID & RA\({}_{\rm J2000}\) & DEC\({}_{\rm J2000}\) & \(z\) \\ \hline S1 & 9\({}^{\rm h}\)20\({}^{\rm m}\)1.2\({}^{\rm s}\) & +30\({}^{\circ}\)34\({}^{\prime}\)5.3\({}^{\rm s}\) & 1.305\({}^{\rm s}\) \\ S3 & 9\({}^{\rm h}\)20\({}^{\rm m}\)9.2\({}^{\rm s}\) & +30\({}^{\circ}\)30\({}^{\prime}\)8.1\({}^{\rm s}\) & 0.303\({}^{\rm s}\) \\ S4 & 9\({}^{\rm h}\)20\({}^{\rm m}\)1.4\({}^{\rm s}\) & +30\({}^{\circ}\)28\({}^{\prime}\)59.4\({}^{\rm s}\) & 0.297\({}^{\rm s}\) \\ S5 & 9\({}^{\rm h}\)20\({}^{\rm m}\)22.4\({}^{\rm s}\) & +30\({}^{\circ}\)32\({}^{\prime}\)30.7\({}^{\rm s}\) & 0.304\({}^{\rm s}\) \\ S6 & 9\({}^{\rm h}\)20\({}^{\rm m}\)22.3\({}^{\rm s}\) & +30\({}^{\circ}\)29\({}^{\prime}\)43.3\({}^{\rm s}\) & 0.293\({}^{\rm s}\) \\ SU1 & 9\({}^{\rm h}\)20\({}^{\rm m}\)22.5\({}^{\rm s}\) & +30\({}^{\circ}\)31\({}^{\prime}\)31.7\({}^{\rm s}\) & 0.304\({}^{\rm s}\) \\ SU2\({}_{\rm s}\) & 9\({}^{\rm h}\)20\({}^{\rm m}\)10.6\({}^{\rm s}\) & +30\({}^{\circ}\)32\({}^{\prime}\)6.0\({}^{\rm s}\) & 0.3(9)\({}^{\rm p}\) \\ SU2 range of a few 10s of rad m\({}^{-2}\) (see Fig. 6) and median peak distribution statistics in Table 4. To varying degree, the integrated Faraday Depth spectra shown in Fig. 7 indicate that, apart from S4 and S6, the other established cluster members (S3 and S5) are not subject to a single constant Faraday screen. They instead show complex magnetic fields, both parallel and orthogonal to the line of sight. The line of sight to the non-cluster-member S1 crosses the periphery to the cluster and by chance has a similar Faraday Depth to S4 and S6, both having lines of sight that may also only cross part of the whole cluster ICM. The majority of these sources are also largely depolarized, with the exception of CR1. This is not unexpected -- actively merging clusters tend to show little polarization within the merger region. This is likely due to the fine spatial-scale turbulence in such systems, where the synthesized beam acts to depolarize emission on these scales (Van Weeren et al., 2019). In this case, the synthesized beam corresponds to about 46 x 105 kpc -- ie. similar in angular extent to most of the field sources. Next, we will discuss the properties of these two candidate relics separately. ### CR1 (head-tail galaxy or relic?) This source was already observed (Venturi et al., 2011) at 325 MHz and tentatively classified as a candidate radio relic in the light of its peripheral position, morphology and spectral index steepening from \(-1.4<\alpha<-1.8\) northwards with decreasing distance from the cluster centre. Botteon et al. (2019) identifies an optical counterpart to the west of the bright optical source near the peak of the radio emission seen in Fig. 5. The source, spanning 550 kpc, is similar in its morphology when compared to observations taken at 150 MHz by Botteon et al. (2019). A bright knot of emission appears at the southernmost point of the CR1 source, connected to a high surface brightness spine that extends northeast. Two optical galaxies -- at approximately the cluster redshift -- coincide with the bright knot. Based on the combined X-ray and radio analysis, Botteon et al. (2019) concluded that CR1 is either a relic or a head-tail radio galaxy with morphology distorted by a (weak) shock. Our total intensity image is in fair agreement with previous observations. The morphology of CR1 at 1.4 GHz is similar to the low-frequency data, with a similar extent (\(\sim 540\) kpc linear size). 
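The linear sizes quoted in this section (for instance the \(\sim 540\) kpc extent of CR1, or the \(\sim 46\times 105\) kpc footprint of the synthesized beam) follow from the proper-distance scale at the cluster redshift. Below is a minimal sketch of that conversion; the flat \(\Lambda\)CDM parameters are our assumption and may differ slightly from those adopted in the paper.

```python
# Sketch: convert angular sizes to proper (physical) sizes at the cluster
# redshift, assuming a flat LCDM cosmology (H0 and Om0 below are assumptions).
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)
z_cluster = 0.3004  # Main cluster redshift used in the text

# proper-distance scale at the cluster redshift, in kpc per arcsec
scale = cosmo.kpc_proper_per_arcmin(z_cluster).to(u.kpc / u.arcsec)

def angular_to_proper(size_arcsec):
    """Proper size (kpc) subtended by an angle (arcsec) at z_cluster."""
    return (size_arcsec * u.arcsec * scale).to(u.kpc)

print(scale)                    # roughly 4.6 kpc/arcsec for these parameters
print(angular_to_proper(90.0))  # a 1.5 arcmin source corresponds to ~400 kpc
print(angular_to_proper(23.2), angular_to_proper(10.4))  # beam axes, of order 10^2 kpc
```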
We measure a flux density of \(S_{1.4}^{CR1}=14.6\) mJy, integrated over the the 60 \(\mu\)Jy beam\({}^{-1}\) contour shown in Fig. 5. The integrated flux density at 375 MHz (\(90\pm 9\) mJy) is in agreement with previous measurements (\(88\pm 12\) mJy) (Botteon et al., 2019). This is shown in Fig. 9. Assuming the source is within the vicinity of the Main cluster at redshift \(z=0.3004\) the power extrapolation is slightly underestimated at 1.4 GHz by Botteon et al. (2019) (who assumed a spectral index steeper than presented here). The spectrum is plotted in Fig. 9 -- here we use the integrated spectrum of the bright emission of the spine, which is about -1.4 and the integrated emission within the first contour of the CR1 complex in Fig. 5 to derive the radio power at 1.4GHz: \[P_{\rm CR1,1.4GHz}=\frac{4\pi D_{\rm L}^{2}S_{\nu}}{(1+z)^{1+\alpha}}\approx 4.78\times 10^{24}\ {\rm W\ Hz^{-1}}.\] \begin{table} \begin{tabular}{l c c c c} \hline \hline Source & \(z\) & \(\phi\) max & \(\phi\) median & \(\phi\) IQS \\ \hline S1 & 1.30526\({}^{\circ}\) & -34.48 & -18.71 & 17.08 \\ S2 & - & -34.48 & -60.46 & 143.26 \\ S3 & 0.30284\({}^{\circ}\) & -34.48 & 0.26 & 541.27 \\ S4 & 0.29741\({}^{\circ}\) & -103.45 & -43.38 & 34.16 \\ S5 & 0.30360\({}^{\circ}\) & 448.28 & -339.40 & 868.12 \\ S6 & 0.29262\({}^{\circ}\) & -34.48 & -20.61 & 24.67 \\ CR1 & - & -103.45 & -35.79 & 45.54 \\ CR2 & - & -586.21 & 394.95 & 948.77 \\ \hline \end{tabular} \end{table} Table 4: Properties of RM pixel distributions where rotation measure pixel peak to noise exceeds 3x, as shown in Fig. 6. We indicate the peak of the distribution, as well as the median and IQS of the contributing data here. Figure 6: _Left_: Peak rotation measure map of the Main cluster. _Right_: Peak-to-noise map of the rotation measure values after RM deconvolution. ) Figure 7: _Bottom_: Peak RM distributions (weighted by peak to noise) for a selection of sources, marked in the maps in fig. 6. Clips are imposed to highlight only the AGN and candidate relic emission. We also plot the normalized non-deconvolved spectra for the 30 spectra with the highest deconvolution peak-to-noise ratio (PNR) (shown top right), extracted per source. The spectra’ colours are dependent on the PNR weight — darker to be interpreted as spectra with a higher PNR value. SU1, S5 and S3 have low (as in the case of SU1) to moderate (S5 and S3) signal-to-noise ratios for RM measurement and the variations seen across the sources may not be an indication of substantial magnetic fields — we do not plot spectra for these here. The distributions consider pixels where the peak RM is at least 5x the noise, as indicated top right. The spectral index image is shown in Fig. 10. We used the same image field of view and sampling for both maps and tapered both maps to the lowest possible resolution, as taken from the 92 cm fitted beam (83 arcsec). We also corrected for the attenuation of the antenna primary lobe, using Equation 1. A 4 mJy beam\({}^{-1}\) cutoff was used to allow for only high sigma components in the spectral index image. Although the 92 cm resolution does not resolve the fine structure of either candidate (CR1 or CR2) the integrated trends are visible. As noted in Botteon et al. (2019) the bulk of the spine of CR1 has a steep spectrum of \(\alpha\approx-1.4\), steepening to \(\alpha<-1.6\) closer to the cluster centre. This is indicative of synchrotron radiation losses. The moderately steep spectra are consistent with the known spectra of other relics (Van Weeren et al., 2019). 
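The k-corrected power quoted above follows from the luminosity distance at the cluster redshift. The sketch below reproduces the calculation, assuming a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\) km s\(^{-1}\) Mpc\(^{-1}\) and \(\Omega_{m}=0.3\) (our assumption; the adopted cosmology of the paper may differ), with the flux density, redshift and spectral index taken from the text.

```python
# Sketch: k-corrected radio power P = 4*pi*D_L^2*S_nu / (1+z)^(1+alpha),
# as in the equation above. The cosmological parameters are assumptions.
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)

def radio_power(flux_density_mJy, z, alpha):
    """k-corrected monochromatic radio power in W/Hz."""
    S = (flux_density_mJy * u.mJy).to(u.W / u.m**2 / u.Hz)
    D_L = cosmo.luminosity_distance(z).to(u.m)
    return (4 * np.pi * D_L**2 * S / (1 + z)**(1 + alpha)).to(u.W / u.Hz)

# CR1: S_1.4GHz = 14.6 mJy, z = 0.3004, alpha ~ -1.26 (values from the text)
print(radio_power(14.6, 0.3004, -1.26))  # a few times 1e24 W/Hz, consistent with the value above
```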
From the RM map (Fig. 6) we see that the area near the overlapping optical galaxy immediately northwest of the bright radio bulge to the south is largely depolarized (Fig. 8). On the contrary, the rest of the CR1 complex is relatively polarized (Fig. 8), especially the eastern and western edges. Both edges have peak rotation measures closer to zero and agree with what is observed on other compact sources at the cluster redshift, specifically S6 and S4. The polarization EVPA is reasonably well aligned in the plane of the sky in the direction we would expect to see a merger shock (Fig. 8). However, the increased polarization fraction along the structure is more consistent with relatively low fractional polarization as is typically observed in the jets of AGN (Homan, 2005). The polarization characteristics of this peripheral complex stand in stark contrast to the degree of polarization of typical relics observed in the literature (see e.g. Wittor et al. (2019)). The spectral index is substantially steeper than expected for steep spectra AGN population at 1.4 GHz (e.g. De Zotti et al. (2010)) -- this source will be considered very steep according to the distributions of SPI for both narrow and wide-tailed radio AGN (Sasmal et al., 2022), while the spectrum is at the low end of expected integrated spectra for radio relics Feretti et al. (2012). Although the low polarization fraction observed and the physical size (assuming cluster redshift for this source) may point to the source be Figure 8: _Top_: polarization fraction map of the cluster. The cluster members are mostly depolarized apart of predominantly CR1 and to a lesser extent CR2 and S3. The colour bar indicates fractional polarization. Here 15\(\sigma\) clips based on the total intensity map are used to differentiate the sources from the background. _Middle_ and _Bottom_: zoom in to showcase the polarized edges of CR1 and CR2 respectively. Here the vector field is overlaid on the optical DSS red plate, with contours in steps of \(\sqrt{2}\) from 13 sigma (1\(\sigma=60\)\(\mu\)Jy beam\({}^{-1}\)) Figure 9: Integrated spectrum of the source CR1. Here we show the measurements made by Botteon et al. (2019) and those we derive from the WSRT 92 and 21 cm bands. The dashed line indicates the best fitting to the LOFAR, uGMRT and WSRT flux density measurements to date. The shaded area is the propagated error on our fit. The new spectral index estimate of \(-1.26\pm 0.08\) is flatter than reported in Botteon et al. (2019) longing to the class of _Radio Pheonix_ (shock reaccelerated fossil emission from AGN), the ultra-steep spectra of these sources are typically curved and in excess of -1.5 Van Weeren et al. (2019). As a result, such emission is typically only seen at much longer wavelengths. Coupled with the coincidence of the optical counterpart with the knot of radio emission in the south of the complex, as reported by Botteon et al. (2019), the spectral and polarimetric measurement suggests that CR1 is neither a radio relic nor Pheonix. It is much more likely that CR1 is an ageing head-tail galaxy. ### CR2: another candidate relic? There is extended emission north of the hot X-ray region, labelled as CR2. This is the same source seen by Botteon et al. (2019) in LOFAR data. The elongated source is roughly \(1.5^{\prime}\) (\(\sim 405\) kpc, assuming cluster redshift), with an integrated flux density, of \(S_{\rm CR2,21cm}=2.7\pm 0.27\) mJy, measured within the area defined by the 60 \(\mu\)Jy beam\({}^{-1}\) contour in Fig. 5. 
The source has a spectrum between \(-1.0\) to \(-1.2\) over most of its area, as seen in Fig. 10 and there is no clear optical counterpart (Fig. 8 Bottom). Similarly to CR1, the radio spectrum of CR2 steadily steepens towards the the cluster center. If we assume an integrated spectral index of \(-1.1\), and that the source has the same redshift as the cluster, then the k-corrected radio power is \[P_{\rm CR2,1.4GHz}\approx 8.16\times 10^{23}\ {\rm W\ Hz}^{-1}.\] CR2 is only slightly polarized (Fig. 8 bottom) and has a rotation measure markedly different to the primary cluster AGNs S3, S4 and S6 (Fig. 7). Considering the contrast of the distribution of Faraday Depths of CR2 compared to the other cluster sources, save for S3 and S5, it is clear that this source of emission must either be located in an area with marked differences in foreground magnetic fields or intrinsically has complex Faraday screens. However, both its compact round morphology and low fractional polarization is in stark contrast to what is generally expected for radio Figure 10: _Top left:_ Spectral index map computed from WSRT 92 and 21 cm maps after tapering the resolution to a circular Gaussian of 73.08\({}^{\prime\prime}\) (lowest resolution) as indicated in the top map. The black contours from the uniform-weighted maps are drawn in steps of \(\sqrt{2}\) starting at 60 \(\mu\)Jy beam\({}^{-1}\). We apply a dilated mask to the levels of the contours drawn to highlight only the spectral index in the region of the sources. _Top right:_ Associated spectral index error map. We assume standard quadrature propagation of error rules (with band co-variance assumed as 0) for logarithms with flux scale errors at 10% level and tapered 21- and 92-cm noise of 18 \(\mu\)Jy beam\({}^{-1}\) and 1 mJy beam\({}^{-1}\) respectively. _Bottom left_ and _Bottom right_: Zoom in of the CR1 and CR2 complexes respectively. relics, although we cannot exclude projection effects on the morphology due to the available resolution. ### Revisiting halo claims Botteon et al. (2019) achieves sensitivities of \(\sigma_{\rm 143MHz}=270~{}\mu\)Jy beam\({}^{-1}\), \(\sigma_{\rm 325MHz}=150~{}\mu\)Jy beam\({}^{-1}\), \(\sigma_{\rm 610MHz}=120\mu\)Jy beam\({}^{-1}\) at resolutions of \(11.1\times 6.5\) arcsec, \(10.6\times 7.2\) arcsec and \(13.5\times 9.8\) arcsec respectively. At \(12~{}\mu\)Jy beam\({}^{-1}\) rms noise at a resolution of \(23.2\times 10.4\) arcsec, using all redundant spacings, our 21 cm data is the most sensitive \(L\)-band data on this field to our knowledge, and is at similar magnitude to LOFAR sensitivities when scaled by a halo spectrum of -1.3. This improved \(L\)-band sensitivity warrants a renewed look at the cluster center for any signs of halo emission. Fig. 11 shows the 21 cm Briggs \(-0.25\) weighted residuals after the bright AGNs within the cluster have been subtracted from the visibilities by means of a Direct Fourier Transform (DFT) implemented with Meqtrees(Noordam and Smirnov, 2010). The subtraction is performed by iterative fitting for components above \(2\sigma\) using pyBDSF(Mohan and Rafferty, 2015) and re-imaging. In total 3 rounds of subtraction and re-imaging were performed on the 21 cm data, each time explicitly excluding the components fitted to the southeastern complex. We carefully checked that the subtracted components only fall within the areas of AGNs S2-S6, to within instrument resolution. All fitted components were delta components. 
The accuracy in subtracting S6 (\(S_{1.4}\sim 32.6\pm 3.3\) mJy beam\({}^{-1}\) peak flux density) is the limiting factor in achieving high dynamic range on the low-resolution images within the vicinity of the hot merger region. To the south of the Main cluster we find traces of a bridge-like structure connecting CR1 to low-level emission extending throughout the hot X-ray plasma surrounding the central AGN. Excluding this bridge-like low-level structure we find the integrated flux of the extended emission to be \(1.9\pm 0.2\) mJy within the \(6\sigma\) contours in the X-ray-radio plot in Fig. 11 (top). This measurement is limited by a subtraction error at the level of 0.1 mJy beam\({}^{-1}\). It should be cautioned that the instrumental resolution of the WSRT system is the main limiting factor to the case presented here and spurious background sources, coupled with subtraction errors from S6 may be contributing to the integrated flux. If the apparently-diffuse emission seen is indeed a halo, its integrated flux would place it an order of magnitude below the upper bounds set by Venturi et al. (2011) and Bottoen et al. (2019). It is in-keeping with an average spectral index below \(-1.44\) if the upper detection limit of \(S^{\rm upper}_{\rm 143MHz}=50\) mJy is assumed from Bottoen et al. (2019). Assuming this as an upper limit to the spectral index, the k-corrected radio power at cluster redshift is: \[P_{\rm Halo,1.4GHz}\approx 6.28\times 10^{23}~{}{\rm W~{}Hz}^{-1}.\] Although one would have expected this power to be at least an order of magnitude larger on the \(0.1-2.4\) kev X-ray / radio power correlation, the power is still close to other detected haloes on the Mass / radio power correlation (Van Weeren et al., 2019). Without the availability of sensitive higher resolution data the error estimates presented here may be optimistic, however, it is noted that the apparently-diffuse emission extends over the majority of the disturbed X-ray thermal region with a diameter (excluding the bridge-like structure) of around 0.6 Mpc. ## 5 Conclusions We have observed the dynamically-disturbed A781 cluster Figure 11: _Top_: Briggs -0.25 weighted residuals after subtracting the bright AGNs in the cluster overlaid on _XMM-Newton_ X-ray. The rms noise is estimated as 6.23 \(\mu\)Jy beam\({}^{-1}\). Radio contours are plotted in \(\sqrt{2}\)-scale starting at 6 \(\sigma\). Dashed contours indicate \(-6\)\(\sigma\) subtraction errors. The synthesized beam at -0.25 weighting is \(25.8^{\prime\prime}\times 12.1^{\prime\prime}\) (\(\Omega_{b}\approx 26.63\)px) shown in red. _Bottom_: Subtracted residuals plotted in colour overlaid with blue contours from uniform maps presented in Fig. 2 for reference, contours starting from 7 \(\sigma\) (\(\sigma=12~{}\mu\)Jy beam\({}^{-1}\)) in \(\sqrt{2}\)-scale. Red contours from the higher resolution VLA reductions overlaid, starting from 7 \(\sigma\) (\(\sigma\approx 60~{}\mu\)Jy beam\({}^{-1}\)) in \(\sqrt{2}\)-scale. Beam indicated in hatched lines. complex with the Westerbork Synthesis Radio Telescope at 21 and 92 cm. We presented the most sensitive \(L\)-band observations of the system to date. We have found, what appears to be the existence of low-level diffuse emission around the central region of the merging cluster, although our measurement is limited by instrumental resolution. The integrated emission is nearly an order of magnitude less than the flux density claimed by Govoni et al. (2011). 
This is in keeping with the suggestion by Venturi et al. (2011) of an unusual flat-spectrum radio halo, and is well below the radio power expected from the \(P_{\rm 1.4GHz}-L_{x}\) relationship. Our maps corroborate the Botteon et al. (2019) observation of radio emission at the southeastern and northwestern flanks of the hot X-ray plasma. We have studied the polarimetric properties of the southeastern and northern complexes in detail. We find that the edges of the southeastern complex are polarized, with low Faraday Depth. Neither complex is highly polarized (fractions less than 8% and 1.5% for the southern and northern complexes respectively), further qualifying earlier statements by Botteon et al. (2019) that only relatively weak shocks are present in the Main cluster. This evidence, together with their morphology, argues against the interpretation of these complexes as radio relics. The southeastern complex most likely has its origin as head-tail emission from an AGN with an unclear optical counterpart. The corroborating evidence hinting at the existence of an ultra-low flux density halo warrants further telescope time with sensitive high-resolution instruments such as LOFAR at lower frequencies, and SKA precursor telescopes such as the MeerKAT _UHF_ (544-1088 MHz) and \(L\)-band (856-1712 MHz) systems. Such observations will firmly establish the spectrum and integrated power of this very peculiar cluster.

## Data availability

Data was generated at a large-scale facility, WSRT. FITS files are available from the authors upon request.

## Acknowledgements

This work is made possible by use of the Westerbork Synthesis Radio Telescope operated by ASTRON Netherlands Institute for Radio Astronomy. Our research is supported by the National Research Foundation of South Africa under grant 92725. Any opinion, finding and conclusion or recommendation expressed in this material is that of the author(s) and the NRF does not accept any liability in this regard. This work is based on the research supported in part by the National Research Foundation of South Africa (grant No. 103424). This research has made use of the services of the ESO Science Archive Facility. The Second Palomar Observatory Sky Survey (POSS-II) was made by the California Institute of Technology with funds from the National Science Foundation, the National Geographic Society, the Sloan Foundation, the Samuel Oschin Foundation, and the Eastman Kodak Corporation. Based on observations obtained with _XMM-Newton_, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. The original description of the VizieR service was published in A&AS 143, 23. This research made use of APLpy, an open-source plotting package for Python hosted at [http://aplpy.github.com](http://aplpy.github.com). This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2013). The research of OS is supported by the South African Research Chairs Initiative of the Department of Science and Technology and National Research Foundation. The primary author wishes to thank the National Research Foundation of South Africa for time granted towards this study.
2304.00084
A new perspective on border completion in visual cortex as bicycle rear wheel geodesics paths via sub-Riemannian Hamiltonian formalism
We present a review of known models and a new simple mathematical modelling for border completion in the visual cortex V1 highlighting the striking analogies with bicycle rear wheel motions in the plane.
R. Fioresi, A. Marraffa, J. Petkovic
2023-03-31T19:18:23Z
http://arxiv.org/abs/2304.00084v1
###### Abstract ###### Abstract We present a review of known models and a new simple mathematical modelling for border completion in the visual cortex V1 highlighting the striking analogies with bicycle rear wheel motions in the plane. **A new perspective on border completion in visual cortex as bicycle rear wheel geodesics paths** **via sub-Riemannian Hamiltonian formalism** R. Fioresi12, A. Marraffa 3, J. Petkovic4 Footnote 1: University of Bologna, Italy Footnote 2: Funding by COST Action CaLISTA CA21109, CaLIGOLA MSCA-2021-SE-01-101086123, GNSAGA Footnote 3: University of Bologna, Italy Footnote 4: Mainz University Medicine, Germany ## 1 Introduction The issue of perceptual completion has been widely studied in psychology from a phenomenological point of view. The Gestalt psychology's Pragnanz laws of perception (see [14] and refs. therein), identify the empirical principles under which the human brain integrates local and incomplete visual stimuli into shapes and smooth figures. Among them, continuity is the leading principle for the perception of contours and is reproduced as a law in any mathematical modeling of boundary completion. A more quantitative approach to the study of visual phenomena and edge reconstruction, appears in [9], where the notion of receptive fields of simple and complex cells in the primary visual cortex is clearly introduced, and the functional structure of the primary visual cortex is also outlined [10]. Furthermore, the primary visual cortex, with its topographic, layered, and columnar functional organization, lends itself to fit into a differential geometry framework, through the concept of fiber bundles. This fact led to the first mathematical treatment of V1 in [20] (see also [3] and refs. therein). In particular, the experiments in [19] suggested the employment of geometrical methods to describe the border completion mechanism as in [18]. In [11, 7] a natural sub-Riemannian structure on V1 is introduced through analytic methods, arriving to a rigorous notion of geodesic for the border completion problem. Later on more work in the same direction appeared in [3] and [1]. The purpose of our paper is to provide a clear and organized overview of different treatments (e.g. [7, 18, 11, 20], [1] and refs. therein), highlighting, in particular, the striking analogy between human vision and bycicle kinematics (see [2]). In fact, we tackle the problem of border completion, by exploiting the analogies between the physiological framework, that we established for border reconstruction, and the one described in [2] for bicycle rear wheel trajectories. In particular, taking advantage of this correspondence, we are able to use the bicycle model to reformulate the problem of boundary completion in terms of finding solutions to the Hamilton's equations of the given sub-Riemannian structure in a more geometric and intrinsic manner [2, 17]. With this aim, in section 2, we give an insight into the neurophysiological aspects of the early visual system. We provide some elementary notions about the role of the retina and the lateral geniculate nucleus, and we outline the structural organization of the primary visual cortex. Here, we only focus on the aspects concerning oriented edge detection, and a more comprehensive description can be found in books on neural physiology (see [12] and refs. therein). In section 3, we give a brief overview and comparison of models present in the literature, relevant to our study. 
In section 4, we recall the main features of the model for computing trajectories of the front and rear wheel of the bike through subriemannian Hamiltonian mechanics in [2] and [17]. Finally, in section 5 we highlight the analogies between the model for rear wheel bicycle trajectories and a model for the border detection mechanism in V1 through the _orientation vector field_. We are able to establish a dictionary that allows us to translate one problem into the other, and treat them as one from a mathematical point of view. Finally, in section 6, we exploit this identification to find the geodesic's equation of the subriemannian structure ([7, 11]) in the Hamiltonian formalism ([17]). The solutions to the geodesic's equations provide us with the reconstructed boundaries in a more natural and intrisic way. ## 2 The visual pathway In this section, we give a quick overview of the visual system's lowest portion, from the retina to the primary visual cortex; later on we shall outline a mathematical model for orientation construction which is faithful to the effective functioning of the neural structures involved. Hence, we need to address the question of how the perception of orientation arises from sensory receptors sensitive to single, discrete light inputs. With this aim, we focus our brief introduction on the retina, the lateral geniculate nucleus, and the primary visual cortex itself. The first structure involved in the process of transforming images into nerve signals is the retina, which is located in the posterior inner surface of the eye. Roughly speaking, the eye captures the light input and projects it onto the retinal photoreceptors capable of measuring light intensity. At this early stage, the neural representation of the visual scene is still simple, with hyperpolarized receptors in lighted areas and depolarized receptors in dark areas. Retinal ganglion cells (RGCs) project the information to the lateral geniculate nucleus in the thalamus through the optic nerve. Their main computational feature is their circular receptive field, divided into two concentric parts in mutual competition: the center and the inhibitory surround. As a consequence, RGCs give the best possible response to the difference in brightness between their center and surrounds, while they respond weakly to uniform light stimuli falling on the whole receptive field. The lateral geniculate nucleus (LGN) is a small ventral projection of the thalamus connected to the optic nerve. It represents a crucial relay station in the image processing pathway, and its functionalities include a smoothing effect on the visual image, which is fundamental for contour perception and our modelling of the visual image as a smooth function. Its axons are directly sent to the primary visual cortex, where the construction of the orientation percept takes place. Along the lower section of the visual pathway, there is faithful preservation of spatial information. This property is called _retinotopy_ and, for the primary visual cortex V1, it means that we have a homeomorphism between the hemiretinal receptoral layer and V1 called _retinotopic map_. The existence of such a map allows us to identify V1 with a compact domain \(D\) in \(\mathbb{R}^{2}\). In the primary visual cortex, we find cortical columns composed of simple and complex cells. These columns, through the non-concentric, striped receptive fields of their cells, build the orientation information from the input signal, effectively associating an orientation value, i.e. 
an angle, \(\theta\) to each point \(x,y\) of the perceived image. According to the fundamental work [9], V1 is organized in functional units called _hypercolumns_ that allow the analysis of different aspects of the visual information separately and in parallel. In particular, an orientation hypercolumn contains a full set of simple and complex cells, i.e. a full set of _orientation columns_. Within an orientation column, we have cells which address the same portion of the visual image and best respond to the same orientation \(\theta\). In this model, called the _ice cube model_, see Fig. 1, we assume that at each point of the domain \(D\) there is a complete orientation hypercolumn, i.e at each \(2\pi\). This fact leads to identifying V1 with the _orientation bundle_ defined as \({\cal E}:=D\times S^{1}\to D\), which also represents the configuration space manifold for border reconstruction (see [7], [11], [20]). ## 3 Vision models for boundary completion In this section we give a brief overview of the models for the boundary completion mechanism; for more details see [4] and refs. therein. One of the first successful attempts to reproduce this phenomenon was reported by Mumford in [18]. Here, starting from structural considerations on the V1 horizontal connectivity, the mechanism of border completion is modeled as a stochastic process where active hypercolumns induce a random compatible orientation in the neighbouring inactive hypercolumns, i.e. the hypercolumns corresponding to points in the image where a border is missing. In this framework, a boundary is completed when this inductive process generates a line joining two given orientation cues. Clearly, there is an infinite number of curves that can be obtained in this fashion, but Mumford assumes that the ones corresponding to the experimentally seen reconstructed boundaries are the ones that maximize the probability of reaching completion after a fixed number of iterations. This condition is then proven to be equivalent to minimizing the elastica integral \[\int_{\gamma}\alpha\,k^{2}+\beta\;ds\] where \(\alpha\) and \(\beta\) are constants,\(k\) the curvature of \(\gamma\), \(ds\) the arc length. Such minimization leads to the _elastica curve_, originally investigated by Bernoulli and Euler [5] (for a review both mathematical and historical see [15] and refs. theirin). Later, Petitot and Tondut [11] show how Mumford's approach can be Figure 1: Ice cube model reformulated in a differential geometric framework. Building upon the intuitions of Hoffman [20], who first provided an interpretation of the visual cortex as a contact bundle and described the visual psychololgical constancies as invariants of the action of the conformal group \(CO(1,3)\) on such bundle, they formulate the boundary completion problem as a minimization of a suitable Lagrangian, and show that this approach is indeed equivalent to the elastica integral minimization. Years later, Citti and Sarti ([7]) further expanded this geometrical approach, interpreting Hoffman's visual bundle as a sub-riemannian manifold, and the reconstructed boundaries as (horizontal) geodesics in this space. In addition, thanks to the fact that the distribution of such manifold is bracket-generating, they propose an iterative algorithm to compute the geodesics as solutions of the sub-heat equation \[\partial_{t}u=\Delta_{h}u\] where \(\Delta_{h}\) is the sub-laplacian operator (a laplacian with derivations taken only along the horizontal directions). 
The existence and smoothness of \(u\) is, indeed, granted by the bracket-generating distribution (_Hormander's condition_, see also [4] for a more comprehensive treatment and a complete bibliography). In the next sections, we will show that our considerations, based on the analysis of the physiological structures of the visual pathway, can provide, with a geometric and intrinsec approach, a model for border completion, strictly related to the literature ones as described above, and leveraging a fascinating correspondence between the visual cortical structure and the paths traced by the rear wheel of a bike in motion. ## 4 Bicycle Paths In this section we briefly recall how to compute the trajectories of front and rear bicycle wheels with the aid of subriemannian geometry in the Hamiltonian formalism. Our main sources are [2] and [17], where the front wheel geodesics are explicitly computed. The configuration space of the front \(f\) and rear wheel \(b\) of a bicycle is given by: \[Q=\{(b,f)\in\mathbb{R}^{2}\times\mathbb{R}^{2}\,|\,\|b-f\|=\ell\} \tag{1}\] where \(\ell\) is the distance between the rear and front wheels. Assume \(\ell=1\) as in [2] Let us denote \[v=f-b=(\cos\theta,\sin\theta)\quad\mapsto\quad(x,y)=(\tilde{x},\tilde{y})-( \cos\theta,\sin\theta)\] where \(b=(x,y)\), \(f=(\tilde{x},\tilde{y})\). We are interested, differently from [2], in the motion of \(b\), for reasons that will be clarified later. We notice immediately that \(Q\cong\mathbb{R}^{2}\times S^{1}\cong\mathrm{SE}(2)\). The key importance of such identification, in the light of our previous sections on the modeling of the visual cortex and border detection/reconstruction, will be fully elucidated in our next section. The motion of \(b\) is subject to the differential constraint \[\dot{b}=kv,\] the so called no skid condition, meaning that, when \(b\) starts to move, it must be in the direction of \(v\). We fix \(k=1\), an immaterial constant for our problem. We observe that \(\dot{v}\) is orthogonal to \(v\), hence \(\{v,\dot{v}\}\) is an orthonormal basis for \(\mathbb{R}^{2}\). Let \(\dot{f}=\dot{f}_{0}+\dot{f}_{\perp}\) be the decomposition of \(f\) in such basis, hence: \[\dot{f}_{\perp}=\dot{v}=\dot{f}-\dot{b}=\dot{f}-\dot{f}_{0}=\dot{f}-\langle\dot {f},v\rangle v\] the last equality is just expressing the projection onto \(v\). In coordinates, we obtain: \[(-\dot{\theta}\sin\theta,\dot{\theta}\cos\theta)=(\dot{\tilde{x}},\dot{\tilde{ y}})-\langle(\dot{\tilde{x}},\dot{\tilde{y}}),(\cos\theta,\sin\theta)\rangle( \cos\theta,\sin\theta)\] A small calculation gives a differential constraint: \[\dot{\theta}-\cos\theta\dot{\tilde{y}}+\sin\theta\dot{\tilde{x}}=0 \tag{2}\] If we substitute \(f=b+v\) we get: \[\cos\theta\dot{y}-\sin\theta\dot{x}=0\] which corresponds to the contact form: \[\eta=\cos\theta dy-\sin\theta dx\] The kernel of \(\eta\) gives the distribution spanned by: \[\mathcal{D}=\mathrm{span}\{X=\partial_{\theta},Y=\cos\theta\partial_{x}+\sin \theta\partial_{y}\}\] Figure 2: Configuration Space \(Q\) Its _Reeb vector field_ is \[Z=\cos\theta\partial_{y}-\sin\theta\partial_{x} \tag{3}\] as one can readily check. We now observe that the three vectors \(X\), \(Y\), \(Z\) are left invariant for the action of the special euclidean group \(\mathrm{SE}(2)\), expressed in the coordinates \((x,y,\theta)\). There is a natural metric on \(\mathcal{D}\) imposing the orthogonality of the given generators: up to scalar multiplication this is the only invariant subriemannian metric on \(\mathrm{SE}(2)\)[2]. 
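The structure relations used above (the contact form \(\eta\) annihilates \(\mathcal{D}\) and evaluates to \(1\) on the Reeb field \(Z\), and the bracket of \(X\) and \(Y\) spans the missing direction, so \(\mathcal{D}\) is bracket-generating) can be checked symbolically. The following is a minimal sketch of our own, in which vector fields are represented by their coefficient vectors in the coordinates \((x,y,\theta)\); the sign of the bracket depends on the chosen convention.

```python
# Sketch: symbolic check of the frame relations on SE(2) used above.
# A vector field is stored as its coefficient vector in (x, y, theta); the
# Lie bracket is computed as [V, W] = J_W V - J_V W, with J the Jacobian
# of the coefficients.
import sympy as sp

x, y, th = sp.symbols('x y theta')
coords = sp.Matrix([x, y, th])

X = sp.Matrix([0, 0, 1])                    # X = d/dtheta
Y = sp.Matrix([sp.cos(th), sp.sin(th), 0])  # Y = cos(th) d/dx + sin(th) d/dy
Z = sp.Matrix([-sp.sin(th), sp.cos(th), 0]) # Reeb field of eta

def bracket(V, W):
    return sp.simplify(W.jacobian(coords) * V - V.jacobian(coords) * W)

print(bracket(X, Y))   # coincides with Z (up to sign convention): D is bracket-generating
print(bracket(Y, Z))   # vanishes

# eta = -sin(th) dx + cos(th) dy annihilates X and Y and gives 1 on Z
eta = sp.Matrix([[-sp.sin(th), sp.cos(th), 0]])
print(sp.simplify(eta * X), sp.simplify(eta * Y), sp.simplify(eta * Z))
```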
## 5 Border detection in V1 and bicycle paths We want to establish a dictionary between concepts belonging to very different scopes, visual system and bicycle paths, that we introduced in our previous sections to account for the striking mathematical analogy, noticed already by the authors in [2]. This will enable us to provide a simplified model for the border completion, in our next section. In Sec. 2, we modelled the visual cortex (V1) as the fiber bundle \(\mathbb{R}^{2}\times S^{1}\) over \(\mathbb{R}^{2}\). Now we want to relate this model with the configuration space \(Q\) of bicycle wheels described in section (1). We consider the Reeb vector field \(Z\) (3) on \(\mathbb{R}^{2}\times S^{1}\): \[Z(x,y,\theta)\,=\,-sin\theta\,\partial_{x}+cos\theta\,\partial_{y},\] Thanks to the action of the lateral geniculate nucleus, we can identify each image with a smooth function \(I\colon D\to\mathbb{R}\), for a receptive field \(D\subset\mathbb{R}^{2}\). We can therefore define the _orientation of a function_. **Definition 5.1**.: Let \(I\colon D\to\mathbb{R}\) be a smooth function, and \(reg(D)\subseteq D\) the subset of the regular points of \(I\). We define the _orientation map of \(I\)_ as \[\Theta\colon reg(D) \to S^{1}\] \[(x,y) \mapsto\Theta(x,y)\,=\,\mathrm{argmax}_{\theta\in S^{1}}\{Z( \theta)I(x,y)\}\] We call \(Z\) the _orientation vector field_. The map \(\Theta\) is in charge of reproducing the behavior of simple and complex cells, so that an oriented edge is assigned at each point. We have that \(\Theta\) is well defined. **Proposition 5.2**.: _Let \(I:D\longrightarrow\mathbb{R}\) be as above and \((x_{0},y_{0})\in D\) a regular point for \(I\). Then, we have the following:_ 1. _There there exists a unique_ \(\theta_{x_{0},y_{0}}\in S^{1}\) _for which the function_ \(\zeta_{x,y}:S^{1}\longrightarrow\mathbb{R}\)_,_ \(\zeta_{x,y}(\theta):=Z(\theta)\,I(x,y)\) _attains its maximum._ 2. _The map_ \(\Theta:\mathrm{reg}(D)\longrightarrow S^{1}\)_,_ \(\Theta(x,y)=\theta_{x,y}\) _is well defined and differentiable._ 3. _The set:_ \[\Phi=\{(x,y,\Theta(x,y))\in D\times S^{1}:\Theta(x,y)=\theta_{x,y}\}\] _is a regular submanifold of_ \(D\times S^{1}\)_._ Proof.: (1). Since \(\zeta_{x,y}\) is a differentiable function on a compact domain it admits maximum, we need to show it is unique. We can explicitly express: \[\zeta_{x,y}(\theta)=-\sin\theta\;\partial_{x}I+\cos\theta\;\partial_{y}I\] Since \((\partial_{x}I,\partial_{y}I)\neq(0,0)\) and it is constant, by elementary considerations, taking the derivative of \(\zeta_{x,y}\) with respect to \(\theta\) we see the maximum is unique. (2). \(\Theta\) is well defined by (1) and differentiable. (3). It is an immediate consequence of the implicit function theorem. Notice the following important facts: * the locality of the operator \(Z(\theta)\) mirrors the locality of the hypercolumnar anatomical connections; * its operating principle is a good description of the combined action of simple and complex cells (though different from their individual behaviour); We can then view a smooth contour as the level set of some smooth function \(I\colon D\to\mathbb{R}\); \(\Theta\) is the orientation of the countour and the orientation vector field \(Z\), by construction, is orthogonal to such contour, since \(\operatorname{argmax}_{\theta\in S^{1}}\{Z(\theta)I(x,y)\}\) occurs when \(Z\) is aligned with \(\nabla\,I\). On the other hand, as we can see from our Fig. 
2, \(Z\), here the Reeb vector, is orthogonal to \(v\), which is tangent to the trajectory of rear wheel \(b\) by its very construction. Hence the angle we use to steer the handle of the bicycle, brings a variation of the angle of the orientation vector field along rear wheel path. So we have: _Bicycle rear wheel paths described by the differential constraint of the contact structure with the Reeb vector \(Z\) coincide with the borders of an image detected in the visual cortex with orientation vector \(Z\)_ We report in the following table the dictionary between these two different yet intriguingly related dynamics. ## 6 Rear wheel path and border completion Once the analogy is established in the dictionary of the previous table, we can proceed and compute the geodesics of the subriemannian structure defined in Sec. 4 in the Hamiltonian formalism. They give at once both the rear wheel trajectory and the border completion curve, given initial and final position and orientation of the reconstructed edge or, respectively, of the bicycle rear wheel. We notice that, while in general Lagrangian and Hamiltonian geodesics differ, here since we are in the special situation of a fiber bundle \(SE(2)\longrightarrow\mathbb{R}^{2}\cong SE(2)/S^{1}\), they coincide (see [17] Ch. 1). Hence, with the Hamiltonian formalism, we retrieve the same geodesic up to a (differential) phase factor as in [7], hence strictly related to the treatment in [18] (Sec. 3) in the visual cortex context. Besides we obtain the same equations as the bicycle paths in [2], though the authors do not fully compute such paths for the rear wheel, but for the front one only (see also [16]). Since we have the same differential constraint and the same contact form \(\eta\), we have the same distribution \[\mathcal{D}\,=\operatorname{span}\{X=\partial_{\theta},Y=\cos\theta\partial_{ x}+\sin\theta\partial_{y}\}.\] This distribution is bracket generating and, choosing the only invariant sub-Riemannian metric on \(SE(2)\), we have \((Q,\mathcal{D},\langle,\rangle)\) sub-Riemannian manifold. Notice that both \(X\) and \(Y\) are left invariant vector fields for \(SE(2)\), so that our distribution is naturally invariant under the special euclidean group action. Furthermore, since the uniqueness of the sub-Riemannian invariant metric, our construction has this key natural invariance built-in and it appears more intrinsec than the equivalent descriptions in Sec. 3 (see [2, 16]). Following [17], we can define a cometric for every \(q\in\mathcal{E}\), \[\beta_{q}\colon T_{q}^{*}\mathcal{E} \to T_{q}\mathcal{E}\] \[p \mapsto\beta_{q}(p).\] such that \(\operatorname{Im}\beta_{q}\,=\,\operatorname{span}\{X(q),Y(q)\}\). Let us introduce the local coordinate system \((x,y,\theta,p_{x},p_{y},p_{\theta})\) on \(T^{*}\mathcal{E}\), where \((p_{x},p_{y},p_{\theta})\) are the local coordinates on the cotangent bundle corresponding to \((x,y,\theta)\), defined by writing any covector \(p\) as \(p\,=\,p_{x}\,dx\,+\,p_{y}\,dy\,+\,p_{\theta}\,d\theta\). 
We obtain the expression for the cometric in local coordinates \[\beta_{q}=\begin{pmatrix}\cos\theta&0\\ \sin\theta&0\\ 0&1\end{pmatrix}\] The subriemannian Hamiltonian associated with \(\beta\) is the functional \[H \colon T^{*}\mathcal{E} \to\mathbb{R}\] \[(q,p) \mapsto\frac{1}{2}\langle\beta_{q}(p),\beta_{q}(p)\rangle,\] and we know that we can express \(H\) as \[H\,=\,\frac{1}{2}(P_{X}^{2}+P_{Y}^{2}).\] Here, \(P_{X}\) and \(P_{Y}\) are the momentum functions of \(X\) and \(Y\): \[P_{X} \,=\,\cos\theta\,p_{x}+\sin\theta\,p_{y}\] \[P_{Y} \,=\,p_{\theta},\] where \(p_{x}=P_{\partial_{x}}\), \(p_{y}=P_{\partial_{y}}\), \(p_{\theta}=P_{\partial_{\theta}}\) coincide with the local coordinates for \(T^{*}_{q}\mathcal{E}\) defined above. Hence, in local coordinates, we can write \[H\,=\,\frac{1}{2}[(\cos\theta\,p_{x}\,+\,\sin\theta\,p_{y})^{2}\,+p_{\theta}^{2}].\] We know that to each Hamiltonian functional we can associate a Hamiltonian vector field, defined by the Hamilton equations. Furthermore, we know that \(\dot{f}\,=\,\{f,H\}\), for any smooth function \(f\) on the cotangent bundle. We define the auxiliary functions \[p_{1} \,=\,P_{X}\,=\,\cos\theta\,p_{x}\,+\,\sin\theta\,p_{y}\] \[p_{2} \,=\,P_{Y}\,=\,p_{\theta}\] \[p_{3} \,=\,P_{Z}\,=\,-\sin\theta\,p_{x}\,+\,\cos\theta\,p_{y}.\] We can obtain the Hamilton equations letting \(f\) vary over the coordinate functions \((x,y,\theta)\) and the auxiliary functions on the cotangent bundle \((p_{1},p_{2},p_{3})\). Before going on with our derivation of the Hamilton equations, let us note that \[\{P_{X},P_{Y}\}\,=\,P_{Z}\,=\,-P_{[X,Y]}.\] We can now easily calculate the Hamilton equations for the position coordinates \((x,y,\theta)\) \[\dot{q}^{i}=\{q^{i},H\}\longrightarrow\begin{cases}\dot{x}=\cos\theta\,p_{1}\\ \dot{y}=\sin\theta\,p_{1}\\ \dot{\theta}=p_{2}\end{cases}\] and for \((p_{1},p_{2},p_{3})\) \[\dot{p}_{i}=\{p_{i},H\}\longrightarrow\begin{cases}\dot{p}_{1}=p_{3}p_{2}\\ \dot{p}_{2}=-p_{3}p_{1}\\ \dot{p}_{3}=-p_{1}p_{2}\end{cases}\] Since the Hamiltonian functional assumes a constant value along the Hamiltonian flow (i.e., along the solutions to the Hamilton equations), we can write \[E\,=\,p_{1}^{2}+p_{2}^{2},\] where \(E/2\) is the constant value of the Hamiltonian. Then, we can introduce an auxiliary variable \(\gamma(t)\) such that \[\begin{cases}p_{1}&=\,\sqrt{E}\,\sin(\frac{\gamma}{2})\\ p_{2}&=\,\sqrt{E}\,\cos(\frac{\gamma}{2})\\ p_{3}&=\frac{1}{2}\,\dot{\gamma},\end{cases}\] and therefore we can express the other variables as \[\begin{cases}\dot{x}&=\,\sqrt{E}\,\sin(\frac{\gamma}{2})\,\cos\theta\\ \dot{y}&=\,\sqrt{E}\,\sin(\frac{\gamma}{2})\,\sin\theta\\ \dot{\theta}&=\,\sqrt{E}\,\cos(\frac{\gamma}{2}).\end{cases} \tag{4}\] The variable \(\gamma\) satisfies a pendulum-like differential equation \[\ddot{\gamma}\,+\,E\,\sin\gamma\,=\,0,\] which is not analytically solvable. Hence, in order to give a graphical representation of the geodesic curves, we need to resort to numeric integration (see also [16]). The solutions of these differential equations are the lifts in \(\mathcal{E}\) of perceived borders. In Fig. 3, we show the projections on \(D\) of some of the solutions with energy fixed to a value \(E=0.2\), obtained by varying the initial value of the parameter \(\gamma\). A solution to the geodesic equations that joins the points \((0,0,0)\) and \((0.01,0.005,\frac{\pi}{3})\) is shown in Fig. 4, mimicking the border completion mechanism in the brain, given two boundary inducers, i.e., given initial and final coordinates \((x,y,\theta)\).
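The pendulum equation above has no closed-form solution, so curves like those of Fig. 3 and Fig. 4 are obtained numerically. The following is a minimal sketch of such an integration (our own, with illustrative initial data rather than the exact values used for the figures); matching a prescribed endpoint as in Fig. 4 would additionally require a shooting search over the initial values of \(\gamma\) and \(\dot{\gamma}\).

```python
# Sketch: numerical integration of the geodesic system (4) together with the
# pendulum equation for gamma; the initial data are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

E = 0.2  # fixed Hamiltonian energy level, as in Fig. 3

def rhs(t, state):
    x, y, th, gamma, gamma_dot = state
    s = np.sqrt(E)
    return [s * np.sin(gamma / 2) * np.cos(th),   # x'
            s * np.sin(gamma / 2) * np.sin(th),   # y'
            s * np.cos(gamma / 2),                # theta'
            gamma_dot,                            # gamma'
            -E * np.sin(gamma)]                   # pendulum equation for gamma

# one geodesic through (x, y, theta) = (0, 0, 0); gamma(0) parametrizes the family
sol = solve_ivp(rhs, (0.0, 40.0), [0.0, 0.0, 0.0, 1.0, 0.0],
                dense_output=True, max_step=0.05)
x_proj, y_proj = sol.y[0], sol.y[1]  # projection of the lift onto the image domain D
```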
Since we have a closed form for the vector field tangent to our solutions (see eq. (4)), we can obtain an expression allowing us to make a direct comparison with the strictly related treatments in [7, 11] and also with Euler's elastica as Mumford introduces it for the border completion problem [18] (see Sec. 3). The curvature of the solution projection \(\sigma=(x,y)\) is obtained as: \[k_{\sigma}(t)=\frac{\|\dot{\sigma}(t)\times\ddot{\sigma}(t)\|}{\|\dot{\sigma}(t)\|^{3}}=\frac{|\dot{\theta}|}{\sqrt{\dot{x}^{2}+\dot{y}^{2}}}=\left|\cot\frac{\gamma}{2}\right|\] We notice from (4) that: \[\dot{\Sigma}=\dot{x}\partial_{x}+\dot{y}\partial_{y}+\dot{\theta}\partial_{\theta}=\sin\frac{\gamma}{2}(X_{1}+k_{\sigma}X_{2}),\qquad X_{1}=\cos\theta\partial_{x}+\sin\theta\partial_{y},\quad X_{2}=\partial_{\theta}\] We compare with the expression of \(\dot{\Sigma}\) appearing in [11], which is equivalent to the treatment in [7]: \[\dot{\Sigma}=X_{1}+kX_{2} \tag{5}\] for the same vector fields \(X_{1}\), \(X_{2}\).

Figure 3: Solutions of the geodesic equations for some values of \(\gamma\), projected onto the domain \(D\). The energy is fixed to \(E=0.2\).

Figure 4: A solution to the geodesic equations that joins \((0,0,0)\) and \((0.01,0.005,\frac{\pi}{3})\) (on the right), and its projection onto the domain \(D\) (on the left).

We see the appearance of a _phase factor_ compatible with the treatments in [16, 15], due essentially to the fact that we look at the rear wheel, while the solutions of the equation (5) correspond to the front one, leading to the elastica curve (this is a remark appearing also in [2]). The Hamiltonian energy minimization along a curve \(\Sigma\) can, therefore, be expressed as a function of the projection curvature as follows: \[\mathcal{E}(\Sigma)=\int_{\Sigma}E\,dt=\int_{\Sigma}E\sin^{2}\frac{\gamma(t)}{2}\,\left[1+k_{\sigma}^{2}(t)\right]\,dt\] again, the same as in [18], up to the multiplicative factor \(\sin^{2}\frac{\gamma(t)}{2}\) as we commented above. We also mention that, though in general the question of sub-Riemannian geodesics may have different answers in the Hamiltonian and Lagrangian formalisms, since here we have a principal bundle, due to the V1 modelling, there is a complete equivalence of the results obtained in the two formalisms (see [17]). To conclude, we note the similarity of the geodesic curves depicted in Fig. 3 with the local _association fields_ from [6] shown in Fig. 5. Field, Hayes and Hess investigate, through a set of experiments, how the relative alignment of neighboring oriented elements is related to the perception of continuity in the human brain. The information detected by single orientation-selective cells is supposed to propagate locally through some _long-range connections_ in an orientation- and position-specific modality ([8], [13]). According to our model, the natural interpretation of the local association field is the representation of the projection onto \(D\) of a family of integral curves of the Hamiltonian vector field, i.e., the solutions to the geodesic equations corresponding to the joint constraints of position and orientation.

Figure 5: The _association field_ (on the left): the rays extending from the ends of the central oriented element represent the optimal orientations at different positions. On the right, the specific rules of alignment are represented.
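For completeness, the orientation map of Definition 5.1 can also be evaluated directly on a discretized image by sampling \(\theta\) and taking finite-difference gradients. The sketch below (with a hypothetical toy image, and with image rows identified with the \(y\) coordinate) makes the argmax explicit.

```python
# Sketch: discrete evaluation of the orientation map of Definition 5.1 on a
# sampled image I(x, y); the image and the discretization are hypothetical.
import numpy as np

def orientation_map(I, n_angles=360):
    """Theta(x, y) = argmax_theta [ -sin(theta)*dI/dx + cos(theta)*dI/dy ]."""
    Iy, Ix = np.gradient(I)  # rows correspond to y, columns to x
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    # response of the orientation vector field Z(theta) at every pixel and angle
    resp = (-np.sin(thetas)[:, None, None] * Ix[None]
            + np.cos(thetas)[:, None, None] * Iy[None])
    return thetas[np.argmax(resp, axis=0)]

# toy image whose level sets are concentric circles
xs, ys = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
theta_map = orientation_map(xs**2 + ys**2)
```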
## 7 Conclusions

Our comparison among the various V1 models for border completion available in the literature shows that a simple mathematical model, taking advantage of the analogies with bicycle paths, allows us to obtain geodesics which are strictly related to the ones in [11, 7, 18]. Our treatment in the Hamiltonian formalism is natural and establishes a dictionary, through low-dimensional subriemannian geometry, between the items of the visual path model and those of the bicycle path model of [2].

## 8 Acknowledgements

R. Fioresi wishes to thank Prof. Alekseevski, Prof. Citti, Prof. Sarti, Prof. Latini and Prof. Breveglieri for helpful discussions and comments.
2309.11175
Testing frequency distributions in a stream
We study how to verify specific frequency distributions when we observe a stream of $N$ data items taken from a universe of $n$ distinct items. We introduce the \emph{relative Fr\'echet distance} to compare two frequency functions in a homogeneous manner. We consider two streaming models: insertions only and sliding windows. We present a Tester for a certain class of functions, which decides if $f $ is close to $g$ or if $f$ is far from $g$ with high probability, when $f$ is given and $g$ is defined by a stream. If $f$ is uniform we show a space $\Omega(n)$ lower bound. If $f$ decreases fast enough, we then only use space $O(\log^2 n\cdot \log\log n)$. The analysis relies on the Spacesaving algorithm \cite{MAE2005,Z22} and on sampling the stream.
Claire Mathieu, Michel de Rougemont
2023-09-20T09:44:57Z
http://arxiv.org/abs/2309.11175v1
# Testing frequency distributions in a stream ###### Abstract We study how to verify specific frequency distributions when we observe a stream of \(N\) data items taken from a universe of \(n\) distinct items. We introduce the _relative Frechet distance_ to compare two frequency functions in a homogeneous manner. We consider two streaming models: insertions only and sliding windows. We present a Tester for a certain class of functions, which decides if \(f\) is close to \(g\) or if \(f\) is far from \(g\) with high probability, when \(f\) is given and \(g\) is defined by a stream. If \(f\) is uniform we show a space \(\Omega(n)\) lower bound. If \(f\) decreases fast enough, we then only use space \(O(\log^{2}n\cdot\log\log n)\). The analysis relies on the Spacesaving algorithm [18, 20] and on sampling the stream. ## 1 Introduction We study streams of data items and the distribution \(g\) of frequencies where \(g(i)\) is the number of occurrences of the \(i\)th most frequent item in the stream. Here, we consider a stream of length \(N\) of elements from a domain \(U\) of size \(n\) and we want to approximately verify whether the frequency \(g\) of the stream is close to a fixed distribution \(f\). We may also look at two different streams and ask whether their frequencies \(g_{1}\) and \(g_{2}\) are close to each other. In practice, of particular interest are settings with single-pass streams and very small memory [17]. What kind of properties can we hope to verify if we only allow poly-logarithmic space? We first prove an \(\Omega(n)\) space lower bound on the space of the Tester, theorem 1, when \(f\) is the uniform distribution. We therefore need some additional conditions on the frequency function \(f\). The approximation follows the Property Testing framework, where we use the _relative Frechet distance_ between two frequency functions \(f\) and \(g\) as a new measure of distance. Given a stream and a frequency function \(f\) which satisfies a certain weak continuity property and is decreasing fast enough, we decide in space \(O(\log^{2}n\cdot\log\log n)\) whether the frequency \(g\) defined by the stream is close to \(f\) for the relative Frechet distance. **Frequency functions.** There are two different ways to study frequency functions. Either the function is from \(U\) to \(\mathbf{N}_{+}\) and gives the frequency of each item, in which case the problem is easy; or the function \(f\) is from \(\{1,2,..n\}\) to \(N\) such that \(f(i)\) is the frequency of the \(i\)-th most frequent item; we take the latter viewpoint. A _frequency function_\(f\) is a non-negative integer-valued function over a set of elements such that \(f(i)\) is the number of occurences of the \(i\)th most frequent element. The problem is harder as we don't know which element of \(U\) is the \(i\)-th most frequent, and, for example, the two streams \(aaabba\) and \(bbbaab\) that are identical up to permuting the items have identical frequency functions even though \(b\) has \(2\) occurrences in the first stream and \(4\) occurrences in the second stream. **Relative Frechet distance.** What is the relative Frechet distance? The classical (discrete) Frechet distance between two discrete distributions, viewed as sequences of points \(\{(i,f(i))\}\) and \(\{(i,g(i))\}\) is an absolute distance. It is the minimum distance of a coupling between the two sequences. 
The discrete Frechet distance between discrete curves has been studied, in particular in computational geometry, including in the streaming context [8, 12], but with a different oracle model. We generalize this distance to a _relative Frechet distance_: the distance of the coupling must preserve within \((1+\varepsilon_{1})\) the distance on the \(x\)-axis and within \((1+\varepsilon_{2})\) the distance on the \(y\)-axis. **Additional assumptions.** The weak continuity property, called \(\varepsilon\)-step compatibility, assumes that the frequency function \(f\) may have discontinuities, i.e. large drops, but no double discontinuities. Points which are \(\varepsilon\)-close on the \(x\)-axis are also close on the \(y\)-axis. We combined two well known techniques: the Spacesaving algorithm [18, 20] which deterministically selects the most frequent items approximately and the Minhash technique which approximates the low frequencies probabilistically. Our main results are: * A link between the relative Frechet distance of two discrete functions which are step-compatible, and a separating rectangle, theorem 2, * A streaming Tester for a step compatible frequency function and the relative Frechet distance, when \(f\) is \(\gamma\)-decreasing. The Tester uses \(O(\log^{2}n\cdot\log\log n)\) space, theorem 1. In the second section, we present our main definitions. In the third section, we define the classical distributions with a compact representation, the Spacesaving algorithms whose fine analysis, lemma 11, is in the appendix A.2. In the fourth section, we introduce the relative Frechet distance and the proof of theorem 2 is in the appendix B. In the fifth section we present the streaming Tester first for the insertion only model, then for the sliding window model. ### Motivations and comparison with other approaches Problems that are hard in the worst-case may be much simpler for inputs which follow specific distributions, for example power law distributions. It is therefore important to verify if some given data follow certain distributions, when the data arrive in a stream. The area of _Distribution testing_[5] studies this type of problems in general. We first work in the insertion model, and then consider the _sliding window_ model with insertions and deletions outside a window. We will study the turnstile model [19] with insertions and deletions for the bounded deletions model1 from [14] in some later work2. Notice that the sliding window model is not a bounded deletion model, as \(I/D\) tends to \(1\) when \(I\) goes to \(\infty\). In [6], the verification of properties of a stream is studied with streaming interactive proofs. In [13], the verification is done efficiently thanks to prior work done by annotating the stream in advance in preparation for the task. In our setting, we use the Property Testing framework without any annotations or other additional prior information. We propose this setting for the verification of the distribution of frequent items. A standard problem in statistics is to check if some observed data, i.e. in the insertion only model, approximately fit some statistics \(F\) where \(F(e_{i})\) is the frequency of the element \(e_{i}\). Let \(G\) be the frequency of the elements of the observed data. The standard \(\chi^{2}\) test computes: \[\chi^{2}(F,G)=\sum_{i=1}^{n}(F(e_{i})-G(e_{i}))^{2}/F(e_{i})\] If \(\chi^{2}(F,G)\leq a\), we know that \(G\) follows \(F\) with confidence \(1-\alpha\), for example \(a=11,07\) and \(1-\alpha=95\%\). 
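As a minimal illustration (with toy frequency vectors of our own choosing), the statistic and the critical value \(a\) can be computed as follows; for 5 degrees of freedom at the 95% level one recovers \(a\approx 11.07\).

```python
# Sketch: the chi-squared statistic above, for observed frequencies G against
# expected frequencies F (the arrays are illustrative toy data).
import numpy as np
from scipy.stats import chi2

F = np.array([50.0, 30.0, 10.0, 6.0, 3.0, 1.0])   # expected frequencies
G = np.array([47.0, 33.0, 11.0, 5.0, 3.0, 1.0])   # observed frequencies

stat = np.sum((F - G) ** 2 / F)
a = chi2.ppf(0.95, df=len(F) - 1)   # about 11.07 for 5 degrees of freedom
print(stat, a, stat <= a)           # accept "G follows F" at 95% confidence if True
```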
In this setting, [10] gives an algorithm which uses space \(O(\log N.\sqrt{N})\) to decide if \(F\) and \(G\) are close or far for the \(\chi^{2}\) test. In fact, the AMS-sketch [2] can be adapted and requires only \(O(\log n)\) space. In this paper, we study the case when the frequency function \(g\) is given by a stream of \(N\) data items and we want to test if \(g\) approximately follows the frequency function \(f\) over the domain \(\{1,2,...n\}\), in polylogarithmic space and without necessarily knowing the exact value of \(n\). For example \(f\) might be a Zipf distribution. If we observe sliding windows of the stream, the frequency \(g\) may be stable in each window, although the most frequent items change over time. Thus we are interested in making restrictive but reasonable assumptions that will imply that we can test in polylogarithmic space. We turn to a measure of proximity between distributions that we call _relative Frechet distance_. We use the Spacesaving algorithm [18] with additional hypothesis on the function \(g\), to be step-compatible and \(\gamma\)-decreasing, in order to obtain relative errors on the frequencies, as in [7] to approximate the rank of an item. Our main result is a Tester when \(f\) follows some continuity property for the relative Frechet distance. If \(f\) satisfies a decreasing condition, the Tester uses \(O(\log^{2}n\cdot\log\log n)\) space. ## 2 Definitions and Main Result The SpaceSaving algorithm was introduced in [18] to compute estimates of th frequencies of the \(k\) most frequent elements in a stream of elements from a universe of size \(n\), using a table \(T\) with \(K\leq n\) entries. Each table entry consists of an element and a counter (plus some auxiliary information), which is a rough estimate of the frequency of the element in the stream. The table is kept sorted by counters: \(c_{1}\geq c_{2}\geq\cdots c_{K}\). The SpaceSaving algorithm is straightforward: if the next element \(e\) of the stream is in \(T\), then the algorithm increments the corresponding counter; otherwise, it substitutes \(e\) for the element whose counter is minimum (in position \(K\)), and increments the corresponding counter. Let \(\mbox{count}(e)\) be the value of the counter of elment \(e\). See Appendix A for details. The following additive error result was proved in the original paper. (Note that \(f_{i}\) is the \(i\)th largest frequency whereas \(c_{i}\) is the \(i\)th largest counter, so they count occurences of different elements in general). **Lemma 1**.: _[_18_]_ _Let \(K\) denote the size of the table, \(N\) denote the length of the stream, and \(c_{K}\) the variable defined in the Space Saving algorithm. Then for every \(i\leq K\) we have \(|f_{i}-c_{i}|\leq c_{K}\) and for every \(i>K\) we have \(f_{i}\leq c_{K}\); moreover, \(c_{K}\leq N/K\)._ Here, we would like to leverage the power of the SpaceSaving algorithm to test whether the _entire_ distribution of frequencies of the stream approximates a given frequency distribution, with small _relative_ error. For example, this can be used to check whether a stream of graph edges defines a graph whose degree sequence is close to a predicted degree sequence. First, we need to specify what we mean by "close". To that end, we first define a relative distance between points. **Definition 1**.: _Let \(0<\varepsilon_{1},\varepsilon_{2}<1\). We say that two non-negative numbers \(a,b\) are \(\varepsilon\)-close, and denote it by \(a\simeq_{\varepsilon}b\), if \(|a-b|\leq\varepsilon\cdot\min\{a,b\}\). 
We say that two points \(p=(x,y)\) and \(p^{\prime}=(x^{\prime},y^{\prime})\) are \((\varepsilon_{1},\varepsilon_{2})\)-close, and denote it by \(p\simeq_{(\varepsilon_{1},\varepsilon_{2})}p^{\prime}\), if \(x\simeq_{\varepsilon_{1}}x^{\prime}\) and \(y\simeq_{\varepsilon_{2}}y^{\prime}\)._ ### Algorithm 1 With that, we can describe our streaming algorithm to test whether the frequency distribution \(g\) defined by the elements of a stream is close to a specified frequency distribution \(f\). Let \(z_{i}=(1+\varepsilon_{1}^{2})^{i}\) for \(i\geq 1\). We first define a partition of \(\{1,2,\ldots,n\}\) for the frequency function \(f\) into _Boxes_\([\ell_{j},r_{j}]\) in Lemma 3 and only consider the \(z_{i}\) which are not close to the Boxes endpoints. The streaming Algorithm 1 consists of the following three steps in parallel for all \(\lceil\log_{1+\varepsilon_{1}}n\rceil\) distinct values of \(i\): 1. We sample each element of the stream \(s\) to define a substreams \(s_{i}\). The sample probability is chosen so that (assuming that the frequency distribution \(g\) of the elements of stream \(s\) equals \(f\)), in expectation substream \(s_{i}\) contains \(\Theta(1/\varepsilon_{1}^{2})\) elements whose number of occurences is greater than \(f(z_{i})\). 2. We consider two cases, the case when \(\varepsilon_{2}f(z_{i})\leq f(n)\) and we run the SpaceSaving algorithm with a table size \(K_{i}=O(h(\gamma,\varepsilon_{1},\varepsilon_{2}).\log n\cdot\log\log n)\), and the case when \(\varepsilon_{2}f(z_{i})>f(n)\) and we do an exact counting. The two cases are determined by a value \(t_{0}=n/\gamma^{\log(1/\varepsilon_{2})}\). In the first case, \(z_{i}\leq t_{0}\) and in the second case \(z_{i}>t_{0}\). Let \(r\) be the expected number of elements of \(s_{i}\) whose number of occurences is greater than \(f(z_{i})\), and let \(c_{r}\) be the corresponding value of the counter in the table. 3. We apply a simple Coherence test to check whether point \((z_{i},c_{r})\) is close to \((t,f(t))\) for some \(t\). Finally, the algorithm accepts with probability \(1-\delta\) if and only if the Coherence test succeeds for every substream \(s_{i}\). The frequency function \(g\) of the stream \(s\) and the reference frequency \(f\) are both from \(\{1,2,..n\}\) to \(N\). **Notation 1**.: _Let \(K_{i}\) denote the size of the table used by Algorithm 1 for the substream \(s_{i}\). We set_ \[K_{i}=\frac{4.z_{i}}{\varepsilon_{2}.a_{i}}\cdot\frac{2(\gamma-1)}{2-\gamma} \cdot\frac{\log n}{\delta}\cdot(1+\varepsilon_{1})=O(\log n\cdot\log\log n),\] _where \(z_{i}=(1+\varepsilon_{1}^{2})^{i}\), \(a_{i}=\varepsilon_{1}^{2}z_{i}/\log\log n\), \(\gamma\) is such that \(f\) and \(g\) are \(\gamma\)-decreasing (see Definition 6), \(\varepsilon_{1},\varepsilon_{2}\) are the Frechet parameters (see Definition 3), and \(\delta\) is the desired error probability of the Tester (see Definition 4). Let \(U\) be the set of elements \(e\) and \(\mathit{occ}(e)\) is the number of occurences of \(e\). For each stream \(s_{i}\), the counter \(c_{r}\) of the Spacesaving algorithm is compared with \(f(z_{i})\) where \(r=\lceil z_{i}/a_{i}\rceil\)._ Algorithm 1 gives the complete description. ``` 1Tester Algorithm 1A\((\varepsilon_{1},\varepsilon_{2},\delta\); step-compatible function \(f)\) Data: a stream \(s\) from a universe \(\{e_{1},e_{2},\ldots,e_{n}\}\). Compute the decomposition of \([1,n]\) into Boxes according to Lemma 3 for \(f\). 
for each \(i=1,2,\ldots,\lceil\log_{1+\varepsilon_{1}}n\rceil\) : do 2\(z_{i}\leftarrow(1+\varepsilon_{1}^{2})^{i}\) ; \(K_{i}\gets O(\log n\cdot\log\log n)\) ; If \(z_{i}\) is not \(\varepsilon_{1}^{2}\)-close to a Box endpoint then: 31. Defining substreams ; \(a_{i}\leftarrow\Theta(\varepsilon_{1}^{2}.z_{i}/\log\log n)\) ; \(h_{i}\leftarrow\) uniform hash function over \([1,a_{i}]\) ; Let \(s_{i}\) denote the substream consisting of those elements \(e\) s.t. \(h_{i}(e)=1\) ; 32. Dealing with substreams \(s_{i}\) in parallel ; if\(f(n)<\varepsilon_{2}.f(z_{i})\)then on substream \(s_{i}\), run SpaceSaving with a table \(T_{i}\) of size \(K_{i}\) else on substream \(s_{i}\), run exact counting algorithm with a table \(T_{i}\) of size equal to the number of distinct elements in \(s_{i}\). end if 3. Coherence Test ; \(r\leftarrow\lceil z_{i}/a_{i}\rceil\) ; \(c_{r}\leftarrow\) the counter at position \(r\) of table \(T_{i}\) ; if\(c_{r}\not\simeq_{3,\varepsilon_{2}}f(z_{i})\)then break and output NO end if end if output YES ``` **Algorithm 1**The Streaming Tester ### Analysis of Algorithm 1 What does this algorithm accomplish? Before we answer that question, we first need to define what it means for two functions to be relatively close. We thus introduce the notion of _relative Frechet distance_ between two functions. The (absolute) Frechet distance is based on the notion of _co_upling, defined in [9] and which we now recall. Here we also define the _relative length_ of a coupling. **Definition 2**.: _Let \(f\) and \(g\) be two functions with domain \(\{1,\cdots,n\}\). For \(1\leq t\leq n\), consider the points \(u_{t}=(t,f(t))\) and \(v_{t}=(t,g(t))\). A_ coupling _between \(f\) and \(g\) is a sequence \((u_{a_{1}},v_{b_{1}}),(u_{a_{2}},v_{b_{2}}),\cdots,(u_{a_{m}},v_{b_{m}})\) such that \(a_{1}=1,b_{1}=1,a_{m}=n,b_{m}=n\), and for all \(i\) we have \(a_{i+1}\in\{a_{i},a_{i}+1\}\) and \(b_{i+1}\in\{b_{i},b_{i}+1\}\). The_ relative length _of the coupling is the minimum \(\varepsilon_{1},\varepsilon_{2}\) such that for all \(i\) we have \(u_{a_{i}}\simeq_{(\varepsilon_{1},\varepsilon_{2})}v_{b_{i}}\)._ We now define the relative Frechet distance. **Definition 3**.: _(Relative Frechet distance) Let \(f\) and \(g\) be two functions with domain \(\{1,\cdots,n\}\). We say that \(f\) and \(g\) are \((\varepsilon_{1},\varepsilon_{2})\)-close, denoted \(f\sim_{(\varepsilon_{1},\varepsilon_{2})}g\), if there exists a coupling of relative length at most \(\varepsilon_{1},\varepsilon_{2}\)._ Note that unlike the absolute Frechet distance, the relative Frechet distance is invariant by scaling. The relation \(f\sim_{(\varepsilon_{1},\varepsilon_{2})}g\) is reflexive and symmetric. The relative Frechet distance differs from the absolute Frechet distance. For example, consider two families of step functions, depending on an integer parameter \(a\): \[f(i)=\begin{cases}2a&\text{if }i\leq 10a\\ a&\text{if }i>10a\end{cases}\qquad g(i)=\begin{cases}2a&\text{if }i\leq 11a\\ a&\text{if }i>11a\end{cases} \tag{1}\] The absolute Frechet distance between \(f\) and \(g\) is \(a\) which is arbitrary large, whereas the relative Frechet distance is \(\varepsilon=10\%\), independent of \(a\). The notion of a Property Tester goes back to [4] and the streaming version to [11]. We use the tolerant version of a Tester. **Definition 4**.: _Let \(\varepsilon_{1},\varepsilon_{2},\delta\in(0,1)\). 
A streaming \(\delta\)-_Tester_ is a streaming algorithm \(A\) which, given a function \(f\) over \(\{1,2,\cdots,n\}\), takes as input a stream of elements from a universe of size \(n\) defining a frequency function \(g\) such that \(g(j)\) is the number of occurrences of the \(j\)th most frequent element in the stream and:_ * _if_ \(f=g\) _then_ \(A\) _accepts with probability at least_ \(1-\delta\)_; and_ * _if_ \(g\) _is_ \((10\varepsilon_{1},10\varepsilon_{2})\)_-far from_ \(f\) _for the relative Frechet distance then_ \(A\) _rejects with probability at least_ \(1-4\delta\)_._ A more general _Tolerant \(\delta\)-Tester_ replaces the first condition with the tolerant version: if \(g\) is \((\varepsilon_{1}/10,\varepsilon_{2}/10)\)-close to \(f\) for the relative Frechet distance then \(A\) accepts with probability at least \(1-\delta\). We want Algorithm 1 to be a streaming \(\delta\)-Tester. For that, we need two assumptions on the frequency distributions being tested: they must be step-compatible and \(\gamma\)-decreasing, two notions that we now define. **Definition 5**.: _(Rectangle and Step compatibility)._ _Let \(0<\varepsilon_{1},\varepsilon_{2}<1\). An \((\varepsilon_{1},\varepsilon_{2})\)-rectangle is a set \(R\subseteq[1,n]\times[0,\infty]\) with bottom left corner \((x,y)\) and top right corner \((x(1+\varepsilon_{1}),y(1+\varepsilon_{2}))\). A function \(f\) with domain \(\{1,\cdots,n\}\) is \((\varepsilon_{1},\varepsilon_{2})\)-step-compatible if for every \(t\), \(1\leq t\leq n\), there exists an \((\varepsilon_{1},\varepsilon_{2})\)-rectangle \(R\) containing \((t,f(t))\) and all the points of \(f\) within the horizontal span of \(R\)._ Zipf distributions assume \(f_{i}=\frac{c}{i^{\alpha}}\) for \(\alpha>0\), and power laws assume \(\alpha>1\). We ignore rounding problems as each \(f_{i}\) is an integer value. Power laws and Zipf distributions are \((\varepsilon,\varepsilon^{\prime})\)_-step-compatible_ whereas the geometric distribution is not _step-compatible_, as it has large consecutive discontinuities. **Lemma 2**.: _If \(f\) is the frequency function of a Zipf distribution of parameter \(\alpha\), then \(f\) is \((\varepsilon/\alpha,\varepsilon)\)-step-compatible._ Proof.: Let us find \(j>i\) such that \(f(j)\simeq f(i)/(1+\varepsilon)\). We have: \[f(j)=\frac{c}{j^{\alpha}}\simeq\frac{c}{i^{\alpha}.(1+\varepsilon)}\] Then \(j\simeq i.(1+\varepsilon)^{1/\alpha}\simeq i.(1+\varepsilon/\alpha)\). **Lemma 3**.: _(Step-compatible property)._ _Let \(f\) be an \((\varepsilon_{1},\varepsilon_{2})\)-step-compatible frequency function. Then there exists a partition of \(\{1,2,\dots,n\}\) into Boxes \([\ell_{j},r_{j}]\) such that for all \(j\):_ * \(\ell_{j+1}>(1+\varepsilon_{1})\ell_{j}\)_; and_ * \(f(\ell_{j})\leq(1+4\varepsilon_{2})f(r_{j})\)_._ Proof.: The intervals are defined in a 2-step process. The first step is greedy: let \((x_{i})_{i\geq 1}\) denote the sequence of distinct values of \(\lceil(1+\varepsilon_{1}/3)^{i}\rceil\) and \(y_{i}=x_{i+1}-1\) (or \(y_{i}=n\) if \(i\) is the last term of the sequence). Using the fact that \(f\) is \((\varepsilon_{1},\varepsilon_{2})\)-step-compatible, let \(R_{i}\) denote the \((\varepsilon_{1},\varepsilon_{2})\) rectangle containing \((x_{i},f(x_{i}))\) and note that \(R_{i}\) must contain \((x_{i+1},f(x_{i+1}))\) or \((x_{i-1},f(x_{i-1}))\) (otherwise its relative horizontal span would be less than \((1+\varepsilon_{1}/3)^{2}<1+\varepsilon_{1}\)), so it intersects \(R_{i-1}\) or \(R_{i+1}\). 
Extract a maximal subsequence \(R_{i_{1}},R_{i_{2}},R_{i_{3}},\cdots\) of \(R_{i}\)'s containing \(R_{1}\) and among which no two intersect. The sequence \(\ell_{j}\) then consists of the left endpoints of the rectangles in that subsequence. Finally, we set \(r_{j}=\ell_{j+1}-1\) (except that we set \(r_{j}=n\) for the last interval). Each interval \([\ell_{j},r_{j}]\) contains at least the horizontal span of a rectangle \(R_{i_{j}}\) of the subsequence, so the first property holds: \(\ell_{j+1}>(1+\varepsilon_{1})\ell_{j}\). Consider the rightmost rectangle \(R_{k}\) that intersects \(R_{i_{j}}\), and the leftmost rectangle \(R_{k^{\prime}}\) that intersects \(R_{i_{j}}\). All the points \((t,f(t))\) with \(\ell_{j}\leq t\leq r_{j}\) are in the horizontal span of \(R_{k^{\prime}}\cup R_{i_{j}}\cup R_{k}\). The vertical span is therefore at most that of \(3\)\((\varepsilon_{1},\varepsilon_{2})\) rectangles, i.e. \(f(\ell_{j})\leq(1+\varepsilon_{2})^{3}f(r_{j})<(1+4\varepsilon_{2})f(r_{j})\). **Definition 6**.: _(\(\gamma\)-decreasing) Let \(\gamma>1\). A non-increasing function \(f\) with domain \(\{1,\cdots,n\}\) is \(\gamma\)-decreasing if for all \(t\) such that \(1\leq\gamma.t\leq n\):_ \[f([\gamma.t])\leq f(t)/2\] Notice that Zipf distributions are \(\gamma\)-decreasing. We detail some key properties of step-compatible functions in section 3.1 and of \(\gamma\)-decreasing functions in section 3.2. We then obtain the main result for the Insertion model: **Theorem 1**.: _Let \(\varepsilon_{1},\varepsilon_{2},\delta\), a frequency function \(f\) and a stream \(s\) with insertions only be given. If the distributions \(f\) and \(g\) are \((3\varepsilon_{1},\varepsilon_{2})\)-step-compatible and \(\gamma\)-decreasing then Algorithm \(A(s,\varepsilon_{1},\varepsilon_{2},f)\) is a streaming \(4\delta\)-Tester that uses space \(O(\log^{2}n\cdot\log\log n)\)._ Properties of the Step-compatible and \(\gamma\)-decreasing functions The relation \(\simeq_{\varepsilon}\) is reflexive and symmetric and satisfies a variant of the triangle inequality: \(a\simeq_{\varepsilon}b\) and \(b\simeq_{\varepsilon^{\prime}}c\) imply that \(a\simeq_{(\varepsilon+\varepsilon^{\prime}+\varepsilon\varepsilon^{\prime})}c\). Indeed, the largest gap between \(a,c\) is when the \(a<b<c\) and the error is: \[(b-a)+(c-b)\leq\varepsilon.a+\varepsilon.b\leq\varepsilon.a+\varepsilon^{ \prime}(a+\varepsilon.a)\leq(\varepsilon+\varepsilon^{\prime}+\varepsilon \varepsilon^{\prime})a=((1+\varepsilon)(1+\varepsilon^{\prime})-1)a.\] **Lemma 4**.: _Let \(p_{j}=(x_{j},y_{j})\) be a sequence of \(j_{0}\) points such that \(p_{j}\simeq_{(\varepsilon_{j},\eta_{j})}p_{j+1}\) for \(j=1,2,\ldots j_{0}-1\). Then_ \[p_{1}\simeq_{(\prod_{1\leq j\leq j_{0}}(1+\varepsilon_{j})-1,\prod_{1\leq j \leq j_{0}}(1+\eta_{j})-1)}p_{j_{0}}.\] _If \(\sum_{j}\varepsilon_{j}<1\) and \(\sum_{j}\eta_{j}<1\) then_ \[p_{1}\simeq_{(2\sum_{1\leq j\leq j_{0}}\varepsilon_{j},2\sum_{1\leq j\leq j_{ 0}}\eta_{j})}p_{j_{0}}.\] Proof.: Induction on \(j_{0}\) and standard approximation. ### Properties of step-compatible functions, and Separating rectangles We will show that functions that are far according to the relative Frechet distance are separated by a certain type of rectangle defined as follows. 
**Definition 7**.: _We say that such a rectangle separates two functions \(f\) and \(g\) with domain \(\{1,\ldots,n\}\) if_ \[\max_{j\in(x,x(1+\varepsilon_{1}))}g(j)\leq y\quad\text{and}\quad y(1+ \varepsilon_{2})\leq\min_{j\in(x,x(1+\varepsilon_{1}))}f(j)\] _or conversely (exchanging \(f\) and \(g\))._ In other words, \(f\) is below the rectangle \(R\) and \(g\) is above \(R\). No points \((t,f(t))\) of \(f\) or \((t,g(t))\) of \(g\) is in \(R\). Notice that the point \((t,f(t))\) is the left of the rectangle for \(t=1\) and at the right of the rectangle for \(t=n\). We now present a central result used by the analysis of the streaming Tester of the subsequent section. **Theorem 2** (Separation theorem).: _If \(f\) and \(g\) are \((3\varepsilon_{1},\varepsilon_{2})\)-step-compatible and \(f\not\sim_{(3\varepsilon_{1},3\varepsilon_{2})}g\) then there exists an \((\varepsilon_{1},\varepsilon_{2})\)-rectangle which separates \(f\) and \(g\)._ The proof is in the appendix B. ### Properties of \(\gamma\)-decreasing functions Let \(F^{res(k)}=\sum_{k+1\leq i\leq n}f_{i}\) be the tail of the frequency distribution. **Lemma 5**.: _If \(f\) is \(\gamma\)-decreasing then_ \[\frac{\varepsilon}{k}.F^{res(k)}\leq\varepsilon.f_{k}.\frac{2(\gamma-1)}{2-\gamma}\] Proof.: If \(f\) is \(\gamma\)-decreasing then for \(j\geq 0\): \[\sum_{i>\gamma^{j}.k}^{i=\gamma^{j+1}.k}f_{i}\leq\frac{f_{k}\cdot(\gamma^{j+1}.k- \gamma^{j}.k)}{2^{j}}\] Hence: \[F^{res(k)}=\sum_{k+1\leq i\leq n}f_{i}\leq k.f_{k}.(\gamma-1).\sum_{j\geq 0 }\frac{\gamma^{j}}{2^{j}}=k.f_{k}.(\gamma-1).\frac{1}{1-\gamma/2}=k.f_{k}.\frac {2(\gamma-1)}{2-\gamma}\] We use this bound in section 4.1 to obtain a relative error on the estimation of the Top frequencies. ## 4 Frequency distributions, the Spacesaving algorithms and a simple lower bound Given a stream of \(N\) elements drawn from a universe \(U\) of size \(n\), let \(f_{j}\) denote the frequency (number of occurences) of the \(j\)th most frequent element, so that \(f_{1}\geq f_{2}\geq\cdots\geq f_{n}\geq 0\) and \(\sum_{i=1}^{n}f_{j}=N\). For example, in the case of a graph given as a stream of \(m\) edges, _i.e._ a stream of pairs of vertices, we can define the elements of the stream as the vertices, so the length of the stream is \(N=2m\), and \((f_{j})\) is the degree sequence of the graph. We are particularly interested in frequencies which have a compact representation. For example, _uniform frequencies_ where \(f_{i}=N/n\), _Zipf_ frequencies (also called heavy-tailed, or scale-free, or power-law) with parameter \(\alpha\), where \(f_{i}=cN/i^{\alpha}\) with \(c=1/\sum_{1\leq j\leq n}(1/j^{\alpha})\), and _geometric_ frequencies where \(f_{i}=cN/2^{i}\) with \(c=1/\sum_{1\leq j\leq n}1/2^{j}\). For Zipf frequencies with parameter \(\alpha\) the maximum frequency is \(f_{1}=\Theta(N)\) if \(\alpha>1\) and \(f_{1}=\Theta(N/\log n)\) if \(\alpha=1\). ### The Spacesaving algorithms The classical Spacesaving [18] gives a solution to the Top \(k\) most frequent elements for the _insertion only_ model and an additive error. In [3] a better bound is given, which is a lower bound in the worst-case. We need however to obtain the Top \(k\) elements with a relative error and show that it is possible for \(\gamma\)-decreasing frequency functions \(f\), in section A.1 of the appendix A. We can summarize the various previous additive bounds in table 1. 
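For concreteness, here is a minimal sketch of the classical SpaceSaving update rule recalled above, for the insertion-only model; tie-breaking and the auxiliary per-entry error counts of [18] are omitted, and the function name is ours.

```python
def space_saving(stream, K):
    """Classical SpaceSaving with a table of K counters (insertion-only model)."""
    counters = {}                         # element -> counter value
    for e in stream:
        if e in counters:
            counters[e] += 1
        elif len(counters) < K:
            counters[e] = 1
        else:
            # replace an element with the minimum counter and increment that counter
            e_min = min(counters, key=counters.get)
            c_min = counters.pop(e_min)
            counters[e] = c_min + 1
    return counters                       # occ(e) <= count(e), and count(e) - occ(e) <= c_K

# The sorted counter values c_1 >= c_2 >= ... of Lemma 1:
# c = sorted(space_saving(stream, K).values(), reverse=True)
```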
If we take the strong bound from [3] and combine it with Lemma 5 of the previous section, we obtain the relative error bound, where for the top-\(k\) frequencies \(f_{i}\) where \(i\leq k\): \[|f_{i}-c_{i}|\leq\varepsilon.f_{k}.\frac{2(\gamma-1)}{2-\gamma}\leq\varepsilon.f_{i}.\frac{2(\gamma-1)}{2-\gamma}\] The Spacesaving\(\pm\)[20] generalizes for the _insertion and \(\alpha\)-bounded deletion_ model. We will analyse it in some other work. We consider another model, the sliding window_ model, an _insertion and window deletion_ model which is not a bounded deletion model in section A.5 of the appendix A. In both cases, we have a solution to the Top-\(k\) problem, the building block used by the Tester. ### A lower bound when \(f\) is uniform A classical observation is that in the worst-case, the approximation of \(F_{\infty}=\text{Max}_{j}\ f_{j}\) requires space \(\Omega(n)\), using a standard reduction from Communication Complexity. [15] reduces the Unique-Disjointness problem for \(x,y\in\{0,1\}^{n}\) to the approximation of \(F_{\infty}\) on a stream \(s\). Another standard problem which requires space \(\Omega(n)\) for the One-way Communication complexity is the \(\text{Index}(x,y)\) problem, see [16], where \(x\in\{0,1\}^{n}\), \(y\in\{1,2,...n\}\) and the goal is to compute \(x_{y}\in\{0,1\}\). We write \(\text{Index}(x,y)=x_{y}\), as Alice holds \(x\) of length \(n\), Bob holds \(y\) of length \(\log n\) and only Alice can send information to Bob. Notice that we can assume that \(|\{i:\ x_{i}=1\}|=O(n)\) for example \(n/2\), otherwise Alice would directly send these positions to Bob. We show in the next result a simple reduction from the Index problem to the the _streaming Test problem_ which given \(f\) and a stream \(s\) over the items \(a_{1},...a_{n}\), which defines a frequency \(g\), decides: either \(f\sim_{\varepsilon/10}g\) or \(f\not\sim_{10\varepsilon}g\) with h.p. **Theorem 1**.: _The streaming Test problem requires space \(\Omega(n)\)._ Proof.: Consider the following reduction from Index to Test. Given \(x\in\{0,1\}^{n}\) and \(y\in\{1,2,...n\}\) the inputs to Index, let \(f\) be the uniform distribution on the \(a_{i}\) such that \(x_{i}=1\). The stream \(s\) is determined by the elements of \(x\) of weight \(1\), followed by the element \(a_{y}\) associated with \(y\), i.e. \(a_{i_{1}},...a_{i_{k}}\) where \(x_{i_{j}}=1\) and \(k=O(n)\), followed by \(a_{y}\). If \(\text{Index}(x,y)=1\) then the relative frequency \(g\) has an element of frequency \(2/k\). The point \((1,1/k)\) of \(f\) is far from the closest point \((1,2/k)\) of \(g\). Hence \(f\not\sim_{10\varepsilon}g\). If \(\text{Index}(x,y)=0\) then \(g\) is uniform over \(k+1\) elements. The points \((i,1/k)\) of \(f\) for \(i=1,2...k\) are at relative distance \(\frac{1/k-1/(k+1)}{1/k}=1/(k+1)\) from the closest point \((i,1/k+1)\) of \(g\) for \(i=1,2...k\). The point \((k+1,1/(k+1))\) of \(g\) is at relative distance \((1/k,1/(k+1))\) from the point \((k,1/k)\) of \(f\). Hence \(f\sim_{\varepsilon/10}g\) for \(n\) large enough. We reduced a Yes-instance to Index to a No-instance of Test, and a No-instance of Index to a Yes-instance of Test. As Index requires space \(\Omega(n)\), so does the streaming Test problem. 
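A small sketch of the stream construction used in this reduction may help; it is written with raw counts rather than the normalized frequencies of the proof, it is purely illustrative, and the item names \(a_{i}\) and the function name are ours.

```python
def index_to_test_instance(x, y):
    """Build the reference frequency f and the stream s used in the reduction from Index(x, y)."""
    support = [i for i, bit in enumerate(x, start=1) if bit == 1]
    k = len(support)
    f = [1] * k                                         # uniform reference over the a_i with x_i = 1
    stream = [f"a{i}" for i in support] + [f"a{y}"]     # each such a_i once, then the queried item a_y
    return f, stream

# If Index(x, y) = 1, item a_y occurs twice in the stream and g is far from f;
# if Index(x, y) = 0, g is uniform over k + 1 items and close to f for large k.
```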
Analysis of Algorithm 1, a Streaming Tester A stream \(s\) of \(N\) elements of a universe \(\{e_{1},e_{2},\cdots,e_{n}\}\) of size \(n\) determines an integer frequency function \(g\) whose domain is \(\{1,...n\}\), such that \(g(i)\) is the number of occurences of the \(i\)th most frequent element in the stream. Suppose we are given a frequency function \(f\) whose domain is \(\{1,2,\cdots,n\}\) in a compact form, such that _Heavy-tail_, power-law or Zipf. We want to verify that the frequencies of elements in a stream approximately follows this law. We propose the following streaming Tester for this problem. ### Analysis of the space used by Algorithm 1 If \(f\) is \(\gamma\)-decreasing, we can write: \(f(\gamma.t)<f(t)/2\). Hence for \(\alpha=\log(1/\varepsilon_{2})\) we have \[f(\gamma^{\alpha}.t)<f(t)/2^{\alpha}=\varepsilon_{2}.f(t)\] For \(n=\gamma^{\alpha}.t_{0}\), we find the threshold \(t_{0}=n/\gamma^{\log(1/\varepsilon_{2})}\). For \(z_{i}\leq t_{0}\), we run the Spacesaving with a table of size \(K_{i}=\frac{4.z_{i}}{\varepsilon_{2}.a_{i}}\cdot\frac{2(\gamma-1)}{2-\gamma} \cdot\frac{\log n}{\delta}\) and for \(z_{i}>t_{0}\) we do an exact counting. **Lemma 6**.: _Algorithm 1 uses \(O((\log n)^{2}\cdot\log\log n)\) space._ Proof.: For \(z_{i}\leq t_{0}\), we run the Spacesaving with a table of size \(K_{i}\) where \(a_{i}=\Theta(\varepsilon_{1}^{2}.z_{i}/\log\log n)\). Hence: \[K_{i}=\frac{4.z_{i}}{\varepsilon_{2}.a_{i}}\cdot\frac{2(\gamma-1)}{2-\gamma} \cdot\frac{\log n}{\delta}\leq\frac{4\cdot\log\log n}{\varepsilon_{2}. \varepsilon_{1}^{2}}\cdot\frac{2(\gamma-1)}{2-\gamma}\cdot\frac{\log n}{ \delta}=O(\log n\cdot\log\log n)\] When \(z_{i}>t_{0}=n/\gamma^{\log(1/\varepsilon_{2})}\), we do an exact counting. In this case, \(K_{i}=n/a_{i}\). Therefore \[K_{i}=n/a_{i}=n/\varepsilon_{1}^{2}.z_{i}\leq n/\varepsilon_{1}^{2}.t_{0}< \gamma^{\log(1/\varepsilon_{2})}/\varepsilon_{1}^{2}\] In this case, \(K_{i}\) only depends on the parameters \(\varepsilon_{1},\varepsilon_{2}\) and \(\gamma\) and is independent of \(n\). Since we run the algorithm in parallel for \(\log_{1+\epsilon_{1}}n\) values of \(z_{i}\), for fixed values of \(\varepsilon_{1},\varepsilon_{2}\) and \(\gamma\) the total space used is \(O((\log n)^{2}\cdot\log\log n)\). ### Analysis of the error probability of Algorithm 1 **Notation 2**.: _Let \(\tilde{e_{i}}\) be the element whose counter value is \(c_{r}\), i.e. count\((\tilde{e_{i}})=c_{r}\) and \(e^{\prime}_{i}\) the element whose rank is \(r\) in the stream \(s_{i}\), for the frequency function \(g_{i}\), i.e. occ\((e^{\prime}_{i})=g_{i}(r)\) or \(rank_{s_{i}}(e^{\prime}_{i})=r\). The functions occ, count, rank are from \(U\) to \(N\). We assume that tie-breaking rules are consistent over \(s\) and the substreams \(s_{i}\): \(U=\{e^{1},e^{2},\cdots,e^{n}\}\) and if two elements \(e^{j}\) and \(e^{k}\), with \(j<k\), have the same number of occurrences, then \(rank_{s}(e^{j})<rank_{s}(e^{k})\) and \(rank_{s_{i}}(e^{j})<rank_{s_{i}}(e^{k})\) for all substreams._ We recall the following classic Hoeffding probabilistic bound. **Lemma 7**.: _Let \(X=\sum_{j=1}^{p}X_{i}\) where \(X_{j}=1\) with probability \(q_{j}\) and \(X_{j}=0\) with probability \(1-q_{j}\), and the \(X_{j}\)'s are independent. Let \(\mu=I\!\!E(X)\). Then for all \(0<\beta<1\) we have_ \[\Pr(|X-\mu|>\beta\mu)\leq 2e^{-\mu\beta^{2}/3}.\] We now prove the probabilistic Lemma 8, which analyzes the sampling that is used to create the substram \(s_{i}\) and relates \(e_{i}^{\prime}\) to \(z_{i}\). 
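Before turning to the error analysis, the sketch below gives one possible single-pass rendering of Algorithm 1, using the same eviction rule as in the SpaceSaving sketch earlier. It is only an approximation of the procedure above: the constants behind \(K_{i}\) and \(a_{i}\) are simplified (and assume \(1<\gamma<2\)), the Box-endpoint filtering of Lemma 3 is skipped, Python's built-in `hash` stands in for the uniform hash functions \(h_{i}\), and all names are ours. The reference frequency \(f\) is passed as a callable on ranks \(1,\dots,n\).

```python
import math

def streaming_tester(stream, f, n, eps1, eps2, gamma, delta):
    """Simplified rendering of Algorithm 1: sample substreams, count with SpaceSaving, run the Coherence test."""
    levels = []
    i = 1
    while (1 + eps1 ** 2) ** i <= n:
        z = (1 + eps1 ** 2) ** i
        a = max(1, int(eps1 ** 2 * z / (math.log(math.log(n + 3)) + 1)))   # sampling modulus, ~ a_i
        K = int(8 * (z / a) * (gamma - 1) / (2 - gamma)
                * math.log(n + 1) / (eps2 * delta)) + 1                    # table size, ~ K_i
        exact = f(n) >= eps2 * f(max(1, int(z)))                           # exact-counting regime
        levels.append({"z": z, "a": a, "K": K, "exact": exact, "table": {}})
        i += 1

    for e in stream:                                   # one pass, all levels in parallel
        for lv in levels:
            if hash((e, lv["a"])) % lv["a"] != 0:      # keep e in substream s_i with probability ~ 1/a_i
                continue
            T = lv["table"]
            if e in T or lv["exact"] or len(T) < lv["K"]:
                T[e] = T.get(e, 0) + 1
            else:                                      # SpaceSaving eviction step
                e_min = min(T, key=T.get)
                T[e] = T.pop(e_min) + 1

    for lv in levels:                                  # Coherence test at rank r = ceil(z_i / a_i)
        counters = sorted(lv["table"].values(), reverse=True)
        r = max(1, math.ceil(lv["z"] / lv["a"]))
        c_r = counters[r - 1] if r <= len(counters) else 0
        target = f(max(1, int(lv["z"])))
        if abs(c_r - target) > 3 * eps2 * min(c_r, target):
            return False                               # output NO
    return True                                        # output YES
```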
This depends on the sampling process alone and not on the Spacesaving algorithm and analysis. The main Lemma 9 guarantees an error bound on Spacesaving on each \(s_{i}\) with high probability. **Lemma 8**.: _Recall that each element is kept in substream \(s_{i}\) with probability \(1/a_{i}\) and that \(e_{i}^{\prime}\) denotes the element with rank \(z_{i}/a_{i}\) in substream \(s_{i}\) (when sorted in non-increasing order of number of occurences): \(rank_{s_{i}}(e_{i}^{\prime})=z_{i}/a_{i}\). Then, the rank of \(e_{i}^{\prime}\) in stream \(s\) (when sorted in non-increasing order of number of occurences) satisfies_ \[\Pr(z_{i}(1-\varepsilon_{1}^{2})\leq rank_{s}(e_{i}^{\prime})\leq z_{i}(1+ \varepsilon_{1}^{2}))\geq 1-4\delta/\log n.\] _Moreover, if \(f=g\) then \(f(z_{i})\sim_{\varepsilon_{2}}\text{occ}(e_{i}^{\prime})\)._ Proof.: By definition of \(e_{i}^{\prime}\), the rank of \(\text{occ}(e_{i}^{\prime})\) in the substream \(s_{i}\) equals \(r=z_{i}/a_{i}\). We will prove the following: With probability at least \(1-4\delta/\log n\), the following properties hold: 1. The number of elements that appear in \(s_{i}\) and have rank less than \(z_{i}(1-\varepsilon_{1}^{2})\) in \(s\) is less than \(z_{i}/a_{i}\) 2. The number of elements that appear in \(s_{i}\) and have rank less than \(z_{i}(1+\varepsilon_{1}^{2})\) in \(s\) is more than \(z_{i}/a_{i}\) This will imply the Lemma. For the first item, we apply Lemma 7 with \(X\) denoting the number of elements that appear in \(s_{i}\) and have rank less than \(p=z_{i}(1-\varepsilon_{1}^{2})\) in \(s\), so that \(X_{j}=1\) if and only if the element of rank \(j\leq z_{i}(1-\varepsilon_{1}^{2})\) in \(s\) appears in \(s_{i}\). We have \(\mu=z_{i}(1-\varepsilon_{1}^{2})/a_{i}\). We set \(\beta=\varepsilon_{1}^{2}/(1-\varepsilon_{1}^{2})\). We obtain that the probability that the statement does not hold is at most \(2exp(-\frac{z_{i}\varepsilon_{1}^{2}}{3a_{i}(1-\varepsilon_{1}^{2})})\leq 2 exp(-\frac{z_{i}\varepsilon_{1}^{2}}{3a_{i}(1+\varepsilon_{1}^{2})})\). For the second item, we apply Lemma 7 with \(X\) denoting the number of elements that appear in \(s_{i}\) and have rank less than \(p=z_{i}(1+\varepsilon_{1}^{2})\) in \(s\), so that \(X_{j}=1\) if and only if the element of rank \(j\leq z_{i}(1+\varepsilon_{1}^{2})\) in \(s\) appears in \(s_{i}\). We have \(\mu=z_{i}(1+\varepsilon_{1}^{2})/a_{i}\). We set \(\beta=\frac{\varepsilon_{1}^{2}}{(1+\varepsilon_{1}^{2})}\). We obtain that the probability that the statement does not hold is at most \(2exp(-\frac{z_{i}\varepsilon_{1}^{2}}{3a_{i}(1+\varepsilon_{1}^{2})})\). By the union bound, the probability that the two statements do not both hold is bounded by \(4exp(-\frac{z_{i}\varepsilon_{1}^{2}}{3a_{i}(1+\varepsilon_{1}^{2})})\). Let \(a_{i}=\varepsilon_{1}^{2}z_{i}/(6\ln((\ln n)/\delta))\). Then this probability is at most \(4\delta/\ln n\). Since \(f=g\) and \(z_{i}\) is not close to one of the endpoints of the boxes of \(f\), we also have \(f(z_{i})\sim_{\varepsilon_{2}}\text{occ}(e_{i}^{\prime})\). Now we turn to the analysis of the SpaceSaving algorithm. **Lemma 9**.: _Assume that \(g\) is step-compatible and \(\gamma\)-decreasing. Consider Algorithm 1 and recall that \(K_{i}=4(z_{i}/a_{i})\cdot\frac{2(\gamma-1)}{2-\gamma}\cdot(1-\varepsilon_{1}^{ 2})\cdot(1+\varepsilon_{2})\cdot\frac{\log n}{\varepsilon_{2}\delta}\). We have:_ \[\Pr[c_{K_{i}}\leq\varepsilon_{2}.g(z_{i})]\geq 1-5\delta/\log n.\] Proof.: Let \(g_{i}\) be the frequency function of substream \(i\). 
Consider the table \(T_{i}\) of size \(K_{i}\) used by the algorithm. Let \(n_{i}\) denote the number of distinct elements in stream \(s_{i}\). Then the domain of \(g_{i}\) is \([1,n_{i}]\), and \(n_{i}\) is a random variable with expectation equal to \(n/a_{i}\). Let \(N_{i}\) denote the length of substream \(s_{i}\): we have \(N_{i}=\sum_{x=1}^{x=n_{i}}g_{i}(x)\). Let \(G_{i}(u)=\sum_{j=1}^{u}g_{i}(j)\) denote the cumulative frequency, and \(G_{i}^{res(u)}=\sum_{j=u+1}^{n_{i}}g_{i}(j)\). Let \(\widehat{z_{i}}=z_{i}/a_{i}\). We apply Lemma 11 to table \(T_{i}\), using \(u=\widehat{z_{i}}\) and noting that \(K_{i}-2\widehat{z_{i}}>K_{i}/2\): \[c_{K_{i}}\leq\min_{u<K_{i}/2}\frac{G_{i}^{res(u)}}{K_{i}-2u}\leq\frac{\sum_{\widehat{z_{i}}+1}^{n_{i}}g_{i}(x)}{K_{i}-2\widehat{z_{i}}}\leq\frac{2}{K_{i}}\sum_{\widehat{z_{i}}+1}^{n_{i}}g_{i}(x). \tag{2}\] As in Lemma 8, let \(e_{i}^{\prime}\) denote the element of substream \(s_{i}\) such that \(rank_{s_{i}}(e_{i}^{\prime})=z_{i}/a_{i}\). We have: \[\sum_{\widehat{z_{i}}+1}^{n_{i}}g_{i}(x)=\sum_{y=rank_{s}(e_{i}^{\prime})+1}^{n}g(y)\mathbf{1}(\mbox{the element of $s$ with rank $y$ is in $s_{i}$}).\] Let \(A\) denote the following event: \[rank_{s}(e_{i}^{\prime})\geq z_{i}(1-\varepsilon_{1}^{2})\] Assume that \(A\) holds. Then \[\sum_{\widehat{z_{i}}+1}^{n_{i}}g_{i}(x)\leq\sum_{y=z_{i}(1-\varepsilon_{1}^{2})+1}^{n}g(y)\mathbf{1}(\mbox{the element of $s$ with rank $y$ is in $s_{i}$})\] Observe that the value of the right-hand side is determined by which elements of \(s\) are put in \(s_{i}\), among the ones with \(rank_{s}\) greater than \(z_{i}(1-\varepsilon_{1}^{2})\). Also observe that event \(A\) is determined by how many elements of \(s\) are put in \(s_{i}\), among the ones with \(rank_{s}\) smaller than or equal to \(z_{i}(1-\varepsilon_{1}^{2})\). Thus the expression in the right-hand side is independent of event \(A\): each of its indicators equals \(1\) with probability \(1/a_{i}\), independently of \(A\), so taking conditional expectations and applying Lemma 5 to \(g\) (which is \(\gamma\)-decreasing) at \(k=z_{i}(1-\varepsilon_{1}^{2})\), we can write: \[I\!\!E\Big[\sum_{\widehat{z_{i}}+1}^{n_{i}}g_{i}(x)\ \Big|\ A\Big]\leq\frac{1}{a_{i}}\sum_{y=z_{i}(1-\varepsilon_{1}^{2})+1}^{n}g(y)\leq\frac{1}{a_{i}}\cdot\frac{2(\gamma-1)}{2-\gamma}\,z_{i}(1-\varepsilon_{1}^{2})\,g(z_{i}(1-\varepsilon_{1}^{2})). \tag{3}\] Since \(z_{i}\) is not close to a Box endpoint of \(g\), by Lemma 3 we have \[g(z_{i}(1-\varepsilon_{1}^{2}))\leq g(z_{i})(1+\varepsilon_{2}).\] Combining the inequalities (2) and (3) gives: \[I\!\!E[c_{K_{i}}|A]\leq\frac{2}{K_{i}}\cdot\frac{1}{a_{i}}\cdot\frac{2(\gamma-1)}{2-\gamma}z_{i}(1-\varepsilon_{1}^{2})\cdot g(z_{i})(1+\varepsilon_{2}).\] As \(K_{i}=4(z_{i}/a_{i})\cdot\frac{2(\gamma-1)}{2-\gamma}\cdot(1-\varepsilon_{1}^{2})\cdot(1+\varepsilon_{2})\cdot\frac{\log n}{\varepsilon_{2}\delta}\), we have: \[I\!\!E[c_{K_{i}}|A]\leq\frac{\delta}{\log n}.\varepsilon_{2}.g(z_{i})\] We use Markov's inequality to conclude that, conditioned on event \(A\), we have: \[\Pr\Big(c_{K_{i}}\leq\frac{\log n}{\delta}\cdot I\!\!E[c_{K_{i}}|A]\ \Big|\ A\Big)\geq 1-\delta/\log n.\] By Lemma 8 event \(A\) has probability at least \(1-4\delta/\log n\). 
We conclude that \[\Pr[c_{K_{i}}\leq\varepsilon_{2}.g(z_{i}))]\geq(1-4\delta/\log n)(1-\delta/ \log n)\geq 1-5\delta/\log n.\] We can now prove our main Theorem: **Theorem 1**.: _Let \(\varepsilon_{1},\varepsilon_{2},\delta\), a frequency function \(f\) and a stream \(s\) with insertions only be given. If the distributions \(f\) and \(g\) are \((3\varepsilon_{1},\varepsilon_{2})\)-step-compatible and \(\gamma\)-decreasing then Algorithm \(A(s,\varepsilon_{1},\varepsilon_{2},f)\) is a streaming \(4\delta\)-Tester that uses space \(O(\log^{2}n\cdot\log\log n)\)._ Proof.: First, we assume that \(f=g\) and aim to prove that the algorithm outputs YES with probability \(1-O(\delta)\). To that end, for each \(i\) such that \(z_{i}\) is not \(\varepsilon_{1}^{2}\)-close to a Box endpoint, we will prove that with probability at least \(1-O(\delta/\log n)\) we have \(|g(z_{i})-c_{r}|\leq 3\varepsilon_{2}g(z_{i})\), and then apply the union bound. We conclude that \(c_{r}\simeq_{3\varepsilon_{2}}f(z_{j})\) and the test is positive with high probability. Focus on one value of \(i\) such that \(z_{i}\) is not \(\varepsilon_{1}^{2}\)-close to a Box endpoint of \(f\), and consider the substream \(s_{i}\). We first write: \[|g(z_{i})-c_{r}|\leq|g(z_{i})-\operatorname{occ}(e_{i}^{\prime})|+| \operatorname{occ}(e_{i}^{\prime})-\operatorname{count}(e_{i}^{\prime})|+| \operatorname{count}(e_{i}^{\prime})-\operatorname{count}(\tilde{e}_{i})| \tag{4}\] and analyze the right-hand side term by term. First we will prove that with probability \(1-4\delta/\log n\) we have \[|g(z_{i})-\operatorname{occ}(e_{i}^{\prime})|\leq\varepsilon_{2}g(z_{i}). \tag{5}\] To that end, we let \(I=[z_{i}/(1+\varepsilon_{1}^{2}),z_{i}(1+\varepsilon_{1}^{2})]\). Since \(g\) is step-compatible and \(z_{i}\) it is not \(\varepsilon_{1}^{2}\)-close to a Box endpoint, \(g\) is near-constant inside the entirety of interval \(I\): the maximum exceeds the minimum by a \((1+\varepsilon_{2})\) factor at most. By Lemma 8, with probability at least \(1-4\delta/\log n\) we have that \(\operatorname{rank}_{s}(e_{i}^{\prime})\) is inside \(I\), hence Equation 5. Secondly, we observe that by Property 3 of Spacesaving (see page 20), \[|\operatorname{occ}(e_{i}^{\prime})-\operatorname{count}(e_{i}^{\prime})|\leq c _{K_{i}}. \tag{6}\] Thirdly, we will argue that \[|\text{count}(e^{\prime}_{i})-\text{count}(\tilde{e_{i}})|\leq c_{K_{i}}. \tag{7}\] To that end, we refer the reader to Figure 1. By Property 3, for any element \(e\) of \(s_{i}\) we have \(\text{occ}(e)\leq\text{count}(e)\leq\text{occ}(e)+c_{K_{i}}\), so when we plot the points \((\text{occ}(e),\text{count}(e))\) for the elements occuring in stream \(s_{i}\), all points are inside the strip of equation \(x\leq y\leq x+c_{K_{i}}\). Consider the point \((\text{occ}(e^{\prime}_{i}),\text{count}(\tilde{e_{i}}))\). We partition the strip into three parts (see Figure 1): 1. \(P_{1}\) consisting of the points \((x,y)\) such that \(x>\text{count}(\tilde{e_{i}})\). Since \(\tilde{e_{i}}\) has rank \(r\) according to count, there are at most \(r-1\) points in \(P_{1}\). 2. \(P_{2}\) consisting of the points \((x,y)\) such that \(x<\text{count}(\tilde{e_{i}})-c_{K_{i}}\). Since \(\tilde{e_{i}}\) has rank \(r\) according to count, there are fewer than \(n_{i}-r\) where \(n_{i}\) is the number of elements in the stream \(s_{i}\). 3. \(P_{3}\) consisting of the rest. 
All points of \(P_{1}\) have occ value larger than all points of \(P_{3}\), and all points of \(P_{2}\) have occ value smaller than all points of \(P_{3}\). Recall that \(e^{\prime}_{i}\) has rank \(r\) according to occ. Thus the point \((\text{occ}(e^{\prime}_{i}),\text{count}(\tilde{e_{i}}))\) cannot be in \(P_{1}\) nor in \(P_{2}\). This implies that \(e^{\prime}_{i}\) is in \(P_{3}\), hence Equation 7. Finally, we apply Lemma 9: with probability at least \(1-5\delta/\log n\) we have \(c_{K_{i}}\leq\varepsilon_{2}g(z_{i})\). Combining with Equations 4, 5, 6 and 7 we obtain that with probability at least \(1-9\delta/\log n\) we have \(|g(z_{i})-c_{r}|\leq 3\varepsilon_{2}g(z_{i})\). By the union bound, with probability at least \(1-O(\delta)\) the test is positive and Algorithm 1 outputs YES, as desired.

Figure 1: Counters and Frequencies for a stream \(s_{i}\). The error \(\Delta=c_{K_{i}}\) and \(|\text{occ}(e^{\prime}_{i})-\text{count}(\tilde{e_{i}})|<c_{K_{i}}\).

Assume that \(g\) is far from \(f\), i.e. \(f\not\sim_{(20\varepsilon_{1},20\varepsilon_{2})}g\). By Theorem 2 there exists a separating rectangle \(R=[b,b(1+6\varepsilon_{1})]*[c,c(1+6\varepsilon_{2})]\) which separates \(f\) from \(g\). Let \(j\) be the smallest integer such that \(b(1+3\varepsilon_{1})<z_{j}=(1+\varepsilon_{1}^{2})^{j}\). Consider the streams \(s_{j}\) or \(s_{j-1}\) or \(s_{j+1}\) so that \(z_{j}\) avoids the limits of the Boxes of \(f\) and \(g\). As the relative width \((1+3\varepsilon_{1})\) is larger than \((1+\varepsilon_{1}^{2})\), the point \(z_{j}\) is close to the center on the \(x\)-axis of the separating rectangle \(R\). Consider the two cases, \(f\) is above the rectangle (case 1) or \(f\) is below the rectangle (case 2). \(\bullet\) Assume that \(g\) is below \(R\) and \(f\) is above \(R\) (case 1). The value \(c_{r}\) is the count of an element \(\tilde{e_{j}}\) which with high probability is close to \(\operatorname{occ}(e_{j}^{\prime})\) for an element \(e_{j}^{\prime}\) of the stream \(s_{j}\). The triangle inequality gives: \[|c_{r}-f(z_{j})|\geq|\operatorname{occ}(e_{j}^{\prime})-f(z_{j})|-|\operatorname{occ}(e_{j}^{\prime})-c_{r}|\] By equations (6) and (7): \(|\operatorname{occ}(e_{j}^{\prime})-c_{r}|\leq|\operatorname{occ}(e_{j}^{\prime})-\operatorname{count}(e_{j}^{\prime})|+|\operatorname{count}(e_{j}^{\prime})-\operatorname{count}(\tilde{e_{j}})|\leq 2c_{K_{j}}\) and by Lemma 9 with high probability: \[|\operatorname{occ}(e_{j}^{\prime})-c_{r}|\leq 2\varepsilon_{2}.g(z_{j})\] Because \(g\) is below the rectangle \(R\), then \(|\operatorname{occ}(e_{j}^{\prime})-f(z_{j})|\geq 6\varepsilon_{2}.g(z_{j})\). Then with high probability: \[|c_{r}-f(z_{j})|\geq 6\varepsilon_{2}.g(z_{j})-2\varepsilon_{2}.g(z_{j})\geq 4\varepsilon_{2}.g(z_{j})\geq 3\varepsilon_{2}.c_{r}\] Hence \(c_{r}\not\simeq_{3\varepsilon_{2}}f(z_{j})\) with high probability as \(c_{r}\leq f(z_{j})\), so the algorithm will reject, as desired. \(\bullet\) Assume that \(f\) is below \(R\) and \(g\) is above \(R\) (case 2). Select the position of the separating rectangle \(R=[b,b.(1+6\varepsilon_{1})]*[c_{L},c_{L}.(1+6\varepsilon_{2})]\) so that the top of the rectangle coincides with the bottom of the Box of \(g(z_{j})\). Notice that \(c_{L}\geq f(z_{j})\). As \(z_{j}\) is not close to the limits of the Boxes of \(f\) and \(g\), we can make the separating rectangle narrower, i.e. 
\(R^{\prime}=[b,b.(1+\varepsilon_{1}^{2})]*[c_{L},c_{L}.(1+6\varepsilon_{2})]\) We can therefore write: \(g(z_{j})\leq c_{L}.(1+6\varepsilon_{2}).(1+\varepsilon_{2})\simeq c_{L}.(1+7 \varepsilon_{2})\). Hence: \[-2\varepsilon_{2}.g(z_{j})\geq-2\varepsilon_{2}.c_{L}.(1+7\varepsilon_{2}) \tag{8}\] The previous triangle inequality gives: \[|c_{r}-f(z_{j})|\geq|\operatorname{occ}(e_{j}^{\prime})-f(z_{j})|-| \operatorname{occ}(e_{j}^{\prime})-c_{r}|\] As \(|\operatorname{occ}(e_{j}^{\prime})-c_{r}|\leq 2.\varepsilon_{2}.g(z_{j})\) by Lemma 9 with high probability as in case 1, and \(f\) is below the rectangle \(R^{\prime}\), we can then bound \(|\operatorname{occ}(e_{j}^{\prime})-f(z_{j})|\geq 6\varepsilon_{2}.c_{L}\). Then, with high probability, using the inequality (8): \[|c_{r}-f(z_{j})|\geq 6\varepsilon_{2}.c_{L}-2\varepsilon_{2}.g(z_{j})\geq 6 \varepsilon_{2}.c_{L}-2\varepsilon_{2}.c_{L}.(1+7\varepsilon_{2})\geq 3 \varepsilon_{2}.c_{L}\geq 3\varepsilon_{2}.f(z_{j})\] Hence \(c_{r}\not\simeq_{3\varepsilon_{2}}f(z_{j})\) with high probability as \(c_{r}\geq f(z_{j})\), so the algorithm will reject, as desired. ### Streaming \(\delta\)-Tester for sliding windows Theorem 1 can be extended to the sliding windows model defined in the Appendix A.5. We want to test if the last window defined by the parameters \(\lambda,\Delta\) follows a frequency function \(f\). **Corollary 1**.: _If \(f\) and \(g\) are \((3\varepsilon_{1},\varepsilon_{2})\)-step-compatible and \(\gamma\)-decreasing in each window, then Algorithm \(A(s,\varepsilon_{1},\varepsilon_{2},f)\) is a streaming \(4\delta\)-Tester which uses uses space \(O(\log^{2}n\cdot\log\log n)\)._ Proof.: As \(f\) is \(\gamma\)-decreasing, we apply Lemma 5 to the Spacesaving version of the sliding window (see Appendix A.5) and obtain the relative error \(|f_{k}-c_{k}|\leq\varepsilon.f_{k}.\frac{2(\gamma-1)}{2-\gamma}\). Both Lemmas 8 on the sampling and 9 on Spacesaving generalize. Hence the main Theorem in section 5.2 also applies. ## 6 Conclusion We introduced a scale free distance between two frequency distributions, the relative version of the Frechet distance. We then studied how to verify a frequency distribution \(g\) defined by a stream of \(N\) items among \(n\) distinct items. We first proved a \(\Omega(n)\) lower bound on the space required in general. If we assume that the frequency distribution \(f\) and the frequency \(g\) defined by the stream satisfy a step-compatibility condition and decrease fast enough, we presented a Tester that uses \(O(\log^{2}n\cdot\log\log n)\) space. Zipf and Power law distributions are both step-compatible and \(\gamma\)-decreasing.
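To make the distance introduced above concrete, the following is a quadratic-time offline check of \((\varepsilon_{1},\varepsilon_{2})\)-closeness (Definition 3) by dynamic programming over couplings; it is an illustration for small \(n\) only, not a streaming procedure, and the function names are ours.

```python
def eps_close(a, b, eps):
    """Definition 1: |a - b| <= eps * min(a, b)."""
    return abs(a - b) <= eps * min(a, b)

def relative_frechet_close(f, g, eps1, eps2):
    """Decide whether f ~_(eps1, eps2) g by searching for a coupling whose pairs are all close."""
    n = len(f)                        # f[i], g[i]: frequency of the (i+1)-th most frequent item
    point_close = lambda i, j: eps_close(i + 1, j + 1, eps1) and eps_close(f[i], g[j], eps2)
    reach = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if not point_close(i, j):
                continue
            if i == 0 and j == 0:
                reach[i][j] = True
            else:
                reach[i][j] = ((i > 0 and reach[i - 1][j]) or
                               (j > 0 and reach[i][j - 1]) or
                               (i > 0 and j > 0 and reach[i - 1][j - 1]))
    return reach[n - 1][n - 1]
```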
2303.00023
Global Well-Posedness for Eddy-Mean Vorticity Equations on $\mathbb{T}^2$
We consider the two-dimensional, $\beta$-plane, vorticity equations for an incompressible flow, where the zonally averaged flow varies on scales much larger than the perturbation. We prove global existence and uniqueness of the solution to the equations on periodic settings.
Yuri Cacchió
2023-02-28T19:01:20Z
http://arxiv.org/abs/2303.00023v2
# Global Well-Posedness for Eddy-Mean Vorticity Equations on \(\mathbb{T}^{2}\)

###### Abstract

We consider the two-dimensional, \(\beta\)-plane, eddy-mean vorticity equations for an incompressible flow, where the zonally averaged flow varies on scales much larger than the perturbation. We prove global existence and uniqueness of the solution to the equations on periodic settings.

Keywords: Geophysical fluid dynamics, Well-Posedness, Vorticity equation. MSC: 35Q86.

Yuri Cacchió†

Footnote †: The author was supported in part by Sapienza ”Giovani Ricercatori” Grant DR n.1607.

## 1 Introduction

Atmospheric and oceanic flows on a rotating sphere are often modeled using equations for a single fluid layer of constant density as a consequence of their large horizontal extent compared with their depth. However, on large scale these flows are strongly influenced not only by rotation but also by stratification [2, 8, 19, 20, 24]. In three-dimensional stratified configurations turbulence naturally arises as a result of dynamical instabilities such as baroclinic instability, however, there is no such underlying instability in the two-dimensional case [8]. Consequently, it is common to artificially force two-dimensional turbulence through an exogenous, statistically homogeneous, forcing function. A state of equilibrium can then be achieved by the inclusion of dissipation terms [17, 28]. In this article we take into account only the rotation by neglecting stratification in a single layer model where we incorporate the effects of planetary rotation by adopting a beta-plane approximation, which is a simple device used to represent the latitudinal variation in the vertical component of the planetary rotation [12, 23, 33]. Keeping this in mind, let us start by considering the two-dimensional beta-plane vorticity equation \[\partial_{t}\zeta+J(\psi,\zeta+\beta y)=\nu\nabla^{2}\zeta, \tag{1.1}\] where \(J(A,B)=A_{x}B_{y}-A_{y}B_{x}\), which is completely determined by a single dynamical variable, the stream function \(\psi\), since the vorticity \(\zeta=\nabla^{2}\psi\). Equation (1.1) has been used in a wide range of studies investigating large scale planetary flows in double periodic geometry [7, 9, 18, 32] and on the sphere [13, 22, 27]. Introducing the eddy-mean decomposition, sometimes called Reynolds decomposition, denoted by a prime and an overbar respectively, \[f(t,x,y)=\overline{f}(t,y)+f^{\prime}(t,x,y), \tag{1.2}\] we can split equation (1.1) into mean and fluctuating components obtaining the following initial value problem \[\left\{\begin{array}{rl}&\partial_{t}\zeta^{\prime}+C_{1}\partial_{x}\nabla^{-2}\zeta^{\prime}-\nu\nabla^{2}\zeta^{\prime}=F(\overline{u},\zeta^{\prime})\\ &\partial_{t}\overline{u}-\nu\nabla^{2}\overline{u}=G(\zeta^{\prime})\\ &\zeta^{\prime}(0,x,y)=\zeta^{\prime}_{0}(x,y)\\ &\overline{u}(0,y)=\overline{u}_{0}(y)\end{array}\right. 
\tag{1.3}\] with \[F(\overline{u},\zeta^{\prime}) :=(\partial_{y}\nabla^{-2}\zeta^{\prime})\zeta^{\prime}_{x}-( \partial_{x}\nabla^{-2}\zeta^{\prime})\zeta^{\prime}_{y}-\overline{u}\zeta^{ \prime}_{x}+\partial_{y}\overline{[(\partial_{x}\nabla^{-2}\zeta^{\prime}) \zeta^{\prime}]},\] \[G(\zeta^{\prime}) :=\partial_{y}\overline{[\partial_{x}\nabla^{-2}\zeta^{\prime} \partial_{y}\nabla^{-2}\zeta^{\prime}]}.\] We denote by \(\overline{u}=\overline{u}(t,y)\) the zonal mean velocity, also called jet velocity profile, and by \(\zeta^{\prime}=\zeta^{\prime}(t,x,y)\) the eddy-vorticity; the parameter \(\nu>0\) is called kinematic viscosity. Here, the system of equations (1.3) is posed on a periodic box (periodic boundary condition) \(\mathbb{T}_{l}^{2}=[0,l)^{2}\) of size \(l>0\). In the next section we see a more detailed description of the model (1.1) and how to get system (1.3) from equation (1.1). The goal of the paper is to prove existence and uniqueness of the solution of problem (1.3) in the periodic setting, which allows us to determine the Reynolds stress as shown in Section 2. The latter quantity turns out to be crucial in fluid dynamics since it is the component of the total stress tensor in a fluid to account for turbulent fluctuations in fluid momentum. Moreover, as written in [33, p. 414], the "closure problem" of turbulence may be thought of as finding a representation of such Reynolds stress terms in terms of mean flow quantities, which seems to be critical without introducing physical assumptions not directly deducible from the equations of motion themselves. In light of that we prove the following global well-posedness result: **Theorem 1.1**.: _If \(\overline{u}_{0}\in H^{s}(\mathbb{T}_{l})\) and \(\zeta^{\prime}_{0}\in H^{s}(\mathbb{T}_{l}^{2})\), \(s\geq 0\), then there exists a unique global solution \((\overline{u},\zeta^{\prime})\in L^{\infty}\left((0,+\infty);\;H^{s}(\mathbb{T }_{l})\cross H^{s}(\mathbb{T}_{l}^{2})\right)\) of the initial value problem (1.3)._ The proof of the above theorem is first obtained locally in time via a contraction method, and then extended using an iteration method based on the conservation of the \(L^{2}\)-norm. The estimates used in the argument are reminiscent of those used to study periodic dispersive initial value problems and inspired by the pioneering works of Bourgain [3, 4, 6] and Kenig, Ponce and Vega [14, 15]. Relevant results in periodic settings can also be found in [5, 30]. We also refer the interested reader to Tao's notes [31] for an introduction to this method, where he shows local well-posedness of the Navier-Stokes equations. Therefore, we proceed as follows: 1. **Duhamel's principle.** Since in Theorem 1.1 we assume very low regularity it is clear that the initial value problem (1.3) needs to be interpreted in an appropriate manner. To this end the first step is to use the Duhamel's principle so that the solution to problem (1.3) can be interpreted as the solution of the integral system \[(\overline{u},\zeta^{\prime})=\left(e^{\nu t\partial_{yy}} \overline{u}_{0}+\int_{0}^{t}e^{(\nu\partial_{yy})(t-t^{\prime})}G(\zeta^{ \prime})\ dt^{\prime},\right.\] (1.4) \[\left.e^{(\nu\nabla^{2}-\partial_{x}\nabla^{-2})t}\ \zeta^{\prime}_{0}+\int_{0}^{t}e^{(\nu \nabla^{2}-\partial_{x}\nabla^{-2})(t-t^{\prime})}F(\overline{u},\zeta^{ \prime})\ dt^{\prime}\right)\] 2. 
**Upper bounds.** The second step is to define the functional \[\Phi(\overline{u},\zeta^{\prime}):=\left(\begin{array}{c}e^{\nu t \partial_{yy}}\overline{u}_{0}+\int_{0}^{t}e^{(\nu\partial_{yy})(t-t^{\prime}) }G(\zeta^{\prime})\ dt^{\prime},\\ e^{(\nu\nabla^{2}-\partial_{x}\nabla^{-2})t}\ \zeta^{\prime}_{0}+\int_{0}^{t}e^{( \nu\nabla^{2}-\partial_{x}\nabla^{-2})(t-t^{\prime})}F(\overline{u},\zeta^{ \prime})\ dt^{\prime}\right)\end{array}\] and prove that \[\left\|\Phi(\overline{u},\zeta^{\prime})\right\|_{L^{\infty}_{[0,\delta]}(H^ {s}(\mathbb{T}_{l})\cross H^{s}(\mathbb{T}_{l}^{2}))}\leq C(\left\|\overline{u }_{0}\right\|_{H^{s}(\mathbb{T}_{l})}+\left\|\zeta^{\prime}_{0}\right\|_{H^{s} (\mathbb{T}_{l}^{2})}),\] (1.5) \(\forall\ s\geq 0\), on a small enough time interval \([0,\delta]\). Similarly one also proves that \(\Phi\) is a contraction. In this way we have that \[\Phi:B(0,R)\to B(0,R)\] on a suitable ball of radius \(R>0\) and contraction. 3. **Local Well-Posedness.** The next step is to use the fixed point theorem on the ball defined in step 2) in order to prove existence and uniqueness of the solution on \(B(0,R)\). After that, due to the estimate (1.5), we extend this solution from the ball over the whole space \(L^{\infty}_{[0,\delta]}(H^{s}(\mathbb{T}_{l})\cross H^{s}(\mathbb{T}_{l}^{2}))\), with \(s\geq 0\). 4. **Global Well-Posedness.** Finally, since the small time \(\delta\) in Step 2 depends only on the \(L^{2}\)-norms of \(\overline{u}\) and \(\zeta^{\prime}\), and these can be proved to be bounded by \(L^{2}\)-norms of initial data, i.e., \[\left\|\overline{u}\right\|_{L^{2}_{y}(\mathbb{T}_{l})}\leq C\|\overline{u}_{ 0}\|_{L^{2}_{y}(\mathbb{T}_{l})}\ \text{and}\ \left\|\zeta^{\prime}\right\|_{L^{2}_{xy}(\mathbb{T}_{l}^{2})} \leq C\|\zeta^{\prime}_{0}\|_{L^{2}_{xy}(\mathbb{T}_{l}^{2})},\] (1.6) for some universal constant \(C\), and we can extend the solution by covering the whole time interval \([0,\infty)\) by iteration. The organization of the paper is as follows. In Section 2, we state assumptions, certain terminology and we derive from the vorticity equation (1.1) the system of equations (1.3). In Section 3, as mentioned in Steps 2 and 3 above, we prove inequality (1.5), we properly define the functional \(\Phi\), and after proving that it is a contraction we obtain local well-posedness on a small enough time interval. Finally, we extend the solution by iteration to get the global well-posedness result of (1.3) in the last section. ## 2 The Eddy-Mean Decomposition and Assumptions To establish the notation used in this paper, we start with the incompressible 2D Navier-Stokes equations [2, 16, 26, 29], \[\left\{\begin{array}{rl}\partial_{t}\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}+f \mathbf{\vec{z}}\mathbf{\times}\mathbf{u}&=\nu\nabla^{2}\mathbf{u}-\nabla p\\ \mathbf{\nabla}\mathbf{\cdot}\mathbf{u}&=0\end{array}\right. \tag{2.1}\] where we include the effect of the planetary rotation through the Coriolis force \(f\) as a tool to understand various geophysical flows [1, 10, 11, 25]. Here \(\mathbf{u}=(u,v)\) and \(p\) are unknown velocity field and pressure, \(\nu>0\) is the kinematic viscosity and \((\mathbf{u}\cdot\nabla)\) stands for the differential operator \(u\partial_{x}+v\partial_{y}\). For the purpose of this discussion, we do an important approximation, so-called \(\beta\)-\(plane\), which captures the most important dynamical effects of sphericity, without the complicating geometric effects, which are not essential to describe many phenomena [33]. 
Since the magnitude of the vertical component of rotation varies with latitude, we can approximate this effect by allowing the effective rotation vector to vary. Thus, Taylor-expanding the Coriolis parameter around a latitude \(\Theta_{0}\), for small variation in latitude, we have [12, 23, 26] \[f=2\Omega\sin\Theta\approx 2\Omega\sin\Theta_{0}+2\Omega(\Theta-\Theta_{0}) \cos\Theta_{0}, \tag{2.2}\] where \(\Theta\) is the latitude and \(\Omega\) is the angular velocity of the sphere. Then, on the tangent plane we may mimic this by allowing the Coriolis parameter to vary as \[f=f_{0}+\beta y \tag{2.3}\] where \(f_{0}=2\Omega\sin\Theta_{0}\), \(\beta=\partial f/\partial y=(2\Omega\cos\Theta_{0})/a\) and \(a\) is the radius of the planet. Moreover, since in 2D flows the velocity field has two component which depend on two physical space coordinates and time, \(\mathbf{u}=(u(t,x,y),v(t,x,y),0)\), then the vorticity field \(\mathbf{\zeta}\) has only one non-zero component, \[\mathbf{\zeta}:=\nabla\mathbf{\times}\mathbf{u}=(0,0,\zeta(t,x,y))=(0,0,v_{x}-u_{y}). \tag{2.4}\] This component satisfies the \(\beta\)-plane vorticity equation, \[\zeta_{t}+u\zeta_{x}+v\zeta_{y}+\beta v=\nu\nabla^{2}\zeta, \tag{2.5}\] which is obtained by taking the curl from (2.1). For 2D incompressible flows, we can introduce a representation of the velocity field in terms of the stream function \(\psi(t,x,y)\) as in [21], \[(u,v) =(-\psi_{y},\psi_{x}), \tag{6}\] \[\zeta =\nabla^{2}\psi =\psi_{xx}+\psi_{yy}. \tag{7}\] In addition, since we work in a periodic frame, we can use the eddy-mean decomposition (2) in (5), where \[\overline{f}=\frac{1}{l}\int_{0}^{l}f\ dx\] and get the zonal mean momentum equation \[\partial_{t}\overline{u}+\partial_{y}(\overline{u^{\prime}v^{\prime}})=\nu \nabla^{2}\overline{u}, \tag{8}\] and the eddy vorticity equation \[\partial_{t}\zeta^{\prime}+\overline{u}\zeta^{\prime}_{x}+(\beta-\overline{u} _{yy})v^{\prime}=\nu\nabla^{2}\zeta^{\prime}+\partial_{y}(\overline{v^{\prime }\zeta^{\prime}})-u^{\prime}\zeta^{\prime}_{x}-v^{\prime}\zeta^{\prime}_{y}. \tag{9}\] The last hypothesis we make is to assume constant \(\overline{u}_{yy}\), that is we assume that the zonally averaged flow varies on scales much larger than the perturbation [7, 29]. Finally, from (2), (6) and (7) we obtain \[u^{\prime} =-\partial_{y}\nabla^{-2}\zeta^{\prime};\] \[v^{\prime} =\partial_{x}\nabla^{-2}\zeta^{\prime}, \tag{10}\] and we derive the initial value problem (3) by replacing relations (10) in (8) and (9). As mentioned in the introduction, proving well posedness for (3) will allow us to derive the Reynolds stress using relations (10), since it is defined as \(\overline{u^{\prime}v^{\prime}}\)[20]. Before we address the computations of well posedness, with the purpose of fixing the functions space on which we work, we show that the average of \(\overline{u}\) and \(\zeta^{\prime}\) are both conserved. In fact, for \(\overline{u}\) we have \[\partial_{t}\int_{0}^{l}\overline{u}\ dy =\int_{0}^{l}\partial_{t}\overline{u}\ dy\] \[=\int_{0}^{l}\nu\partial_{y}^{2}\overline{u}+\partial_{y}[ \overline{\partial_{x}\nabla^{-2}\zeta^{\prime}\partial_{y}\nabla^{-2}\zeta^ {\prime}}]\ dy\] \[=\nu\left[\partial_{y}\overline{u}\right]_{0}^{l}+\left[[ \overline{\partial_{x}\nabla^{-2}\zeta^{\prime}\partial_{y}\nabla^{-2}\zeta^ {\prime}}]\right]_{0}^{l}=0\] which vanishes by using periodic boundary conditions. 
On the other hand, for \(\zeta^{\prime}\), recalling that \(\zeta=\zeta^{\prime}+\overline{\zeta}\) with \(\overline{\zeta}=\frac{1}{l}\int_{0}^{l}\zeta\ dx\), we have \[\int_{0}^{l}\int_{0}^{l}\zeta^{\prime}\ dxdy=\int_{0}^{l}\int_{0}^{l}\zeta\ dxdy-\int_{0}^{l}\int_{0}^{l}\overline{\zeta}\ dxdy=l\int_{0}^{l}\overline{\zeta}\ dy-l\int_{0}^{l}\overline{\zeta}\ dy=0.\] Then, \[\widehat{\zeta}^{\prime}(0)=\int_{0}^{l}\int_{0}^{l}\zeta^{\prime}(x,y)e^{i0\cdot(x,y)}\ dxdy=\int_{0}^{l}\int_{0}^{l}\zeta^{\prime}(x,y)\ dxdy=0, \tag{2.11}\] and if we define \[c_{0}=\int_{0}^{l}\overline{u}\ dy,\] we have \[c_{0}=\int_{0}^{l}\overline{u}\ dy=\int_{0}^{l}\overline{u}(y)e^{i0\cdot y}\ dy=\widehat{\overline{u}}(0). \tag{2.12}\] We set \[\mu:=\overline{u}-c_{0}\] so that \[\widehat{\mu}(0)=0. \tag{2.13}\] Moreover, using relation (2.8), \[\partial_{t}\mu=\partial_{t}\overline{u} =\nu\nabla^{2}\overline{u}+\partial_{y}\overline{[\partial_{x}\nabla^{-2}\zeta^{\prime}\partial_{y}\nabla^{-2}\zeta^{\prime}]}\] \[=\nu\nabla^{2}\mu+\partial_{y}\overline{[\partial_{x}\nabla^{-2}\zeta^{\prime}\partial_{y}\nabla^{-2}\zeta^{\prime}]}. \tag{2.14}\] Then \(\mu\) satisfies equation (2.8) with the additional condition (2.13). Due to the above remarks, we study the initial value problem (1.3) equivalently in the following function spaces. **Definition 2.1**.: \[X^{s}:=\left\{(f,g)\in L^{\infty}\left((0,\infty);\ H^{s}(\mathbb{T}_{l})\cross H^{s}(\mathbb{T}_{l}^{2})\right);\ \widehat{f}(0)=0,\ \widehat{g}(0)=0\right\},\] and in a similar way \[X^{s,\delta}:=\left\{(f,g)\in L^{\infty}\left([0,\delta];\ H^{s}(\mathbb{T}_{l})\cross H^{s}(\mathbb{T}_{l}^{2})\right);\ \widehat{f}(0)=0,\ \widehat{g}(0)=0\right\},\] equipped with the norms \[\|(f,g)\|_{X^{s}} =\|f\|_{L^{\infty}_{[0,\infty)}H^{s}(\mathbb{T}_{l})}+\|g\|_{L^{\infty}_{[0,\infty)}H^{s}(\mathbb{T}_{l}^{2})},\] \[\|(f,g)\|_{X^{s,\delta}} =\|f\|_{L^{\infty}_{[0,\delta]}H^{s}(\mathbb{T}_{l})}+\|g\|_{L^{\infty}_{[0,\delta]}H^{s}(\mathbb{T}_{l}^{2})},\] respectively. Via Plancherel's theorem we express the \(\|\cdot\|_{H^{s}}\)-norm in terms of the Fourier modes, namely \[\|f\|_{H^{s}}^{2}=\sum_{k}\langle k\rangle^{2s}|\widehat{f}|^{2},\] where \(\langle k\rangle:=(1+|k|^{2})^{\frac{1}{2}}\) is the Japanese bracket. **Notation 1**: Throughout the paper we use \(A\lesssim B\) to denote an estimate of the form \(A\leq CB\) for some absolute constant \(C\). If \(A\lesssim B\) and \(B\lesssim A\) we write \(A\sim B\). **Notation 2**: From here on we use the following notations, \[\left\|\cdot\right\|_{L^{\infty}_{[0,\infty)}} =\left\|\cdot\right\|_{L^{\infty}_{t}}\] \[\left\|\cdot\right\|_{L^{\infty}_{[0,\delta]}} =\left\|\cdot\right\|_{L^{\infty}_{\delta}}\] \[\left\|\cdot\right\|_{H^{s}(\mathbb{T}^{2}_{l})} =\left\|\cdot\right\|_{H^{s}_{xy}}.\]

## 3 Local Well-Posedness

In this section we prove well-posedness of (1.3) on the space \(X^{s,\delta}\). In order to do this, we first set the initial data of the problem. We fix \[(\zeta^{\prime}_{0}(x,y),\overline{u}_{0}(y))\in H^{s}(\mathbb{T}^{2}_{l})\cross H^{s}(\mathbb{T}_{l}). \tag{3.1}\] Due to (2.14), we equivalently rewrite (1.3) in the following form \[\left\{\begin{array}{rl}\partial_{t}\mu-\nu\partial_{yy}\mu&=G(\gamma),\\ \partial_{t}\gamma+C_{1}\partial_{x}\nabla^{-2}\gamma-\nu\nabla^{2}\gamma&=F(\mu,\gamma),\\ \mu(0,y)&=\mu_{0}(y)\in H^{s}(\mathbb{T}_{l}),\\ \gamma(0,x,y)&=\gamma_{0}(x,y)\in H^{s}(\mathbb{T}^{2}_{l}),\end{array}\right.
\tag{3.2}\] where we define, \[\mu(t,y) :=\overline{u}(t,y)-c_{0},\] \[\gamma(t,x,y) :=\zeta^{\prime}(t,x,y), \tag{3.3}\] \[G(\gamma) :=\partial_{y}\overline{[\partial_{x}\nabla^{-2}\gamma\partial_{ y}\nabla^{-2}\gamma]},\] \[F(\mu,\gamma) :=(\partial_{y}\nabla^{-2}\gamma)\gamma_{x}-(\partial_{x}\nabla^ {-2}\gamma)\gamma_{y}-\mu\gamma_{x}+c_{0}\gamma_{x}+\partial_{y}\overline{[( \partial_{x}\nabla^{-2}\gamma)\gamma]}.\] We now use the Duhamel's principle so that the solution to problem (3.2) can be interpreted as the solution of the integral system, \[(\mu,\gamma) =\left(e^{\nu t\partial_{yy}}\mu_{0}+\int_{0}^{t}e^{(\nu\partial_ {yy})(t-t^{\prime})}G(\gamma)\ dt^{\prime},\right. \tag{3.4}\] \[\left.e^{\tilde{D}^{2}t}\ \gamma_{0}+\int_{0}^{t}e^{\tilde{D}^{2}(t-t^{ \prime})}F(\mu,\gamma)\ dt^{\prime}\right)\] with \(\tilde{D}^{2}=\nu\nabla^{2}-C_{1}\partial_{x}\nabla^{-2}\). It is not restrictive to assume \(C_{1}=1\) as we will see below. ### Bounds and estimates We proceed by deriving estimates of the norm of the various terms of the Duhamel expressions above. #### 3.1.1 Zonal-Mean Momentum Equation Estimate Let us start with the equation for \(\mu\). **Proposition 3.1**.: _Let \(t\in[0,\delta]\), \(\delta\in\mathbb{R}_{+}\), \(s\geq 0\), \(\alpha\in(\frac{1}{2},1)\). Then,_ \[\left\|\int_{0}^{t}e^{(t-t^{\prime})\nu\partial_{yy}}G(\gamma)\ dt^{\prime} \right\|_{L^{\infty}_{\delta}H^{s}_{xy}}\lesssim\delta^{1-\alpha}\|\gamma\|^{2 }_{L^{\infty}_{\delta}H^{s}_{xy}}.\] Proof.: We recall that, \[\left\|\int_{0}^{t}e^{(t-t^{\prime})\nu\partial_{yy}}G(\gamma)dt^{ \prime}\right\|_{L^{\infty}_{\delta}H^{s}_{y}}\\ =\sup_{t\in[0,\delta]}\left(\sum_{k_{2}}\left|\int_{0}^{t}e^{-(t-t ^{\prime})\nu k_{2}^{2}}\widehat{G(\gamma)}(k_{2})dt^{\prime}\right|^{2}\left< k_{2}\right>^{2s}\right)^{\frac{1}{2}}\] where, \[\widehat{G(\gamma)}(k_{2}) =\mathfrak{F}_{k}\left(\partial_{y}\overline{[\partial_{x}\nabla ^{-2}\gamma\partial_{y}\nabla^{-2}\gamma]}\right)\] \[=ik_{2}\left.\left(-i\frac{k_{1}}{|k|^{2}}\widehat{\gamma}*(-i \frac{k_{2}}{|k|^{2}}\widehat{\gamma})\right)\right|_{[0,k_{2}]}\] \[=\sum_{\begin{subarray}{c}k_{1}=h_{1}+m_{1}=0\\ k_{2}=h_{2}+m_{2}\\ m\neq 0\end{subarray}}i\frac{h_{1}m_{2}k_{2}}{|h|^{2}|m|^{2}}\widehat{\gamma }(h)\widehat{\gamma}(m). \tag{3.5}\] In fact, for a general function \(f(x,y)\in H^{s}_{xy}(\mathbb{T}^{2}_{l})\), \[\overline{f(x,y)}:=\frac{1}{l}\int_{0}^{l}f(x,y)\ dx=\overline{f}(y)\] and \[\widehat{f(x,y)}(k_{1},k_{2}) =\int e^{ijk_{2}}\overline{f}(y)\ dy\] \[=\int e^{ijk_{2}}\left(\frac{1}{l}\int_{0}^{l}f(x,y)\ dx\right)dy\] \[=\frac{1}{l}\int_{0}^{l}\left(\int e^{ijk_{2}}f(x,y)\ dy\right)dx\] \[=\frac{1}{l}\int_{0}^{l}\widehat{f}(0,k_{2})\ dx\] \[=\widehat{f}(0,k_{2})\frac{1}{l}\int_{0}^{l}dx=\widehat{f}(0,k_{ 2}).\] _Remark 3.2_.: Due to (2.11) and (2.13) we can assume \(m\neq 0\) and \(h\neq 0\) in (3.5). Henceforth we omit this subscript. 
For now, we only consider \[\left\|\int_{0}^{t}e^{(t-t^{\prime})\nu\partial_{yy}}G(\gamma)dt^ {\prime}\right\|_{H^{s}_{y}}^{2} \leq\int_{0}^{t}\left\|e^{(t-t^{\prime})\nu\partial_{yy}}G(\gamma )\right\|_{H^{s}_{y}}^{2}dt^{\prime}\] \[=\int_{0}^{t}\sum_{k_{2}}\left|e^{-(t-t^{\prime})\nu k_{2}^{2}} \widehat{G(\gamma)}(k_{2})\right|^{2}\left<k_{2}\right>^{2s}dt^{\prime}.\] Using (3.5), \[=\int_{0}^{t}\sum_{k_{2}}\left|e^{-(t-t^{\prime})\nu k_{2}^{2}}\sum_{ \begin{subarray}{c}k_{1}=h_{1}+m_{1}=0\\ k_{2}=h_{2}+m_{2}\end{subarray}}i\frac{h_{1}m_{2}k_{2}}{|h|^{2}|m|^{2}}\widehat{ \gamma}(h)\widehat{\gamma}(m)\right|^{2}\langle k_{2}\rangle^{2s}dt^{\prime}\] \[\leq\int_{0}^{t}\sum_{k_{2}}\left(e^{-(t-t^{\prime})\nu k_{2}^{2}} \sum_{\begin{subarray}{c}k_{1}=h_{1}+m_{1}=0\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|h||m||k_{2}\langle k_{2}\rangle^{s}}{| h|^{2}|m|^{2}}\widehat{\gamma}(h)\widehat{\gamma}(m)\right)^{2}dt^{\prime}\] \[=\int_{0}^{t}\sum_{k_{2}}\left(\frac{e^{-(t-t^{\prime})\nu k_{2}^ {2}}((t-t^{\prime})\nu k_{2}^{2})^{\alpha}}{((t-t^{\prime})\nu k_{2}^{2})^{ \alpha}}\sum_{\begin{subarray}{c}k_{1}=h_{1}+m_{1}=0\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|k_{2}|\langle k_{2}\rangle^{s}}{|h||m| }\widehat{\gamma}(h)\widehat{\gamma}(m)\right)^{2}dt^{\prime}.\] Since \[e^{-(t-t^{\prime})\nu k_{2}^{2}}((t-t^{\prime})\nu k_{2}^{2})^{\alpha}\leq C, \ \ \alpha\in[0,1], \tag{3.6}\] is uniformly bounded, we can focus on \[\sum_{k_{2}}\left(\frac{1}{(\nu k_{2}^{2})^{\alpha}}\sum_{ \begin{subarray}{c}k_{1}=h_{1}+m_{1}=0\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|k_{2}|\langle k_{2}\rangle^{s}}{|h||m|} |\widehat{\gamma}(h)||\widehat{\gamma}(m)|\right)^{2},\] which is equivalent to \[\sum_{k_{2}}\left(\frac{1}{(\nu k_{2}^{2})^{\alpha}}\sum_{ \begin{subarray}{c}h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|k_{2}|\langle k_{2}\rangle^{s}}{|h|| \tilde{m}|}|\widehat{\gamma}(h)||\widehat{\gamma}(\tilde{m})|\right)^{2}\] with \[\tilde{m}=(-h_{1},m_{2}).\] By duality, we want to show \[\sup_{\|g\|_{l^{2}}\leq 1}\sum_{k_{2}}\sum_{\begin{subarray}{c}h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{V(k_{2},h,\tilde{m})}{\langle h\rangle^{ s}\langle\tilde{m}\rangle^{s}}f_{1}(h)f_{2}(\tilde{m})g(k_{2})\lesssim\|f_{1}\|_{l^ {2}}\|f_{2}\|_{l^{2}}\|g\|_{l^{2}},\] where \[V(k_{2},h,\tilde{m}) :=\frac{\langle k_{2}\rangle^{s}}{|k_{2}|^{2\alpha-1}|h||\tilde{ m}|};\] \[f_{1}(h) :=\langle h\rangle^{s}|\widehat{\gamma}(h)|;\] \[f_{2}(\tilde{m}) :=\langle\tilde{m}\rangle^{s}|\widehat{\gamma}(\tilde{m})|.\] We remark that if \(\gamma\in H^{s}\) then \(f_{i}\in l^{2}\) for \(i=1,2.\) Moreover, we have \[\langle k\rangle^{s}=\left((1+|k|^{2})^{\frac{1}{2}}\right)^{s}\sim|k|^{s}.\] Hence, \[\frac{V(k_{2},h,\tilde{m})}{\langle h\rangle^{s}\langle\tilde{m}\rangle^{s}} \sim\frac{|k_{2}|^{s-2\alpha+1}}{|h|^{s+1}|\tilde{m}|^{s+1}}.\] We are ready to study all possible cases as \(h,m\) and \(k\) varies: 1. \(|h_{2}|\gg|m_{2}|\Rightarrow|h_{2}|\sim|k_{2}|\). 
\[\frac{|k_{2}|^{s-2\alpha+1}}{|h|^{s+1}|\tilde{m}|^{s+1}}\sim\frac{|h_{2}|^{s-2 \alpha+1}}{(|h_{1}|^{2}+|h_{2}|^{2})^{\frac{s+1}{2}}(|h_{1}|^{2}+|m_{2}|^{2})^{ \frac{s+1}{2}}}.\] Since \(|\tilde{m}|^{s+1}=(|h_{1}|^{2}+|m_{2}|^{2})^{\frac{s+1}{2}}\geq 1\) and \(|h_{2}|\neq 0\) because \(|h_{2}|\gg|m_{2}|\), \[\leq\frac{|h_{2}|^{s-2\alpha+1}}{|h_{2}|^{s+1}}\leq\frac{1}{|h_{2}|^{2\alpha}}.\] Using three times Cauchy-Schwartz inequality we have \[\sum_{k_{2}}\sum_{\begin{subarray}{c}h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{V(k_{2},h,\tilde{m})}{\langle h\rangle^ {s}\langle\tilde{m}\rangle^{s}}f_{1}(h)f_{2}(\tilde{m})g(k_{2})\] \[\leq\sum_{k_{2}=h_{2}+m_{2}}\frac{1}{|h_{2}|^{2\alpha}}g(k_{2}) \sum_{h_{1}}f_{1}(h_{1},h_{2})f_{2}(-h_{1},m_{2})\] \[\leq\sum_{k_{2}=h_{2}+m_{2}}\frac{g(k_{2})}{|h_{2}|^{2\alpha}} \left(\sum_{h_{1}}|f_{1}(h_{1},h_{2})|^{2}\right)^{\frac{1}{2}}\left(\sum_{h_ {1}}|f_{2}(-h_{1},m_{2})|^{2}\right)^{\frac{1}{2}}\] \[=\sum_{h_{2}}\sum_{k_{2}}\frac{g(k_{2})}{|h_{2}|^{2\alpha}}\left( \sum_{h_{1}}|f_{1}(h_{1},h_{2})|^{2}\right)^{\frac{1}{2}}\left(\sum_{h_{1}}|f _{2}(-h_{1},k_{2}-h_{2})|^{2}\right)^{\frac{1}{2}}\] \[\leq\|g\|_{l_{2}}\|f_{2}\|_{l_{2}}\sum_{h_{2}}\frac{1}{|h_{2}|^{2 \alpha}}\left(\sum_{h_{1}}|f_{1}(h_{1},h_{2})|^{2}\right)^{\frac{1}{2}}\] \[\leq\|g\|_{l_{2}}\|f_{1}\|_{l_{2}}\|f_{2}\|_{l_{2}}\left(\sum_{h_ {2}}\frac{1}{|h_{2}|^{4\alpha}}\right)^{\frac{1}{2}}.\] The last term is summable if \(4\alpha>1\), i.e. \[\alpha>\frac{1}{4}.\] (3.7) 2. \(|h_{2}|\ll|m_{2}|\Rightarrow|m_{2}|\sim|k_{2}|\). \[\frac{|k_{2}|^{s-2\alpha+1}}{|h|^{s+1}|\tilde{m}|^{s+1}}\sim\frac{|m_{2}|^{s-2 \alpha+1}}{(|h_{1}|^{2}+|h_{2}|^{2})^{\frac{s+1}{2}}(|h_{1}|^{2}+|m_{2}|^{2})^ {\frac{s+1}{2}}}\] \[\leq\frac{|m_{2}|^{s-2\alpha+1}}{|m_{2}|^{s+1}}\leq\frac{1}{|m_{2 }|^{2\alpha}}\] because \(|h|^{s+1}=(|h_{1}|^{2}+|h_{2}|^{2})^{\frac{s+1}{2}}\geq 1\) and \(|m_{2}|\neq 0\) since \(|m_{2}|\gg|h_{2}|\). Then, \[\sum_{k_{2}}\sum_{\begin{subarray}{c}h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{V(k_{2},h,\tilde{m})}{\langle h\rangle^ {s}\langle\tilde{m}\rangle^{s}}f_{1}(h)f_{2}(\tilde{m})g(k_{2})\] \[\leq\sum_{k_{2}=h_{2}+m_{2}}\frac{1}{|m_{2}|^{2\alpha}}g(k_{2}) \sum_{h_{1}}f_{1}(h_{1},h_{2})f_{2}(-h_{1},m_{2})\] \[\leq\sum_{k_{2}=h_{2}+m_{2}}\frac{g(k_{2})}{|m_{2}|^{2\alpha}} \left(\sum_{h_{1}}|f_{1}(h_{1},h_{2})|^{2}\right)^{\frac{1}{2}}\left(\sum_{h_ {1}}|f_{2}(-h_{1},m_{2})|^{2}\right)^{\frac{1}{2}}\] \[=\sum_{m_{2}}\sum_{k_{2}}\frac{g(k_{2})}{|m_{2}|^{2\alpha}} \left(\sum_{h_{1}}|f_{1}(h_{1},k_{2}-m_{2})|^{2}\right)^{\frac{1}{2}}\left( \sum_{h_{1}}|f_{2}(-h_{1},m_{2})|^{2}\right)^{\frac{1}{2}}\] \[\leq\left\|g\right\|_{l_{2}}\left\|f_{1}\right\|_{l_{2}}\sum_{m_{ 2}}\frac{1}{|m_{2}|^{2\alpha}}\left(\sum_{h_{1}}|f_{2}(-h_{1},m_{2})|^{2} \right)^{\frac{1}{2}}\] \[\leq\left\|g\right\|_{l_{2}}\left\|f_{1}\right\|_{l_{2}}\left\|f _{2}\right\|_{l_{2}}\left(\sum_{m_{2}}\frac{1}{|m_{2}|^{4\alpha}}\right)^{ \frac{1}{2}}.\] As above, the last term is summable if \[\alpha>\frac{1}{4}.\] 3. \(|h_{2}|\sim|m_{2}|\Rightarrow|k_{2}|\sim|h_{2}|\sim|m_{2}|\). \[\frac{|k_{2}|^{s-2\alpha+1}}{|h|^{s+1}|\tilde{m}|^{s+1}}\sim\frac{|h_{2}|^{s-2 \alpha+1}}{(|h_{1}|^{2}+|h_{2}|^{2})^{\frac{s+1}{2}}(|h_{1}|^{2}+|m_{2}|^{2}) ^{\frac{s+1}{2}}}\] Since \(|k_{2}|\neq 0\) at least one between \(|h_{2}|\) and \(|m_{2}|\) is not zero. Moreover \(|k_{2}|\sim|h_{2}|\sim|m_{2}|\), then \[\frac{|k_{2}|^{s-2\alpha+1}}{|h|^{s+1}|\tilde{m}|^{s+1}}\lesssim\frac{1}{|h_{2 }|^{2\alpha}}\] and we conclude as in the previous case. 
By \(1),2),3)\) we have that \[s\geq 0;\] \[\alpha>\frac{1}{4}.\] Then, by (3.6) and definitions of \(f_{1}\) and \(f_{2}\), \[\left\|\int_{0}^{t}e^{(t-t^{\prime})\nu\partial_{yy}}G(\gamma)dt^ {\prime}\right\|_{H_{y}^{s}} \lesssim\|\gamma\|_{H_{xy}^{s}}^{2}\int_{0}^{t}\frac{1}{(t-t^{ \prime})^{\alpha}}dt^{\prime}\] \[\sim t^{1-\alpha}\|\gamma\|_{H_{xy}^{s}}^{2},\] for \(\frac{1}{4}<\alpha<1\) and \(s\geq 0.\) Finally \[\left\|\int_{0}^{t}e^{(t-t^{\prime})\nu\partial_{yy}}G(\gamma)dt^{\prime}\right\| _{L_{\delta}^{\infty}H_{y}^{s}}\lesssim\delta^{1-\alpha}\|\gamma\|_{L_{\delta}^ {\infty}H_{xy}^{s}}^{2}\] for \(\frac{1}{4}<\alpha<1\) and \(s\geq 0.\) We now define the following functional, \[\Phi_{1}(\mu,\gamma)=e^{\nu t\partial_{yy}}\mu_{0}+\int_{0}^{t}e^{\nu(t-t^{ \prime})\partial_{yy}}G(\gamma)dt^{\prime}, \tag{3.8}\] and as a consequence of the previous proposition we have, **Corollary 3.3**.: _Let \(t\in[0,\delta]\), \(\delta\in\mathbb{R}_{+}\), \(s\geq 0\), \(\alpha\in(\frac{1}{2},1).\) Then,_ \[\|\Phi_{1}(\mu,\gamma)\|_{L_{\delta}^{\infty}H_{y}^{s}}\lesssim\|\mu_{0}\|_{H _{y}^{s}}+\delta^{1-\alpha}\|\gamma\|_{L_{\delta}^{\infty}H_{xy}^{s}}^{2}.\] Proof.: \[\|\Phi_{1}(\mu,\gamma)\|_{L_{\delta}^{\infty}H_{y}^{s}}\leq\underbrace{\|e^ {\nu t\partial_{yy}}\mu_{0}\|_{L_{\delta}^{\infty}H_{y}^{s}}}_{A}+\underbrace{ \left\|\int_{0}^{t}e^{(t-t^{\prime})\nu\partial_{yy}}G(\gamma)dt^{\prime}\right\| _{L_{t}^{\infty}H_{y}^{s}}^{2}}_{B}.\] By Proposition 3.1 we have \[B\lesssim\delta^{1-\alpha}\|\gamma\|_{L_{\delta}^{\infty}H_{xy}^{s}}^{2}.\] On the other hand, \[A=\left\|e^{\nu t\partial_{yy}}\mu_{0}\right\|_{L_{\delta}^{\infty }H_{y}^{s}} =\sup_{t\in[0,\delta]}\left(\sum_{k_{2}}\left|e^{-t\nu k_{2}^{2}} \widehat{\mu}_{0}(k_{2})\right|^{2}\left\langle k_{2}\right\rangle^{2s}\right) ^{\frac{1}{2}}\] \[\lesssim\left(\sum_{k_{2}}\left|\widehat{\mu}_{0}(k_{2})\right| ^{2}\left\langle k_{2}\right\rangle^{2s}\right)^{\frac{1}{2}}=\|\mu_{0}\|_{H _{y}^{s}}.\] #### 3.1.2 Eddy-Vorticity Equation Estimate Similarly, we study the equation for \(\gamma\). **Proposition 3.4**.: _Let \(t\in[0,\delta]\), \(\delta\in\mathbb{R}_{+}\), \(s\geq 0\), \(\alpha\in(\frac{3}{4},1)\). 
Then,_ \[\left\|\int_{0}^{t}e^{\hat{D}^{2}(t-t^{\prime})}F(\mu,\gamma)\ dt ^{\prime}\right\|_{L_{\delta}^{\infty}H_{xy}^{s}}\\ \lesssim\delta^{1-\alpha}\left(\|\gamma\|_{L_{\delta}^{\infty}H_ {xy}^{s}}^{2}+\|\mu\|_{L_{\delta}^{\infty}H_{y}^{s}}\|\gamma\|_{L_{\delta}^{ \infty}H_{xy}^{s}}+\|\gamma\|_{L_{\delta}^{\infty}H_{xy}^{s}}\right).\] Proof.: By definition, \[\left\|\int_{0}^{t}e^{\hat{D}^{2}(t-t^{\prime})}F(\mu,\gamma)\ dt^{ \prime}\right\|_{L^{\infty}_{\delta}H^{s}_{xy}}\\ =\sup_{t\in[0,\delta]}\left(\sum_{k}\left|\int_{0}^{t}\ e^{(i\frac {k_{1}}{|k|^{2}}-\nu|k|^{2})(t-t^{\prime})}\widehat{F(\mu,\gamma)}(k)dt^{ \prime}\right|^{2}\langle k\rangle^{2s}\right)^{\frac{1}{2}}\] where, \[\widehat{F(\mu,\gamma)}(k)\] \[=\mathfrak{F}\left((\partial_{y}\nabla^{-2}\zeta^{\prime})\zeta _{x}^{\prime}-(\partial_{x}\nabla^{-2}\zeta^{\prime})\zeta_{y}^{\prime}-\mu \gamma_{x}+c_{0}\gamma_{x}+\partial_{y}\overline{[(\partial_{x}\nabla^{-2} \zeta^{\prime})\zeta^{\prime}]}\right)\] \[=-\left(i\frac{k_{2}}{|k|^{2}}\widehat{\gamma}\right)\ast(ik_{1} \widehat{\gamma})+\left(i\frac{k_{1}}{|k|^{2}}\widehat{\gamma}\right)\ast(ik_ {2}\widehat{\gamma})-\widehat{\mu}(k_{2})\ast ik_{1}\widehat{\gamma}(k)\] \[+c_{0}ik_{1}\widehat{\gamma}(k)-ik_{2}\left(i\frac{k_{1}}{|k|^{2} }\widehat{\gamma}\ast\widehat{\gamma}\right)\biggr{|}_{[0,k_{2}]}\] \[=\sum_{k=h+m}\left(\frac{h_{2}m_{1}}{|h|^{2}}-\frac{h_{1}m_{2}}{| h|^{2}}\right)\widehat{\gamma}(h)\widehat{\gamma}(m)-\sum_{\begin{subarray}{c}k_{1}=h_{ 1}+0\\ k_{2}=h_{2}+m_{2}\end{subarray}}ih_{1}\widehat{\mu}(m_{2})\widehat{\gamma}(h)\] \[+c_{0}ik_{1}\widehat{\gamma}(k)+\sum_{\begin{subarray}{c}0=k_{1} =h_{1}+m_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{k_{2}h_{1}}{|h|^{2}}\widehat{\gamma}(h) \widehat{\gamma}(m).\] For now, we only consider \[\left\|\int_{0}^{t}e^{\hat{D}^{2}(t-t^{\prime})}F(\mu,\gamma)\ dt ^{\prime}\right\|_{H^{s}_{xy}}^{2}\\ \leq \int_{0}^{t}\left\|\int_{0}^{t}e^{\hat{D}^{2}(t-t^{\prime})}F(\mu,\gamma)\ dt^{\prime}\right\|_{H^{s}_{xy}}^{2}dt^{\prime}\] \[= \int_{0}^{t}\sum_{k}\left|e^{(i\frac{k_{1}}{|k|^{2}}-\nu|k|^{2}) (t-t^{\prime})}\widehat{F(\mu,\gamma)}(k)\right|^{2}\langle k\rangle^{2s}dt^ {\prime}\] \[= \int_{0}^{t}\sum_{k}\left|e^{(i\frac{k_{1}}{|k|^{2}}-\nu|k|^{2}) (t-t^{\prime})}\bigg{(}\sum_{k=h+m}\left(\frac{h_{2}m_{1}}{|h|^{2}}-\frac{h_ {1}m_{2}}{|h|^{2}}\right)\widehat{\gamma}(h)\widehat{\gamma}(m)\] \[+\sum_{\begin{subarray}{c}k_{1}=h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}ih_{1}\widehat{\mu}(m_{2})\widehat{\gamma}(h)+ c_{0}ik_{1}\widehat{\gamma}(k)\] \[+\sum_{\begin{subarray}{c}k_{1}=h_{1}+m_{1}=0\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{k_{2}h_{1}}{|h|^{2}}\widehat{\gamma}(h) \widehat{\gamma}(m)\bigg{)}\Bigg{|}^{2}\langle k\rangle^{2s}dt^{\prime}\] \[\leq\int_{0}^{t}\sum_{k}\left[e^{-\nu|k|^{2}(t-t^{\prime})}\langle k \rangle^{s}\bigg{(}\sum_{k=h+m}\left(\frac{|h||m|+|h||m|}{|h|^{2}}\right)| \widehat{\gamma}(h)||\widehat{\gamma}(m)|\right.\] \[\left.+\sum_{\begin{subarray}{c}k_{1}=h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}|h||\widehat{\mu}(m_{2})||\widehat{\gamma}(h) |+c_{0}|k||\widehat{\gamma}(k)|\right.\] \[\left.+\sum_{\begin{subarray}{c}k_{1}=h_{1}+m_{1}=0\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|k||h|}{|h|^{2}}|\widehat{\gamma}(h)|| \widehat{\gamma}(m)|\right)^{2}dt^{\prime}.\] Multiplying and dividing by \((\nu|k|^{2}(t-t^{\prime}))^{\alpha}\), \[=\int_{0}^{t}\sum_{k}\left[\frac{e^{-\nu|k|^{2}(t-t^{\prime})}( \nu|k|^{2}(t-t^{\prime}))^{\alpha}}{(\nu|k|^{2}(t-t^{\prime}))^{\alpha}} \langle 
k\rangle^{s}\Bigg{(}\sum_{k=h+m}2\frac{|m|}{|h|}|\widehat{\gamma}(h) ||\widehat{\gamma}(m)|\right.\] \[+\left.\sum_{\begin{subarray}{c}k_{1}=h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}|h||\widehat{\mu}(m_{2})||\widehat{\gamma}(h) |+c_{0}|k||\widehat{\gamma}(k)|\right.\] \[+\left.\sum_{\begin{subarray}{c}k_{1}=h_{1}+m_{1}=0\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|k|}{|h|}|\widehat{\gamma}(h)||\widehat{ \gamma}(m)|\right)^{2}dt^{\prime}.\] Since \[e^{-\nu|k|^{2}(t-t^{\prime})}(\nu|k|^{2}(t-t^{\prime}))^{\alpha}\leq C,\quad C \in\mathbb{R} \tag{3.9}\] is uniformly bounded, we study \[\sum_{k}\left[\frac{\langle k\rangle^{s}}{|k|^{2\alpha}}\;\bigg{(} \sum_{k=h+m}2\frac{|m|}{|h|}|\widehat{\gamma}(h)||\widehat{\gamma}(m)|+\sum_{ \begin{subarray}{c}k_{1}=h_{1}+0\\ k_{2}=h_{2}+m_{2}\end{subarray}}|h||\widehat{\mu}(m_{2})||\widehat{\gamma}(h)|\right.\] \[+c_{0}|k||\widehat{\gamma}(k)|+\sum_{ \begin{subarray}{c}h_{1}+m_{1}=0\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|k|}{|h|}|\widehat{\gamma}(h)||\widehat{ \gamma}(m)|\bigg{)}\right]^{2}.\] By duality we want to show \[\sup_{\|g\|_{l^{2}}\leq 1}\sum_{k}\frac{\langle k\rangle^{s}}{|k|^{2 \alpha}}\Bigg{(}\sum_{k=h+m}2\frac{|m|}{|h|}|\widehat{\gamma}(h)||\widehat{ \gamma}(m)|+\sum_{\begin{subarray}{c}k_{1}=h_{1}+0\\ k_{2}=h_{2}+m_{2}\end{subarray}}|h||\widehat{\mu}(m_{2})||\widehat{\gamma}(h)|\] \[+c_{0}|k||\widehat{\gamma}(k)|+\sum_{ \begin{subarray}{c}h_{1}+m_{1}=0\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|k|}{|h|}|\widehat{\gamma}(h)||\widehat{ \gamma}(m)|\Bigg{)}g(k)\] \[\lesssim\big{(}\|f_{1}\|_{l^{2}}^{2}\|g\|_{l^{2}}+\|f_{1}\|_{l^{2 }}\|g\|_{l^{2}}+\|f_{1}\|_{l^{2}}\|f_{2}\|_{l^{2}}\|g\|_{l^{2}}\big{)} \tag{3.10}\] where \[f_{1}(k) :=\langle k\rangle^{s}|\widehat{\gamma}(k)|,\] \[f_{2}(k_{2}) :=\langle k_{2}\rangle^{s}|\widehat{\mu}(k_{2})|.\] We observe that if \(\gamma,\ \mu\in H^{s}\) then \(f_{i}\in l^{2}\) for \(i=1,2\). Since \[\sup(a+b)\leq\sup(a)+\sup(b),\] we split (3.10) and study the following problems \[\sup_{\|g\|_{2}\leq 1}\sum_{k}\frac{\langle k\rangle^{s}}{|k|^{2 \alpha}}\sum_{k=h+m}\frac{|m|}{|h|}\frac{1}{\langle h\rangle^{s}\langle m \rangle^{s}}f_{1}(h)f_{1}(m)g(k)\lesssim\|f_{1}\|_{l^{2}}^{2}\|g\|_{l^{2}} \tag{3.11}\] \[\sup_{\|g\|_{2}\leq 1}\sum_{k}\frac{\langle k\rangle^{s}}{|k|^{2 \alpha}}\sum_{\begin{subarray}{c}h_{1}+m_{1}=0\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|k|}{|h|}\frac{1}{\langle h\rangle^{s} \langle m\rangle^{s}}f_{1}(h)f_{1}(m)g(k)\lesssim\|f_{1}\|_{l^{2}}^{2}\|g\|_{l ^{2}}\] (3.12) \[\sup_{\|g\|_{2}\leq 1}\sum_{k}c_{0}\frac{|k|}{|k|^{2\alpha}}f_{1}( k)g(k)\lesssim\|f_{1}\|_{l^{2}}\|g\|_{l^{2}}\] (3.13) \[\sup_{\|g\|_{2}\leq 1}\sum_{k}\frac{\langle k\rangle^{s}}{|k|^{2 \alpha}}\sum_{\begin{subarray}{c}k_{1}=h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|h|}{\langle h\rangle^{s}\langle m_{2} \rangle^{s}}f_{1}(h)f_{2}(m_{2})g(k)\lesssim\|f_{1}\|_{l^{2}}\|f_{2}\|_{l^{2}} \|g\|_{l^{2}}. \tag{3.14}\] We start with (3.11) and since \[\langle k\rangle^{s}\sim|k|^{s}, \tag{3.15}\] we have \[\frac{\langle k\rangle^{s}}{|k|^{2\alpha}}\frac{|m|}{|h|}\frac{1}{\langle h \rangle^{s}\langle m\rangle^{s}}\sim\frac{|k|^{s-2\alpha}}{|h|^{s+1}|m|^{s-1}}.\] 1. \(|h|\gg|m|\Rightarrow|k|\sim|h|\). 
\[\frac{|k|^{s-2\alpha}}{|h|^{s+1}|m|^{s-1}}\sim\frac{1}{|h|^{2\alpha+1}|m|^{s- 1}}\leq\frac{1}{|m|^{2\alpha+s}}.\] Proceeding as in Proposition 3.1, we get \[\sum_{k}\sum_{k=h+m} \frac{1}{|m|^{2\alpha+s}}f_{1}(h)f_{1}(m)g(k)\] \[= \sum_{m}\frac{1}{|m|^{2\alpha+s}}f_{1}(m)\sum_{k}f_{1}(k-m)g(k)\] \[\leq \sum_{m}\frac{1}{|m|^{2\alpha+s}}f_{1}(m)\left(\sum_{k}f_{1}^{2}( k-m)\right)^{\frac{1}{2}}\left(\sum_{k}g^{2}(k)\right)^{\frac{1}{2}}\] \[\leq \left(\sum_{m}\frac{1}{|m|^{4\alpha+2s}}\right)^{\frac{1}{2}} \left(\sum_{m}f_{1}^{2}(m)\right)^{\frac{1}{2}}\|f_{1}\|_{l^{2}}\|g\|_{l^{2}}\] \[\leq \left(\sum_{m}\frac{1}{|m|^{4\alpha+2s}}\right)^{\frac{1}{2}}\|f _{1}\|_{l^{2}}^{2}\|g\|_{l^{2}}.\] The first term is summable if \[4\alpha+2s>2.\] The worst case is when \(s=0\), but in this situation it is sufficient to choose \[\alpha>\frac{1}{2}.\] 2. \(|h|\ll|m|\Rightarrow|k|\sim|m|\). \[\frac{|k|^{s-2\alpha}}{|h|^{s+1}|m|^{s-1}}\sim\frac{1}{|h|^{s+1}|m|^{2\alpha- 1}}\overbrace{\leq}^{\alpha>\frac{1}{2}}\frac{1}{|h|^{2\alpha+s}}\overbrace{ \leq}^{s\geq 0}\frac{1}{|h|^{2\alpha}}.\] Then, \[\sum_{k}\sum_{k=h+m} \frac{1}{|h|^{2\alpha}}f_{1}(h)f_{1}(m)g(k)\] \[=\sum_{h}\frac{1}{|h|^{2\alpha}}f_{1}(h)\sum_{m}f_{1}(m)g(h+m)\] \[\leq\left(\sum_{h}\frac{1}{|h|^{4\alpha}}\right)^{\frac{1}{2}}\|f_{1} \|_{l^{2}}^{2}\|g\|_{l^{2}}\] which is summable if \(\alpha>\frac{1}{2}\). 3. \(|h|\sim|m|\Rightarrow|k|\sim|h|\sim|m|\). \[\frac{|k|^{s-2\alpha}}{|h|^{s+1}|m|^{s-1}}\sim\frac{1}{|h|^{2\alpha+s}}\leq \frac{1}{|h|^{2\alpha}}.\] we conclude as in the previous case. By \(1),2),3)\) we get \[s\geq 0;\] \[\alpha>\frac{1}{2}.\] Similarly, we study (3.12). We have \[\frac{|k|}{|h|\langle h\rangle^{s}\langle m\rangle^{s}}\sim\frac{|k|}{|h|^{s+1 }|m|^{s}},\] so that we write \[\sum_{k}\frac{\langle k\rangle^{s}}{|k|^{2\alpha}}g(k)\sum_{ \begin{subarray}{c}k_{1}=0\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|k|}{|h|^{s+1}|m|^{s}}f_{1}(h)f_{1}(m) \tag{3.16}\] If we define \(\tilde{m}=(-h_{1},m_{2})\), equation (3.16) becomes \[\sum_{k}\frac{\langle k\rangle^{s}}{|k|^{2\alpha}}g(k)\sum_{k=h+ \tilde{m}}\frac{|k|}{|h|^{s+1}|\tilde{m}|^{s}}f_{1}(h)f_{1}(\tilde{m})\] 1. \(|h|\gg|\tilde{m}|\Rightarrow|k|\sim|h|\). \[\frac{\langle k\rangle^{s}}{|k|^{2\alpha}}\frac{|k|}{|h|^{s+1}|\tilde{m}|^{s}}\sim \frac{1}{|h|^{2\alpha}|\tilde{m}|^{s}}\leq\frac{1}{|\tilde{m}|^{2\alpha}|\tilde {m}|^{s}}\leq\frac{1}{|\tilde{m}|^{2\alpha}}.\] Then, \[\sum_{k}\sum_{k=h+\tilde{m}} \frac{1}{|\tilde{m}|^{2\alpha}}f_{1}(h)f_{1}(\tilde{m})g(k)\] \[= \sum_{\tilde{m}}\frac{1}{|\tilde{m}|^{2\alpha}}f_{1}(\tilde{m}) \sum_{k}f_{1}(k-\tilde{m})g(k)\] \[\leq \sum_{\tilde{m}}\frac{1}{|\tilde{m}|^{2\alpha}}f_{1}(\tilde{m}) \left(\sum_{k}f_{1}^{2}(k-\tilde{m})\right)^{\frac{1}{2}}\left(\sum_{k}g^{2}( k)\right)^{\frac{1}{2}}\] \[\leq \left(\sum_{\tilde{m}}\frac{1}{|\tilde{m}|^{4\alpha}}\right)^{ \frac{1}{2}}\left(\sum_{\tilde{m}}f_{1}^{2}(\tilde{m})\right)^{\frac{1}{2}}\| f_{1}\|_{l^{2}}\|g\|_{l^{2}}\] \[\leq \left(\sum_{\tilde{m}}\frac{1}{|\tilde{m}|^{4\alpha}}\right)^{ \frac{1}{2}}\|f_{1}\|_{l^{2}}^{2}\|g\|_{l^{2}}.\] The first term is summable if \[\alpha>\frac{1}{2}.\] 2. \(|h|\ll|\tilde{m}|\Rightarrow|k|\sim|\tilde{m}|\). 
\[\frac{\langle k\rangle^{s}}{|k|^{2\alpha}}\frac{|k|}{|h|^{s+1}|\tilde{m}|^{s}} \sim\frac{1}{|k|^{2\alpha-1}|h|^{s+1}}\overbrace{\leq}^{\alpha>\frac{1}{2}} \frac{1}{|h|^{2\alpha-1}|h|^{s+1}}\leq\frac{1}{|h|^{2\alpha}}.\] Then, \[\sum_{k}\sum_{k=h+\tilde{m}} \frac{1}{|h|^{2\alpha}}f_{1}(h)f_{1}(\tilde{m})g(k)\] \[=\sum_{h}\frac{1}{|h|^{2\alpha}}f_{1}(h)\sum_{\tilde{m}}f_{1}( \tilde{m})g(h+\tilde{m})\] \[\leq\left(\sum_{h}\frac{1}{|h|^{4\alpha}}\right)^{\frac{1}{2}}\| f_{1}\|_{l^{2}}^{2}\|g\|_{l^{2}},\] which is summable if \(\alpha>\frac{1}{2}\). 3. \(|h|\sim|\tilde{m}|\Rightarrow|k|\sim|h|\sim|\tilde{m}|\). \[\frac{\langle k\rangle^{s}}{|k|^{2\alpha}}\frac{|k|}{|h|^{s+1}|\tilde{m}|^{s}} \lesssim\frac{1}{|h|^{2\alpha}}.\] We conclude as in the previous case. By \(1),2),3)\) we get \[s\geq 0;\] \[\alpha>\frac{1}{2}.\] For (3.13), if we assume \(\alpha\geq\frac{1}{2}\) \[\sum_{k}c_{0}\frac{1}{|k|^{2\alpha-1}}f_{1}(k)g(k) \leq c_{0}\left(\sum_{k}f_{1}^{2}(k)\right)^{\frac{1}{2}}\left( \sum_{k}g^{2}(k)\right)^{\frac{1}{2}}\] \[\sim\|f_{1}\|_{l^{2}}\|g\|_{l_{2}},\] where we used \(|k|\geq 1\). Finally, we study (3.14). We have \[\sum_{k}\frac{\langle k\rangle^{s}}{|k|^{2\alpha}}g(k)\sum_{ \begin{subarray}{c}k_{1}=h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|h|}{\langle h\rangle^{s}\langle m_{2} \rangle^{s}}f_{1}(h)f_{2}(m_{2})\] \[\sim \sum_{k}\frac{|k|^{s}}{|k|^{2\alpha}}g(k)\sum_{\begin{subarray}{ c}k_{1}=h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|h|}{|h|^{s}|m_{2}|^{s}}f_{1}(h)f_{2}(m_{ 2}).\] 1. \(|h|\gg|m|=|m_{2}|\Rightarrow|k|\sim|h|\). Then, \[\frac{|k|^{s}}{|k|^{2\alpha}}\frac{|h|}{|h|^{s}|m_{2}|^{s}}\sim\frac{1}{|h|^{2 \alpha-1}|m_{2}|^{s}}.\] If \(\alpha>1/2\), \[\leq\frac{1}{|m_{2}|^{s-1+2\alpha}},\] we get, \[\sum_{k}\sum_{\begin{subarray}{c}k_{1}=h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{1}{|m_{2}|^{2\alpha-1+s}}f_{1}(h_{1},h_ {2})f_{2}(m_{2})g(k_{1},k_{2})\] \[\leq\sum_{k_{2}=h_{2}+m_{2}}\frac{1}{|m_{2}|^{2\alpha-1+s}}f_{2}(m _{2})\overbrace{\left(\sum_{k_{1}}|f_{1}(k_{1},h_{2})|^{2}\right)^{\frac{1}{2 }}}^{w(h_{2})}\overbrace{\left(\sum_{k_{1}}|g(k_{1},k_{2})|^{2}\right)^{\frac {1}{2}}}^{w(k_{2})}\overbrace{\left(\sum_{k_{1}}|g(k_{1},k_{2})|^{2}\right)^ {\frac{1}{2}}}^{v(k_{2})}\] \[=\sum_{k_{2}}\sum_{m_{2}}\frac{1}{|m_{2}|^{2\alpha-1+s}}f_{2}(m_{ 2})w(k_{2}-m_{2})v(k_{2})\] \[\leq\sum_{m_{2}}\frac{1}{|m_{2}|^{2\alpha-1+s}}f_{2}(m_{2})\|f_{ 1}\|_{l^{2}}\|g\|_{l^{2}}\] \[\leq\left(\sum_{m_{2}}\frac{1}{|m_{2}|^{4\alpha-2+2s}}\right)^{ \frac{1}{2}}\|f_{1}\|_{l^{2}}\|f_{2}\|_{l^{2}}\|g\|_{l^{2}}\] which is summable if \(4\alpha-2+2s>1\). If \(s=0\), then \(\alpha>\frac{3}{4}\). 2. \(|m_{2}|\gg|h|\)\(\Rightarrow\)\(|k|\sim|m_{2}|.\) Then, \[\frac{|k|^{s}}{|k|^{2\alpha}}\frac{|h|}{|h|^{s}|m_{2}|^{s}}\lesssim\frac{|m_{2}|} {|m_{2}|^{2\alpha}|h|^{s}}=\frac{1}{|m_{2}|^{2\alpha-1}|h|^{s}}\leq\frac{1}{|m_ {2}|^{2\alpha-1}}\] recalling that \(|h|\geq 1\) and \(s\geq 0.\) We conclude as in the previous case. 3. \(|m_{2}|\sim|h|\)\(\Rightarrow\)\(|k|\sim|m_{2}|\sim|h|.\)Then, \[\frac{|k|^{s}}{|k|^{2\alpha}}\frac{|h|}{|h|^{s}|m_{2}|^{s}}\sim\frac{1}{|m_{2} |^{2\alpha-1+s}}\] and we conclude as in the previous case. 
Putting all the conditions on \(\alpha\) and \(s\) together, we get \[s\geq 0;\] \[1>\alpha>\frac{3}{4}.\] By definition of \(f_{1}\) and \(f_{2}\) and using (3.9), we derive \[\left\|\int_{0}^{t}e^{\tilde{D}^{2}(t-t^{\prime})} F(\gamma,t^{\prime})dt^{\prime}\right\|_{H^{s}_{xy}}\] \[\lesssim\int_{0}^{t}\frac{1}{(t-t^{\prime})^{\alpha}}dt^{\prime} \left(\|\gamma\|_{H^{s}_{xy}}^{2}+\|\mu\|_{H^{s}_{y}}\|\gamma\|_{H^{s}_{xy}}+ \|\gamma\|_{H^{s}_{xy}}\right)\] \[\sim t^{1-\alpha}\left(\|\gamma\|_{H^{s}_{xy}}^{2}+\|\mu\|_{H^{s} _{y}}\|\gamma\|_{H^{s}_{xy}}+\|\gamma\|_{H^{s}_{xy}}\right).\] Finally, \[\left\|\int_{0}^{t}e^{\tilde{D}^{2}(t-t^{\prime})} F(\gamma,t^{\prime})dt^{\prime}\right\|_{L^{\infty}_{\delta}H^{s}_{ xy}}\] \[\lesssim\delta^{1-\alpha}\left(\|\gamma\|_{L^{\infty}_{\delta}H^ {s}_{xy}}^{2}+\|\mu\|_{L^{\infty}_{\delta}H^{s}_{y}}\|\gamma\|_{L^{\infty}_{ \delta}H^{s}_{xy}}+\|\gamma\|_{L^{\infty}_{\delta}H^{s}_{xy}}\right).\] for \(s\geq 0\) and \(\frac{3}{4}<\alpha<1.\) Let us introduce the following functional, \[\Phi_{2}(\mu,\gamma)=e^{t\tilde{D}^{2}}\gamma_{0}+\int_{0}^{t}e^{\tilde{D}^{2 }(t-t^{\prime})}F(\mu,\gamma)dt^{\prime}. \tag{3.17}\] An immediate consequence of the previous proposition is, **Corollary 3.5**.: _Let \(t\in[0,\delta]\), \(\delta\in\mathbb{R}_{+}\), \(s\geq 0\), \(\alpha\in(\frac{3}{4},1).\) Then,_ \[\|\Phi_{2}(\mu,\gamma)\|_{L^{\infty}_{\delta}H^{s}_{xy}}\] \[\lesssim\|\gamma_{0}\|_{H^{s}_{xy}}+\delta^{1-\alpha}\|\gamma\|_{ L^{\infty}_{\delta}H^{s}_{xy}}\left(\|\gamma\|_{L^{\infty}_{\delta}H^{s}_{xy}}+ \|\mu\|_{L^{\infty}_{\delta}H^{s}_{y}}+1\right).\] Proof.: \[\|\Phi_{2}(\gamma)\|_{L^{\infty}_{\delta}H^{s}_{xy}}\leq\underbrace{\left\|e^ {t\tilde{D}^{2}}\gamma_{0}\right\|_{L^{\infty}_{\delta}H^{s}_{xy}}}_{A}+ \underbrace{\left\|\int_{0}^{t}e^{\tilde{D}^{2}(t-t^{\prime})}F(\gamma,t^{ \prime})dt^{\prime}\right\|_{L^{\infty}_{\delta}H^{s}_{xy}}}_{B}\] By Proposition 3.4 we have \[B\lesssim\delta^{1-\alpha}\left(\|\gamma\|_{L^{\infty}_{\delta}H^{x}_{xy}}^{2}+\| \mu\|_{L^{\infty}_{\delta}H^{x}_{y}}\|\gamma\|_{L^{\infty}_{\delta}H^{x}_{xy}}+ \|\gamma\|_{L^{\infty}_{\delta}H^{x}_{xy}}\right).\] On the other hand, \[A=\left\|\mathfrak{F}(e^{t\hat{D}^{2}}\gamma_{0})\right\|_{H^{s}} =\left(\sum_{k}\left|e^{(i\frac{k_{1}}{|k|^{2}}-\nu|k|^{2})}\widehat {\gamma}_{0}(k)\right|^{2}\langle k\rangle^{2s}\right)^{\frac{1}{2}}\] \[\lesssim\left(\sum_{k}|\widehat{\gamma}_{0}(k)|^{2}\langle k \rangle^{2s}\right)^{\frac{1}{2}}\] \[=\|\gamma_{0}\|_{H^{x}_{xy}}.\] #### 3.1.3 The \(\Phi\)-Functional With \(\Phi_{1}\) defined in (3.8) and \(\Phi_{2}\) defined in (3.17), let us define \[\Phi(\mu,\gamma)=(\Phi_{1},\Phi_{2})\] \[=\left(e^{\nu t\partial_{yy}}\mu_{0}+\int_{0}^{t}e^{\nu(t-t^{ \prime})\partial_{yy}}G(\gamma)\ dt^{\prime},\ e^{\hat{D}^{2}t}\gamma_{0}+\int_{0 }^{t}e^{\hat{D}^{2}(t-t^{\prime})}F(\mu,\gamma)\ dt^{\prime}\right)\] By Corollary 3.3 and 3.5 we have **Corollary 3.6**.: _Let \(t\in[0,\delta]\), \(\delta\in\mathbb{R}_{+}\), \(s\geq 0\), \(\alpha\in(\frac{3}{4},1)\). 
Then,_ \[\|\Phi(\mu,\gamma)\|_{X^{s,\delta}}\leq C_{1}\|\mu_{0}\|_{H^{s}_{y }}+C_{2}\|\gamma_{0}\|_{H^{s}_{xy}} \tag{3.18}\] \[\quad+C_{3}\delta^{1-\alpha}\|\gamma\|_{L^{\infty}_{\delta}H^{x}_ {xy}}^{2}+C_{4}\delta^{1-\alpha}\|\gamma\|_{L^{\infty}_{\delta}H^{s}_{xy}} \left(\|\gamma\|_{L^{\infty}_{\delta}H^{s}_{xy}}+\|\mu\|_{L^{\infty}_{\delta}H ^{s}_{y}}+1\right).\] Proof.: Since \[\|\Phi(\mu,\gamma)\|_{X^{s,\delta}}=\|\Phi_{1}(\mu,\gamma)\|_{L^{\infty}_{t}H^ {s}_{y}}+\|\Phi_{2}(\mu,\gamma)\|_{L^{\infty}_{\delta}H^{s}_{xy}},\] we use the computations made in Corollary 3.3 and 3.5 which give us \[\|\Phi_{1}(\mu,\gamma)\|_{L^{\infty}_{t}H^{x}_{xy}} \leq C_{1}\|\mu_{0}\|_{H^{s}_{y}}+C_{3}\delta^{1-\alpha}\|\gamma\| _{L^{\infty}_{\delta}H^{x}_{xy}}^{2}\] \[\|\Phi_{2}(\mu,\gamma)\|_{L^{\infty}_{t}H^{x}_{xy}} \leq C_{3}\|\gamma_{0}\|_{H^{s}_{xy}}\] \[+C_{4}\delta^{1-\alpha}\|\gamma\|_{L^{\infty}_{\delta}H^{x}_{xy}} \left(\|\gamma\|_{L^{\infty}_{\delta}H^{x}_{xy}}+\|\mu\|_{L^{\infty}_{\delta}H ^{s}_{y}}+1\right).\] Combining the results we get (3.18). ### Local Well-Posedness Let us consider \[B(0,R)\subset X^{s,\delta}\] where \(R:=2\left(C_{1}\|\mu_{0}\|_{H^{s}_{y}}+C_{2}\|\gamma_{0}\|_{H^{s}_{xy}}\right)\). If we set \(C_{1},C_{2}\gg 1\) such that \(R\geq 1\) and we fix \(\delta\) sufficiently small such that \[C_{i}\delta^{1-\alpha}R<\frac{1}{8},\ \text{with}\ i=3,4, \tag{3.19}\] then by (3.18) we get \[\Phi:B(0,R)\to B(0,R).\] We remark that (3.19) implies \[\delta\sim R^{-\frac{1}{1-\alpha}}\sim(\|\gamma_{0}\|_{H^{s}_{xy}}+\|\mu_{0}\|_{H ^{s}_{y}})^{-\frac{1}{1-\alpha}}. \tag{3.20}\] #### 3.2.1 Contraction We now show that \(\Phi\) is a contraction on \(B(0,R).\) As in the previous section, we proceed by doing one estimate at a time and then combine the results. 
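Before carrying out the contraction estimates, it is convenient to record the two elementary facts behind the constants in (3.6) and (3.9) and behind the factors \(\delta^{1-\alpha}\) appearing in Propositions 3.1 and 3.4 (and again below). First, by one-variable calculus, for \(\alpha\in(0,1]\), \[\frac{d}{dx}\left(x^{\alpha}e^{-x}\right)=(\alpha-x)\,x^{\alpha-1}e^{-x}=0\iff x=\alpha,\qquad\text{hence}\qquad\sup_{x\geq 0}x^{\alpha}e^{-x}=\left(\frac{\alpha}{e}\right)^{\alpha},\] which gives the uniform bound invoked in (3.6) and (3.9). Second, for \(0<\alpha<1\) and \(t\in[0,\delta]\), \[\int_{0}^{t}\frac{dt^{\prime}}{(t-t^{\prime})^{\alpha}}=\int_{0}^{t}\frac{d\tau}{\tau^{\alpha}}=\frac{t^{1-\alpha}}{1-\alpha}\leq\frac{\delta^{1-\alpha}}{1-\alpha},\] which is the computation producing each factor \(\delta^{1-\alpha}\).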
**Proposition 3.7**.: _Let \((\mu_{1},\gamma_{1}),\ (\mu_{2},\gamma_{2})\in B(0,R)\) be vectors functions such that \(\mu_{1}(0,y)=\mu_{2}(0,y)=:\mu_{0}\) and let \(s\geq 0\), \(\alpha\in(\frac{1}{2},1)\) Then,_ \[\|\Phi_{1}(\mu_{1},\gamma_{1})-\Phi_{1}(\mu_{2},\gamma_{2})\|_{L^{\infty}_{s} H^{s}_{y}}\lesssim\delta^{1-\alpha}R\|(\mu_{1},\gamma_{1})-(\mu_{2},\gamma_{2}) \|_{X^{s,\delta}}.\] Proof.: By definition we have \[\Phi_{1}(\mu_{1},\gamma_{1})=e^{\nu t\partial_{yy}}\mu_{0}+\int_{ 0}^{t}e^{\nu(t-t^{\prime})\partial_{yy}}G(\gamma_{1})dt^{\prime},\] \[\Phi_{1}(\mu_{2},\gamma_{2})=e^{\nu t\partial_{yy}}\mu_{0}+\int_{ 0}^{t}e^{\nu(t-t^{\prime})\partial_{yy}}G(\gamma_{2})dt^{\prime}.\] Then, \[\Phi_{1}(\mu_{1},\gamma_{1})-\Phi_{1}(\mu_{2},\gamma_{2})=\int_{0}^{t}e^{\nu(t -t^{\prime})\partial_{yy}}\left(G(\gamma_{1})-G(\gamma_{2})\right)dt^{\prime}\] where, \[G(\gamma_{1})-G(\gamma_{2})=\partial_{y}\left(\overline{[\partial _{x}\nabla^{-2}\gamma_{1}\partial_{y}\nabla^{-2}\gamma_{1}]-[\partial_{x} \nabla^{-2}\gamma_{2}\partial_{y}\nabla^{-2}\gamma_{2}]}\right)\] \[= \partial_{y}\left(\overline{[\partial_{x}\nabla^{-2}(\gamma_{1}- \gamma_{2})\partial_{y}\nabla^{-2}\gamma_{1}]+[\partial_{x}\nabla^{-2}\gamma _{2}\partial_{y}\nabla^{-2}(\gamma_{1}-\gamma_{2})]}\right)\] and \[\|\Phi_{1}(\mu_{1},\gamma_{1})-\Phi_{1}(\mu_{2},\gamma_{2})\|_{L^{\infty}_{ \delta}H^{s}_{y}}=\left\|\int_{0}^{t}e^{\nu(t-t^{\prime})\partial_{yy}}\left(G (\gamma_{1})-G(\gamma_{2})\right)dt^{\prime}\right\|_{L^{\infty}_{\delta}H^{s} _{y}}.\] As seen in Proposition 3.1, by duality we can study the following two equivalent quantities \[\sup_{\|g\|_{2}\leq 1}\sum_{k_{2}}\sum_{\begin{subarray}{c}h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{V(k_{2},h,\tilde{m})}{\langle h\rangle^{ s}\langle\tilde{m}\rangle^{s}}f_{1}(h)f_{2}(\tilde{m})g(k_{2})\lesssim\|f_{1}\|_{ l^{2}}\|f_{2}\|_{l^{2}}\|g\|_{l^{2}}\] \[\sup_{\|g\|_{2}\leq 1}\sum_{k_{2}}\sum_{\begin{subarray}{c}h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{V(k_{2},h,\tilde{m})}{\langle h\rangle^{ s}\langle\tilde{m}\rangle^{s}}f_{3}(h)f_{1}(\tilde{m})g(k_{2})\lesssim\|f_{1}\|_{ l^{2}}\|f_{3}\|_{l^{2}}\|g\|_{l^{2}}.\] where \[\tilde{m} :=(-h_{1},m_{2});\] \[V(k_{2},h,\tilde{m}) :=\frac{\langle k_{2}\rangle^{s}}{|k_{2}|^{2\alpha-1}|h||\tilde{m}|};\] \[f_{1}(k) :=\langle k\rangle^{s}|(\widehat{\gamma_{1}-\gamma_{2}})(k)|;\] \[f_{2}(k) :=\langle k\rangle^{s}|\widehat{\gamma_{1}}(k)|;\] \[f_{3}(k) :=\langle k\rangle^{s}|\widehat{\gamma_{2}}(k)|.\] Since we have the same coefficients as in the Proposition 3.1, using the same computations we get \[\|\Phi_{1}(\mu_{1},\gamma_{1})-\Phi_{1}(\mu_{2},\gamma_{2})\|_{L _{\delta}^{\infty}H_{y}^{s}}\] \[\lesssim\delta^{1-\alpha}\|\gamma_{1}-\gamma_{2}\|_{L_{\delta}^{ \infty}H_{xy}^{s}}\left(\|\gamma_{1}\|_{L_{\delta}^{\infty}H_{xy}^{s}}+\| \gamma_{2}\|_{L_{\delta}^{\infty}H_{xy}^{s}}\right)\] \[\lesssim\delta^{1-\alpha}R\|\gamma_{1}-\gamma_{2}\|_{L_{\delta}^ {\infty}H_{xy}^{s}}\] \[\lesssim\delta^{1-\alpha}R\|(\mu_{1},\gamma_{1})-(\mu_{2},\gamma _{2})\|_{X^{s,\delta}}.\] for \(s\geq 0\) and \(\frac{1}{2}<\alpha<1\). **Proposition 3.8**.: _Let \((\mu_{1},\gamma_{1}),\ (\mu_{2},\gamma_{2})\in B(0,R)\) be vectors functions such that \(\gamma_{1}(0,x,y)=\gamma_{2}(0,x,y)=:\gamma_{0}(x,y)\) and let \(s\geq 0\), \(\alpha\in(\frac{3}{4},1)\). 
Then,_ \[\|\Phi_{2}(\mu_{1},\gamma_{1})-\Phi_{2}(\mu_{2},\gamma_{2})\|_{L_{\delta}^{ \infty}H_{xy}^{s}}\lesssim(1+3R)\delta^{1-\alpha}\|(\mu_{1},\gamma_{1})-(\mu_ {2},\gamma_{2})\|_{X^{s,\delta}},\] _with \(\alpha\in\left(\frac{3}{4},1\right)\)._ Proof.: By definition we have \[\Phi_{2}(\mu_{1},\gamma_{1}) =e^{t\tilde{D}^{2}}\gamma_{0}+\int_{0}^{t}e^{(t-t^{\prime}) \tilde{D}^{2}}F(\mu_{1},\gamma_{1})\ dt^{\prime}\] \[\Phi_{2}(\mu_{2},\gamma_{2}) =e^{t\tilde{D}^{2}}\gamma_{0}+\int_{0}^{t}e^{(t-t^{\prime})\tilde {D}^{2}}F(\mu_{2},\gamma_{2})\ dt^{\prime},\] where \[F(\mu,\gamma)=\overbrace{\gamma_{x}\partial_{y}\nabla^{-2}\gamma}^{A}- \overbrace{\gamma_{y}\partial_{x}\nabla^{-2}\gamma}^{B}+\overbrace{\partial_ {y}[(\gamma\partial_{x}\nabla^{-2}\gamma)]}^{C}-\overbrace{\mu\gamma_{x}}^{D }+\overbrace{\mathcal{C}_{0}\gamma_{x}}^{E}.\] Then, \[\|\Phi_{2}(\mu_{1},\gamma_{1})-\Phi_{2}(\mu_{2},\gamma_{2})\|_{L_ {\delta}^{\infty}H_{xy}^{s}}\] \[=\left\|\int_{0}^{t}e^{(i\frac{k_{1}}{|k|^{2}}-\nu|k|^{2})(t-t^{ \prime})}(F(\mu_{1},\gamma_{1})-F(\mu_{2},\gamma_{2}))\ dt^{\prime}\right\|_{L _{\delta}^{\infty}H_{x,y}^{s}}\] \[=\sup_{t\in[0,\delta]}\left(\sum_{k}\left|\int_{0}^{t}e^{(i\frac {k_{1}}{|k|^{2}}-\nu|k|^{2})(t-t^{\prime})}(F(\mu_{1},\widehat{\gamma_{1})-F( \mu_{2},\gamma_{2})})\ dt^{\prime}\right|^{2}\langle k\rangle^{2s}\right)^{ \frac{1}{2}}.\] We observe that \[A_{\gamma_{1}} =\gamma_{1\,x}\partial_{y}\nabla^{-2}\gamma_{1};\ \ \ \ A_{\gamma_{2}}=\gamma_{2\,x} \partial_{y}\nabla^{-2}\gamma_{2}.\] \[A_{\gamma_{1}}-A_{\gamma_{2}} =\gamma_{1\,x}\partial_{y}\nabla^{-2}\gamma_{1}-\gamma_{2\,x} \partial_{y}\nabla^{-2}\gamma_{2}\] \[=(\gamma_{1\,x}-\gamma_{2\,x})\partial_{y}\nabla^{-2}\gamma_{1}+ \gamma_{2\,x}\partial_{y}\nabla^{-2}(\gamma_{1}-\gamma_{2}).\] \[B_{\gamma_{1}} =\gamma_{1\,y}\partial_{x}\nabla^{-2}\gamma_{1};\ \ \ \ B_{\gamma_{2}}= \gamma_{2\,y}\partial_{x}\nabla^{-2}\gamma_{2}.\] \[B_{\gamma_{1}}-B_{\gamma_{2}} =\gamma_{1\,y}\partial_{x}\nabla^{-2}\gamma_{1}-\gamma_{2\,y} \partial_{x}\nabla^{-2}\gamma_{2}\] \[=(\gamma_{1\,y}-\gamma_{2\,y})\partial_{x}\nabla^{-2}\gamma_{1}+ \gamma_{2\,y}\partial_{x}\nabla^{-2}(\gamma_{1}-\gamma_{2}).\] \[C_{\gamma_{1}} =\partial_{y}(\overline{\gamma_{1}\partial_{x}\nabla^{-2}\gamma_ {1}});\ \ \ \ C_{\gamma_{2}}=\partial_{y}(\overline{\gamma_{2}\partial_{x}\nabla^{-2} \gamma_{2}}).\] \[C_{\gamma_{1}}-C_{\gamma_{2}} =\partial_{y}(\overline{\gamma_{1}\partial_{x}\nabla^{-2}\gamma_ {1}})-\partial_{y}\overline{(\gamma_{2}\partial_{x}\nabla^{-2}\gamma_{2})}\] \[=\partial_{y}\overline{((\gamma_{1}-\gamma_{2})\partial_{x}\nabla ^{-2}\gamma_{1})}+\partial_{y}\overline{(\gamma_{2}\partial_{x}\nabla^{-2}( \gamma_{1}-\gamma_{2}))}\] \[D_{\mu_{1},\gamma_{1}} =\mu_{1}\gamma_{1\,x};\ \ \ \ D_{\mu_{2},\gamma_{2}}=\mu_{2}\gamma_{2\,x}.\] \[D_{\mu_{1},\gamma_{1}}-D_{\mu_{2},\gamma_{2}} =\mu_{1}\gamma_{1\,x}-\mu_{2}\gamma_{2\,x}\] \[=\mu_{1}(\gamma_{1}-\gamma_{2})_{x}+(\mu_{1}-\mu_{2})\gamma_{2\,x}\] \[E_{\gamma_{1}} =c_{0}\gamma_{1\,x};\ \ \ \ E_{\gamma_{2}}=c_{0}\gamma_{2\,x}.\] \[E_{\gamma_{1}}-E_{\gamma_{2}} =c_{0}\left(\gamma_{1\,x}-\gamma_{2\,x}\right).\] Then, \[F(\mu_{1},\gamma_{1})-F(\mu_{2},\gamma_{2}) =(\gamma_{1\,x}-\gamma_{2\,x})\partial_{y}\nabla^{-2}\gamma_{1}+ \gamma_{2\,x}\partial_{y}\nabla^{-2}(\gamma_{1}-\gamma_{2})\] \[-(\gamma_{1\,y}-\gamma_{2\,y})\partial_{x}\nabla^{-2}\gamma_{1}- \gamma_{2\,y}\partial_{x}\nabla^{-2}(\gamma_{1}-\gamma_{2})\] \[+\partial_{y}\overline{((\gamma_{1}-\gamma_{2})\partial_{x} 
\nabla^{-2}\gamma_{1})}+\partial_{y}\overline{(\gamma_{2}\partial_{x}\nabla^{ -2}(\gamma_{1}-\gamma_{2}))}\] \[-\mu_{1}(\gamma_{1}-\gamma_{2})_{x}-(\mu_{1}-\mu_{2})\gamma_{2\,x }+c_{0}\left(\gamma_{1\,x}-\gamma_{2\,x}\right),\] and \[\left(F(\mu_{1},\widehat{\gamma_{1}})\widehat{-F}(\mu_{2},\gamma_ {2})\right)(k)\] \[=ik_{1}(\widehat{\gamma_{1}-\gamma_{2}})(k)*(-i\frac{k_{2}}{|k|^{ 2}}\widehat{\gamma_{1}})(k)+(ik_{1}\widehat{\gamma_{2}})(k)*(-i\frac{k_{2}}{|k |^{2}}(\widehat{\gamma_{1}-\gamma_{2}}))(k)\] \[-ik_{2}(\widehat{\gamma_{1}-\gamma_{2}})(k)*(-i\frac{k_{1}}{|k|^{ 2}}\widehat{\gamma_{1}})(k)-(ik_{2}\widehat{\gamma_{2}})(k)*(-i\frac{k_{1}}{| k|^{2}}(\widehat{\gamma_{1}-\gamma_{2}}))(k)\] \[+ik_{2}\left((\widehat{\gamma_{1}-\gamma_{2}})(k)*(-i\frac{k_{1}}{| k|^{2}}\widehat{\gamma_{1}})(k)+\widehat{\gamma_{2}}(k)*(-i\frac{k_{1}}{|k|^{2}})( \widehat{\gamma_{1}-\gamma_{2}})(k)\right)\biggr{|}_{[0,k_{2}]}\] \[-\widehat{\mu_{1}}(k_{2})*ik_{1}(\widehat{\gamma_{1}-\gamma_{2}}) (k)-(\widehat{\mu_{1}-\mu_{2}})(k_{2})*ik_{1}\widehat{\gamma_{2}}(k)+c_{0}ik_{1} (\widehat{\gamma-\gamma_{2}})(k)\] \[=\sum_{k=h+m}\frac{m_{1}h_{2}-m_{2}h_{1}}{|h|^{2}}\big{[}(\widehat{ \gamma_{1}-\gamma_{2}})(m)\widehat{\gamma_{1}}(h)+(\widehat{\gamma_{1}-\gamma_{2 }})(h)\widehat{\gamma_{2}}(m)\big{]}\] \[+\sum_{(0,k_{2})}\frac{k_{2}h_{1}}{|h|^{2}}\big{[}(\widehat{ \gamma_{1}-\gamma_{2}})(m)\widehat{\gamma_{1}}(h)+(\widehat{\gamma_{1}-\gamma_ {2}})(h)\widehat{\gamma_{2}}(m)\big{]}\] \[-\sum_{\begin{subarray}{c}k_{1}=h_{1}+0\\ k_{2}=h_{2}+m_{2}\end{subarray}}ih_{1}\big{[}\widehat{\mu_{1}}(m_{2})( \widehat{\gamma_{1}-\gamma_{2}})(h)+(\widehat{\mu_{1}-\mu_{2}})(m_{2})\widehat {\gamma_{2}}(h)\big{]}\] \[+c_{0}ik_{1}(\widehat{\gamma_{1}-\gamma_{2}})(k).\] Therefore, \[\|\Phi_{2}(\mu_{1},\gamma_{1})-\Phi_{2}(\mu_{2},\gamma_{2})\|_{L _{q}^{\infty}H_{xy}^{s}}\] \[=\sup_{t\in[0,\delta]}\left(\sum_{k}\bigg{|}\int_{0}^{t}e^{(i \frac{k_{1}}{|k|^{2}}-\nu|k|^{2})(t-t^{\prime})}\right.\] \[\qquad\left.\cdot\left(\sum_{k=h+m}\frac{m_{1}h_{2}-m_{2}h_{1}}{|h |^{2}}\big{[}(\widehat{\gamma_{1}-\gamma_{2}})(m)\widehat{\gamma_{1}}(h)+( \widehat{\gamma_{1}-\gamma_{2}})(h)\widehat{\gamma_{2}}(m)\big{]}\right.\right.\] \[\qquad\qquad\qquad\left.\left.+\sum_{\begin{subarray}{c}k_{1}=0 \\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{k_{2}h_{1}}{|h|^{2}}\big{[}(\widehat{ \gamma_{1}-\gamma_{2}})(m)\widehat{\gamma_{1}}(h)+(\widehat{\gamma_{1}- \gamma_{2}})(h)\widehat{\gamma_{2}}(m)\big{]}\right.\right.\] \[\qquad\qquad\left.-\sum_{\begin{subarray}{c}k_{1}=h_{1}+0\\ k_{2}=h_{2}+m_{2}\end{subarray}}ih_{1}\big{[}\widehat{\mu_{1}}(m_{2})( \widehat{\gamma_{1}-\gamma_{2}})(h)+(\widehat{\mu_{1}-\mu_{2}})(m_{2}) \widehat{\gamma_{2}}(h)\big{]}\right.\] \[\qquad\qquad\left.+c_{0}ik_{1}(\widehat{\gamma_{1}-\gamma_{2}})( k)\right)\!dt^{\prime}\!\bigg{|}^{2}\langle k\rangle^{2s}\right)^{\frac{1}{2}}.\] As in the proof of Proposition 3.4, by duality it is sufficient to show \[\sup_{\|g\|_{l^{2}}\leq 1}\sum_{k}2\frac{\langle k\rangle^{s}}{|k|^{2\alpha}} \sum_{k=h+m}\frac{|m|}{|h|}\frac{1}{\langle h\rangle^{s}\langle m\rangle^{s}} f_{1}(h)f_{2}(m)g(k)\lesssim\|f_{1}\|_{l^{2}}\|f_{2}\|_{l^{2}}\|g\|_{l^{2}}\] \[\sup_{\|g\|_{l^{2}}\leq 1}\sum_{k}2\frac{\langle k\rangle^{s}}{|k|^{2 \alpha}}\sum_{k=h+m}\frac{|m|}{|h|}\frac{1}{\langle h\rangle^{s}\langle m\rangle ^{s}}f_{2}(h)f_{3}(m)g(k)\lesssim\|f_{2}\|_{l^{2}}\|f_{3}\|_{l^{2}}\|g\|_{l^{2}}\] \[\sup_{\|g\|_{l^{2}}\leq 1}\sum_{k}\frac{\langle k\rangle^{s}}{|k|^{2 \alpha}}\sum_{\begin{subarray}{c}k_{1}=0\\ 
k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|k|}{|h|\langle h\rangle^{s}\langle m \rangle^{s}}f_{1}(h)f_{2}(m)g(k)\lesssim\|f_{1}\|_{l^{2}}\|f_{2}\|_{l^{2}}\|g \|_{l^{2}}\] \[\sup_{\|g\|_{l^{2}}\leq 1}\sum_{k}\frac{\langle k\rangle^{s}}{|k|^{2 \alpha}}\sum_{\begin{subarray}{c}k_{1}=h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|k|}{|h|\langle h\rangle^{s}\langle m \rangle^{s}}f_{2}(h)f_{3}(m)g(k)\lesssim\|f_{2}\|_{l^{2}}\|f_{3}\|_{l^{2}}\|g \|_{l^{2}}\] \[\sup_{\|g\|_{l^{2}}\leq 1}\sum_{k}\frac{\langle k\rangle^{s}}{|k|^{2 \alpha}}\sum_{\begin{subarray}{c}k_{1}=h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|h|}{\langle h\rangle^{s}\langle m_{2} \rangle^{s}}f_{2}(h)f_{4}(m_{2})g(k)\lesssim\|f_{2}\|_{l^{2}}\|f_{4}\|_{l^{2}} \|g\|_{l^{2}}\] \[\sup_{\|g\|_{l^{2}}\leq 1}\sum_{k}\frac{\langle k\rangle^{s}}{|k|^{2\alpha}} \sum_{\begin{subarray}{c}k_{1}=h_{1}\\ k_{2}=h_{2}+m_{2}\end{subarray}}\frac{|h|}{\langle h\rangle^{s}\langle m_{2} \rangle^{s}}f_{3}(h)f_{5}(m_{2})g(k)\lesssim\|f_{3}\|_{l^{2}}\|f_{5}\|_{l^{2}} \|g\|_{l^{2}}\] \[\sup_{\|g\|_{l^{2}}\leq 1}\sum_{k}c_{0}\frac{|k|}{|k|^{2\alpha}}f_{2}(k)g(k) \lesssim\|f_{2}\|_{l^{2}}\|g\|_{l^{2}},\] where we define \[f_{1}(k):= \langle k\rangle^{s}|\widehat{\gamma_{1}}(k)|\] \[f_{2}(k):= \langle k\rangle^{s}|(\widehat{\gamma_{1}-\gamma_{2}})(k)|\] \[f_{3}(k):= \langle k\rangle^{s}|\widehat{\gamma_{2}}(k)|\] \[f_{4}(k_{2}):= \langle k_{2}\rangle^{s}|\widehat{\mu_{1}}(k_{2})|\] \[f_{5}(k_{2}):= \langle k_{2}\rangle^{s}|(\widehat{\mu_{1}-\mu_{2}})(k_{2})|.\] Since the coefficients of the sum are exactly those of Proposition 3.4, we have \[s\geq 0,\] \[\alpha>\frac{3}{4}\] and \[\|\Phi_{2}(\mu_{1},\gamma_{1})-\Phi_{2}(\mu_{2},\gamma_{2}) \|_{H^{s}_{xy}}\lesssim\int_{0}^{t}\frac{1}{(t-t^{\prime})^{\alpha }}\ dt^{\prime}(2\|\gamma_{1}-\gamma_{2}\|_{H^{s}_{xy}}\|\gamma_{1}\|_{H^{s}_{ xy}}\] \[\qquad+2\|\gamma_{1}-\gamma_{2}\|_{H^{s}_{xy}}\|\gamma_{2}\|_{H^ {s}_{xy}}+\|\gamma_{1}-\gamma_{2}\|_{H^{s}_{xy}}\|\mu_{1}\|_{H^{s}_{y}}\] \[\qquad+\|\gamma_{2}\|_{H^{s}_{xy}}\|\mu_{1}-\mu_{2}\|_{H^{s}_{y}} +\|\gamma_{1}-\gamma_{2}\|_{H^{s}_{xy}})\] \[\sim t^{1-\alpha}\left(\|\gamma_{1}-\gamma_{2}\|_{H^{s}_{xy}}(\| \gamma_{1}\|_{H^{s}_{xy}}+\|\gamma_{2}\|_{H^{s}_{xy}}+\|\mu_{1}\|_{H^{s}_{xy}}+1)\right.\] \[\qquad\left.+\|\gamma_{2}\|_{H^{s}_{xy}}\|\mu_{1}-\mu_{2}\|_{H^{s} _{y}}\right)\] \[\lesssim(1+ 3R)t^{1-\alpha}\left(\|\gamma_{1}-\gamma_{2}\|_{H^{s}_{xy}}+\| \mu_{1}-\mu_{2}\|_{H^{s}_{y}}\right).\] Then, \[\|\Phi_{2}(\mu_{1},\gamma_{1})-\Phi_{2}(\mu_{2},\gamma_{2})\|_{L^{\infty}_{ \delta}H^{s}_{x,y}}\lesssim(1+3R)\delta^{1-\alpha}\|(\mu_{1},\gamma_{1})-(\mu _{2},\gamma_{2})\|_{X^{s,\delta}}.\] with \(s\geq 0\) and \(\frac{3}{4}<\alpha<1\). Finally, combining Proposition 3.7 and 3.8 we get **Proposition 3.9**.: _Let \((\mu_{1},\gamma_{1}),\ (\mu_{2},\gamma_{2})\in B(0,R)\subset X^{s,\delta}\) be vector functions and let \(s\geq 0\) and \(\alpha\in(\frac{3}{4},1)\). Then_ \[\|\Phi(\mu_{1},\gamma_{1})-\Phi(\mu_{2},\gamma_{2})\|_{X^{s,\delta}}\leq C(1+ 3R)\delta^{1-\alpha}\|(\mu_{1},\gamma_{1})-(\mu_{2},\gamma_{2})\|_{X^{s,\delta }}. 
\tag{3.21}\] Proof.: Since \[\|\Phi(\mu_{1},\gamma_{1})-\Phi(\mu_{2},\gamma_{2})\|_{X^{s,\delta }} =\|\Phi_{1}(\mu_{1},\gamma_{1})-\Phi_{1}(\mu_{2},\gamma_{2})\|_{L^{ \infty}_{\delta}H^{s}_{y}}\] \[+\|\Phi_{2}(\mu_{1},\gamma_{1})-\Phi_{2}(\mu_{2},\gamma_{2})\|_{L^ {\infty}_{\delta}H^{s}_{xy}},\] and recalling that from Propositions 3.7 and 3.8 we have \[\|\Phi_{1}(\mu_{1},\gamma_{1})-\Phi_{1}(\mu_{2},\gamma_{2})\|_{L_{ \delta}^{\infty}H_{y}^{s}}\leq C_{1}\delta^{1-\alpha}R\|(\mu_{1},\gamma_{1})-( \mu_{2},\gamma_{2})\|_{X^{s,\delta}}\] \[\|\Phi_{2}(\mu_{1},\gamma_{1})-\Phi_{2}(\mu_{2},\gamma_{2})\|_{L_ {\delta}^{\infty}H_{xy}^{s}}\leq C_{2}(1+3R)\delta^{1-\alpha}\|(\mu_{1},\gamma_ {1})-(\mu_{2},\gamma_{2})\|_{X^{s,\delta}}\] we get (3.21). If we set \(\delta\) sufficiently small such that \[C(1+3R)\delta^{1-\alpha}\leq\frac{1}{2},\] then \(\Phi\) is a contraction on \(B(0,R)\). #### 3.2.2 Well-Posedness on \(X^{s,\delta}\) **Corollary 3.10**.: _Under the previous assumption,_ \[\exists!\ (\mu,\gamma)\in B(0,R)\subset X^{s,\delta}\ |\ (\mu,\gamma)=\Phi(\mu, \gamma).\] Proof.: By fixed point theorem. Next step is to extend the existence and uniqueness of the solution from \(B(0,R)\) to the whole space \(X^{s,\delta}\). **Proposition 3.11**.: _Suppose there exists \((\tilde{\mu},\tilde{\gamma})\in X^{s,\delta}\), \(s\geq 0\), solution of (3.2) such that_ \[(\tilde{\mu},\tilde{\gamma})= \left(e^{\nu t\partial_{yy}}\mu_{0}+\int_{0}^{t}e^{\nu(t-t^{ \prime})\partial_{yy}}G(\tilde{\gamma})\ dt^{\prime};\right.\] \[\left.e^{\tilde{D}^{2}t}\gamma_{0}+\int_{0}^{t}e^{\tilde{D}^{2}( t-t^{\prime})}F(\tilde{\mu},\tilde{\gamma})\ dt^{\prime}\right)\ \text{on }X^{s,\delta}.\] _Let \((\mu,\gamma)\in B(0,R)\) be the functions defined in Corollary 3.10. Then,_ \[(\tilde{\mu},\tilde{\gamma})=(\mu,\gamma)\ \text{on }X^{s,\delta}.\] Proof.: Fix \(0<\varepsilon<\delta\). By (3.21) \[\|(\mu,\gamma)-(\tilde{\mu},\tilde{\gamma})\|_{X^{s,\varepsilon}} =\|\Phi(\mu,\gamma)-\Phi(\tilde{\mu},\tilde{\gamma})\|_{L_{\varepsilon }^{\infty}H_{xy}^{s}}\] \[\leq C(1+3R)\delta^{1-\alpha}\|(\mu_{1},\gamma_{1})-(\mu_{2}, \gamma_{2})\|_{X^{s,\varepsilon}}.\] Set \(\varepsilon>0\) sufficiently small such that \[C(1+3R)\varepsilon^{1-\alpha}\leq\frac{1}{2}.\] Then, \[\|(\mu,\gamma)-(\tilde{\mu},\tilde{\gamma})\|_{X^{s,\varepsilon}}\leq\frac{1} {2}\|(\mu,\gamma)-(\tilde{\mu},\tilde{\gamma})\|_{X^{s,\varepsilon}}\] which is possible if and only if \((\tilde{\mu},\tilde{\gamma})=(\mu,\gamma)\) on \(X^{s,\varepsilon}\). In particular \((\tilde{\mu}(\varepsilon),\tilde{\gamma}(\varepsilon))=(\mu(\varepsilon), \gamma(\varepsilon))\), therefore we can repeat the same argument on interval \([\varepsilon,2\varepsilon]\) until we cover \([0,\delta]\) ## 4 Global Well-Posedness for Eddy-Mean Vorticity System ### Upper Bounds of \(L^{2}\)-Norms by Initial Data To provide global well-posedness of (3.2), an upper bound for the \(L^{2}\) norms of \(\overline{u}\) and \(\zeta^{\prime}\) is required, as mentioned in Step 4) of the introduction. This upper bound can be obtained from the norm of the initial data, as follows. Let us start by recalling the equation (1.1) that governs the dynamics of the problem, which is given by \[\partial_{t}\zeta+J(\psi,\zeta+\beta y)=\nu\nabla^{2}\zeta,\] where \(J(\psi,\zeta+\beta y)=u(\zeta+\beta y)_{x}+v(\zeta+\beta y)_{y}\), \(u=-\psi_{y},\;v=\psi_{x}\) and \(\zeta=\nabla^{2}\psi\). 
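The energy and enstrophy balances derived next can also be observed numerically. The following is a minimal pseudo-spectral sketch of the \(\beta\)-plane vorticity equation on the doubly periodic square, included purely as an illustration: the resolution, viscosity, value of \(\beta\), time step and random initial vorticity are placeholder assumptions, and the time stepping is a plain explicit Euler step with no dealiasing. It monitors the energy \(\frac{1}{2}\langle|\nabla\psi|^{2}\rangle\) and the enstrophy \(\frac{1}{2}\langle\zeta^{2}\rangle\), which should (approximately) decay in time, in agreement with the estimates below.

```python
import numpy as np

# Illustrative parameters; none of these values are taken from the paper.
n, l, nu, beta, dt, steps = 64, 2.0 * np.pi, 1e-2, 1.0, 1e-3, 501

k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=l / n)       # integer wavenumbers since l = 2*pi
kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2
k2_safe = np.where(k2 == 0.0, 1.0, k2)               # avoid dividing by the zero mode

rng = np.random.default_rng(0)
zeta = rng.standard_normal((n, n))
zeta -= zeta.mean()                                   # mean-free vorticity

def tendency(zeta):
    """zeta_t = -u zeta_x - v zeta_y - beta v + nu Laplacian(zeta), computed spectrally."""
    zh = np.fft.fft2(zeta)
    psih = np.where(k2 == 0.0, 0.0, -zh / k2_safe)    # zeta = Laplacian(psi) => psi_hat = -zeta_hat/|k|^2
    u = np.real(np.fft.ifft2(-1j * ky * psih))        # u = -psi_y
    v = np.real(np.fft.ifft2(1j * kx * psih))         # v =  psi_x
    zx = np.real(np.fft.ifft2(1j * kx * zh))
    zy = np.real(np.fft.ifft2(1j * ky * zh))
    lap = np.real(np.fft.ifft2(-k2 * zh))
    return -u * zx - v * zy - beta * v + nu * lap, u, v

for step in range(steps):
    dzeta, u, v = tendency(zeta)
    if step % 100 == 0:
        energy = 0.5 * np.mean(u**2 + v**2)           # (1/2) <|grad psi|^2>
        enstrophy = 0.5 * np.mean(zeta**2)            # (1/2) <zeta^2>
        print(f"step {step:4d}   energy {energy:.6f}   enstrophy {enstrophy:.6f}")
    zeta = zeta + dt * dzeta                          # plain explicit Euler step (no dealiasing)
```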
**Energy equation:** Multiplying the above equation by the stream function \(\psi\) and integrating in space we get \[\left\langle\psi\partial_{t}\zeta\right\rangle+\left\langle\psi J(\psi,\zeta+ \beta y)\right\rangle=\left\langle\nu\psi\nabla^{2}\nabla^{2}\psi\right\rangle,\] where \[\left\langle\cdot\right\rangle=\frac{1}{l^{2}}\iint\cdot dxdy.\] _Remark 4.1_.: 1. \(div(f\nabla v)=\nabla f\cdot\nabla v+f\nabla^{2}v\); \(f,v:\mathbb{R}^{n}\rightarrow\mathbb{R}\). 2. \(J(\psi,\psi(\zeta+\beta y))=\psi J(\psi,\zeta+\beta y)\). In fact, \[J(\psi,\psi(\zeta+\beta y)) =u(\psi(\zeta+\beta y))_{x}+v(\psi(\zeta+\beta y))_{y}\] \[=u\psi_{x}(\zeta+\beta y)+u\psi(\zeta+\beta y)_{x}+v\psi_{y}( \zeta+\beta y)+v\psi(\zeta+\beta y)_{y}\] \[=J(\psi,(\zeta+\beta y))\psi.\] Then, \[\oint\psi\nabla\psi_{t}\cdot\widehat{n}dl-\left\langle\nabla \psi\cdot\nabla\psi_{t}\right\rangle+\left\langle J( \psi,\psi(\zeta+\beta y))\right\rangle \tag{4.1}\] \[=\nu\oint\psi\nabla\zeta\cdot\widehat{n}dl-\left\langle\nu\nabla \psi\cdot\nabla\nabla^{2}\psi\right\rangle.\] Since we have periodic boundary conditions and incompressibility, equation (4.1) becomes \[\partial_{t}\left(\frac{1}{2}\left\langle|\nabla\psi|^{2}\right\rangle\right)= -\nu\left\langle\psi_{xx}^{2}+2\psi_{xy}^{2}+\psi_{yy}^{2}\right\rangle.\] In particular \[\partial_{t}\left(\frac{1}{2}\left\langle|\nabla\psi|^{2}\right\rangle\right) \leq 0,\] so that \[\left\langle|\nabla\psi|^{2}\right\rangle\leq\left\langle|\nabla\psi_{0}|^{2} \right\rangle.\] We can now separate mean and eddy contributions: \[\left\langle|\nabla\psi|^{2}\right\rangle =\left\langle u^{2}+v^{2}\right\rangle=\left\langle(\overline{u}+u^{ \prime})^{2}+v^{\prime 2}\right\rangle\] \[=\left\langle\overline{u}^{2}+2u^{\prime}\overline{u}+u^{\prime 2 }+v^{\prime 2}\right\rangle\] \[=\left\langle\overline{u}^{2}\right\rangle+\left\langle u^{ \prime 2}+v^{\prime 2}\right\rangle\leq\left\langle|\nabla\psi_{0}|^{2}\right\rangle. \tag{4.2}\] This implies that both \(\left\langle\overline{u}^{2}\right\rangle\) and \(\left\langle u^{\prime 2}+v^{\prime 2}\right\rangle\) are bounded. **Enstrophy equation:** Similarly to the previous case, we have \[\left\langle\zeta\zeta_{t}\right\rangle+\left\langle\zeta J(\psi,\zeta+\beta y )\right\rangle=\nu\left\langle\zeta\nabla^{2}\zeta\right\rangle\] \[\Rightarrow\partial_{t}\left(\frac{1}{2}\left\langle\zeta^{2}\right\rangle \right)=-\nu\left\langle|\nabla\zeta|^{2}\right\rangle.\] Then, \[\left\langle\zeta^{2}\right\rangle\leq\left\langle\zeta_{0}^{2}\right\rangle.\] Separating mean and eddy contributions \[\left\langle\zeta^{2}\right\rangle =\left\langle(-\overline{u}_{y}+\zeta^{\prime})^{2}\right\rangle\] \[=\left\langle\overline{u}_{y}^{2}-2\overline{u}_{y}\zeta^{\prime }+\zeta^{\prime 2}\right\rangle\] \[=\left\langle\overline{u}_{y}^{2}\right\rangle+\left\langle \zeta^{\prime 2}\right\rangle\leq\left\langle\zeta_{0}^{2}\right\rangle \tag{4.3}\] and therefore \(\left\langle\zeta^{\prime 2}\right\rangle\) is also bounded. We have thus proved **Proposition 4.2**.: _Let \((\mu_{0}(y),\gamma_{0}(x,y))\in H^{s}(\mathbb{T}_{l})\cross H^{s}(\mathbb{T}_{ l}^{2})\) be the initial data of problem (3.2). Then, using notation (3.3), we have_ \[\left\|\gamma(t)\right\|_{L^{2}_{xy}}\leq\left\|\gamma_{0}\right\|_{L^{2}_{xy} }\text{ and }\left\|\mu(t)\right\|_{L^{2}_{y}}\lesssim\left\|\mu_{0}\right\|_{L^{2}_{y}} \quad\forall\ t\geq 0, \tag{4.4}\] _with \(C>0\)._ ### Global Well-Posedness In this section we complete the proof of Theorem 1.1. 
In Section 3, we established a priori upper bounds for the solution over a finite time interval and proved local well-posedness in time and space using the contraction mapping theorem. We also obtained spatial extension of the solution through Proposition 3.11. The final step to be addressed is time extension. More specifically, we proved that the problem (3.2) has a unique solution on \[X^{s,\delta}=\left\{(f,g)\in L^{\infty}_{\delta}(H^{s}_{y}\times H^{s}_{xy});\ \widehat{f}(0)=0,\ \widehat{g}(0)=0\right\},\] with \[s\geq 0\text{ and }\delta\sim(\left\|\gamma_{0}\right\|_{H^{s}_{xy}}+\left\|\mu_{0}\right\|_{H^{s}_{y}})^{-\frac{1}{1-\alpha}},\quad\alpha\in\left(\frac{3}{4},1\right),\] which we can explicitly write as \[(\mu,\gamma)=\left(e^{\nu t\partial_{yy}}\mu_{0}+\int_{0}^{t}e^{(\nu\partial_{yy})(t-t^{\prime})}G(\gamma)\ dt^{\prime},\right.\] \[\left.e^{\tilde{D}^{2}t}\ \gamma_{0}+\int_{0}^{t}e^{\tilde{D}^{2}(t-t^{\prime})}F(\mu,\gamma)\ dt^{\prime}\right) \tag{4.5}\] where \(\tilde{D}^{2}=\nu\nabla^{2}-\partial_{x}\nabla^{-2}\). We are now ready to close the proof of Theorem 1.1. Proof of Theorem 1.1.: We start by choosing \[\delta\sim(\left\|\gamma_{0}\right\|_{L^{2}_{xy}}+\left\|\mu_{0}\right\|_{L^{2}_{y}})^{-\frac{1}{1-\alpha}} \tag{4.6}\] instead of (3.20). By previous computations we have existence and uniqueness of the solution to (3.2) on \(X^{s,\delta}\) with \(\delta\) as in (4.6). Moreover, by Proposition 4.2, we have \[\left\|\gamma(t)\right\|_{L^{2}_{xy}}+\left\|\mu(t)\right\|_{L^{2}_{y}}\leq\left(\left\|\gamma_{0}\right\|_{L^{2}_{xy}}+C\left\|\mu_{0}\right\|_{L^{2}_{y}}\right)\quad\forall\ t\geq 0. \tag{4.7}\] Then, if we set as initial data \[(\mu(\delta),\gamma(\delta))\,,\] we can use the same techniques as in the previous sections to establish the existence and uniqueness of a solution on \(X^{s,[\delta,\delta+\delta^{*}]}\) with a new parameter, which we call \(\delta^{*}\), of the following form \[\delta^{*}\sim(\left\|\gamma(\delta)\right\|_{L^{2}_{xy}}+\left\|\mu(\delta)\right\|_{L^{2}_{y}})^{-\frac{1}{1-\alpha}}. \tag{4.8}\] Due to (4.7), \[\delta\leq\delta^{*},\] and in particular we have well-posedness of (3.2) on \(X^{s,[\delta,2\delta]}\). Iterating in this way, we obtain the solution on \(X^{s}\).

**Acknowledgment** The author would like to thank G. Staffilani and R. Ferrari for suggesting the problem and for many useful conversations during the preparation of this paper. The author would also like to thank MIT for its hospitality.
2309.15313
M$^{3}$3D: Learning 3D priors using Multi-Modal Masked Autoencoders for 2D image and video understanding
We present a new pre-training strategy called M$^{3}$3D ($\underline{M}$ulti-$\underline{M}$odal $\underline{M}$asked $\underline{3D}$) built based on Multi-modal masked autoencoders that can leverage 3D priors and learned cross-modal representations in RGB-D data. We integrate two major self-supervised learning frameworks; Masked Image Modeling (MIM) and contrastive learning; aiming to effectively embed masked 3D priors and modality complementary features to enhance the correspondence between modalities. In contrast to recent approaches which are either focusing on specific downstream tasks or require multi-view correspondence, we show that our pre-training strategy is ubiquitous, enabling improved representation learning that can transfer into improved performance on various downstream tasks such as video action recognition, video action detection, 2D semantic segmentation and depth estimation. Experiments show that M$^{3}$3D outperforms the existing state-of-the-art approaches on ScanNet, NYUv2, UCF-101 and OR-AR, particularly with an improvement of +1.3\% mIoU against Mask3D on ScanNet semantic segmentation. We further evaluate our method on low-data regime and demonstrate its superior data efficiency compared to current state-of-the-art approaches.
Muhammad Abdullah Jamal, Omid Mohareri
2023-09-26T23:52:09Z
http://arxiv.org/abs/2309.15313v1
M\({}^{3}\)3D: Learning 3D priors using Multi-Modal Masked Autoencoders for 2D image and video understanding ###### Abstract We present a new pre-training strategy called M\({}^{3}\)3D (Multi-Modal Masked 3D) built based on Multi-modal masked autoencoders that can leverage 3D priors and learned cross-modal representations in RGB-D data. We integrate two major self-supervised learning frameworks; Masked Image Modeling (MIM) and contrastive learning; aiming to effectively embed masked 3D priors and modality complementary features to enhance the correspondence between modalities. In contrast to recent approaches which are either focusing on specific downstream tasks or require multi-view correspondence, we show that our pre-training strategy is ubiquitous, enabling improved representation learning that can transfer into improved performance on various downstream tasks such as video action recognition, video action detection, 2D semantic segmentation and depth estimation. Experiments show that M\({}^{3}\)3D outperforms the existing state-of-the-art approaches on ScanNet, NYUv2, UCF-101 and OR-AR, particularly with an improvement of +1.3% mIoU against Mask3D on ScanNet semantic segmentation. We further evaluate our method on low-data regime and demonstrate its superior data efficiency compared to current state-of-the-art approaches. ## 1 Introduction Pre-training Vision Transformers [24] (ViTs) using Masked Autoencoders (MAE) followed by fine-tuning gives rise to the state-of-the-art results for various computer vision tasks such as image classification [9, 33, 66], video activity recognition [62, 26, 60], semantic segmentation [27, 36, 45] and 3D scene understanding [70, 14, 52]. Inspired by BERT [21], MAE masks high number of patches in an input image or video clip and predicts such missing regions. It usually consists of an asymmetric encoder-decoder architecture, where the encoder objective is to encode the unmasked patches into latent representations while the masked patches are predicted by the decoder using these representations. However, MAEs are mostly focused on learning from single-modality data (images or video clips) and can't leverage other data modalities that are present in commonly used RGB-D and Time-of-Flight (ToF) cameras. The depth and point cloud data available in such cameras can provide an opportunity to learn geometric priors and avoid view-dependent effects for efficient representation learning. Particularly, various 3D understanding approaches [73, 18, 37, 65] have been leveraging RGB-D datasets for high-level scene understanding and low-level point matching [7, 72] tasks using contrastive learning. However, very little work has been done in exploring 3D priors for 2D image understanding with limited focus on tasks like semantic segmentation and depth estimation. Our goal is to embed 3D priors into ViTs based backbones for various downstream tasks that include but not limited to video activity recognition, video action detection, semantic segmentation and depth estimation, to effectively learn geometric and structural priors as well as cross-modal features. Recently, Mask3D [36] effectively learns masked 3D priors from a single-view RGB-D frame by only reconstructing the depth through masking out patches in RGB and depth modalities. However, the approach is limited to 2D image understanding tasks and relies on MAE pre-training for any cross-modal representation learning. 
Pri3D [38] explores to embed the 3D prior by leveraging multi-view RGB-D data to introduce geometric constraints in a contrastive learning scheme for ResNet backbones. However, it relies on camera registration between multiple views for each frame. Instead, we consider to empower ViTs with such 3D priors using MAEs and cross-modal training strategies such as constrastive learning and matching loss that transfer well to various video and image based downstream tasks. Cross-modal learning and Masked Image Modeling are complementary to each other. While the former would leverage the very useful visual-depth pair information to encode modality-invariant features, the latter forces its representation to encode the majority of the input information. This motivates us to integrate both the pre-training strategies to extract different discriminative features. In this paper, we present M\({}^{3}\)3D, a multi-modal masked autoencoder based approach which learns to embed 3D priors in a self-supervised manner by pre-training on RGB-D image pair or video clips and also integrates cross-modal learning. To train M\({}^{3}\)3D, we randomly mask patches in both RGB and depth modalities and encode the unmasked patches by passing them through either modality-specific transformer encoders or modality-agnostic transformer encoder which is shared between both the modalities. The representations from the encoder are then appended with learned masked tokens to reconstruct masked regions in depth and RGB input. We equipped our approach with contrastive learning and RGB-D matching which are used to enhance cross-modal correspondence during pre-training. The objective of contrastive learning is to align the visual scene and the corresponding depth data by pulling the feature associated with the RGB patch and its corresponding depth while pushing away the other depth patches in the corresponding depth map. Finally, to further enhance the correspondence, the matching loss predicts whether the RGB-depth pair is matched (positive) or not matched (negative) by applying a linear layer followed by softmax on the top of the encoder features to predict a two-class probability. Our experiments demonstrate the effectiveness of M\({}^{3}\)3D on a variety of datasets for video and image understanding. We pre-train on UCF-101 [57], OR-AR [35] and fine-tune for video activity recognition, and video action detection respectively. We also pre-train the model with ScanNet [20] and fine-tune it for semantic segmentation. We also show the generalizibility of the model on NYUv2 [49] data for depth estimation besides semantic segmentation. In summary, the main contributions of our paper are: * We propose a new self-supervised pre-training approach called M\({}^{3}\)3D which is equipped with masked autoencoders and cross-modal training for RGB-D representation learning. Our method can be applied to various video and image understanding tasks based on single view RGB-D image pair or video clips. * Our approach learns to embed 3D priors without any camera pose registration, while enhancing the cross-modal correspondence between RGB and depth modalities using contrastive learning and matching loss. * We demonstrate the efficacy of M\({}^{3}\)3D for current ViT backbones on variety of video (UCF-101, OR-AR surgical dataset) and image understanding (ScanNet, NYUv2) datasets. * We further evaluate our method on low-data regime and demonstrate its superior data efficiency compared to current state-of-the-art approaches. 
Several ablation studies have been conducted and presented here to prove the effectiveness of various aspects of M\({}^{3}\)3D. ## 2 Related Work Our work proposes a self-supervised learning algorithm for general computer vision tasks. We will review some of the approaches that are closely related to our approach from different aspects - Masked Autoencoder based pre-training for vision transformers, Multi-modal learning and pre-training by learning 3D priors. Masked Autoencoder based pre-training for transformers.Self-supervised learning paradigm learns visual features based on some pre-text task using large-scale unlabeled dataset, before fine-tuning for multiple downstream tasks. Earlier work on pre-text task includes colorization [71], jigsaw puzzle [50], and predicting image rotation [28]. Instance-discrimination based learning uses two different views or augmentations of an image to learn view-invariant features by pulling them closer in the embedding space while pushing away the negative pair. SimCLR [16], MoCo [69] uses contrastive learning, BYOL [31] uses online and target network while SwAV [11], DeepCluster [10] uses clustering to discriminate between the clusters. Recently, the momentum in self-supervised learning has been shifted towards the masked image modeling which learns useful representations by learning to reconstruct the masked image patches. Inspired by the success in NLP domain such as bidirectional encoder (BERT) [21] and Generative Pre-Training (GPT) [54], several masked image prediction methods for pre-training vision transformers have been proposed with various reconstruction targets such as pixels [4, 15, 24, 25, 33, 66], features [6, 63], discrete tokens via dVAE [74, 9]. They have shown to outperform constrastive learning based approaches on various downstream tasks. One particular approach is MAE [33] which first masks large portion of the input (e.g., 75%) and then passes the unmasked patches to the large encoder followed by a lightweight decoder that reconstruct the masked patches. By masking out large portion of input patches, it accelerates the pre-training of vision transformers. Moreover, these approaches have been extended to the video domain where most of focus is to design a masking strategy that includes random masking [26], tube masking [60] and adaptive masking based on motion [41, 58] or a separate network [8]. On the other-hand, our work proposes multi-modal masked auto-encoder pre-training by learning 3D priors for several downstream tasks such as video action detection, video activity recognition, 2D semantic segmentation and depth estimation. Multi-Modal Learning.It involves training the models by leveraging data from multiple modalities. The learning may involve training modality-agnostic unified encoder or modality-specific encoders using modalities such as im ages and text [2, 13, 19, 39, 42, 59, 68], video and audio [3, 40, 51, 48], video, text and audio [1], and images and depth [29]. One particular approach is MultiMAE [5] which extends MAE to multi-modal multi-task settings and includes depth modality. However, it requires depth during the fine-tuning stage as well as semantic segmentation labels during the pre-training. Our approach doesn't rely on semantic segmentation, it only uses depth during the pre-training and it is not limited to the image domain only. Please refer to experimental section 4 for more results on MultiMAE without semantic segmentation modality. 
Pre-training by learning 3D priors. There is a recent surge of interest in learning cross-modal features, particularly between language and images. CLIP [53] learns visual concepts from natural language supervision, showing impressive zero-shot performance for image classification. Pri3D [38] learns view-invariant and geometry-aware representations using multi-view consistency during pre-training for 2D image understanding. It leverages contrastive learning for 2D-3D correspondence and embeds 3D priors to pre-train ResNet [34] architectures. However, our approach is more ubiquitous: it is not limited to 2D image understanding tasks and does not require camera pose registration across views. Mask3D [36] proposes a masked auto-encoder based self-supervised learning scheme for ViT backbones to learn masked 3D priors for 2D image understanding without the reconstruction of masked RGB patches. In contrast, we formulate a self-supervised pre-training that can operate on both single-view images and videos and leverages masked 3D priors for several downstream tasks. Our approach also enhances cross-modal learning and 2D-3D correspondence with a contrastive loss and a matching loss during pre-training, which consequently boosts the transfer learning performance. ## 3 Method We propose M\({}^{3}\)3D to embed 3D priors, capture modality-complementary features, and learn cross-modal representations by self-supervised pre-training on a single-view RGB-D image or a video clip. To learn such priors and representations, we formulate the problem from the masked autoencoder perspective and combine it with cross-modal training strategies. Given the RGB-D input, we first mask both modalities and pass them through modality-specific encoders or a shared modality-agnostic encoder. The latent representations from the encoder(s) are then used to reconstruct the masked RGB and depth input. To learn the cross-modal representations, we use contrastive learning and a matching loss, which are two of the most common cross-modal training strategies. The objective of the depth reconstruction task is to embed geometric awareness into the encoder, while cross-modal training enhances cross-modal representation learning. After pre-training, the encoder can be fine-tuned for downstream tasks such as video action recognition, 2D semantic segmentation, depth estimation, etc. An overview of our approach is shown in Figure 1. Below, we first explain the masked image modeling strategy from a multi-modal input perspective. Then, we explain the two types of cross-modal training strategies: contrastive learning and the matching loss. Finally, we describe the overall pre-training pipeline. ### Multi-modal Masked Autoencoder Our approach is not limited to images and can be extended to videos as well. Please see Section 4 for performance on the video downstream tasks. For ease of notation, we describe our approach with a single-view RGB-D frame as input. Given an RGB frame I\({}_{R}\) of size 3 x H x W and its corresponding depth map I\({}_{D}\) of size 1 x H x W, we first divide them into P x P patches, from which we randomly keep a percentage of patches and mask out the others, for both the RGB and the depth input. The P x P patches are projected to the required dimension using a different linear projection layer for each modality. Please refer to the supplementary material for masking strategies. We use Conv2d (Conv3d in the case of videos) as the linear projection layer.
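For concreteness, the following is a minimal PyTorch-style sketch of the per-modality patch projection and MAE-style random masking just described; the module and function names are ours, the keep ratio of 0.2 is only an example value, and the exact masking strategy is deferred to the supplementary material.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into P x P patches and linearly project each patch to `dim`."""
    def __init__(self, in_channels, dim, patch_size=16):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                        # x: (B, C, H, W)
        x = self.proj(x)                         # (B, dim, H/P, W/P)
        return x.flatten(2).transpose(1, 2)      # (B, N, dim) sequence of patch tokens

def random_masking(tokens, keep_ratio):
    """Keep a random subset of tokens per sample; the rest are reconstructed by the decoder."""
    B, N, D = tokens.shape
    n_keep = int(N * keep_ratio)
    noise = torch.rand(B, N, device=tokens.device)   # one random score per token
    ids_keep = noise.argsort(dim=1)[:, :n_keep]      # indices of the tokens that stay visible
    kept = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return kept, ids_keep

# A different projection layer per modality (3-channel RGB, 1-channel depth).
rgb_embed, depth_embed = PatchEmbed(3, 768), PatchEmbed(1, 768)
rgb, depth = torch.randn(2, 3, 224, 224), torch.randn(2, 1, 224, 224)
rgb_visible, _ = random_masking(rgb_embed(rgb), keep_ratio=0.2)      # e.g., 80% of RGB patches masked
depth_visible, _ = random_masking(depth_embed(depth), keep_ratio=0.2)
```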
We then add positional embeddings and pass the tokens corresponding to the unmasked patches to the modality-specific encoders, or to a shared encoder between the modalities, to get the latent representations. We use ViT-B as the encoder. To reconstruct the masked-out tokens, we use a lightweight shared transformer-based decoder with two prediction heads, one for each modality. The input to the decoder is the latent representations of the unmasked patches together with a set of mask tokens, which are placeholders from which the decoder reconstructs the masked patches at the original image resolution. Following [5], we add sine-cosine positional embeddings before passing them to the decoder. From the reconstruction output, we compute the losses only on the masked tokens. Reconstruction Losses. Our model has two reconstruction losses, one for each modality. Following [33], we first normalize the output patches as well as the target patches, and then compute the MSE loss between the ground-truth and the predicted pixels: \[\mathcal{L}_{rgb}=\frac{1}{|\omega|}\sum_{p\in\omega}||\text{I}_{R}(p)-\hat{\text{I}}_{R}(p)||_{2} \tag{1}\] where \(p\) is the token index and \(\omega\) is the set of masked tokens. \(\hat{\text{I}}_{R}\) corresponds to the reconstruction, i.e., the prediction of the model. Similarly, for the depth reconstruction task, we follow MultiMAE [5] and use an L1 loss for image-based pre-training. For video-based pre-training, we use an MSE loss. \[\mathcal{L}_{depth}=\frac{1}{|\lambda|}\sum_{m\in\lambda}||\mathrm{I}_{D}(m)-\hat{\mathrm{I}}_{D}(m)|| \tag{2}\] where \(m\) is the token index and \(\lambda\) is the set of masked tokens. \(\hat{\mathrm{I}}_{D}\) corresponds to the reconstruction, i.e., the prediction of the model. ### Cross-Modal Representation Learning Besides pre-training our model using masked image modeling, we also propose to use two cross-modal training strategies, i.e., RGB-Depth contrastive learning and RGB-Depth matching. The objective of these strategies is to enhance cross-modal representation learning. Contrastive Loss. Inspired by the success of contrastive learning [47, 53], we present our cross-modal contrastive pre-training. For each RGB-Depth image pair, we use the latent representations from the transformer encoder. Specifically, we use the InfoNCE loss [61] with temperature \(\tau\) to pre-train the model. The goal of RGB-D contrastive learning is to align each RGB patch with its corresponding depth patch by pulling them closer, and to repel unpaired ones by minimizing the similarity with the other patches in the corresponding map. One could instead use instance-level contrastive learning, but we hypothesize that it would fail to capture high-level information, as it is a relatively easy pretext task. \[\mathcal{L}_{\text{c}}=-\frac{1}{N}\sum_{i=1}^{N}\log\left[\frac{\exp(s_{i,i}/\tau)}{\sum_{k\neq i}\exp(s_{i,k}/\tau)+\exp(s_{i,i}/\tau)}\right] \tag{3}\] where \(s_{i,j}=\|X_{i}^{rgb}\|^{T}\|X_{j}^{d}\|\) and \(\tau\) is the temperature. \(\|X_{i}^{rgb}\|\) and \(\|X_{j}^{d}\|\) correspond to the encoder features for RGB and depth, respectively, for a patch \(i\).
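To make Eq. (3) concrete, here is a minimal PyTorch-style sketch of the patch-level InfoNCE loss; the function and tensor names are ours, the temperature value is only a placeholder, and the \(\ell_{2}\)-normalization of the features before the dot product is our assumption rather than something stated above.

```python
import torch
import torch.nn.functional as F

def patch_contrastive_loss(x_rgb, x_depth, tau=0.07):
    """InfoNCE over aligned RGB/depth patch features, as in Eq. (3).

    x_rgb, x_depth: (N, D) encoder features of N corresponding patches; the i-th RGB
    patch is pulled toward the i-th depth patch and pushed away from the other depth
    patches of the same map.
    """
    x_rgb = F.normalize(x_rgb, dim=-1)
    x_depth = F.normalize(x_depth, dim=-1)
    sim = x_rgb @ x_depth.t() / tau                         # (N, N) matrix of s_{i,j} / tau
    targets = torch.arange(sim.size(0), device=sim.device)  # positive for row i is column i
    return F.cross_entropy(sim, targets)                    # mean of -log softmax over each row

loss_c = patch_contrastive_loss(torch.randn(196, 768), torch.randn(196, 768))
```

Taking the cross-entropy over each row of the similarity matrix, with the diagonal entry as the positive, reproduces the average in Eq. (3), since the denominator there sums over all patches including the positive one.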
Matching Loss. Inspired by video-text matching, we also propose to use RGB-Depth matching to further enhance the cross-modal correspondence. It predicts whether an RGB-Depth pair is matched (positive) or not (negative). Specifically, we reuse the features from the encoder and pass them to a linear layer followed by a softmax classifier to solve a two-class classification problem. \[\mathcal{L}_{\text{m}}=-\frac{1}{M}\sum_{i=1}^{M}\sum_{t=0}^{1}y_{it}^{\text{m}}\log(q_{t}^{\text{m}}(X_{i}^{rgb},X_{i}^{d})) \tag{4}\] where \(y_{it}^{\text{m}}\) is an indicator which is 1 if the value of \(t\) is 1 and 0 otherwise, \(q_{t}^{\text{m}}(X_{i}^{rgb},X_{i}^{d})\) denotes the probability score of \(t\) from the softmax, and \(M\) is the total number of RGB-Depth pairs in the batch. We use this loss specifically for video-based pre-training. ### Pre-training Setup The final pre-training loss of M\({}^{3}\)3D for video-based understanding is \[\mathcal{L}_{video}=\alpha\mathcal{L}_{rgb}+\beta\mathcal{L}_{depth}+\gamma\mathcal{L}_{\text{c}}+\eta\mathcal{L}_{\text{m}} \tag{5}\] where \(\alpha\), \(\beta\), \(\gamma\), \(\eta\) are hyper-parameters tuned during pre-training. For image-based understanding tasks, however, we pre-train our model in two stages, as shown in Figure 1. In the first stage, we pre-train the encoder using the contrastive loss for a few epochs; in the second stage, we initialize the weights of the encoder with the stage-1 weights and then pre-train the encoder and decoder using masked image modeling. \[\mathcal{L}_{stage1}=\mathcal{L}_{c} \tag{6}\] \[\mathcal{L}_{stage2}=\alpha\mathcal{L}_{rgb}+\beta\mathcal{L}_{depth} \tag{7}\] Figure 1: Overview of M\({}^{3}\)3D pre-training. We introduce two self-supervised learning frameworks: Masked Image Modeling and contrastive learning. For MIM pre-training, we first mask out patches from the RGB and depth input and then linearly project the unmasked patches to tokens with a fixed dimension. The tokens are then encoded using a Transformer encoder. The decoder takes the latent representations from the encoder and reconstructs the masked-out patches. Finally, we apply the contrastive loss and the matching loss on the encoded representations to enable cross-modal correspondence. **(Left)** M\({}^{3}\)3D pre-training setup for video understanding. **(Right)** M\({}^{3}\)3D two-stage pre-training setup for image understanding. ## 4 Experiments M\({}^{3}\)3D aims to learn masked 3D priors to embed into ViT backbones and to enhance the fine-tuning performance with further cross-modal learning using contrastive learning and the matching loss. In this section, we first briefly describe the experimental setup for the pre-training stage. Then, we provide a transfer study to measure the effectiveness of our approach across various downstream tasks. Moreover, we show the data-efficient nature of our approach by evaluating it under a low-data regime setting. Finally, we present some ablation studies to examine the effectiveness of each component of our approach. ### Experimental Setup Pre-train: We use vision-transformer base (ViT-B) as the encoder for all the experiments. For video activity recognition, we pre-train the model on the UCF-101 [57] dataset, one of the benchmark datasets for video action recognition. We use monodepth2 [30] to extract depth from the videos, as the original dataset contains only RGB videos. Please refer to [30] for more details on the approach. We use a lightweight decoder which consists of 4 blocks, following VideoMAE [60]. We initialize the model weights with network weights trained on Kinetics-400 [43]. To maintain the self-supervised paradigm, we initialize the model with weights obtained by self-supervised Kinetics-400 pre-training [60]. We also show the efficacy of our approach by pre-training it from scratch.
For surgical video action detection, we pre-train the model on OR-AR [35] dataset which consists of 820 full videos each having 9 temporal workflow phase labels. This dataset is collected using ToF cameras placed in two operating rooms in a single hospital. For 2D image understanding task, we pre-train the model on the ScanNet [20] dataset. ScanNet contains 2.5M RGB-D frames from 1513 video sequences. We regularly sample every 25\({}^{th}\) frame without any filtering during pre-training. We initialize the encoder weights with the self-supervised ImageNet pre-training [33]. Please refer to the supplementary material for details on datasets, pre-training and hyper-parameters. ### Results #### 4.2.1 Fine-tuning for video action recognition We fine-tune M\({}^{3}\)3D based ViT-B encoder on UCF-101. Table 1 shows the top-1 accuracy comparison with the recent state-of-the-art video based pre-training approaches. It can be seen from the table that our approach achieves 96.3% top-1 accuracy outperforming all the baselines including VideoMAE [60] which shows the effectiveness of leveraging RGB-D data to embed 3D priors for video understanding. Moreover, we also report the results when we pre-train the model from scratch. Although, it achieves the same performance as VideoMAE, M\({}^{3}\)3D is computationally less expensive than VideoMAE as it requires less epochs and higher masking ratio during pre-training. For fair comparison, we also report the performance of VideoMAE when it is pre-trained for less epochs with higher masking ratio (e.g. 0.90). It is clearly seen from the table that our approach outperforms VideoMAE under the same pre-training epochs and masking ratio. To the best of our knowledge, M\({}^{3}\)3D provides the first ViT-B backbone that is pre-trained from scratch under self-supervision using RGB-D for video action recognition. It is worth mentioning that the depth maps are extracted from an off-the-shelf model and we hypothesize that the performance will further enhance with an improved depth estimation model or readily available depth data. #### 4.2.2 Fine-tuning for surgical video action detection We fine-tune M\({}^{3}\)3D based ViT-B encoder on OR-AR, a benchmark dataset for surgical video action detection. Table 2 shows the results compared to the recent state-of-the-art video based masked autoencoder pre-training approaches. Following common practice [35, 56], we report the mean average precision (mAP). Our approach consistently outperforms the existing video based MAE models on low-data regime. More specifically, M\({}^{3}\)3D achieves 80.90 mAP when fine-tuned using 5% labeled data which also shows that it is a more data-efficient learner than VideoMAE. Moreover, we report the results of VideoMAE when it is initialized using Kinetics-400 pre-trained weights. The weights can be found from VideoMAE's github repository. Similar to video action recognition, it is the first multi-modal based transformer for surgical action detection which shows the effectiveness of the M\({}^{3}\)3D on datasets in the medical domain. Finally, we report the results of Spatio-temporal MAE [26] with different masking strategies such as random, frame, etc. and they are termed as MAE-Random, MAE-Frame in the table. 
\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline Methods & Masking ratio & Backbone & Pre-train & 5\% & 10\% & 20\% & 100\% \\ \hline MaskFeat [63] & 0.90 & MViT-S & Kinetics-400 & 62.35 & 78.88 & - & - \\ \hline MAE-Random [26] & 0.90 & ViT-B & Kinetics-400 & 64.66 & 81.48 & 84.93 & 94.8 \\ \hline MAE-Random [26] & 0.90 & ViT-B & OR-AR & 66.58 & 81.87 & 84.97 & **96.3** \\ \hline MAE-Frame [26] & 0.90 & ViT-B & OR-AR & 63.44 & 78.89 & 81.45 & - \\ \hline VideoMAE [60] & 0.90 & ViT-B & OR-AR & 65.57 & 81.74 & 83.89 & 94.9 \\ \hline VideoMAE [60] & 0.90 & ViT-B & Kinetics-400 + OR-AR & 70.93 & 83.73 & 86.33 & 95.9 \\ \hline **M\({}^{3}\)3D** & 0.90 & ViT-B & Kinetics-400 + OR-AR & **80.90** & **85.31** & **89.88** & 96.1 \\ \hline \hline \end{tabular} \end{table} Table 1: **Video action recognition accuracy on UCF-101 [57]. \({}^{\dagger}\) means we re-run the code using the original repository provided by VideoMAE.** \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline Methods & Reconstruction task & Backbone & Pre-train & Fine-tune Modality & mIoU \\ \hline Scratch & - & ViT-B & None & RGB & 32.6 \\ \hline MultiMAE* [5] & RGB + Depth & ViT-B & ImageNet+ScanNet & RGB & 64.0 \\ \hline Pri3D [38] & & ViT-B & ImageNet+ScanNet & RGB & 59.3 \\ \hline Pri3D [38] & - & ResNet-50 & ImageNet+ScanNet & RGB & 60.2 \\ \hline DINO [12] & - & ViT-B & ImageNet+ScanNet & RGB & 58.1 \\ \hline MAE [33] & RGB & ViT-B & ImageNet & RGB & 64.8 \\ \hline MAE [33] & RGB & ViT-B & ImageNet+ScanNet & RGB & 64.5 \\ \hline Mask3D [36] & Depth & ViT-B & ImageNet+ScanNet & RGB & 66.0 \\ \hline Mask3D [36] & RGB + Depth & ViT-B & ImageNet+ScanNet & RGB & 65.5 \\ \hline **M\({}^{3}\)3D** & RGB + Depth & ViT-B & ImageNet+ScanNet & RGB & **67.3** \\ \hline \hline MultiMAE [5] & RGB + Depth*Segmentation & ViT-B & ImageNet & RGB & 66.4 \\ \hline \hline \end{tabular} \end{table} Table 2: **Comparison of M\({}^{3}\)3D with the other state-of the art methods under different data-regime setting on the OR-AR dataset [35].** \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline Methods & Masking ratio & Backbone & Pre-train & Fine-tune Modality & mIoU \\ \hline MultiMAE* [5] & RGB + Depth & ViT-B & ImageNet+ScanNet & RGB & 49.1 \\ \hline MAE [33] & RGB & ViT-B & ImageNet & RGB & 46.9 \\ \hline MAE [33] & RGB & ViT-B & ImageNet+ScanNet & RGB & 48.3 \\ \hline Mask3D [36] & Depth & ViT-B & ImageNet+ScanNet & RGB & 50.5 \\ \hline M\({}^{3}\)3D & RGB + Depth & ViT-B & ImageNet+ScanNet & RGB & **51.2** \\ \hline **M\({}^{3}\)3D** & **RGB + Depth + Segmentation** & ViT-B & ImageNet & RGB & **51.5** \\ \hline \hline \end{tabular} \end{table} Table 4: **NYUv2 Semantic Segmentation. M\({}^{3}\)3D outperforms state-of-the-art approaches that leverage RGB-D data during pre-training. It demonstrate the effectiveness in transferring to out-domain dataset. 
\({}^{\dagger}\) means we re-run the code using the original repository provided by MultiMAE.** \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline Methods & Masking ratio & Backbone & Pre-train & Fine-tune Modality & mIoU \\ \hline Scratch & - & ViT-B & ImageNet+ScanNet & RGB & 49.1 \\ \hline OPN [44] & - & VGG & ImageNet & RGB & 46.9 \\ \hline VCOP [67] & - & R(2+1)D & UCF-101 & N/A & 72.4 \\ \hline CoCLR [32] & - & S3D-G & UCF-101 & 32 & 81.4 \\ \hline ViTCLR [22] & - & S3D & UCF-101 & 32 & 82.8 \\ \hline CoCLR [32] & - & S3D-G & Kinetics-400 & 32 & 87.9 \\ \hline ViTCLR [22] & - & S3D & Kinetics-400 & 32 & 89.1 \\ \hline MoCov3 [17] & - & ViT-B & & UCF-101 & 16 & 81.7 \\ \hline VideoMAE [60] & & ViT-B & - & Kinetics-400 & 16 & 96.0\({}^{\dagger}\) \\ \hline **M\({}^{3}\)3D** & 0.90 & ViT-B & 100 & Kinetics-400 + UCF-101 & 16 & **96.3** \\ \hline VideoMAE [60] & 0.75 & ViT-B & 800 & UCF-101 & 16 & 90.1 \\ \hline VideoMAE [60] & 0.90 & ViT-B & 3200 & UCF-101 & 16 & 90.7 \\ \hline VideoMAE [60] & 0.75 & ViT-B & 3200 & UCF-101 & 16 & **91.2** \\ \hline **M\({}^{3}\)3D** & 0.90 & ViT-B & 800 & UCF-101 & 16 & **91.1** \\ \hline \hline \end{tabular} \end{table} Table 1: **Video action recognition accuracy on UCF-101 [57]. \({}^{\dagger}\) means we re-run the code using the original repository provided by VideoMAE.** \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline Methods & Reconstruction task & Backbone & Pre-train & Fine-tune Modality & mIoU \\ \hline Scratch & - & ViT-B & None & RGB & 32.6 \\ \hline MultiMAE* [5] & RGB + Depth & ViT-B & ImageNet+ScanNet & RGB & 64.0 \\ \hline Pri3D [38] & & ViT-B & ImageNet+ScanNet & RGB & 59.3 \\ \hline Pri3D [38] & - & ResNet-50 & ImageNet+ScanNet & RGB & 60.2 \\ \hline DINO [12] & - & ViT-B & ImageNet+ScanNet & RGB & 58.1 \\ \hline MAE [33] & RGB & ViT-B & ImageNet & RGB & 64.8 \\ \hline MAE [33] & RGB & ViT-B & ImageNet+ScanNet & RGB & 64.5 \\ \hline Mask3D [36] & Depth & ViT-B & ImageNet+ScanNet & RGB & 66.0 \\ \hline Mask3D [36] & RGB + Depth & ViT-B & ImageNet+ScanNet & RGB & 65.5 \\ \hline **M\({}^{3}\)3D** & RGB + Depth & ViT-B & ImageNet+ScanNet & RGB & **67.3** \\ \hline \hline MultiMAE [5] & RGB + Depth*Segmentation & ViT-B & ImageNet & RGB & 66.4 \\ \hline \hline \end{tabular} \end{table} Table 3: **ScanNet Semantic Segmentation. M\({}^{3}\)3D outperforms Mask3D and other state-of-the-art approaches that leverage RGB-D data during pre-training.** #### 4.2.3 Fine-tuning for 2D semantic segmentation In this section, we show that our pre-training approach is capable of transferring the learned representations to the 2D image understanding tasks by performing the 2D semantic segmentation task. Following MultiMAE [5], we use segmentation head based on ConvNext architecture [46] on top of the ViT-B encoder. We use mean Intersection over Union (mIoU) as an evaluation metric. For fine-tuning, we sample every 100\({}^{th}\) frame, resulting in 20,000 training images and 5000 validation images which is common protocol of the ScanNet benchmark [20]. Table 3 shows the results compared to the recent state-of-the-art pre-training approaches for ScanNet semantic segmentation. It can be observed from the table that M\({}^{3}\)3D significantly outperforms Mask3D [36] (**+1.3** mIoU), a state-of-the-art approach. More notably, our approach improves over MAE [33] which is also pre-trained with ImageNet and ScanNet (+2.8 mIoU). 
Moreover, compared to Pri3D [38], a 3D-based pre-training method, our approach outperforms it by a significant margin of **7.1** mIoU, which shows that multi-view-based 3D pre-training does not effectively embed 3D priors into ViT backbones and rather degrades the performance compared to a ResNet backbone. Furthermore, we also report the results of MultiMAE [5] when it is pre-trained on ScanNet, denoted as MultiMAE* in the table. For a fair comparison, we only use RGB and depth as the reconstruction tasks. We observe that M\({}^{3}\)3D outperforms MultiMAE* by a margin of **3.3** mIoU, and also outperforms the original MultiMAE. Finally, we demonstrate the generalizability of M\({}^{3}\)3D across datasets by fine-tuning the pre-trained model on NYUv2 [49] following the same setup. Table 4 shows the results compared to the recent state-of-the-art pre-training approaches for NYUv2 semantic segmentation. We draw the same observation that our pre-trained approach outperforms the competing baselines, showing how well it transfers across datasets. #### 4.2.4 Fine-tuning for depth estimation In this section, we study how M\({}^{3}\)3D transfers the representation to dense regression tasks. We use NYUv2 for depth estimation and report \(\delta_{1}\) on the NYUv2 test set. \(\delta_{1}\) is the percentage of pixels that have an error ratio (\(\max\{\frac{\hat{y}_{p}}{y_{p}},\frac{y_{p}}{\hat{y}_{p}}\}\)) below 1.25 [23]. Following MultiMAE, we use DPT [55] as a dense prediction head on top of the ViT encoder. Table 5 shows that M\({}^{3}\)3D outperforms recent state-of-the-art approaches including MultiMAE and Mask3D. Notably, it outperforms CroCo [64], a cross-view completion pre-training strategy based on MIM specifically designed for 3D vision tasks, with 86.7% vs. 85.6%. Although our approach only marginally outperforms the original MultiMAE, we want to reiterate that the latter is pre-trained with three different tasks. Moreover, when MultiMAE is pre-trained on ScanNet with RGB and depth only, the performance drops to 85.3%. ### Data-Efficient Learner We fine-tune the M\({}^{3}\)3D-based ViT-B encoder on ScanNet for 2D semantic segmentation, mainly under a low-data regime setting, to study how data-efficient our approach is. Figure 2 shows that our approach consistently outperforms the competing approaches by a considerable margin across different percentages of available labeled training data. It is worth mentioning that M\({}^{3}\)3D recovers more than 80% of the full labeled training set performance when fine-tuned with only 20% of the training data. ### Ablation studies In this section, we perform in-depth ablation studies on ScanNet 2D semantic segmentation. Effect of masking ratio during pre-training. We study the influence of the masking ratio and report the results on ScanNet semantic segmentation in Table 6. The table clearly shows that by masking more patches in the RGB and depth modalities, M\({}^{3}\)3D achieves the best performance. Without ImageNet Initialization. We observe a performance drop when the model is not initialized using ImageNet pre-trained weights in the pre-training stage, as shown in Table 7. Because ScanNet has a relatively small amount of indoor data, it is harder to pre-train the ViT backbones from scratch. Since ImageNet weights are readily available, we initialize our models with them, following [36, 38]. Figure 2: M\({}^{3}\)3D is a data-efficient learner.
We compare to the recent state-of-the-art pre-training approaches on ScanNet 2D semantic segmentation under limited labeled data scenarios. Notably, our approach improves +2.7% mIoU over Mask3D [36] at 20% training data. Effect of each loss function.We also study the effect of loss functions during pre-training and report the performance on ScanNet 2D semantic segmentation. We compare the performance of model without contrastive loss and without RGB reconstruction task. The results are reported in Table 8. From the results, we observe that contrastive loss overall improves the mIoU compared to Mask3D which suggests that cross-modal learning is an important component besides Masked Image Modeling. ## 5 Conclusion In this paper, we present M\({}^{3}\)3D, a new self-supervised pre-training technique with two main functions: (1) learning to embed geometric priors into 2D representations using Masked Image Modeling, (2) learning cross-modal representations in RGB-D data using contrastive learning. M\({}^{3}\)3D is a general pre-training method applicable to a variety of image/video understanding tasks, doesn't require camera registration between multi-view input as found in recent self-supervised approaches such as Pri3D, and works well in low-data regime. Our extensive experiments on downstream tasks such as video action recognition, video action detection, 2D semantic segmentation and depth estimation show the superiority of M\({}^{3}\)3D compared to current state-of-the-art 2D representation learning approaches. Future work includes extending the approach to applications with datasets that have more than two modalities. \begin{table} \begin{tabular}{|c|c|c|} \hline RGB ratio & Depth ratio & mIoU \\ \hline 20.0\% & 20.0\% & 65.5 \\ \hline 20.0\% & 50.0\% & 65.4 \\ \hline 20.0\% & 80.0\% & 65.4 \\ \hline 50.0\% & 20.0\% & 66.0 \\ \hline 50.0\% & 50.0\% & 65.3 \\ \hline 80.0\% & 20.0\% & 66.1 \\ \hline 80.0\% & 80.0\% & **67.3** \\ \hline \end{tabular} \end{table} Table 6: We study the effect of different masking ratio for RGB and depth input on ScanNet 2D semantic segmentation where each ratio indicates the percentage of masked patches. \begin{table} \begin{tabular}{|c|c|c|} \hline Loss & Backbone & mIoU \\ \hline w/out contrastive loss & ViT-B & 65.5 \\ \hline w/out RGB task & ViT-B & 66.8 \\ \hline M\({}^{3}\)3D & ViT-B & **67.3** \\ \hline \end{tabular} \end{table} Table 7: Results on ScanNet 2D semantic segmentation without ImageNet initialization during pre-training. Figure 3: **Qualitative Results on ScanNet.** We visualize the predictions of various approaches on 2D Semantic Segmentation task. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Methods & Reconstruction task & Backbone & Pre-train & Fine-tune Modality & \(\delta_{1}\) \\ \hline MultiMAE [5] & RGB + Depth & ViT-B & ImageNet+ScanNet & RGB & 85.3 \\ \hline MAE [33] & RGB & ViT-B & ImageNet & RGB & 85.1 \\ \hline Mask3D [36] & Depth & ViT-B & ImageNet+ScanNet & RGB & 85.4 \\ \hline Croo [64] & RGB + Depth & ViT-B & Habitat & RGB & 85.6 \\ \hline MultiMAE [5] & RGB + Depth + Segmentation & ViT-B & ImageNet & RGB & 83.0 \\ \hline M\({}^{3}\)3D & RGB + Depth & ViT-B & ImageNet+ScanNet & RGB & **86.7** \\ \hline **MultiMAE**[5] & **RGB + Depth + Segmentation** & ViT-B & ImageNet & RGB & **86.4** \\ \hline \end{tabular} \end{table} Table 5: **NYUv2 Depth Estimation.** M\({}^{3}\)3D outperforms Mask3D and other state-of-the-art approaches that leverage RGB-D data during pre-training. 
It demonstrates the effectiveness of transferring to a dense prediction task on an out-of-domain dataset. * means the numbers are reported from the CroCo [64] paper.
2309.13762
Veer: Verifying Equivalence of Dataflow Versions in Iterative Data Analytics (Extended Version)
Data analytics using GUI-based dataflows is an iterative process in which an analyst makes many iterations of changes to refine the dataflow, generating a different version at each iteration. In many cases, the result of executing a dataflow version is equivalent to a result of a prior executed version. Identifying such equivalence between the execution results of different dataflow versions is important for optimizing the performance of a dataflow by reusing results from a previous run. The size of the dataflows and the complexity of their operators often make existing equivalence verifiers (EVs) not able to solve the problem. In this paper, we present "Veer," which leverages the fact that two dataflow versions can be very similar except for a few changes. The solution divides the dataflow version pair into small parts, called windows, and verifies the equivalence within each window by using an existing EV as a black box. We develop solutions to efficiently generate windows and verify the equivalence within each window. Our thorough experiments on real dataflows show that Veer is able to not only verify the equivalence of dataflows that cannot be supported by existing EVs but also do the verification efficiently.
Sadeem Alsudais, Avinash Kumar, Chen Li
2023-09-24T21:50:18Z
http://arxiv.org/abs/2309.13762v3
# Veer: Verifying Equivalence of Dataflow Versions in Iterative Data Analytics (Extended Version) ###### Abstract. Data analytics using GUI-based workflows is an iterative process in which an analyst makes many iterations of changes to refine the dataflow, generating a different version at each iteration. In many cases, the result of executing a dataflow version is equivalent to a result of a prior executed version. Identifying such equivalence between the execution results of different dataflow versions is important for optimizing the performance of a dataflow by reusing results from a previous run. The size of the dataflows and the complexity of their operators (e.g., UDF and ML models) often make existing equivalence verifiers (EVs) unable to solve the problem. In this paper, we present "Veer," which leverages the fact that two dataflow versions can be very similar except for a few changes. The solution divides the dataflow version pair into small parts, called _windows_, and verifies the equivalence within each window by using an existing EV as a black box. We develop solutions to efficiently generate windows and verify the equivalence within each window. Our thorough experiments on real dataflows show that Veer is able to not only verify the equivalence of dataflows that cannot be supported by existing EVs but also do the verification efficiently. iterative data analysis, dataflow equivalence verification
_Use case 1: Reusing results from a previous run._ Optimizing dataflow execution has been studied extensively in the literature [21; 42]. One optimizing technique is to leverage the iterative nature of data analytics to reuse previously materialized results [20; 30]. _Use case 2: Reducing storage space._ The execution of a dataflow may produce a large number of results, and storing the output of all generated jobs is impractical [22]. Due to the nature of the overlap and equivalence of consecutive versions, one line of work [3; 20] periodically performs a view de-duplication to remove duplicate stored results. Identifying the equivalence between dataflow versions can be used to avoid storing duplicate results in the first place and helps avoid this periodic clean-up. These use cases show the need for effective and efficient solutions to decide the equivalence of two dataflow versions. We observe the following two unique traits of these GUI-based iterative dataflows. (_T1_) These dataflows can be large and complex, with operators that are semantically rich [6; 20; 55]. For example, 8 randomly selected Alteryx dataflows [6] had an average of 48 operators, with one of the dataflows containing 102 operators, and comprised mostly of non-relational operators 1. Real dataflows in Texera [49] had an average size of 23 operators, and most of them had visualization and UDF operators.
Some operators are user-defined functions (UDF) that implement highly customized logic including machine learning techniques for analyzing data of different modalities such as text, images, audios, and videos [55]. For instance, the dataflows in the running example contain two non-relational operators, namely a Dictionary Matcher and a Classifier. (_T2_) Those adjacent versions of the same dataflow tend to be similar, especially during the phase where the developer is refining the dataflow to do fine tuning [21; 55]. For example, 50% of the dataflows belonging to the benchmarks that simulated real iterative tasks on video [55] and TPC-H [21] data had overlap. The refinements between the successive versions comprised of only a few changes over a particular part of the dataflow. Thus, we want to study the following: Footnote 1: Details of these dataflows are in Appendix B **Problem Statement:** Given two similar versions of a complex dataflow, verify if they produce the same results. **Limitations of existing solutions.** dataflows include relational operators and UDFs [34]. Thus, we can view the problem of checking the equivalence of two dataflow versions as the problem of checking the equivalence of two SQL queries. The latter is undecidable in general [1] (based on the reduction from First-order logic). There have been many Equivalence Verifiers (EVs) proposed to verify the equivalence of two SQL queries [58; 15; 59]. These EVs have _restrictions_ on the type of operators they can support, and mainly focus on relational operators such as SPJ, aggregation, and union. They cannot support many semantically rich operators common in dataflows, such as dictionary matching and classifier operators in the running example, and other operators such as unnest and sentiment analyzer. To investigate their limitations, we analyzed the SQL queries and dataflows from 6 workloads, and generated an equivalent version by adding an empty filter operator. Then, we used EVs from the literature [58; 59; 15; 53] to test the equivalence of these two versions. Table 1 shows the average percentage of pairs for each workload that can be verified by these EVs, which is very low. **Our Approach.** To solve the problem of verifying the equivalence of two dataflow versions, we leverage the fact that the two dataflow versions are almost identical except for a few local changes (_T2_). In this paper, we present Veer2, a verifier to test the equivalence of two dataflow versions. It addresses the aforementioned problem by utilizing existing EVs as a black box. In SS3, we give an overview of the solution, which divides the dataflow version pair into small parts, called "windows", so that each window satisfies the EV's restrictions in order to push testing the equivalence of a window to the EV. Our approach is simple yet highly effective in solving a challenging problem, making it easily applicable to a wide range of applications. Footnote 2: It stands for “Versioned Execution Equivalence Verifier.” **Why not develop a new EV?** A natural question arises: why do we choose to use existing EVs instead of developing a new one? Since the problem itself is undecidable, any developed solution will inherently have limitations and incompleteness. Our goal is to create a general-purpose solution that maximizes completeness by harnessing the capabilities of these existing EVs. This approach allows us to effectively incorporate any new EVs that may emerge in the future, ensuring the adaptability and flexibility of our solution. 
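As a rough illustration of this idea, and not the actual Veer algorithm developed in the following sections, the sketch below checks a version pair window by window, calling an off-the-shelf EV as a black box; the `build_window` and `ev_verify` functions are hypothetical placeholders.

```python
def verify_versions(v1, v2, edits, build_window, ev_verify):
    """Report 'equivalent' only if every window around a change is verified by the EV.

    Because the underlying EVs are incomplete, a failed window check does not prove the
    versions produce different results; it only means the answer is unknown.
    """
    for edit in edits:
        w1, w2 = build_window(v1, v2, edit)   # small sub-dataflows covering the changed part
        if not ev_verify(w1, w2):             # black-box call to an existing equivalence verifier
            return "unknown"
    return "equivalent"
```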
**Challenges and Contributions.** During the exploration of the proposed idea, we encountered several challenges in developing Veer: 1) How can we enhance the completeness of the solution while maintaining efficiency and effectively handling the incompleteness of the EVs? 2) How do we efficiently handle dataflow versions with a single edit and perform the verification? 3) How can we effectively handle dataflow versions with multiple edits, and can the windows overlap? We thoroughly investigate these challenges and present the following **contributions**. 1. We formulate the problem of verifying the equivalence of two complex dataflow versions in iterative data analytics. To the best of our knowledge, Veer is the first work that studies this problem by incorporating the knowledge of user edit operations into the solution (SS2). 2. We give an overview of the solution and formally define the "window" concept that is used in the equivalence verification algorithm (SS 3). 3. We first consider the case where there is a single edit. We analyze how the containment between two windows is related to their equivalence results, and use this analysis to derive the concept of "maximal covering window" for an efficient verification. We give insights on how to use EVs and the subtle cases of the EVs' restrictions and completeness (SS4). 4. We study the general case where the two versions have multiple edits. We analyze the challenges of using overlapping windows, and propose a solution based on the "decomposition" concept. We discuss the correctness and the completeness of our algorithm (SS5). 5. We provide a number of optimizations in Veer\({}^{+}\) to improve the performance of the baseline algorithm (SS 7). 6. We report the results of a thorough experimental evaluation of the proposed solutions. The experiments show that the proposed solution is not only able to verify dataflows that cannot be verified by existing EVs, but also able to do the verification efficiently (SS 9). ## 2. Problem Formulation In this section, we use an example dataflow to describe the setting. We also formally define the problem of verifying equivalence of two dataflow versions. Table 2 shows a summary of the notations used in this section. **Data processing workflow**. We consider a data processing workflow \(W\) as a directed acyclic graph (DAG), where each vertex is an operator and each link represents the direction of data flow. Each operator contains a computation function, we call it a _property_ such as a predicate condition, e.g., Price \(<20\). Each operator has outgoing links, and its produced data is sent on each outgoing link. An operator without any incoming links is called a Source. An operator without any outgoing links is called a Sink, and it produces the final results as a table to be consumed by the user. A dataflow may have multiple data source operators denoted as \(\mathbb{D}_{W}=\{D_{1},\ldots,D_{l}\}\) and multiple sink operators denoted as \(\mathbb{S}_{W}=\{s_{1},\ldots,s_{n}\}\). For example, consider a dataflow in Figure 0(a). It has two source operators "Tweets" and "Users" and two sink operators \(s_{i}\) and \(s_{p}\) to show a tabular result and a scatterplot visualization, respectively. The Outjoin operator has two outgoing links to push its processed tuples to the downstream Aggregate and Sink operators. The Filter operator's properties include the boolean selection predicate. 
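As a running illustration of this model, the sketch below shows one possible in-memory representation of a dataflow DAG in Python. The class and field names are ours, chosen for illustration, and do not reflect the internal data structures of any specific system such as Texera or Alteryx.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple


@dataclass
class Operator:
    op_id: str                                  # unique id within the dataflow
    op_type: str                                # e.g., "Filter", "Join", "Aggregate", "UDF"
    properties: Dict[str, str] = field(default_factory=dict)  # e.g., {"predicate": "Price < 20"}


@dataclass
class Dataflow:
    operators: Dict[str, Operator]              # op_id -> Operator
    links: Set[Tuple[str, str]]                 # directed links (from_id, to_id)

    def sources(self) -> List[str]:
        """Operators with no incoming links (e.g., the Tweets and Users scans)."""
        targets = {t for _, t in self.links}
        return [oid for oid in self.operators if oid not in targets]

    def sinks(self) -> List[str]:
        """Operators with no outgoing links; each produces a final table."""
        origins = {f for f, _ in self.links}
        return [oid for oid in self.operators if oid not in origins]


# A tiny fragment of a version: a source, a filter, and a sink.
v = Dataflow(
    operators={
        "tweets": Operator("tweets", "Source"),
        "filter": Operator("filter", "Filter", {"predicate": "Price < 20"}),
        "sink":   Operator("sink", "Sink"),
    },
    links={("tweets", "filter"), ("filter", "sink")},
)
assert v.sources() == ["tweets"] and v.sinks() == ["sink"]
```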
### Dataflow Version Control A dataflow \(W\) undergoes many edits from the time it was first constructed as part of the iterative process of data analytics (Srivastava et al., 2016; Wang et al., 2017). A dataflow \(W\) has a list of versions \(V_{W}=[v_{1},\ldots,v_{m}]\) along a timeline in which the dataflow changes. Each \(v_{j}\) is an immutable version of dataflow \(W\) at one point in time following version \(v_{j-1}\), and a number of edit operations transform \(v_{j-1}\) into \(v_{j}\). Definition 2.1 (Dataflow edit operation).: We consider the following edit operations on a dataflow: * An addition of a new operator. * A deletion of an existing operator. * A modification of the properties of an operator while the operator's type remains the same, e.g., changing the predicate condition of a Select operator. * An addition of a new link. * A removal of an existing link. 3 Footnote 3: We assume links do not have properties. Our solution can be generalized to the case where links have properties. A combination of these edit operations is a _transformation_, denoted as \(\delta_{j}\). The operation of applying the transformation \(\delta_{j}\) to a dataflow version \(v_{j}\) is denoted as \(\oplus\), and it produces a new version \(v_{j+1}\). Formally, \[v_{j+1}=v_{j}\oplus\delta_{j}.\tag{1}\] In the running example, the analyst makes edits to revise the dataflow version \(v_{1}\) in Figure 0(a). In particular, she (1) deletes the Filter\({}_{\text{o}}\) operator; (2) adds a new Filter\({}_{\text{h}}\) operator; and (3) adds a new Filter\({}_{\text{g}}\) operator. These operations, along with the necessary link changes to add those operators, correspond to a transformation \(\delta_{1}\), and applying it on \(v_{1}\) results in a new version \(v_{2}\), illustrated in Figure 0(b). **Dataflow edit mapping**. Given a pair of versions \((P,Q)\) and an edit mapping \(\mathcal{M}\), there is a corresponding transformation from \(P\) to \(Q\), which aligns every operator in \(P\) to at most one operator in \(Q\). Each operator in \(Q\) is mapped onto by at most one operator in \(P\). A link between two operators in \(P\) maps to a link between the corresponding operators in \(Q\). Those operators and links in \(P\) that are not mapped to any operators and links in \(Q\) are assumed to be deleted. Similarly, those operators and links in \(Q\) that are not mapped onto by any operators and links in \(P\) are assumed to be inserted. Figure 2 shows an example edit mapping between the two versions \(v_{1}\) and \(v_{2}\) in the running example. As Filter\({}_{\text{y}}\) from \(v_{1}\) is deleted, the operator is not mapped to any operator in \(v_{2}\). \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline **Workload** & **\# of pairs** & **UDP [15]** & **Equitas [59]** & **Spes [58]** & **WeTune [53]** & **Avg. \% of pairs supported by existing EVs** \\ \hline Calcite benchmark & 232 & 39 & 91 & 120 & 73 & 34.81\% \\ \hline \end{tabular} \end{table} Table 1: For each workload, the number of query/dataflow pairs and how many of them each existing EV can support. Figure 3 illustrates an example to show the relation between edit mapping and edit operations. Suppose two versions \(P\) and \(Q\) as follows: \[P=\{Project(all)\to Filter(age>24)\to Aggregate(\text{count by age})\}.\] \[Q=\{Aggregate(\text{count by age})\to Filter(age>24)\to Project(all)\}.\] There can be many different mappings that correspond to different sets of edit operations. 
Consider a mapping \(\mathcal{M}_{1}\) where Project from \(P\) is mapped to Aggregate in \(Q\), Filter in \(P\) is mapped to Filter in \(Q\), and Aggregate in \(P\) is mapped to Project in \(Q\). This mapping yields a set of edit operations that swaps Project and Aggregate in both versions. A different mapping \(\mathcal{M}_{2}\), which maps Aggregate in \(P\) to Aggregate in \(Q\), yields the edit operations of deleting both Project and Filter in \(P\) and inserting them after Aggregate in version \(Q\). ### Dataflow's Execution and Results A user submits an execution request to run a dataflow version. The execution produces the _result_ of each sink in the version. We make the following assumption: Assumption.: _Multiple executions of a dataflow (or a portion of the dataflow) will always produce the same results 4._ Footnote 4: This assumption is valid in many real-world applications, as we detail in the experiment Section 9. **Result equivalence of dataflow versions.** The execution request for the version \(v_{j}\) may produce a sink result equivalent to the corresponding sink of a previously executed version \(v_{j-k}\), where \(k<j\). For example, in Figure 0(b), executing the dataflow version \(v_{2}\) produces a result of the scatterplot sink \(s_{2}\) equivalent to the result of the corresponding scatterplot of \(v_{1}\). In particular, \(v_{2}\)'s edit pushes down the Filter operator, so the scatterplot result remains the same. Notice, however, that the result of \(s_{i}\) in \(v_{2}\) is not equivalent to the result of \(s_{i}\) in \(v_{1}\) because of the addition of the new \(\text{Filter}_{\text{h}}\) operator. Now, we formally define "sink equivalence." Definition 2.2 (Sink Equivalence and Version-Pair Equivalence).: Consider two dataflow versions \(P\) and \(Q\) with a set of edits \(\delta=\{c_{1}\dots c_{n}\}\) and the corresponding mapping \(\mathcal{M}\) from \(P\) to \(Q\). Each version can have multiple sinks. For each sink \(s\) of \(P\), consider the corresponding sink \(\mathcal{M}(s)\) of \(Q\). We say \(s\) is _equivalent to \(\mathcal{M}(s)\)_, denoted as "\(s\equiv\mathcal{M}(s)\)," if for every instance of the data sources of \(P\) and \(Q\), the two sinks produce the same result under the application's specified table semantics. In other words, every tuple \(t\) in \(s\) _exists_ in \(\mathcal{M}(s)\) under Set table semantics, every tuple \(t\) in \(s\) has an _identical_ tuple in \(\mathcal{M}(s)\) under Bag table semantics, or every tuple \(t\) in \(s\) has an _identical_ tuple and the same _order_ in \(\mathcal{M}(s)\) under Ordered Bag table semantics. We say \(s\) is _inequivalent to \(\mathcal{M}(s)\)_, denoted as "\(s\not\equiv\mathcal{M}(s)\)," if there exists an instance of the data sources of \(P\) and \(Q\) where the two sinks produce different results. The two versions are called _equivalent_, denoted as "\(P\equiv Q\)," if each pair of their sinks under the mapping is equivalent. The two versions are called _inequivalent_, denoted as "\(P\not\equiv Q\)," if any pair of their sinks under the mapping is inequivalent. In this paper, we study the problem where the two versions have a single sink. We generalize the solution to the case of multiple sinks in follow-up work (Borda et al., 2017). **Expressive power of dataflows and SQL queries.** Dataflows may involve complex operations, such as user-defined functions (UDFs). We consider dataflow DAGs that can be viewed as a class of SQL queries that do not contain recursion. 
In this paper, we focus on dataflows using relational operators, including union, selection, projection, join, and aggregation (USPJA), possibly with arithmetic expressions, as well as UDFs with deterministic functions, under the application's specific table semantics (sets, bags, or ordered bags). Thus, the problem of testing the equivalence of two dataflow versions can be treated as testing the equivalence of two SQL queries without recursion. In the remainder of the paper we use "query" and "dataflow version" interchangeably. ### Equivalence Verifiers (EVs) An equivalence verifier (or "EV" for short) takes as input a pair of SQL queries \(\mathbf{Q}_{1}\) and \(\mathbf{Q}_{2}\). An EV returns \(\mathsf{True}\) when \(\mathbf{Q}_{1}\equiv\mathbf{Q}_{2}\), \(\mathsf{False}\) when \(\mathbf{Q}_{1}\not\equiv\mathbf{Q}_{2}\), or \(\mathsf{Unknown}\) when the EV cannot determine the equivalence of the pair under a specific table semantics [15; 53; 58; 59]. For instance, UDP [15] and Equitas [59] are two EVs. The former uses \(\mathsf{U}\)-expressions to model a query, while the latter uses a symbolic representation. Both EVs internally convert the expressions to a first-order-logic (FOL) formula and then push the formula to a solver, such as an SMT solver [18], to decide its satisfiability. An EV requires two queries to meet certain requirements (called "restrictions") in order to test their equivalence. We discuss these restrictions in detail in Section 4.2. Problem Statement.: _Given an EV and two dataflow versions \(P\) and \(Q\) with their mapping \(\mathcal{M}\), verify if the two versions are equivalent._ Figure 2. Example of an edit mapping between version \(v_{1}\) and \(v_{2}\). Portions of the dataflows are omitted for clarity. Figure 3. An example of two edit mappings (\(\mathcal{M}_{1}\) on the left and \(\mathcal{M}_{2}\) on the right) leading to two different sets of edit operations. The update edit operation is highlighted in orange, removals in red, and additions in green. ## 3. Veer: Verifying Equivalence of a Version Pair In this section, we give an overview of Veer for checking the equivalence of a pair of dataflow versions (Section 3.1). We formally define the concepts of "window" and "covering window" (Section 3.2). ### Veer: Overview To verify the equivalence of a pair of sinks in two dataflow versions, Veer leverages the fact that the two versions are mostly identical except for a few places with edit operations. It uses existing EVs as a black box. Given an EV, our approach is to break the version pair into multiple "windows," each of which includes local changes and satisfies the EV's restrictions, so that the EV can verify whether the pair of dataflow portions in the window is equivalent, as illustrated in Figure 4. We consider different semantics of equivalence between two tuple collections, including sets, bags, and lists, depending on the application of the dataflow and the given EV. Veer is agnostic to the underlying EVs, making it usable for any EV of choice. Next we define the concepts used in this approach. ### Windows and Covering Windows Definition 3.1 (Window).: Consider two dataflow versions \(P\) and \(Q\) with a set of edits \(\delta=\{c_{1}\ldots c_{n}\}\) from \(P\) to \(Q\) and a corresponding mapping \(\mathcal{M}\) from \(P\) to \(Q\). 
A _window_, denoted as \(\omega\), is a pair of sub-DAGs \(\omega(P)\) and \(\omega(Q)\), where \(\omega(P)\) (respectively \(\omega(Q)\)) is a connected induced sub-DAG of \(P\) (respectively \(Q\)). Each pair of operators/links under the mapping \(\mathcal{M}\) must be either both in \(\omega\) or both outside \(\omega\). The operators in the sub-DAGs \(\omega(P)\) and \(\omega(Q)\) without outgoing links are called their _sinks_. Recall that we assume each dataflow has a single sink. However, the sub-DAGs \(\omega(P)\) and \(\omega(Q)\) may have more than one sink. This can happen, for example, when the window contains a Replicate operator. Figure 5 shows a window \(\omega\), where each sub-DAG includes the Classifier operator and two downstream operators, Left-Outerjoin and Join, which are the two sinks of the sub-DAG. We omit portions of the dataflows in all the figures throughout the paper for clarity. Definition 3.2 (Neighboring Window).: Consider two dataflow versions \(P\) and \(Q\) with a set of edits \(\delta=\{c_{1}\ldots c_{n}\}\) from \(P\) to \(Q\) and a corresponding mapping \(\mathcal{M}\) from \(P\) to \(Q\). We say two windows, \(\omega_{1}\) and \(\omega_{2}\), are _neighbors_ if there exists a sink (or source) operator of the sub-DAGs in one of the windows that is a direct upstream (respectively downstream) operator in the original DAG of a source (respectively sink) operator of the sub-DAGs in the other window. Figure 6 shows two neighboring windows, \(\omega_{1}\) and \(\omega_{2}\). The sink operator of \(\omega_{1}\) (Filter) is a direct upstream operator of the source operator of \(\omega_{2}\) (Sort) in the original DAG. Definition 3.3 (Covering window).: Consider two dataflow versions \(P\) and \(Q\) with a set of edits \(\delta=\{c_{1}\ldots c_{n}\}\) from \(P\) to \(Q\) and a corresponding mapping \(\mathcal{M}\) from \(P\) to \(Q\). A _covering window_, denoted as \(\omega_{C}\), is a window that covers a set of changes \(C\subseteq\delta\). That is, the sub-DAG in \(P\) (respectively the sub-DAG in \(Q\)) in the window includes the operators/links of the edit operations in \(C\). When the edit operations are clear from the context, we simply write \(\omega\) to refer to a covering window. Figure 7 shows a covering window for the change of adding the operator \(\mathsf{Filter_{h}}\) to \(v_{2}\). The covering window includes the sub-DAG \(\omega(v_{1})\) of \(v_{1}\) and contains the Aggregate operator. It also includes the sub-DAG \(\omega(v_{2})\) of \(v_{2}\) and contains the Filter\({}_{\text{h}}\) and Aggregate operators. Figure 4. Overview of Veer. Given an EV and two versions with their mapping, Veer breaks (decomposes) the version pair into small windows, each of which satisfies the EV's restrictions. It finds different possible decompositions until it finds one with each of the windows verified as equivalent by the EV. Figure 5. An example window \(\omega\) (shown as "\(\cdots\)"), where each sub-DAG \(\omega(v_{1})\) and \(\omega(v_{2})\) contains two sinks (shown as "\(\oplus\)"). \(v_{1}\) is above the horizontal line and \(v_{2}\) is below the line. Figure 6. An example showing two neighboring windows. Definition 3.4 (Equivalence of the two sub-DAGs in a window).: We say the two sub-DAGs \(\omega(P)\) and \(\omega(Q)\) of a window \(\omega\) are _equivalent_, denoted as "\(\omega(P)\equiv\omega(Q)\)," if they are equivalent as two stand-alone DAGs, i.e., without considering the constraints from their upstream operators. 
That is, for every instance of source operators in the sub-DAGs (i.e., those operators without ancestors in the sub-DAGs), each sink \(s\) of \(\omega(P)\) and the corresponding sink \(\mathcal{M}(s)\) in \(\omega(Q)\) produce the same results. In this case, for simplicity, we say this window is equivalent. Figure 8 shows an example of a covering window \(\omega^{\prime}\), where its sub-DAGs \(\omega^{\prime}(v_{1})\) and \(\omega^{\prime}(v_{2})\) are equivalent. Notice that for each sub-DAG in the window \(\omega\), the results of its upstream operators are the input to the sub-DAG. The equivalence definition considers all instances of the sources of the sub-DAG, without considering the constraints on its input data as the results of upstream operators. For instance, consider the two dataflow versions in Figure 9. The two sub-DAGs of the shown window \(\omega\) are clearly not equivalent as two general dataflows, as the top sub-DAG has a filter operator, while the bottom one does not. However, if we consider the constraints of the input data from the upstream operators, the sub-DAGs in \(\omega\) are indeed equivalent, because each of them has an upstream filter operator with a predicate \(age<50\), making the predicate \(age<55\) redundant. We use this definition of sub-DAG equivalence despite the above observation, because we treat the sub-DAGs in a window as a pair of stand-alone dataflow DAGs to pass to the EV for verification (see Section 4.1). Definition 3.5 (Window containment).: We say a window \(\omega\) is _contained_ in a window \(\omega^{\prime}\), denoted as \(\omega\subseteq_{w}\omega^{\prime}\), if \(\omega(P)\) (respectively \(\omega(Q)\)) of \(\omega\) is a sub-DAG of the corresponding one in \(\omega^{\prime}\). In this case, we call \(\omega\) a _sub-window_ of \(\omega^{\prime}\), and \(\omega^{\prime}\) a _super-window_ of \(\omega\). For instance, the window \(\omega\) in Figure 7 is contained in the window \(\omega^{\prime}\) in Figure 8. Figure 7. A covering window \(\omega\) for adding Filter\({}_{\text{h}}\). Figure 8. An example covering window \(\omega^{\prime}\) showing that its pair of sub-DAGs are equivalent. Figure 9. Two sub-DAGs in the window \(\omega\) are not equivalent, as sub-DAG equivalence in Definition 3.4 does not consider constraints from the upstream operators. But the two complete dataflow versions are indeed equivalent. ## 4. Two versions with a single edit In this section, we study how to verify the equivalence of two dataflow versions \(P\) and \(Q\) with a single change \(c\) under the corresponding mapping \(\mathcal{M}\) from \(P\) to \(Q\). We leverage a given EV \(\gamma\) to verify the equivalence of two queries. We discuss how to use the EV to verify the equivalence of the version pair in a window (Section 4.1), and discuss the EV's restrictions (Section 4.2). We present a concept called "maximal covering window", which helps in improving the performance of verifying the equivalence (Section 4.3), and develop a method to find maximal covering windows to verify the equivalence of the two versions (Section 4.4). ### Verification Using a Covering Window We show how to use a covering window to verify the equivalence of a version pair. Lemma 4.1.: _Consider a version pair \((P,Q)\) with a single edit operation \(c\) between them. If there is a covering window \(\omega=(\omega(P),\omega(Q))\) of the edit operation such that the sub-DAGs of the window are equivalent, then the version pair is equivalent._ Proof.: Suppose \(\omega(P)\equiv\omega(Q)\). From the definition of a covering window, every operator in one sub-DAG of the window \(\omega\) has its corresponding mapped operator in the other sub-DAG of the window, and the change \(c\) is included in the window. This means that the sub-DAGs of \(P\) and \(Q\) that precede the window \(\omega\) are isomorphic (structurally identical) and the sub-DAGs of \(P\) and \(Q\) that follow the window are isomorphic, as shown in Figure 9(a). Following the assumption that multiple runs of a dataflow produce the same result, this implies that given an instance of input sources \(\mathsf{D}\), the sub-DAGs before the window produce equivalent results according to Definition 2.2. This result becomes the input source for the window \(\omega\), and given that the sub-DAGs in \(\omega\) are equivalent, each sink of \(\omega(P)\) is equivalent to the corresponding sink (according to the mapping) of \(\omega(Q)\). Hence, the output of the window, which is the input to the pair of sub-DAGs following the window, is identical, and since the operators are isomorphic, the result of the sub-DAGs following the window is equivalent. Thus, \(P\equiv Q\). Based on this lemma, we can verify the equivalence of a pair of versions as follows: we consider a covering window and check the equivalence of its sub-DAGs by passing each pair of sinks and the sinks' ancestor operators in the window (to form a query pair) to an EV. To pass a pair of sub-DAGs to the EV, we need to complete it by attaching virtual sources and sinks, which are extracted from the original DAGs. This step is vital for determining schema information. The sub-DAGs are then transformed into a representation understandable by the EV, e.g., a logical DAG. If the EV shows that all the sink pairs of the sub-DAGs are equivalent, then the two versions are equivalent. A key question is how to find such a covering window. Notice that the two sub-DAGs in Figure 7 are not equivalent. However, if we include the downstream Filter\({}_{\mathsf{k}}\) in the covering window to form a new window \(\omega^{\prime}\) (shown in Figure 8) with a pair of sub-DAGs \(\omega^{\prime}(P)\) and \(\omega^{\prime}(Q)\), then the two sub-DAGs in \(\omega^{\prime}\) are equivalent. This example suggests that we may need to consider multiple windows in order to find one that is equivalent. ### EV Restrictions and Valid Windows We cannot give an arbitrary window to the EV, since each EV has certain restrictions on the sub-DAGs in order to verify their equivalence. Definition 4.2 (EV's restrictions).: _Restrictions_ of an EV are a set of conditions such that, for each query pair, if the pair satisfies these conditions, then the EV is able to determine the equivalence of the pair without giving an Unknown answer. We will relax this definition in Section 8, discuss the consequences of relaxing the definition, and propose solutions. There are two types of restrictions. * Restrictions due to the EV's explicit assumptions: For example, UDP [15] and Equitas [59] support reasoning about certain operators, e.g., Aggregate and SPJ, but not other operators such as Sort. * Restrictions that are derived from the modules used by the EV: For example, Equitas [59], Spes [58], and Spark Verifier [25] use an SMT solver [18] to determine if a FOL formula is satisfiable or not. 
The SMT solver is not complete for determining the satisfiability of formulas whose predicates have non-linear conditions [(8)]. Thus, these EVs require the predicate conditions in their expressions to be linear to make sure they receive an answer from the solver. As an example, the following are the explicit and _derived_ restrictions of Equitas [(59)] to test the equivalence of two queries 1. Footnote 1: Applications that wish to use Veer need to extend it to include their EV of choice if it is not Equitas [(59)] or Spes [(58)], and incorporate the restrictions specific to those EVs. 1. The table semantics has to be set semantics.2 2. All operators have to be of any of the following types: SPJ, Outer join, and/or Aggregate. Footnote 2: In this work, the application determines the desired table semantics, and Veer decides to use an EV that supports the table semantics requested by the application by checking this restriction. 3. The predicate conditions of SPJ operators have to be linear. 4. Both queries should have the same number of Outer join operators, if present. 5. Both queries should have the same number of Aggregate operators, if present. 6. If they use an Aggregate operator with an aggregation function that depends on the cardinality of the input tuples, e.g., COUNT, then each upstream operator of the Aggregate operator has to be an SPJ operator, and the input tables are not scanned more than once. Definition 4.3 (Valid window w.r.t an EV).: We say a window is _valid_ with respect to an EV if it satisfies the EV's restrictions. In order to test if a window is valid, we pass it to a "validator", which checks whether the window satisfies the EV's restrictions. Testing window validity is challenging due to the complexity of exhaustively listing all potential sets of an EV's restrictions. These restrictions are designed to facilitate the evaluation of a broader range of covering windows using the provided EV (without encountering Unknown outcomes). To maximize completeness and identify valid windows, we can adopt a set of sufficient syntactic conditions, which may not be necessary conditions. If these conditions are met, we can test the equivalence of valid windows. However, this conservative approach may hinder the ability of Veer to identify an equivalence, because more covering windows may violate the rigid restrictions. We acknowledge that there may be additional conditions, not covered here, that could lead to the consideration of more possible valid windows. Figure 10. Conceptual examples to explain the relation between a "covering window" and version pair equivalence. ### Maximal Covering Window (MCW) A main question is how to find a valid covering window with respect to the given EV, using which we can verify the equivalence of the two dataflow versions. A naive solution is to consider all the covering windows of the edit \(c\). For each of them, we check its validity, e.g., whether it satisfies the constraints of the EV. If so, we pass the window to the EV to check the equivalence. This approach is computationally costly, since there can be many covering windows. Thus our focus is to reduce the number of covering windows that need to be considered without missing a chance to detect the equivalence of the two dataflow versions. The following lemma helps us reduce the search space. **Lemma 4.4**.: _Consider a version pair \((P,Q)\) with a single edit \(c\) between them. 
Suppose a covering window \(\omega\) of \(c\) is contained in another covering window \(\omega^{\prime}\). If the sub-DAGs in window \(\omega\) are equivalent, then the sub-DAGs of \(\omega^{\prime}\) are also equivalent._ Proof.: Suppose \(\omega(P)\equiv\omega(Q)\). Suppose a window \(\omega^{\prime}\) consists of the sub-DAGs of the entire version pair, i.e. \(\omega^{\prime}(P)=P\) and \(\omega^{\prime}(Q)=Q\). This means that \(\omega\subseteq\omega^{\prime}\) as \(\omega(P)\subseteq\omega^{\prime}(P)\) and \(\omega(Q)\subseteq\omega^{\prime}(Q)\). Given that the sub-DAGs in \(\omega\) are equivalent, from Lemma 4.1, we can infer the version pair is equivalent, which means the sub-DAGs in the window \(\omega^{\prime}\) are equivalent. Based on Lemma 4.4, we can just focus on covering windows that have as many operators as possible without violating the constraints of the EV. If the EV shows that such a window is not equivalent, then none of its sub-windows can be equivalent. Based on this observation, we introduce the following concept. **Definition 4.5** (Maximal Covering Window (MCW)).: Given a dataflow version pair \((P,Q)\) with a single edit operation \(c\), a valid covering window \(\omega\) is called _maximal_ if it is not properly contained by another valid covering window. The change \(c\) may have more than one MCW, For example, suppose the EV is Equitas (Spielman, 2007). Figure 11 shows two MCWs to cover the change of adding the Filter\({}_{\text{h}}\) operator. One maximal window \(\omega_{1}\) includes the change Filter\({}_{\text{h}}\) and Left Outerjoin on the left of the change. The window cannot include the Classifier operator from the left side because Equitas cannot reason its semantics (Spielman, 2007). Similarly, the Aggregate operator on the right cannot be included in \(\omega_{1}\) because one of Equitas (Spielman, 2007) restrictions is that the input of an Aggregate operator must be an SPJ operator and the window already contains Left Outerjoin. To include the Aggregate operator, a new window \(\omega_{2}\) is formed to exclude Left Outerjoin and include Filter\({}_{\text{h}}\) on the right but cannot include Sort because this operator cannot be reasoned by Equitas (Spielman, 2007). The MCW \(\omega_{2}\) is verified by Equitas (Spielman, 2007) to be equivalent, whereas \(\omega_{1}\) is not. Notice that one equivalent covering window is enough to show the equivalence of the two dataflow versions. ### Finding MCWs to Verify Equivalence Next we study how to efficiently find an MCW to verify the equivalence of two dataflow pairs. We present a method shown in Algorithm 1. Given a version pair \(P\) and \(Q\) and a single edit operation \(c\) based on the mapping \(\mathcal{M}\), the method finds an MCW that is verified by the given EV \(\gamma\) to be equivalent. 
```
Input: A version pair (P, Q); a single edit c; a mapping M; an EV γ
Output: A flag to indicate if they are equivalent
    // a True value to indicate the pair is equivalent, a False value to indicate
    // the pair is not equivalent, or Unknown when the pair cannot be verified
ω ← create an initial window to include the source and the corresponding target
    (operator/link) of the edit c
Ω = {ω}    // initialize a set for exploring windows
// using memoization, a window is explored only once
while Ω is not empty do
    ω_i ← remove one window from Ω
    for every neighbor of ω_i do
        if adding the neighbor to ω_i meets the EV's restrictions then
            add ω_i' (including the neighbor) to Ω
        end if
    if none of the neighbors were added to ω_i then
        // the window is maximal
        if ω_i is verified equivalent by the EV then
            return True
        if ω_i is verified not equivalent by the EV and the window is the entire version pair then
            return False
return Unknown
```
**Algorithm 1** Verifying equivalence of two dataflow versions with a single edit

We use the example in Figure 12 to explain the details of Algorithm 1. The first step is to initialize the window to cover the source and target operator of the change only (line 1). In this example, for the window \(\omega_{1}\), its sub-DAG \(\omega_{1}(v_{2})\) contains only Filter\({}_{\text{h}}\), and \(\omega_{1}(v_{1})\) contains its corresponding operator under the mapping \(\mathcal{M}\). Then we expand all the windows created so far, i.e., \(\omega_{1}\) in this case (line 2). To expand the window, we enumerate all possible combinations of including the neighboring operators on both \(\omega_{1}(v_{1})\) and \(\omega_{1}(v_{2})\) using the mapping. For each neighbor, we form a new window and check if it has not been explored yet. If not, then we check if the newly formed window is valid (lines 5-6). In this example, we create the two windows \(\omega_{2}\) and \(\omega_{3}\) to include the operators Outer-join and Aggregate in each window, respectively. We add those windows marked as valid to the traversal list to be further expanded in the following iterations (line 7). We repeat the process on every window. After all the neighbors have been explored and we cannot expand the window anymore, we mark it as maximal (line 9). A subtle case arises when adding a single neighbor yields an invalid window, but adding a combination of neighbors yields a valid window. This is discussed in Section 5.5. Then we test the equivalence of this maximal window by calling the EV. If the EV says it is equivalent, the algorithm returns True to indicate the version pair is equivalent (line 10). If the EV says that it is not equivalent and the window's sub-DAGs are the complete version pair, then the algorithm returns False (line 13). Otherwise, we iterate over other windows until there are no other windows to expand. In that case, the algorithm returns Unknown to indicate that the version equivalence cannot be verified, as in line 15. Some EVs (Kang et al., 2015; Wang et al., 2016; Wang et al., 2017) return False to indicate that the equivalence of the version pair cannot be verified, but it does not necessarily mean that the pair is inequivalent. Figure 11. Two MCWs \(\omega_{1}\) and \(\omega_{2}\) satisfying the restrictions of Equitas (Spielman, 2007) to cover the change of adding Filter\({}_{\text{h}}\) to \(v_{2}\). 
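As a concrete illustration of this search loop, the following is a minimal Python sketch of Algorithm 1. It assumes a window is modeled simply as a frozen set of mapped operator-pair ids, and that the EV-specific pieces (`neighbors`, `is_valid`, `ev_verify`) are supplied by the caller; these helper names are ours and are not taken from the paper's implementation.

```python
from collections import deque
from typing import Callable, FrozenSet, Iterable, Optional

Window = FrozenSet[str]   # a window as the set of mapped operator-pair ids it contains


def find_equivalent_mcw(initial: Window,
                        neighbors: Callable[[Window], Iterable[str]],
                        is_valid: Callable[[Window], bool],
                        ev_verify: Callable[[Window], Optional[bool]],
                        whole_pair: Window) -> Optional[bool]:
    """Sketch of Algorithm 1: True = equivalent, False = inequivalent, None = Unknown."""
    explored = {initial}                  # memoization: each window is explored once
    queue = deque([initial])
    while queue:
        w = queue.popleft()
        expanded = False
        for n in neighbors(w):            # operators adjacent to the current window
            w2 = frozenset(w | {n})
            if w2 not in explored and is_valid(w2):   # must keep satisfying the EV's restrictions
                explored.add(w2)
                queue.append(w2)
                expanded = True
        if not expanded:                  # the window is maximal (an MCW)
            verdict = ev_verify(w)        # push the window pair to the EV
            if verdict is True:
                return True               # one equivalent covering window suffices
            if verdict is False and w == whole_pair:
                return False              # the EV proved the entire pair inequivalent
    return None                           # Unknown
```

The same skeleton reappears in Section 5, where the items being merged during expansion are windows of a decomposition rather than single operators.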
We take note of these EVs, and in the algorithm mentioned above, we only report False if the EV is capable of proving the inequivalence of the pair, such as COSETTE (Kang et al., 2015). ## 5. Two versions with multiple Edits In the previous section, we assumed there is a single edit operation to transform a dataflow version to another version. In this section, we extend the setting to discuss the case where multiple edit operations \(\delta=\{c_{1}\ldots c_{n}\}\) transform a version \(P\) to a version \(Q\). A main challenge is finding covering windows for multiple edits (Section 5.1). We address the challenge by decomposing the version pair into a set of _disjoint_ windows. We formally define the concepts of "decomposition" and "maximal decomposition" (Section 5.2). We explain how to find maximal decompositions to verify the equivalence of the version pair and prove the correctness of our solution (Section 5.4). We analyze the completeness of the proposed algorithm (Section 5.5). ### Can we use overlapping windows? When the two versions have more than one edit, they can have multiple covering windows. A natural question is whether we can use covering windows that overlap with each other to test the equivalence of the two versions. Definition 5.1 (Overlapping windows).: We say that two windows, \(\omega_{1}\) and \(\omega_{2}\), _overlap_ if at least one operator is included in any of the sub-DAGs of both windows. We will use an example to show that we cannot do that. The example, shown Figure 13, is inspired from the NY Taxi dataset (Zhou et al., 2017) to calculate the trip time based on the duration and starting time. Suppose the Select\({}_{\text{x}}\) and Select\({}_{\text{z}}\) operators are deleted from a version \(v_{1}\) and Select\({}_{\text{y}}\) operator is added to transform the dataflow to version \(v_{2}\). The example shows two overlapping windows \(\omega\) and \(\omega^{\prime}\), each window is equivalent. We cannot say the version pair in the example above is equivalent. The reason is that for the pair of sub-DAGs in \(\omega^{\prime}\) to be equivalent, Figure 12. Example to illustrate the process of finding MCWs for the change of adding Filter\({}_{\text{h}}\) to \(v_{2}\). Figure 13. In this example, the blue window \(\omega\) is equivalent and the purple window \(\omega^{\prime}\) is also equivalent. But the version pair is not equivalent. The shaded gray area is the input to window \(\omega^{\prime}\). the input sources have to be the same (the shaded area in grey in the example). However, we cannot infer the equivalence of the outcome of that portion of the sub-DAG. In fact, the pair of sub-DAGs in the shaded area in this example produce different results. This problem does not exist in the case of a single edit, because the input sources to any _covering_ window (in a single edit case) will always be a one-to-one mapping of the two sub-DAGs and there is no other change outside the covering window. The solution in Section 4 finds _any_ window such that its sub-DAGs are equivalent and cannot be directly used to solve the case of verifying the equivalence of the version pair when there are multiple edits. To overcome this challenge and enable using windows to check the equivalence of the version pair, we require the covering windows to be _disjoint_. In other words, each operator should be included in one and only one window. A naive solution is to do a simple exhaustive approach of decomposing the version pair into all possible combinations of disjoint windows. 
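The disjointness requirement motivated by this counterexample can be stated directly in code. The sketch below assumes, purely for illustration, that a window is represented as the set of operator ids it contains.

```python
from typing import List, Set


def windows_overlap(w1: Set[str], w2: Set[str]) -> bool:
    """Definition 5.1: two windows overlap if they share at least one operator."""
    return bool(w1 & w2)


def pairwise_disjoint(windows: List[Set[str]]) -> bool:
    """Disjointness needed for a decomposition: no operator appears in two windows."""
    seen: Set[str] = set()
    for w in windows:
        if seen & w:
            return False
        seen |= w
    return True
```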
Next, we formally define a version pair decomposition and how it is used to check the equivalence of a version pair. ### Version Pair Decomposition Definition 5.2 (Decomposition).: For a version pair \(P\) and \(Q\) with a set of edit operations \(\delta=\{c_{1}\ldots c_{n}\}\) from \(P\) to \(Q\), a _decomposition_, \(\theta\) is a set of windows \(\{\omega_{1},\ldots,\omega_{m}\}\) such that: * Each edit is in one and only one window in the set; * All the windows are disjoint; * The union of the windows is the version pair. Figure 14 shows a decomposition for the three changes in the running example. The example shows two covering windows \(\omega_{1}\) and \(\omega_{2}\), each covers one or more edits 7. Next, we show how to use a decomposition to verify the equivalence of the version pair by generalizing Lemma 4.1 as follows. Footnote 7: For simplicity, we only show covering windows of a decomposition in the figures throughout this section. Lemma 5.3 ().: _(Corresponding to Lemma 4.1) For a version pair \(P\) and \(Q\) with a set of edit operations \(\delta=\{c_{1}\ldots c_{n}\}\) to transform \(P\) to \(Q\), if there is a decomposition \(\theta\) such that every covering window in \(\theta\) is equivalent, then the version pair is equivalent._ Proof.: Suppose every covering window \(\omega_{i}\) in a decomposition \(\theta\) is equivalent. Every other window that is not covering, its sub-DAGs are structurally identical, according to Definition 3.2. Given an instance of input sources \(\mathbb{D}\), we can have the following two cases. (CASE1:) the input is processed by a pair of structurally identical sub-DAGs that are in a non-covering window. In this case, the pair of sub-DAGs produce an equivalent result since every operator is deterministic according to Assumption 2.2. (CASE2:) the input is processed by a pair of sub-DAGs in a covering window. In this case, the pair of sub-DAGs produce equivalent result because we assumed each covering window is equivalent. In both cases, the output acts as the input to the following portion of the sub-DAGs (either non-covering or a covering window). This propagation continues along the pair of DAGs until the end, thus the version pair produces equivalent results as shown in Figure 15. A natural question is how to find a decomposition where each of its windows is equivalent. We could exhaustively consider all the possible decompositions, but the number can grow exponentially as the size of the dataflow and the number of changes increase. The following "decomposition containment" concept, defined shortly, helps us reduce the number of decompositions that need to be considered. Definition 5.4 (Decomposition containment).: We say a decomposition \(\theta\) is _contained_ in another decomposition \(\theta^{\prime}\), denoted as \(\theta\subseteq_{\theta}\theta^{\prime}\), if every window in \(\theta\), there exists a window in \(\theta^{\prime}\) that contains it. Figure 16 shows an example of a decomposition \(\theta^{\prime}\) that contains the decomposition \(\theta\) in Figure 14. We can see that in general, if a decomposition \(\theta\) is contained in another decomposition \(\theta^{\prime}\), then each window in \(\theta^{\prime}\) is a concatenation of one or multiple windows in \(\theta\). The following lemma, which is a generalization of Lemma 4.4, can help us prune the search space by ignoring decompositions that are properly contained by other decompositions. 
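Definitions 5.2 and 5.4 translate into straightforward set checks. The following sketch again treats each window as a set of operator ids, which is an illustrative simplification of the sub-DAG pairs used in the paper; the function names are ours.

```python
from typing import List, Set

Window = Set[str]


def is_decomposition(windows: List[Window], all_ops: Set[str], edits: Set[str]) -> bool:
    """Definition 5.2: windows are disjoint, their union is the whole version pair,
    and every edit is covered by exactly one window."""
    seen: Set[str] = set()
    for w in windows:
        if seen & w:                          # windows must be disjoint
            return False
        seen |= w
    if seen != all_ops:                       # union must equal the version pair
        return False
    return all(sum(e in w for w in windows) == 1 for e in edits)


def contained_in(theta: List[Window], theta_prime: List[Window]) -> bool:
    """Definition 5.4: every window of theta is contained in some window of theta_prime."""
    return all(any(w <= w2 for w2 in theta_prime) for w in theta)
```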
Lemma 5.5 (Corresponding to Lemma 4.4) Consider a version pair \(P\) and \(Q\) with a set of edit operations \(\delta=\{c_{1}\ldots c_{n}\}\) from \(P\) to \(Q\). Suppose a decomposition \(\theta\) is contained in another decomposition \(\theta^{\prime}\). If each window in \(\theta\) is equivalent, then each window in \(\theta^{\prime}\) is also equivalent. Proof.: Suppose each window in a decomposition \(\theta\) is equivalent and the decomposition is contained in another decomposition \(\theta^{\prime}\) Figure 14. A decomposition \(\theta\) with two covering windows \(\omega_{1}\) and \(\omega_{2}\) that cover the three edits. Figure 15. Using multiple covering windows on multiple edits to check the equivalence of two versions. Based on the Definition of decomposition containment 5.2, we know that each window in \(\theta\) is contained in a window in \(\theta^{\prime}\). According to Lemma 4.4, if a window is equivalent then a window that contains it is also equivalent. We can deduce that every window in \(\theta^{\prime}\) is equivalent, therefore the version pair is equivalent as per Lemma 5.3. ### Maximal Decompositions w.r.t. an EV Lemma 5.5 shows that we can safely find decomposition that contain other ones to verify the equivalence of the version pair. At the same time, we cannot increase each window arbitrarily, since the equivalence of each window needs to be verified by the EV, and the window needs to satisfy the restrictions of the EV. Thus we want decompositions that are as containing as possible while each window is still valid. We formally define the following concepts. Definition 5.6 (Valid Decomposition).: We say a decomposition \(\theta\) is _valid_ with respect to an EV if each of its covering windows is valid with respect to the EV. Definition 5.7 (Maximal Decomposition (MD)).: We say a valid decomposition \(\theta\) is _maximal_ if no other valid decomposition \(\theta^{\prime}\) exists such that \(\theta^{\prime}\) properly contains \(\theta\). The decompositions w.r.t an EV form a unique graph structure, where each decomposition is a node. It has a single root corresponding to the decomposition that includes every operator as a separate window. A downward edge indicates a "contained-in" relationship. A decomposition can be contained in more than one decomposition. Each leaf node at the bottom of the hierarchy is an MD as there are no other decompositions that contain it and the hierarchy may not be balanced. If the entire version pair satisfies the EV's restrictions, then the hierarchy becomes a lattice structure with a single leaf MD being the entire version pair. Each branching factor depends on the number of changes, the number of operators, and the EV's restrictions. Figure 17 shows the hierarchical relationships of the valid decompositions of the running example when the EV is Equitas (Sen and Tou, 2018). The example shows two MD \(\theta_{12}\) and \(\theta_{16}\). ### Finding a Maximal Decomposition to Verify Equivalence (A Baseline Approach) Now we present an algorithm for finding maximal decompositions shown in Algorithm 2. We will explain it using the example in Figure 17. We return \(\mathsf{True}\) to indicate the pair is equivalent if there are no changes and the two versions exactly match (Line 1-2). Otherwise, we add the initial decomposition, which includes each operator as a window, to a set of decompositions to be expanded (line 3). In each of the following iterations, we remove a decomposition from the set, and iteratively expand its windows. 
To expand a window, we follow the same procedure as in Algorithm 1 to expand its neighbors. The only difference is that the neighbors in this case are windows, and we merge windows if their union is valid (line 10). If a window cannot be further expanded, then we mark the window as maximal to avoid checking it again (line 14). A subtle case arises when the merge of two windows yields an invalid window but the merge of a combination of more windows produces a valid window. We discuss this in Section 5.5. If all of the windows in the decomposition are maximal, we mark the decomposition as maximal, and verify whether each covering window is equivalent by passing it to the given EV (line 17). If all of the windows are verified to be equivalent, we return \(\mathsf{True}\) to indicate that the version pair is equivalent (line 18). If the decomposition contains only a single window, which includes the entire version pair, and the EV decides that the window is not equivalent, then the algorithm returns \(\mathsf{False}\) (line 20). Otherwise, we continue exploring other decompositions until there are no more decompositions to explore. In that case, we return Unknown to indicate that the equivalence of the version pair cannot be determined (line 22). This algorithm generalizes Algorithm 1 to handle the case of two versions with multiple edits. For an efficient exploration, we only expand and maximize covering windows and verify them. Theorem 5.8 (Correctness).: _Given a dataflow version pair (\(P\), \(Q\)), an edit mapping, and a sound EV, 1) if \(\mathsf{Veer}\) returns \(\mathsf{True}\), then \(P\equiv Q\), and 2) if \(\mathsf{Veer}\) returns \(\mathsf{False}\), then \(P\not\equiv Q\)._ Proof.: 1) Suppose \(P\not\equiv Q\). According to Definition 2.2, this means that for a given instance of input sources \(\mathbb{D}\), there is a tuple \(t\) that exists in the sink of \(P\) but does not exist in the sink of \(Q\). Following Assumption 2.2 that multiple runs of a dataflow produce the same result, we can infer that there must be a set of edit operations \(\delta=\{c_{1},\ldots,c_{n}\}\) to transform \(P\) to \(Q\) that caused the sink of \(P\) to contain the tuple \(t\) while the sink of \(Q\) does not. Veer must find a valid maximal decomposition \(\theta\) following Algorithm 2. Figure 16. Example to show an equivalent pair of sub-DAGs for every covering window in a decomposition \(\theta^{\prime}\). Figure 17. Hierarchy of valid decompositions w.r.t. an EV. Each letter corresponds to a pair of operators from the running example. We show the containment of covering windows and omit details of the containment of non-covering windows. 
There are four cases the procedure terminates and returns the result: ``` Input: A version pair (\(P\), \(Q\)): A set of edit operations \(\delta\) and a mapping \(\mathcal{M}\) from \(P\) to \(Q\); An EV \(\gamma\) Output: A version pair equivalence flag \(EQ\) // A True value indicates the pair is equivalent, a False value indicates the pair is not equivalent, and an Unknown value indicates the pair cannot be verified 1if\(\delta\) is emptythen 2returnTrue 3\(\leftarrow\) decomposition with each operator as a window 4\(\Theta=\{\theta\}\) // initial set of decompositions 5while\(\Theta\) is not emptydo 6 Remove a decomposition \(\theta_{i}\) from \(\Theta\) 7forevery covering window \(\omega_{j}\) (in \(\theta_{i}\)) not markeddo 8foreach neighbor \(\omega_{k}\) of \(\omega_{j}\)do 9if\(\omega_{k}\cup\omega_{j}\) is valid and not explored beforethen 10\(\theta^{\prime}_{i}\leftarrow\theta-\omega_{k}-\omega_{j}+\omega_{k}\cup \omega_{j}\) 11 add \(\theta^{\prime}_{i}\) to \(\Theta\) 12 13 end if 14ifnone of the neighbor windows can be mergedthen 15 mark \(\omega_{j}\) 16 17 end if 18ifevery covering \(\omega\in\theta_{i}\) is markedthen 19if\(\gamma\) verifies each covering window in \(\theta_{i}\) to be equivalentthen 20returnTrue 21if\(\theta_{i}\) has only one window and \(\gamma\) verifies it not to be equivalentthen 22returnFalse 23 24 end if 25returnUnknown ``` **Algorithm 2**Verifying the equivalence of a dataflow version pair with one or multiple edits (Baseline) CASE1: The set of calls is empty, because none of the decompositions satisfies the EV's restrictions or none of the decompositions were verified equivalent. In this case, Veer returns Unknown. CASE3: There is a decomposition that is verified to be equivalent by a correct EV, which according to Lemma 5.3, implies that the version pair is equivalent given the assumptions in our setting. However, this is not the case because we assumed \(P\not\equiv Q\). CASE4: There is a single window in the decomposition and it is verified by the EV to be not equivalent, when the EV can verify the inequivalence of the pair, in this case Veer returns False. In all cases, Veer did not return True, by contraposition, this proves that \(P\equiv Q\). 2) We follow the same approach as above to prove the second case. ### Improving the Completeness of Algorithm 2 In general, the equivalence problem for two dataflow versions is undecidable [1, 25] (reduced from First-order logic). So there is no verifier that is complete [16]. However, there are classes of queries that are decidable such as SPJ [58]. In this section, we show factors that affect the completeness of Algorithm 2 and propose ways to improve its completeness. **1) Window validity.** In line 13 of Algorithm 2, if none of the neighbor windows of \(\omega_{j}\) can be merged with \(\omega_{j}\) to become a valid window, we mark \(\omega_{j}\) and stop expanding it, hoping it might be a maximal window. The following example shows that this approach could miss some opportunity to find the equivalence of two versions. **Example 1**: _Consider \(\mathcal{M}_{1}\) of the two versions \(P\) and \(Q\) from Example 3. Suppose the EV is Equitas [59] and a covering window \(\omega\) contains the Pro ject from \(P\) and its mapped operator Aggregate from \(Q\). Consider the window expansion procedure in Algorithm 2. If we add filter operator of both versions to the window, then the merged window is not valid. 
The reason is that it violates Equitas's restriction RS in Section 4.2, i.e., both DAGs should have the same number of Aggregate operators. The algorithm thus stops expanding the window. However, if we continue expanding the window till the end, the final window with three operators is still valid._ Using this final window, we can see that the two versions are equivalent, but the algorithm missed this opportunity. This example shows that even though the algorithm is correct in terms of claiming the equivalence of two versions, it may miss opportunities to verify their equivalence. A main reason is that the Equitas [59] EV does not have the following property. **Definition 5** (EV's Restriction Monotonicity): We say an EV is _restriction monotonic_ if for each version pair \(P\) and \(Q\), for each invalid window \(\omega\), each window containing \(\omega\) is also invalid. Intuitively, for an EV with this property, if a window is not valid (e.g., it violates the EV's restrictions), we cannot make it valid by expanding the window. For an EV that has this property such as Spes [58], when the algorithm marks the window \(\omega_{j}\) (line 14), this window must be maximal. Thus further expanding the window will not generate another valid window, and the algorithm will not miss this type of opportunity to verify the equivalence. If the EV does not have this property such as Equitas [59], we can improve the completeness of the algorithm as follows. We modify line 9 by not checking if the merged window \(\omega_{j}\cup\omega_{k}\) is valid or not. We also modify line 13 to test if the window \(\omega_{j}\) is maximal with respect to the EV. This step is necessary in order to be able to terminate the expansion of a window. We assume there is a procedure for each EV that can test if a window is maximal by reasoning the EV's restrictions. **2) Different edit mappings.** Consider two different edit mappings, \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), as shown in Figure 3. Let us assume the given EV is Equitas [59]. If we follow the baseline Algorithm 2, mapping \(\mathcal{M}_{1}\) results in a decomposition that violates Equitas's **R2** restriction. On the other hand, mapping \(\mathcal{M}_{2}\) satisfies the restrictions and allows the EV to test their equivalence. This example shows that different edit mappings can lead to different decompositions. Notably, the edit distance of the first mapping is 2, while the edit distance of the second one is 4. This result shows that a minimum-distance edit mapping does not always produce the best decomposition to show the equivalence. One way to address this issue is to enumerate all possible edit mappings (Sutton et al., 2016) and perform the decomposition search by calling Algorithm 2 for each edit mapping. If the changes between the versions are tracked, then the corresponding mapping of the changes can be treated as the first considered edit mapping before enumerating all other edit mappings. ## 6. Completeness of Veer In this section, we first discuss the completeness of Veer and the dependence on the completeness of its internal components (SS6.1). Then we show examples of using different EVs to illustrate the restrictions that a workflow version pair needs to satisfy for Veer to be complete for verifying the pair's equivalence and formally prove its completeness (SS6.2). ### Veer's Completeness Dependency on Internal Components For any pair of workflow versions, if the pair is equivalent and there is a valid decomposition w.r.t. 
to a given EV where each of its covering windows is verified as equivalent by the given EV, Veer returns True. Recall that Veer considers all possible edit mappings and explores all possible valid decompositions for each mapping. If there is a valid decomposition under a mapping, Veer guarantees to find it. For any pair of workflow versions, if the pair is inequivalent and there exists a valid decomposition that includes a single window consisting of the entire version pair, and this window is verified as inequivalent by the EV, then Veer returns False. For simplicity, throughout this section, in both cases we say there is a valid decomposition whose equivalence is determined by the given EV. In both cases, Veer does not return Unknown. Note that the completeness of Veer relies on the completeness of the given EV. If the EV is incomplete and returns Unknown to all possible valid decompositions generated by Veer, accordingly Veer returns Unknown. The completeness of modern EVs (Sutton et al., 2016; Sutton et al., 2016; Sutton et al., 2016; Sutton et al., 2016) depends on the internal components used. For instance, most EVs (Sutton et al., 2016; Sutton et al., 2016; Sutton et al., 2016; Sutton et al., 2016) model queries as expressions, such as FOL formulas, and utilize a solver, e.g., SMT, to determine the satisfiability of formulas. SMT solvers (Sutton et al., 2016) are complete for testing the satisfiability of linear formulas (Bauer et al., 2016). Therefore, EVs that use SMT solvers in their internal verification procedure are incomplete for verifying the equivalence of two queries (workflows or SQL) when the two queries include non-linear conditions in their predicates. Likewise, Veer is complete for verifying the equivalence of two workflow versions that satisfy the EV's restrictions. Figure 18 illustrates the internal components Veer uses and how these components contribute to Veer's overall completeness. ### Restrictions of Some EVs and Veer's Completeness We use the following examples on three EVs (summarized in Table 3) to explain Veer's completeness process. Suppose a given EV is Spes (Sutton et al., 2016). Spes determines the equivalence of two queries under the "Bag" table semantics. Spes is complete for determining the equivalence of two queries that satisfy the following restrictions (Sutton et al., 2016): 1) the two queries should contain only SPJ operators; 2) the selection predicates in every query should not include non-linear conditions. Theorem 6.1 (Completeness.): _Veer is complete for determining the equivalence of two workflow versions (P,Q) if the pair satisfies the restrictions of a given EV._ Proof.: Suppose the two versions satisfy the restrictions of the given EV. Since Veer considers all possible mappings and all possible decompositions that satisfy the EV's restrictions, it will find a decomposition with a single window that includes the entire pair because the given pair satisfies the EV's restriction. According to Definition 4.2, the EV is able to determine the equivalence of a pair if the pair satisfies the EV's restriction and the EV is sound. Veer returns the equivalence result from the EV. ## 7. Veer+: Improving Verification Performance In this section, we develop four techniques to improve the performance of the baseline algorithm for verifying the equivalence of two dataflow versions. We show how to reduce the search space of the decompositions by dividing the version pair into segments (Section 7.1). 
We present a way to detect and prune decompositions that are not equivalent (Section 7.2). We also discuss how to rank the decompositions to efficiently explore their search space (Section 7.3). Lastly, we propose a way to efficiently identify the inequivalence of two dataflow versions (Section 7.4). ### Reducing Search Space Using Segmentations The size of the decomposition structure in Figure 17 depends on a few factors, such as the number of operators in the dataflow, the number of changes between the two versions, and the EV's restrictions. When the number of operators increases, the size of the possible decompositions increases. Thus we want to reduce the search space to improve the performance of the algorithm. The purpose of enumerating the decompositions is to find all possible cuts of the version pair to verify their equivalence. In some cases a covering window of one edit operation will never overlap with a covering of another edit operation, as shown in Figure 19. In this case, we can consider the covering windows of those never overlapping separately. Based on this observation, we introduce the following concepts. Definition 7.1 (Segment and segmentation).: Consider two dataflow versions \(P\) and \(Q\) with a set of edits \(\delta=\{c_{1},\ldots,c_{n}\}\) from \(P\) to \(Q\) and a corresponding mapping \(\mathcal{M}\) from \(P\) to \(Q\). A _segment_\(\mathcal{S}\) is a window of \(P\) and \(Q\) under the mapping \(\mathcal{M}\). A _segmentation_\(\psi\) is a set of disjoint segments, such that they contain all the edits in \(\delta\), and there is no valid covering window that includes operators from two different segments. A version pair may have more than one segmentation. For example, consider a version pair with a single edit. One segmentation has a single segment, which includes the entire version pair. Another segmentation includes a segment that was constructed by finding the union of MCWs of the edit. **Computing a segmentation.** We present two ways to compute a segmentation. _1) Using unions of MCWs_: For each edit \(c_{i}\in\delta\), we compute all its MCWs, and take their union, denoted as window \(U_{i}\). We iteratively check for each window \(U_{i}\) if it overlaps with any other window \(U_{j}\), and if so, we merge them. We repeat this step until no window overlaps with other windows. Each remaining window becomes a segment and this forms a segmentation. Notice that a segment may not satisfy the restrictions of the given EV. _2) Using operators not supported by the EV_: We identify the operators not supported by the given EV. For example, a Sort operator cannot be supported by Equitas [59]. Then we mark these operators as the boundaries of segments. The window between two such operators forms a segment. Compared to the second approach, the first one produces fine-grained segments, but is computationally more expensive. **Using a segmentation to verify the equivalence of the version pair.** As there is no valid covering window spanning over two segments, we can divide the problem of checking the equivalence of \(P\) and \(Q\) into sub-problems, where each sub-problem is to check the equivalence of the two sub-DAGs in a segment. Then to prove the equivalence of a version pair, each segment in a segmentation needs to be equivalent. A segment is equivalent, if there is any decomposition such that every covering window in the decomposition is equivalent. 
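To make the first approach concrete, the sketch below is a minimal Python illustration, not the system's actual implementation; the helper `mcws_of_edit` and the representation of windows as sets of operator ids are assumptions made only for this example. It merges the unions of each edit's MCWs until no two windows overlap, so that each remaining window is one segment:

```python
from typing import Callable, Iterable, List, Set

def compute_segmentation(edits: Iterable[str],
                         mcws_of_edit: Callable[[str], List[Set[str]]]) -> List[Set[str]]:
    # One window per edit: the union of all of that edit's maximal covering windows.
    unions = [set().union(*mcws_of_edit(e)) for e in edits]
    # Repeatedly merge any two windows that overlap until a fixed point is reached.
    merged = True
    while merged:
        merged = False
        for i in range(len(unions)):
            for j in range(i + 1, len(unions)):
                if unions[i] & unions[j]:       # the windows share an operator
                    unions[i] |= unions.pop(j)  # merge window j into window i
                    merged = True
                    break
            if merged:
                break
    return unions  # each remaining window is one segment

# Toy usage mirroring Figure 19: c1's window never overlaps c2's or c3's,
# so c1 forms one segment and c2/c3 merge into another.
segments = compute_segmentation(
    ["c1", "c2", "c3"],
    {"c1": [{"a", "b"}], "c2": [{"e", "f"}], "c3": [{"f", "g"}]}.__getitem__,
)
```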
We can organize the components of the version pair verification problem as an AND/OR tree, as shown in Figure 20.

**Lemma 7.2**: _For a version pair \(P\) and \(Q\) with a set of edit operations \(\delta=\{c_{1}\ldots c_{n}\}\) from \(P\) to \(Q\), if every segment \(S\) in a segmentation \(\psi\) is equivalent, then the version pair is equivalent._

Proof.: Suppose every segment \(S_{i}\) in a segmentation \(\psi\) is equivalent. Since, according to Definition 7.1, a segment is a window and every change is covered by the segments of the segmentation, we can infer that any part of the version pair that is not in a segment is structurally identical. Following the same procedure as the proof of Lemma 5.3, every result is produced either by a structurally identical pair of sub-DAGs or by a segment that is verified equivalent. We can therefore deduce that the version pair is equivalent.

Algorithm 3 shows how to use a segmentation to check the equivalence of two versions. We first construct a segmentation. For each segment, we check whether its pair of sub-DAGs is equivalent by calling Algorithm 2. If any segment is not equivalent, we can terminate the procedure early. We repeat this step until all of the segments are verified equivalent, in which case we return True; otherwise we return Unknown. For the case where there is a single segment consisting of the entire version pair and Algorithm 2 returns False, the algorithm returns False.

Figure 19: An example where any covering window of an edit operation \(c_{1}\) never overlaps with a covering window of another edit operation \(c_{2}\) or \(c_{3}\).

Figure 20: A sample abstract AND/OR tree to organize the components of the version pair verification problem.

Table 3: Example EVs, their restrictions, and why Veer is complete for verifying a version pair that satisfies the EV's restrictions.

| **EV** | **EV's restrictions** | **Why Veer is complete** |
| --- | --- | --- |
| Spes [58] | 1. The pair should not include operators other than Select-Project-Join (SPJ). | Veer finds all possible windows that satisfy the EV's restrictions; in this example, the given pair satisfies the restrictions, so Veer can find it. |
| UDP [15] | 1. The pair should not include operators other than Union-SPJ (USPJ). | Veer finds all possible windows that satisfy the EV's restrictions under all possible mappings; in this example, Veer finds a bijective mapping if one exists. |
| Spark Verifier [25] | 1. The pair should not include operators other than SPJ-Aggregate (SPJA). 2. There should not be more than one aggregate operator in each version such that the aggregate is without grouping and outputs a primitive, e.g., MAX and MIN. | Veer finds all possible windows that satisfy the EV's restrictions; in this example, the given pair satisfies the restrictions, so Veer finds it. |

Figure 21 shows the segments of the running example when using Equitas (Sutton et al., 2017) as the EV. Using the second approach for computing a segmentation, we know Equitas (Sutton et al., 2017) does not support the Sort operator, so we divide the version pair into two segments.
The first one \(\mathcal{S}_{1}\) includes those operators before Sort, and the second one \(\mathcal{S}_{2}\) includes those operators after the Sort. The example shows the benefit of using segments to reduce the decomposition-space to a total of 8 (the sum of number of decompositions in every segment) compared to 16 (the number of all possible combinations of decompositions across segments) when we do not use segments. ### Pruning Stale Decompositions Another way to improve the performance is to prune _stale_ decompositions, i.e., those that would not be verified equivalent even if they are further expanded. For instance, Figure 22 shows part of the decomposition hierarchy of the running example. Consider the decomposition \(\theta_{2}\). Notice that the first window, \(\omega_{1}(f,h)\), cannot be further expanded and is marked "maximal" but the decomposition can still be further expanded by the other two windows, thus the decomposition is not maximized. After expanding the other windows and reaching a maximal decomposition, we realize that the decomposition is not equivalent because one of its windows, e.g., \(\omega_{1}\), is not equivalent. Based on this observation, if one of the windows in a decomposition becomes maximal, we can immediately test its equivalence. If it is not equivalent, we can terminate the traversal of the decompositions after this one. To do this optimization, we modify Algorithm 2 to test the equivalence of a maximal window after Line \(14\)8. If the window is equivalent, we continue the search as before. Footnote 8: We can test the equivalence of the other windows for early termination. ### Ranking-Based Search **Ranking segments within a segmentation.** Algorithm 3 needs an order to verify those segments in a segmentation one by one. If any segment is not equivalent, then there is no need for verifying the other segments. We want to rank the segments such that we first evaluate the smallest one to get a quick answer for a possibility of early termination. We consider different signals of a segment \(S\) to compute its score. Various signals and ranking functions can be used. An example scoring function is \(\mathcal{F}(S)=m_{S}+n_{S}\), where \(m_{S}\) is its number of operators and \(n_{S}\) is its number of changes. A segment should be ranked higher if it has fewer changes. The reason is that fewer changes lead to a smaller number of decompositions, and consequently, testing the segment's equivalence takes less time. Similarly, if a segment's number of operators is smaller, then the number of decompositions is also smaller and would produce the result faster. For instance, the numbers of operators in \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) in Figure 21 are 4 and 3, respectively. Their numbers of changes are 1 and 2, respectively. The ranking score for both segments is the total of both metrics, which is 5. Then any of the two segments can be explored first, and indeed the example shows that the number of decompositions in both segments is the same. **Ranking decompositions within a segment.** For each segment, we use Algorithm 2 to explore its decompositions. The algorithm needs an order (line 6) to explore the decompositions. The order, if not chosen properly, can lead to exploring many decompositions before finding an equivalent one. We can optimize the performance by ranking the decompositions and performing a best-first search exploration. Again, various signals and ranking functions can be used to rank a decomposition. 
An example ranking function for a decomposition \(d\) is \(\mathcal{G}(d)=o_{d}-w_{d}\), where \(o_{d}\) is the average number of operators in its covering windows, and \(w_{d}\) is the number of its unmerged windows (those windows that include a single operator and are not merged with a covering window). A decomposition is ranked higher if it is closer to reaching an MD, which gives a better chance of finding an equivalent one. Intuitively, if the number of operators in every covering window is large, then the decomposition may be closer to reaching an MD. Similarly, if there are only a few remaining unmerged windows, then the decomposition may be close to reaching its maximality. For instance, decomposition \(\theta_{3}\) in Figure 17 has \(11\) unmerged windows, and the average number of operators in its covering windows is \(1\), while \(\theta_{6}\) has \(10\) unmerged windows and the average number of operators in its covering windows is \(2\). Using the example ranking function, the score of \(\theta_{3}\) is \(1-11=-10\) and the score of \(\theta_{6}\) is \(2-10=-8\). Thus, \(\theta_{6}\) is ranked higher, and it is indeed closer to reaching an MD.

Figure 22. Example to show the pruned paths after verifying that the maximal window highlighted in blue is not equivalent.

### Identifying Inequivalent Pairs Efficiently

In this section, we use an example to show how to quickly detect the inequivalence of two dataflow versions using a symbolic representation that captures partial information about the result of the sink of each version. Consider the case where two dataflow versions \(P\) and \(Q\) are inequivalent, as shown in Figure 23a. The approach discussed so far attempts to find a decomposition in which all of its windows are verified equivalent. However, in cases where the version pair is inequivalent, as in this example, such a decomposition does not exist, and the search framework would continue to look for one unsuccessfully. Moreover, detecting the inequivalence of the pair is only possible if there exists a decomposition that includes the entire version pair and satisfies the given EV's restrictions. Although the cost of maximizing the window and testing if it is valid could be low, testing the equivalence of maximal decompositions by pushing them to the EV incurs an overhead due to the EV's reasoning about the semantics of the window. Thus, we want to avoid sending a window to the EV if we can quickly determine beforehand that the version pair is not equivalent. To quickly identify the inequivalence of two dataflow versions, our approach is to create a lightweight representation that allows us to partially reason about the semantics of the version pair. This approach relies on a symbolic representation similar to other existing methods (Sutskever et al., 2015; Wang et al., 2016), denoted as \((\vec{S},\vec{\mathcal{O}})\). In this representation, \(\vec{S}\) and \(\vec{\mathcal{O}}\) are lists that represent the projected columns in the table and the columns on which the result table is sorted, respectively. To construct the representation, we follow the same techniques in the existing literature (Sutskever et al., 2015; Wang et al., 2016) by using predefined transformations for each operator. Operators inherit the representation from their upstream/parent operator and update the fields based on their internal logic. In this way, if the list of projected columns (based on \(\vec{S}\)) of version \(P\) is different from that of \(Q\), as in Figure 23b, we know the two versions do not produce the same results.
We can apply the same check to the sorted columns. ## 8. Extensions In this section, we discuss relaxing the definition of EV's restrictions then discuss the consequences of relaxing the definition and propose extending Algorithm 2 to handle incomplete EVs and handle multiple given EVs. **Relaxing EV's restrictions.** Recall an EV's restrictions are conditions that a query pair must satisfy to guarantee the EV's completeness for determining the equivalence of the query pair. This definition of restrictions limits the opportunity to cover more query pairs. Thus, we relax the definition of EV's restrictions as follows. **Definition 8.1** (Relaxed EV's restrictions).: _Restrictions_ of an EV are a set of conditions such that for each query pair if this pair satisfies these conditions, then the EV can attempt to determine the equivalence of the pair. However, relaxing the definition of an EV's restriction may not guarantee the completeness of the EV and may introduce the following implication. **Handling greedy window verification when an EV is incomplete.** In Line 16 of Algorithm 2 we push testing the equivalence of a decomposition to the given EV only when the decomposition is marked maximal. The following example shows that this approach could miss some opportunity to find the equivalence of two versions, because the EV is not able to verify the equivalence of the two sub-DAGs in the maximal window. **Example 2**.: _Consider two dataflow versions \(P\) and \(Q\) with a single edit \(c\) based on a given mapping \(\mathcal{M}\). Let \(\gamma\) be a given EV. Suppose a covering window \(\omega_{c}\) satisfies the restrictions of \(\gamma\), and the EV is able to verify the equivalence of the two sub-DAGs in \(\omega_{c}\). According to Algorithm 2, we do not check the equivalence of this window if it is not marked maximal. Let \(\omega^{\prime}\) be the only MCW that contains \(\omega\). Following Line 16 in Algorithm 2, if a window is maximal (\(\omega^{\prime}\) in this example) we push testing its equivalence to the EV. However, suppose in this case, the EV returns Unknown, because EVs are mostly incomplete (Kolmogorov, 1999) for verifying two general relational expressions. Since there is no other MCW to test,_ Veer _accordingly returns_ Unknown _for verifying the equivalence of the sub-DAGs in \(\omega^{\prime}\). However, if we pushed testing the equivalence of the smaller window \(\omega_{c}\), then_ Veer _would have been able to verify the equivalence of the pair._ This example highlights the significance of verifying the equivalence of sub-DAGs within smaller windows before expanding to larger windows. The challenge arises when an EV can verify the equivalence of a small window but fails to do so for a larger one. To address this, we modify Line 16 in Algorithm 2 to check the equivalence of smaller windows by backtracking when the maximal window is not verified. This modification ensures that we do have more opportunities to verify the equivalence of the version pair. Note that this approach may introduce a computational overhead due to the repeated checking of each window and not just the maximal ones. **Using multiple EVs.** As mentioned earlier, the problem of verifying the equivalence of two relational expressions is undecidable (Becker et al., 2015). Figure 23. Example of two inequivalent dataflow versions and their partial symbolic representation. Thus, any given EV would have limitations and is incomplete for solving the problem of deciding the equivalence of two queries. 
To harness the capabilities of different EVs, we extend Veer to take in a set of EVs and their associated restrictions. We do not modify Algorithm 2. However, we extend the 'isValid' function in Line 9 to encode a window is valid w.r.t which EV so that when we verify the equivalence of the sub-DAGs in the window in Line 17, we call the corresponding EV the window satisfies. ## 9. Experiments In this section, we present an experimental evaluation of the proposed solutions. We want to answer the following questions: Can our solution verify the equivalence of versions in a real-world pipelines workload? How does our solution perform compared to other verifiers? How the optimization techniques help in the performance? What are the parameters that affect the performance? ### Experimental Setup **Synthetic workload.** We constructed four dataflows \(W1-W4\) on TPC-DS (Zheng et al., 2017) dataset as shown in Table 4. For example, dataflow \(W1\)'s first version was constructed based on TPC-DS Q40, which contains 17 operators including an outer join and an aggregate operators. dataflow \(W2\)'s first version was constructed based on TPC-DS Q18, which contains 20 operators. We omit details of other operators included in the dataflows such as Unnest, UDF, and Sort as these do not affect the performance of the experimental result as we explain in each experiment. **Real workload.** We analyzed a total of 179 real-world pipelines from Texera (Talmar et al., 2017). Among the dataflows, 81% had deterministic sources and operators, and we focused our analysis on these dataflows. Among the analyzed dataflows, 8% consisted primarily of 8 operators, and another 8 had 12 operators. Additionally, 33% of the dataflows contained 3 different versions, while 19% had 35 versions. 58% of the versions had a single edit, while 22% had two edits. We also observed that the UDF operator was changed in 17% of the cases, followed by the Projection operator (6% of the time) and the Filter operator (6% of the time). From this set of dataflows, we selected four as a representative subset, which is presented as \(W5\ldots W8\) in Table 4 and we used IMDB (Zheng et al., 2017) and Twitter (Zheng et al., 2017) datasets. **Edit operations.** For each real-world dataflow, we used the edits performed by the users. For each synthetic dataflow, we constructed versions by performing edit operations. We used two types of edit operations. (1) Calcite transformation rules (Cheng et al., 2017) for equivalent pairs: These edits are common for rewriting and optimizing dataflows, so these edits would produce a version that is _equivalent_ to the first version. For example, 'testEmptyProject' is a single edit of adding an empty projection operator. In addition, 'testPushProjectPastFilter' and 'testPushFilterPastAgg' are two example edits that produce more than a single change, in particular, one for deleting an operator and another is for pushing it past other operator. We used a variation of different numbers of edits, different placements of the edits, etc., for each experiment. Thus, we have numbers of pairs as shown in Table 4. For each pair of versions, one of the versions is always the original one. (2) TPC-DS V2.1 (Zheng et al., 2017) iterative edits for inequivalent pairs: These edits are common for exploratory and iterative analytics, so they may produce a version that is _not equivalent_ to the first version. Example edits are adding a new filtering condition or changing the aggregate function as in TPC-DS queries. 
We constructed one version for each dataflow using two edit operations of this type to test our solution when the version pair is not equivalent. We randomized the edits and their placements in the dataflow DAG, such that each edit is valid. Unless otherwise stated, we used two edit operations from Calcite in all of the experiments. Appendix A shows a sample of these dataflows.

Table 4. Workloads used in the experiments.

| **Work** | **Description** | **Type of operators** | **# of operators** | **# links** | **# of versions** |
| --- | --- | --- | --- | --- | --- |
| \(W1\) | TPC-DS Q40 | 4 joins and 1 aggregate operators | 17 | 16 | 5 |
| \(W2\) | TPC-DS Q18 | 5 joins and 1 aggregate operators | 20 | 20 | 9 |
| \(W3\) | TPC-DS Q71 | 1 replicate, union operations | 23 | 23 | 4 |
| \(W4\) | TPC-DS Q33 | 3 replicates, union operations | 28 | 34 | 3 |
| \(W5\) | IMDB ratio of non-original to original movie titles | 1 replicate, 2 joins, aggregate operators | 12 | 12 | 3 |
| \(W6\) | IMDB all movies of directors with certain criteria | 2 replicates, 4 joins, 2 unnest operators | 18 | 20 | 3 |
| \(W7\) | Tobacco Twitter analysis | 1 outer join, 1 aggregate, classifier | 14 | 13 | 3 |
| \(W8\) | Wildfire Twitter analysis | 1 join, 1 UDF | 13 | 12 | 3 |

**Implementation.** We implemented the baseline (Veer) and an optimized version (Veer\({}^{+}\)) in Java 8 and Scala. We implemented Equitas (Zheng et al., 2017) as the EV in Scala. We implemented Veer\({}^{+}\) by including the optimization techniques presented in Section 7. We evaluated the solution by comparing Veer and Veer\({}^{+}\) against a state-of-the-art verifier (Spes (Zheng et al., 2017)). We ran the experiments on a MacBook Pro running the MacOS Monterey operating system with a 2.2GHz Intel Core i7 CPU, 16GB DDR3 RAM, and 256GB SSD. Every experiment was run three times.

### Comparisons with Other EVs

To the best of our knowledge, Veer is the first technique to verify the equivalence of complex dataflows. To evaluate its performance, we compared Veer and Veer\({}^{+}\) against Spes (Zheng et al., 2017), known for its proficiency in verifying query equivalence compared to other solutions. We chose one equivalent pair and one inequivalent pair of versions with two edits from each dataflow. Among the 8 dataflows examined, Spes (Zheng et al., 2017) failed to verify the equivalence and inequivalence of any of the pairs, because all of the dataflow versions included operators not supported by Spes. In contrast, Veer and Veer\({}^{+}\) successfully verified the equivalence of 62% (\(W1,W2,W3,W5\)) and 75% (\(W1\ldots W6\)), respectively, of the equivalent pairs. Both Veer and Veer\({}^{+}\) did not verify the equivalence of the versions in \(W7\) because none of the constructed decompositions were verified as equivalent by the EV. Moreover, Veer and Veer\({}^{+}\) did not verify the equivalent pairs of \(W8\) because the change to its versions was made on a UDF operator, resulting in the absence of a valid window that satisfies the restrictions of the EV used in our experiments. Veer\({}^{+}\) was able to detect the inequivalence (using the heuristic discussed in Section 7.4) of about 50% of the inequivalent pairs (\(W5\ldots W8\)). We note that Veer and Veer\({}^{+}\) can be made more powerful if we employ an EV that can reason about the semantics of a UDF operator. Table 5 summarizes the evaluation of the compared techniques.
### Evaluating Veer\({}^{+}\) Optimizations We used dataflow \(W3\) for evaluating the first three optimization techniques discussed in Section 7. We used three edit operations: one edit was after the Union operator (which is not supported by Equitas (Speer et al., 2017)) and two edits (pushFilterPastJoin) were before the Union. We used the baseline to verify the equivalence of the pairs, and we tried different combinations of enabling the optimization techniques. We want to know the effect of these optimization techniques on the performance of verifying the equivalence. Table 6 shows the result of the experiments. The worst performance was the baseline itself when all of the optimization techniques were disabled, resulting in a total of \(19,656\) decompositions explored in \(27\) minutes. When only "pruning" was enabled, it was slower than all of the other combinations of enabling the techniques because it tested 108 MCWs for possibility of pruning them. Its performance was better than the baseline thanks to the early termination, where it resulted in \(3,614\) explored decompositions in \(111\) seconds. When "segmentation" was enabled, there were only two segments, and the total number of explored decompositions was lower. In particular, when we combined "segmentation" and "ranking", one of the segments had 8 explored decompositions while the other had 13. If "segmentation" was enabled without "ranking", then the total number of explored decompositions was 430, which was only 2% of the number of explored decompositions when "segmentation" was not enabled. The time it took to construct the segmentation was negligible. When "ranking" was enabled, the number of decompositions explored was around 21. It took an average of 0.04 seconds for exploring the decompositions and 0.40 for testing the equivalence by calling the EV. Since the performance of enabling all of the optimization techniques was the best, in the remaining experiments we enabled all of them for Veer\({}^{+}\). ### Verifying Two Versions with Multiple Edits We compared the performance of the baseline and Veer\({}^{+}\). We want to know how much time each approach took to test the equivalence of the pair and how many decompositions each approach explored. We used dataflows \(W1-W8\) with two edits. We used one equivalent pair and one inequivalent pair from each dataflow to evaluate the performance in these two cases. Most dataflows in the experiment had one segment, except dataflows \(W3\), \(W5\), and \(W6\), each of which has two segments. The overhead for each of the following steps, 'is maximal' (line 13), 'is valid' (line 9), and'merge' (line 10) in Algorithm 2 was negligible, thus we only report the overhead of calling the EV. **Performance for verifying equivalent pairs.** Figure 1(a) shows the number of decompositions explored by each approach. In general, the baseline explored more decompositions, with an average of \(3,354\) compared to Veer\({}^{+}\)'s average of 16, which is less than 1% of the baseline. The baseline was not able to finish testing the equivalence of \(W3\) in less than an hour. The reason is because of the large number of neighboring windows that were caused by a large number of links in the dataflow. Veer\({}^{+}\) was able to find a segmentation for \(W3\) and \(W6\). It was unable to discover a valid segmentation for \(W5\) because all of its operators are supported by the EV, but we used the second approach of finding a segmentation as we discussed in Section 7.1. 
We note that the overhead of constructing a segmentation using the second approach was negligible. For dataflow \(W7\), the windows in a decomposition were small because the windows violated the restrictions of the used EV. Therefore, the "expanding decompositions" step stopped early and thus the search space (and accordingly the running time) was small for both approaches. For dataflow \(W8\), both Veer and Veer\({}^{+}\) detected that the change was done on an operator (UDF) not supported by the chosen EV (Equitas (Speer et al., 2017)); thus the decomposition was not expanded to explore other ones, and the algorithm terminated without verifying its equivalence. Figure 1(b) shows the running time for each approach to verify the equivalence. The baseline took 2 seconds to verify the equivalence of \(W1\), and 2 minutes for verifying \(W3\). Veer\({}^{+}\), on the other hand, had a sub-second running time when verifying the equivalence of all of the dataflows. Veer\({}^{+}\) tested 9 MCWs for a chance of pruning inequivalent decompositions when verifying \(W6\). This caused the running time for verifying \(W6\) to increase due to the overhead of calling the EV. In general, the overhead of calling the EV was about the same for both approaches. In particular, it took an average of 0.04 and 0.10 seconds for the baseline and Veer\({}^{+}\), respectively, to call the EV.

Table 6. Result of enabling optimizations (W3 with three edits). "S" indicates segmentation, "P" indicates pruning, and "R" indicates ranking. A \(\checkmark\) means the optimization was enabled, a \(\times\) means the optimization was disabled. The results are sorted from worst to best performance.

| **S** | **P** | **R** | **# of decompositions** | **Exploration (s)** | **Calling EV (s)** | **Total time (s)** |
| --- | --- | --- | --- | --- | --- | --- |
| \(\times\) | \(\times\) | \(\times\) | 19,656 | 1,629 | 0.22 | 1,629 |
| \(\times\) | \(\checkmark\) | \(\times\) | 3,614 | 111 | 0.15 | 111 |
| \(\checkmark\) | \(\checkmark\) | \(\times\) | 430 | 0.82 | 0.20 | 1.02 |
| \(\checkmark\) | \(\times\) | \(\times\) | 430 | 0.51 | 0.18 | 0.69 |
| \(\times\) | \(\checkmark\) | \(\checkmark\) | 20 | 0.39 | 0.12 | 0.52 |
| \(\checkmark\) | \(\checkmark\) | \(\checkmark\) | 21 | 0.20 | 0.31 | 0.51 |
| \(\times\) | \(\times\) | \(\checkmark\) | 20 | 0.07 | 0.25 | 0.30 |
| \(\checkmark\) | \(\times\) | \(\checkmark\) | 21 | 0.03 | 0.21 | 0.24 |

Table 5. Comparison evaluation of Veer and Veer\({}^{+}\) against Spes (Speer et al., 2017).

| **Verifier** | **% of proved equivalent pairs** | **Avg. time (s)** | **% of proved inequivalent pairs** | **Avg. time (s)** |
| --- | --- | --- | --- | --- |
| Spes | 0.0 | NA | 0.0 | NA |
| Veer | 62.5 | 32.1 | 0.0 | 44.5 |
| Veer\({}^{+}\) | 75.0 | 0.1 | 50.0 | 4.1 |

**Performance of verifying inequivalent pairs.** Figure 1(a) shows the number of decompositions explored by each approach. Since the pairs are not equivalent, Veer almost exhaustively explored all of the possible decompositions, trying to find an equivalent one. Veer\({}^{+}\) explored fewer decompositions compared to the baseline when testing \(W3\), thanks to the segmentation optimization.
Both approaches were not able to finish testing \(W4\) within one hour because of the large number of possible neighboring windows. Veer\({}^{+}\) was able to quickly detect the inequivalence of the pairs of dataflows \(W5\ldots W8\) thanks to the partial symbolic representation discussed in Section 7.4, resulting in Veer\({}^{+}\) not exploring any decompositions for these dataflows. The result of the running time of each approach is shown in Figure 1(b). Veer's performance when verifying inequivalent pairs was the same as when verifying equivalent pairs because, in both cases, it explored the same number of decompositions. On the other hand, Veer\({}^{+}\)'s running time was longer than when the pairs were equivalent for dataflows \(W1\ldots W4\). We observe that for \(W1\), Veer\({}^{+}\)'s running time was even longer than the baseline due to the overhead of calling the EV up to 130, compared to only 4 times for the baseline. Veer\({}^{+}\) called the EV more as it tried to continuously test MCWs when exploring a decomposition for a chance of pruning inequivalent decompositions. Veer\({}^{+}\)'s performance on \(W3\) was better than the baseline. The reason is that there were two segments, and each segment had a single change. We note that Veer\({}^{+}\) tested the equivalence of both segments, even though there could have been a chance of early termination if the inequivalent segment was tested first. The time it took Veer\({}^{+}\) to verify the inequivalence of the pairs in dataflows \(W5\ldots W8\) was negligible. The heuristic approach was not effective in detecting the inequivalence of the TPC-DS dataflows \(W1\ldots W4\). This limitation arises from the technique's reliance on identifying differences in the _final_ projected columns, which remained the same across all versions of these dataflows (due to the aggregation operator), with most changes occurring in the filtering conditions. ### Effect of the Distance Between Edits We evaluated the effect of the placement of changes on the performance of both approaches. We are particularly interested in how many decompositions would be explored and how long each approach would take if the changes were far apart or close together in the version DAG. We used \(W2\) for the experiment with two edits. We use the 'number of hops' to indicate how far apart the changes were from each other. A 0 indicates that they were next to each other, and a 3 indicates that they were separated by three operators between them. For a fair comparison, the operators that were separating the changes were one-to-one operators, i.e., operators with one input and one output links. Figure 1(a) shows the number of decompositions explored by each approach. The baseline's number of decompositions increased from \(2,770\) to \(11,375\) as the number of hops increased. This is because it took longer for the two covering windows, one for each edit, to merge into a single one. Before the two covering windows merge, each one produces more decompositions to explore due to merging with its own neighbors. Veer\({}^{+}\)'s number of explored decompositions remained the same at 21 thanks to the ranking optimization, as once one covering window includes a neighboring window, its size is larger than the other covering window and would be explored first until both covering windows merge. Figure 1(b) shows the time each approach took to verify the equivalence of a pair. The performance of each approach was proportional to the number of explored decompositions. 
The baseline took between 9.7 seconds and 3 minutes, while Veer\({}^{+}\)'s performance remained in the sub-second range (0.095 seconds).

Figure 24. Comparison between the two algorithms for verifying equivalent pairs with two edits. An "\(\times\)" means the algorithm was not able to finish running within one hour.

Figure 25. Comparison between the two algorithms for verifying inequivalent pairs with two edits. An "\(\times\)" sign means the algorithm was not able to finish within an hour.

**Effect of the type of changed operators.** We note that when any of the changes was on an operator not supported by the EV, both Veer and Veer\({}^{+}\) were not able to verify the pair's equivalence. We also note that the running time in these cases was negligible because the exploration stops after detecting an 'invalid' covering window.

### Effect of the Number of Changes

In iterative data analytics, when the task is exploratory, there can be many changes between two consecutive versions. Once the analytical task is formulated, there are typically only minimal changes to refine some parameters (Sutton et al., 2017). We want to evaluate the effect of the number of changes on the number of decompositions and the time each approach takes to verify a version pair. The number of changes, intuitively, increases the number of initial covering windows, and consequently, the number of possible combinations of merging with neighboring windows increases. We used \(W1\) in the experiment. Figure 14(a) shows the number of decompositions explored by each approach and the total number of "valid" decompositions. The latter increased from 356 to 11,448 as we increased the number of changes from 1 to 4. The baseline explored almost all those decompositions, with an average of 67% of the total decompositions, in order to reach a maximal one. Veer\({}^{+}\)'s number of explored decompositions, on the other hand, was not affected by the increase in the number of changes and remained the same at around 14. The ranking optimization caused a larger window to be explored first, which sped up the merging of the separate covering windows, i.e., those that include the changes. Figure 14(b) shows the time taken by each approach to verify the equivalence of a pair. Both approaches' time was proportional to the number of explored decompositions. The baseline showed a performance of around 0.42 seconds when there was a single change, up to slightly more than a minute (75 seconds) when there were four changes. Veer\({}^{+}\), on the other hand, maintained a sub-second performance with an average of 0.1 seconds.

### Effect of the Number of Operators

We evaluated the effect of the number of operators. We used \(W2\) with two edits and varied the number of operators from 22 to 25. We varied the number of operators in two different ways. One was varying the number of operators by including only those supported by the EV. These operators may be included in the covering windows, and thus their neighbors would be considered during the decomposition exploration. The other was varying the number of non-supported operators, as their inclusion in the dataflow DAG would not affect the performance of the algorithms. **Varying the number of supported operators.** Figure 15(a) shows the number of explored decompositions. The baseline explored \(6,650\) decompositions when there were 22 operators, and 7,700 decompositions when there were 25 operators.
Veer\({}^{+}\) had a linear increase in the number of explored decompositions from 21 to 24 when we increased the number of operators from 22 to 25. Figure 15(b) shows the results. We observed that the performance of Veer was negatively affected due to the addition of possible decompositions from these operators' neighbors, while the performance of Veer\({}^{+}\) remained the same. In particular, Veer verified the pair in one minute up to 1.4 minutes, while Veer\({}^{+}\) verified the pair in a sub-second.

Figure 14. Effect of the number of changes (W1).

Figure 15. Effect of the number of operators (W2 with two edits).

### Limitations of Veer

Veer could verify complex dataflow version pairs that other verifiers were unable to verify. However, given the undecidability of the problem of determining the equivalence of two dataflow versions, there are cases where Veer fails to verify equivalence, for the following reasons: _(1) Determinism and context:_ Veer focuses on a small portion of the dataflow pair (windows) and ignores the context. It treats the input to the small windows as any instance of the input sources. In some cases, these windows may be inequivalent even though the entire pair is equivalent. Moreover, Veer assumes that the input sources are not changing and that the operator functions are deterministic across different versions. _(2) Dependence on EV:_ Veer is a general-purpose framework that internally depends on existing EVs. When the changes between the versions are performed in a UDF operator, Veer would need to rely on an EV that can reason about the semantics of a UDF. We will address these limitations in a follow-up work [(5)].

## 10. Related Works

_Equivalence verification._ There are many studies that solve the problem of verifying the equivalence of two SQL queries under certain assumptions. These solutions were applicable to a small class of SQL queries, such as conjunctive queries [(2; 11; 29; 47)]. With the recent advancement of proof assistants and solvers [(18; 19)], there have been new solutions [(58; 59; 15)]. UDP [(15)] and WeTune's verifier [(53)] use semirings to model the semantics of a pair of queries and use a proof assistant, such as Lean [(19)], to prove whether the expressions are equivalent. These two works support reasoning about the semantics of two queries with integrity constraints. Equitas [(59)] and Spes [(58)] model the semantics of the pair as a First-Order Logic (FOL) formula and push the formula to a solver such as SMT [(18)]. These two works support queries with three-valued variables. Other works also use an SMT solver to verify the equivalence of a pair of Spark jobs [(25)]. Our solution uses them as black boxes to verify the equivalence of a version pair. The work in (Krishnan et al., 2012) finds a weighted edit distance based on the semantic equivalence of two queries to grade students' queries. _Tracking dataflow executions._ There has been an increasing interest in enabling the reproducibility of data analytics pipelines. These tools track the evolution and versioning of datasets, models, and results. At a high level, they can be classified into two categories. The first includes those that track experiment results of different versions of ML models and the corresponding hyper-parameters (Krishnan et al., 2012; Krishnan et al., 2012; Krishnan et al., 2012; Krishnan et al., 2013; Krishnan et al., 2014; Krishnan et al., 2015; Krishnan et al., 2016).
The second includes solutions to track results of different versions of data processing dataflows (Krishnan et al., 2012; Krishnan et al., 2013; Krishnan et al., 2014; Krishnan et al., 2015; Krishnan et al., 2016; Krishnan et al., 2017). These solutions are motivations for our work. _Materialization reuse and MQO_. There is a large body of work on answering data processing dataflows using views (Krishnan et al., 2012; Krishnan et al., 2013; Krishnan et al., 2014; Krishnan et al., 2015). Some solutions (Krishnan et al., 2014) focus on deciding which results to store to maximize future reuse. Other solutions (Krishnan et al., 2013; Krishnan et al., 2015) focus on identifying materialization reuse opportunities by relying on finding an exact match of the dataflow's DAG. On the other hand, semantic query optimization works (Krishnan et al., 2012; Krishnan et al., 2013; Krishnan et al., 2015; Krishnan et al., 2016) reason the semantics of the query to identify reuse opportunities that are not limited to structural matching. However, these solutions are applicable to a specific class of functions, such as user defined function (UDF) (Krishnan et al., 2013; Krishnan et al., 2014; Krishnan et al., 2015), and do not generalize to finding reuse opportunities by finding equivalence of any pair of dataflows. ## 11. Conclusion In this paper, we studied the problem of verifying the equivalence of two dataflow versions. We presented a solution called "Veer," which leverages the fact that two workflow versions can be very similar except for a few changes. We analyzed the restrictions of existing EVs and presented a concept called a "window" to leverage the existing solutions for verifying the equivalence. We proposed a solution using the windows to verify the equivalence of a version pair with a single edit. We discussed the challenges of verifying a version pair with multiple edits and proposed a baseline algorithm. We proposed optimization techniques to speed up the performance of the baseline. We conducted a thorough experimental study and showed the high efficiency and effectiveness of the solution. ###### Acknowledgements. This work is supported by a graduate fellowship from King Saud University and was supported by NSF award III 2107150.
2309.04578
Maintaining human wellbeing as socio-environmental systems undergo regime shifts
Global environmental change is pushing many socio-environmental systems towards critical thresholds, where ecological systems' states are on the precipice of tipping points and interventions are needed to navigate or avert impending transitions. Flickering, where a system vacillates between alternative stable states, is touted as a useful early warning signal of irreversible transitions to undesirable ecological regimes. However, while flickering may presage an ecological tipping point, these dynamics also pose unique challenges for human adaptation. In this work, we link an ecological model that can exhibit flickering to a model of human adaptation to a changing environment. This allows us to explore the impact of flickering on the utility of adaptive agents in a coupled socio-environmental system. We highlight the conditions under which flickering causes wellbeing to decline disproportionately, and explore how these dynamics impact the optimal timing of a transformational change that partially decouples wellbeing from environmental variability. The implications of flickering on nomadic communities in Mongolia, artisanal fisheries, and wildfire systems are explored as possible case studies. Flickering, driven in part by climate change and changes to governance systems, may already be impacting communities. We argue that governance interventions investing in adaptive capacity could blunt the negative impact of flickering that can occur as socio-environmental systems pass through tipping points, and therefore contribute to the sustainability of these systems.
Andrew R. Tilman, Elisabeth H. Krueger, Lisa C. McManus, James R. Watson
2023-09-08T20:14:13Z
http://arxiv.org/abs/2309.04578v1
# Maintaining human wellbeing as socio-environmental systems undergo regime shifts ###### Abstract Global environmental change is pushing many socio-environmental systems towards critical thresholds, where ecological systems' states are on the precipice of tipping points and interventions are needed to navigate or avert impending transitions. Flickering, where a system vacillates between alternative stable states, is touted as a useful early warning signal of irreversible transitions to undesirable ecological regimes. However, while flickering may presage an ecological tipping point, these dynamics also pose unique challenges for human adaptation. In this work, we link an ecological model that can exhibit flickering to a model of human adaptation to a changing environment. This allows us to explore the impact of flickering on the utility of adaptive agents in a coupled socio-environmental system. We highlight the conditions under which flickering causes wellbeing to decline disproportionately, and explore how these dynamics impact the optimal timing of a transformational change that partially decouples wellbeing from environmental variability. The implications of flickering on nomadic communities in Mongolia, artisanal fisheries, and wildfire systems are explored as possible case studies. Flickering, driven in part by climate change and changes to governance systems, may already be impacting communities. We argue that governance interventions investing in adaptive capacity could blunt the negative impact of flickering that can occur as socio-environmental systems pass through tipping points, and therefore contribute to the sustainability of these systems. _Keywords_ Social-ecological systems Critical transitions Early-warning signals Wellbeing - Flickering ## Introduction Global change impacts, including those resulting from climate change and socioeconomic transitions, are altering the environment and threatening human livelihoods. For example, worsening drought conditions lead to an increased risk of wildfires (McKenzie et al., 2004), and global mean sea-level rise threatens the habitability of low-elevation coastal zones (Vitousek et al., 2017). In ecological systems, these environmental changes are linked to phenomena known as regime shifts, wherein there is a "large persistent change in the structure and function of an ecosystem" (Biggs et al., 2012). Prominent case studies include shifts from productive coral-dominated reefs to degraded systems dominated by macroalgae (Mumby et al., 2007), and shifts from a highly vegetated to a barren state in arid landscapes (Rietkerk et al., 2004). Because these transitions have implications for the ability for humans to thrive in these systems, much attention has been focused on identifying indicators to serve as 'early warning signals' for impending catastrophic changes (Scheffer et al., 2009; Bauch et al., 2016). Whether informed by early warning signals or not, people experiencing ecological regime shifts can attempt to adapt to these changing conditions to maintain their wellbeing. These adaptation measures could include minor changes to practices such as switching target species in fisheries (Katsukawa and Matsuda, 2003), or more major changes such as migration to pursue an alternative livelihood elsewhere (Adamo, 2010). While identifying and attempting to avoid a transition to an undesirable alternative state remains a challenge, in this paper we focus on how people navigate ecological regime shifts. 
Adapting to environmental change while maintaining wellbeing poses unique challenges, especially in noisy systems. The global extent of human societies demonstrates the ability of people to adapt to (and prosper under) a broad range of ecological regimes. Other organisms also demonstrate this adaptive capacity in their evolutionary response to environmental change (Carlson et al., 2014). However, in social and environmental systems, stochasticity can result in a phenomenon known as "flickering" when the system approaches a regime shift (Taylor et al., 1993; Wang et al., 2012; Gatfaoui and De Peretti, 2019). Flickering describes how a system switches between alternative stable states as a result of stochasticity. In the context of an impending regime shift, flickering leads to periods of time which resemble the status quo alternating with times defined by a novel socio-environmental state. This presents a unique challenge for adaptive agents: which regime should one adapt to, and when should one shift practices to align with the expected post-regime-shift environment? There may be cases where agents themselves flicker between alternative livelihoods in an attempt to adapt to the intermittent shifts in environmental regimes. Our results suggest that in systems where people have limited adaptive capacity, flickering can yield marked declines in utility.

The importance of adaptation to flickering and nonlinear ecological dynamics can be illustrated through environmentally dependent utility functions. Such utility functions can arise when they depend on environmental production functions. These functions provide a mapping between some measure of the environmental state and production or output. They are typically approximated by hump-shaped functions (Schlenker and Roberts, 2009) (see Figure 1b). In agriculture, for example, a production function for a particular crop may be dependent on temperature. Multiple functions with peaks that span different ranges of the environmental state (along the x-axis) represent production under different strategies. In this context, climate adaptation can be thought of as people shifting their production strategy, and thereby transitioning across the different production functions in order to maintain high levels of output despite environmental change. In agriculture, this could be achieved by choosing different varieties of crops as temperature increases.

Figure 1: Illustrations showing how alternative stable environmental states can be conceptualized as basins where a ball subject to stochastic perturbations (shown by arrows) may settle, as illustrated in the first panel. Noise may cause the environmental state to tip from a low to a high potential state or vice versa, a process termed flickering. In agricultural systems there are often a range of technologies / approaches / strategies that have environmentally dependent utility curves (3 are illustrated in the lower panel). As environmental states shift, the most favorable strategy changes. In this paper, we consider the case where a continuum of environmentally dependent utility curves exists and individuals adapt to environmental change by shifting the peak of their utility curve in an attempt to track changing environmental conditions. We show that environmental regimes with flickering pose unique challenges for adaptive agents and can lead to troughs in average utility.
However, these production functions do not explicitly account for the non-linear dynamics associated with coupled social-ecological systems. Whereas average temperature in a region may slowly increase in response to climate change and lead to the expectation of a steady advance through a series of production strategies, underlying environmental conditions that shape productivity may exhibit far more complex dynamics in response to gradual global change. To integrate these effects, we consider an environmental model that has the potential for nonlinear dynamics in response to gradual change in an underlying parameter. These nonlinear dynamics associated with changes in the environmental state are often depicted using a well-potential diagram (Figure 1a). Well-potential diagrams can illustrate how ecosystems (and social-ecological systems) exhibit alternative stable states, and how the resilience of a particular state may be eroded by a relatively slow-changing parameter like average temperature. As the resilience of one basin of attraction is diminished and an alternative basin arises, flickering can occur before a tipping point is crossed. After the tipping point is crossed, the old basin ceases to exist and the system transitions to the alternative stable state. In contrast to production functions that depend on a gradually changing environmental parameter, the utility functions we model depend on an environment with complex dynamics, which can lead to highly stochastic utility as the environment flickers between alternative stable states and as people struggle to adapt to this volatility. Additionally, due to the hump-shaped relationship between the environment and productivity, environmental variability will tend to depress average productivity due to non-linear averaging. By integrating the non-linear dynamics of coupled social-ecological systems - especially the dynamical flickering associated with some regime shifts - with the economic concept of environmentally-dependent production functions, we provide new insight into the potential impacts of regime shifts on human wellbeing. We explore the impact of people's adaptive capacity on their ability to track environmental change and maintain their wellbeing. We also examine when transformational change to novel strategies that buffer individuals against environmental change should be adopted given flickering dynamics. We develop a mathematical model to describe the impact of alternative stable states and flickering on the utility of adaptive agents in a coupled socio-environmental system. We use this model to highlight the conditions wherein flickering has the largest negative impact on people's wellbeing and explore how the timing of people's transitions to new strategies should relate to the timing of environmental transitions caused by flickering and tipping points. We discuss several possible case studies that illustrate how these dynamics could be, or may already be, impacting communities, and primarily contextualize our model based on the response of Mongolian nomadic pastoralists to global change. Nomadic pastoralist communities have been among the hardest hit by the consequences of global change. New political regimes have shifted borders, fundamental changes to economic systems have limited the prospects of nomadic ways of life, and extended drought periods have put the health of livestock herds at risk.
Using available data from the literature and other publicly available sources, we discuss how global change impacted these communities, with long-term effects that include the migration of people away from their traditional homes. We also discuss how flickering dynamics could have similar consequences for artisanal fishing communities impacted by marine climate shocks and communities impacted by wildfires. ## Socio-environmental model The model has two components: an ecological component where nonlinear dynamics (and in particular flickering) occur, and a social component, where agents adapt their production frontier to align with the environment and maximize their utility. ### Ecological dynamics In our model, we assume that there is an ecological state, \(x\), that can be described by logistic growth and experiences sigmoidal harvest rate \(c\) following a type-3 functional response (Holling, 1959). This approach forms the basis for well-studied ecological models that can exhibit alternative stable states and hysteresis (May, 1977; Scheffer, 1989); models of this form have also been used to study flickering dynamics (Dakos et al., 2012). The ecological state, \(x\), could represent the abundance of forage plants in the context of grazing systems, fish biomass in the case of fisheries, or forest biomass in the case of wildfires. The discrete-time stochastic dynamics of \(x\) are described by \[x_{t+1}=\left(rx_{t}\left(1-\frac{x_{t}}{K}\right)-c\frac{x_{t}^{2}}{x_{t}^{2}+h^ {2}}\right)+\left(1+i_{t}\right)x_{t} \tag{1}\] where \(r\) is intrinsic growth rate of \(x\), \(K\) is its carrying capacity, \(h\) is the half-saturation constant (i.e., the resource level at which half of the maximum extraction rate is reached), and \(i_{t}\) a noise term that models environmental shocks. We assume that \(i_{t}\) is time-correlated red noise governed by \[i_{t+1}=\left(\left(1-\frac{1}{T}\right)i_{t}+\eta_{t}\right), \tag{2}\] where \(i_{t}x_{t}\) is the magnitude of the stochastic reduction or increase of the resource at time \(t\), \(T\) is the time scale over which noise becomes uncorrelated, and \(\eta_{t}\sim\mathcal{N}(0,\beta^{2})\) is an element of a series of independent identically distributed normal error terms. In the absence of noise (i.e., when \(\beta=0\)), this system can exhibit a range of dynamics. For large values of \(r\), discrete-time logistic systems such as this can exhibit cyclic or chaotic dynamics (May, 1974). To simplify our analyses, we restrict our attention to those cases where cyclic and chaotic dynamics do not occur in the absence of noise. For harvesting rates \(c\), that are low, the system has a single stable equilibrium corresponding to an environmental state of abundance. For high values of \(c\), the sole stable equilibrium is a depleted environmental state. For intermediate values of \(c\) there exists a region with multiple stable equilibria. In this intermediate regime, the inclusion of noise can lead to a flickering dynamic where the system makes irregular jumps from the high to low environmental basins of attraction (Dakos et al., 2012). Figure 2 shows the stable and unstable equilibrium states across a range of harvesting values, and illustrates that for intermediate extraction rates, the system has two alternative stable states. 
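As a minimal illustration of Eqs. (1)-(2), the following Python sketch simulates the resource state with red noise. The parameter values are placeholders chosen only for illustration and are not the values used in the paper's simulations.

```python
import numpy as np

def step_resource(x, i, r=0.5, K=10.0, c=1.0, h=1.0):
    """One update of Eq. (1): logistic growth minus a type-3 harvest term, plus the noise term."""
    growth = r * x * (1.0 - x / K)
    harvest = c * x**2 / (x**2 + h**2)
    return growth - harvest + (1.0 + i) * x

def step_noise(i, T=10.0, beta=0.05, rng=None):
    """One update of the red-noise process in Eq. (2)."""
    rng = rng or np.random.default_rng()
    return (1.0 - 1.0 / T) * i + rng.normal(0.0, beta)

def simulate(x0=8.0, steps=2000, c=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, i = x0, 0.0
    xs = np.empty(steps)
    for t in range(steps):
        i = step_noise(i, rng=rng)
        x = max(step_resource(x, i, c=c), 0.0)  # keep the resource non-negative
        xs[t] = x
    return xs

# Whether a given harvest rate c sits in the bistable (flickering-prone) region
# depends on the other parameters; sweeping c traces out the three regimes.
trajectory = simulate(c=1.6)
```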
### Human adaptation and wellbeing To model human wellbeing in response to a constantly changing environment, we assume that people can adapt their practices to be in alignment with the environmental state, but that this adaptation process takes time. The rate at which individuals can adapt to a changing environment depends on their adaptive capacity. Here, we conceptualize adaptation as individual or collective actions that allow individuals to be as successful as possible, given the current state of the environment. Adaptation allows individuals to shift the peak of their production function so that it aligns with the current environmental state. Potential avenues for adaptation are myriad and depend on the context of the case study. For pastoralist systems, they include moving to better locations when the local resource level is low, implementing irrigation systems to bolster productivity, and storing feed for cattle (Chen et al., 2015). In agriculture, it could include adjusting the timing of planting and harvesting, and changing (or diversifying) the varieties of crops grown.

Figure 2: For low and high extraction rates, the system has only one stable equilibrium. For intermediate extraction rates, bistability occurs and the potential for flickering dynamics arises. We use this distinction to classify our system into three distinct dynamical regimes. In regime 1, only the high environmental state is stable, regime 2 exhibits bistability and potentially flickering dynamics, and only the low environmental state is stable in regime 3. For any extraction rate, actual dynamics of the environment will fluctuate about their equilibria due to stochasticity.

We employ a very simple model that captures the notion of adaptive capacity and allows us to explore the consequences of decreased adaptive capacity on people's wellbeing. We let \(y_{t}\) be the environmental state to which individuals are most well adapted. When \(y_{t}=x_{t}\), agents achieve the highest possible payoff given the current environmental state. The adaptation of individuals to environmental change is constrained by their adaptive capacity, \(l\). When \(l=1\), individuals adapt fully in one time step to the current environmental state. On the other hand, when \(l\ll 1\), it will take much more time for adaptation to a particular environmental state to be achieved. We model the dynamic of adaptation as a deterministic discrete-time dynamical system governed by \[y_{t+1}=l(x_{t}-y_{t})+y_{t}. \tag{3}\] We construct a utility function that depends on the best possible payoff, \(\pi(x)\), that can be attained given the current environmental state, \(x\), and on the extent to which there is a divergence between the environmental state and the state to which individuals are most well adapted. We assume that the highest achievable payoff given an environmental state \(x\) is a linearly increasing function \(\pi(x)\), such that the potential for high payoffs improves with the state of the environment (Figure 3). We let this payoff be the utility that is achieved by an individual who is perfectly adapted to state \(x\) (i.e., \(y_{t}=x_{t}\)). When there is some degree of divergence between the current environmental state and an individual's adaptation, then utility decreases. 
We assume that utility can be described as a Gaussian function of environmental adaptation, \(y\), given by \[U(x,y)=\pi(x)\;\exp\left(\frac{-\ln(2)(x-y)^{2}}{a^{2}}\right) \tag{4}\] where \(a\) defines the degree of misadaptation at which utility is cut in half from its peak. This utility function adheres to our assumption that when environmental adaptation, \(y\), is equal to the current environmental state, \(x\), then utility, \(U(x,y)\), is equal to the payoff \(\pi(x)\).

Figure 3: Illustrations showing two representative cases. In the first panel, the relationship between the environmental state and maximum payoff, \(\pi(x)\), is shown for a case where the payoffs are highly sensitive to the environmental state (Case 1) and where payoffs vary less in response to different environmental states (Case 2). The second panel shows the relationship between the misadaptation to the environment and how much of the payoffs individuals realize as utility. In Case 1, individuals are more sensitive to misadaptation than in Case 2. Finally, in the bottom panel, these impacts are aggregated and utility is shown as a function of the state most well adapted to \((y)\) for a current environmental state of \(x=7\).

### Human-environmental dynamics The coupled dynamics of the environment and adaptation can be described by the system of difference equations \[x_{t+1}=\left(rx_{t}\left(1-\frac{x_{t}}{K}\right)-\frac{cx_{t}^{2}}{x_{t}^{2}+h^{2}}\right)+\left(1+i_{t}\right)x_{t} \tag{5}\] \[i_{t+1}=\left(1-\frac{1}{T}\right)i_{t}+\eta_{t} \tag{6}\] \[y_{t+1}=l(x_{t}-y_{t})+y_{t}, \tag{7}\] where \(i\) is a red noise term, \(x\) is the state of the environment, and \(y\) is the environment to which individuals are most well adapted. Figure 2 shows that for low extraction rates, the sole stable equilibrium of the system is a high environmental state, but as extraction increases, a tipping point is crossed and the environmental state collapses. Layered on top of this tipping point is human adaptation, which influences wellbeing. We start by focusing solely on the dynamics of the environment and adaptation. Later we will turn our attention to the implications of these dynamics for wellbeing. ## Results ### Environmental adaptation in three regimes Figure 2 identifies three regimes which exhibit qualitatively distinct environmental dynamics. In regime 1, there is a single stable equilibrium with high resource biomass. In regime 2, there are alternative stable states, one with high biomass and one with a degraded environmental state. Lastly, in regime 3, only the degraded environmental equilibrium remains. We are motivated by the scenario in which historical conditions of the system correspond to regime 1. In other words, we start in a scenario where high biomass predominates. Nevertheless, the system is stochastic, and fluctuation about this high-biomass equilibrium can be significant in magnitude. The dynamics of adaptation are governed by the same equation across all three regimes, with agents adjusting their practices toward the current environmental state. Figure 4a shows that dynamics of the environment in regime 1 exhibit significant variation but that adaptation generally falls within the range of environmental variability. In response to shifting economic structures and environmental change, we assume that the extraction rate, \(c\), in the system will increase over time and the system will eventually fall within regime 2. 
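Trajectories of the kind shown in Figure 4 can be generated by iterating Equations 5-7 and recording the utility from Equation 4 along the way. The sketch below builds on the resource routine above; it is illustrative only, not the authors' released code, and the linear payoff \(\pi(x)\) and the value of \(a\) are our own assumptions chosen to lie within the ranges given in Table 1.

```python
import numpy as np

def simulate_coupled(c, l, a=4.0, r=1.0, K=10.0, h=1.0, T=30, beta=0.07,
                     x0=8.0, steps=2000, seed=0):
    """Iterate Eqs. (5)-(7) and record the utility of Eq. (4) at each step."""
    def pi(x):
        return 5.0 + 0.25 * x  # assumed linear payoff spanning the 5-10 range in Table 1

    rng = np.random.default_rng(seed)
    x, y, i = x0, x0, 0.0
    xs, ys, us = [], [], []
    for _ in range(steps):
        us.append(pi(x) * np.exp(-np.log(2) * (x - y) ** 2 / a ** 2))  # Eq. (4)
        xs.append(x)
        ys.append(y)
        x_next = max(r * x * (1 - x / K) - c * x ** 2 / (x ** 2 + h ** 2)
                     + (1 + i) * x, 0.0)                               # Eq. (5)
        i = (1 - 1 / T) * i + rng.normal(0.0, beta)                    # Eq. (6)
        y = l * (x - y) + y                                            # Eq. (7), uses the old x
        x = x_next
    return np.array(xs), np.array(ys), np.array(us)

# Slow adaptation in the flickering regime: long stretches of misadaptation.
x_t, y_t, u_t = simulate_coupled(c=2.0, l=0.01)
```

With larger \(l\) the adaptation state tracks the environment closely, whereas with \(l\ll 1\) it lags far behind each flip between basins, which is the qualitative pattern described for Figure 4.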
Figure 4b and Figure 4c with \(c\) values corresponding to regime 2 show examples of flickering environmental dynamics. Whereas adaptation largely remains within the range of environmental variability in regime 1, in regime 2, when the system flips from one equilibrium region to the other, there are long time periods where adaptation is significantly misaligned from the state of the environment. We show that this has important implications for the utility (wellbeing) of agents in regime 2. Figure 4d shows a case where the extraction rate is high enough that the system has been pushed beyond a tipping point and is in regime 3. The only stable equilibrium is a collapsed environmental state, but stochastic dynamics may nonetheless lead to ephemeral periods where environmental dynamics resemble historical conditions. In regime 3, agents are more or less able to maintain close adaptation to the environment. Because of this, we expect utility to approach the maximum payoffs that can be attained under perfect adaptation.

Figure 4: Temporal dynamics of the environment and adaptation across three regimes. When environmental dynamics fall into regime 1 or 3, individuals' adaptation generally falls within the range of day-to-day environmental variability. For flickering environmental dynamics as seen in regime 2, when the environment flips from one basin of attraction to another, there are extended periods where individuals are significantly misadapted to the environment. This has important implications for wellbeing.

#### Wellbeing and environmental regimes Wellbeing depends both on the maximum profitability that could be achieved given environmental conditions and on the degree to which agents are adapted to the environmental state. In this section, we examine how wellbeing (i.e., utility) depends on the environmental extraction rate, \(c\). As discussed, extraction rates structure the system into three qualitatively distinct regimes. Figure 5 shows how average payoff assuming perfect adaptation and average utility depend on which regime the extraction rate falls within and on the level of adaptive capacity, \(l\). Figure 5 shows the maximum average payoff, \[\overline{\pi}=\frac{1}{t_{\max}}\sum_{t=1}^{t_{\max}}\pi\left(x_{t}\right), \tag{8}\] that could be achieved through time for different fixed rates, \(c\), of environmental extraction. The figure also represents average utility, \[\overline{U}=\frac{1}{t_{\max}}\sum_{t=1}^{t_{\max}}U\left(x_{t},y_{t}\right), \tag{9}\] for several levels of adaptive capacity, \(l\). Unlike \(\overline{\pi}\), \(\overline{U}\) depends on both the state of the environment, \(x_{t}\), and adaptation, \(y_{t}\). For higher values of adaptive capacity, \(l\), the qualitative pattern of average utility mirrors that of average payoff. When \(l=0.1\), both the average payoff, \(\overline{\pi}\), and the average utility, \(\overline{U}\), gradually decline as the extraction rate increases through the three regimes. For moderate to low levels of adaptive capacity (e.g., \(l=0.01\) to \(l=0.001\)), the qualitative patterns seen in average payoff, and in average utility under high adaptive capacity, no longer hold. In these cases, the flickering dynamics of regime 2 exact a costly toll on average utility. The repeated switching between high and low environmental states leads to extended periods of misadaptation that diminish average utility and create a noticeable utility trough in regime 2. 
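The averages in Equations 8 and 9 can be approximated numerically by sweeping the extraction rate and the adaptive capacity, reusing the `simulate_coupled` sketch above. As before, this is an illustrative reconstruction with our own assumed payoff function, not the code behind Figure 5.

```python
import numpy as np

c_values = np.linspace(0.0, 4.0, 41)      # spans regimes 1-3 (range of c in Table 1)
l_values = [0.1, 0.01, 0.001]             # adaptive capacities compared in the text

mean_payoff = []                          # Eq. (8), perfect-adaptation benchmark
mean_utility = {l: [] for l in l_values}  # Eq. (9), realized average utility
for c in c_values:
    x_t, _, _ = simulate_coupled(c=c, l=1.0)
    mean_payoff.append(np.mean(5.0 + 0.25 * x_t))  # same assumed pi(x) as above
    for l in l_values:
        _, _, u_t = simulate_coupled(c=c, l=l)
        mean_utility[l].append(np.mean(u_t))
```

Plotted against `c_values`, these averages should show the smooth decline for high adaptive capacity and the utility trough at intermediate (flickering) extraction rates for the lower adaptive capacities described below.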
In regime 3, the collapsed environment doesn't vary as widely, and even agents with limited adaptive capacity can eventually adapt their behaviors to the permanently degraded environmental state. This leads to an increase in the average utility of agents with low adaptive capacity and a convergence of utilities and payoffs across all scenarios. This result raises important questions about the usefulness of flickering as an early warning signal in socio-environmental systems. Flickering dynamics can depress average wellbeing by creating highly variable environmental conditions with abrupt shifts that require significant time to adapt to, especially for agents with low adaptive capacity. Our simulations suggest that rather than being an early warning signal, in socio-environmental systems, flickering can be a uniquely challenging regime for agents with low adaptive capacity.

Figure 5: Maximum attainable payoff given perfect environmental adaptation and realized utility under actual environmental adaptation for a range of extraction rates, \(c\), and levels of adaptive capacity, \(l\). Flickering can occur in regime 2 for intermediate extraction rates. Flickering dynamics present unique challenges for adaptive agents, especially those with limited adaptive capacity. When agents have low adaptive capacity, flickering leads to a utility trough that does not occur for agents with high adaptive capacity.

#### Transformational change In this section we consider the option of transformational change, where agents can choose once to dramatically overhaul their practices and adopt a more generalist approach. In Figure 3 we showed two alternative cases for the structure of the payoff and utility functions. The first case, where payoffs are highly sensitive to the environmental state and utility is sensitive to misadaptation, has been the focus of the preceding simulation results. In essence, we have assumed that individuals attempt to become specialists in their environment. The second case shown in Figure 3 highlights an alternative possibility, that payoffs are less sensitive to environmental conditions and utility is less sensitive to misadaptation. This corresponds more closely with a generalist strategy, where peak payoffs may never be as high, but adaptation to precise environmental conditions is less important. Transformation to a generalist approach is an alternative to specialist adaptation to one's environment. Under transformation, agents fundamentally alter their practices in order to switch their payoff and utility functions from those in case 1 to those described by case 2. Figure 6 shows the maximum average payoff, \(\overline{\pi}\), of a specialist approach (case 1, in dark green) and a transformational generalist approach (case 2, in dark purple). In this example, the payoff functions are such that if agents were always perfectly adapted to the environment, they would choose the transformational approach as the system nears regime 3 and the dark purple circles rise above the dark green circles. However, agents will not always be perfectly adapted to the current environment, so their realized average utility will fall below the maximum attainable average payoff. In this case, agents would increase their average utility by adopting the transformational approach (case 2) while the system is still in regime 1. After transformation, the flickering dynamics experienced in regime 2 are far less detrimental to average utility. 
After crossing into regime 3, the average utility of the transformation approach remains higher than that of the baseline specialist approach. Other payoff and utility functions can be constructed where the transformational generalist approach is only favored during the flickering regime. In this case, analyzing average payoffs could indicate that transformation is never beneficial, while an analysis focusing on realized average utility could show that transformation can help agents navigate flickering critical transitions without suffering as greatly from the utility trough that might otherwise occur.

Figure 6: If agents are posed with the problem of when to transform their practices from a specialist approach (Case 1) to a generalist approach (Case 2) in response to a steadily increasing extraction rate in the system, the answer will depend on whether they take adaptation into account. Case 1 corresponds to the status quo, where payoffs are highly dependent on the state of the environment and highly sensitive to misadaptation. Case 2 corresponds to a transformative change, where payoffs are less sensitive to the state of the environment and misadaptation. Case 1 resembles a setting where individuals specialize their practices to fit environmental conditions. Case 2 represents a transformative change where individuals' wellbeing is less sensitive to the environmental and adaptation states. As the extraction rate increases, agents are eventually better off adopting transformative change. However, the timing of this shift depends on the adaptive capacity of agents. If people are always perfectly adapted to the environment, then payoffs under the status quo remain higher than those under transformation until after the system passes its tipping point at the cusp of regimes 2 and 3. On the other hand, when people are limited in their adaptive capacity, the optimal transformation timing is earlier, as illustrated by the intersection of the pink and green points (utility curves) in regime 1. In this case, people are better off transforming before the system even reaches the flickering regime.

## Case studies ### Nomadic pastoral systems in Mongolia Climate change, land degradation and socio-political disruptions threaten the livelihoods of tens of millions of nomadic pastoralists who inhabit the semi-arid areas of Central Asia, the Middle East, North Africa and the Sahel zone. Nomadic pastoralists move seasonally between several locations that provide adequate conditions for livestock grazing and water access (Sneath, 2003). Mongolia is the world's most sparsely populated country, has an arid to semi-arid climate, few forests, and very limited arable land (Mongolian Statistical Information System, 2021). Nomadic lifestyles are well adapted to these highly variable landscapes and climate conditions. In Mongolia, nomadic pastoralists employ many adaptive strategies including storage of fodder for use during poor grazing conditions, mobility to seek out better rangelands, and communal pooling of resources and labor (Fernandez-Gimenez et al., 2015). Today, about one third of the country's population continues to be nomadic or semi-nomadic, while two thirds live in urban areas (see Figure SI 5). Over the past 70 years, Mongolia has experienced an increase in average annual temperatures of 2.1\({}^{\circ}\)C (Lkhagvadorj et al., 2013), which has been accompanied by the proliferation of extreme weather conditions, in particular since the mid-1990s (Enebish et al., 2020). 
Severe winter weather events called Dzud, characterized by inaccessible grazing resources, have occurred repeatedly and with devastating effects for nomadic livelihoods, killing 33, 23 and 10 million head of cattle in 1999, 2003 and 2010, respectively (Rao et al., 2015). During this time of dramatic environmental change, Mongolia also experienced a political transition from a socialist to a capitalist regime. This brought about changes to the governance, institutional, and public service structures upon which both settled and nomadic people rely. Prior to the emergence of the capitalist regime in 1990, herders were organized into modular collectives of families around community centers known as _'bag'_. These centers provided technical, social, health and veterinary services, as well as emergency stocks of fodder to buffer against Dzud events (Fernandez-Gimenez et al., 2015).

\begin{table} \begin{tabular}{c|c|l} Variable & Range of values & Description \\ \hline \(x_{t}\) & 0–20 & Current environmental state \\ \(y_{t}\) & 0–20 & Current adaptation state \\ \(l\) & .001–1 & Adaptation rate \\ \(i_{t}\) & & Auto-correlated red noise \\ \(T\) & 30 & Timescale over which noise becomes uncorrelated \\ \(\eta_{t}\) & 0 & i.i.d. normal error term \\ \(\beta\) & .07 & The standard deviation of \(\eta\)’s \\ \(r\) & 1 & Resource growth rate \\ \(K\) & 10 & Resource carrying capacity \\ \(c\) & 0–4 & Extraction rate \\ \(h\) & 1 & Extraction half-saturation constant \\ \(\pi(x)\) & 5–10 & Environmentally dependent payoffs \\ \(U(x,y)\) & 0–\(\pi(x)\) & Utility as a function of environmental and adaptation states \\ \(a\) & 3–5 & Value of \(|x-y|\) for which \(U(x,y)=1/2\,\pi(x)\) \\ \end{tabular} \end{table} Table 1: Variables and parameters in the model, their approximate range of values, and meanings. Exact parameter values used for each figure are available in Table SI 1.

These centers bolstered the resilience of Mongolian pastoralist communities, even to severe shocks such as the loss of a family's entire herd. They represented a type of community insurance that allowed families who lost their herd to restock the following year (Fernandez-Gimenez et al., 2015; Ahearn, 2018). Upon the institution of a capitalist regime, these governance structures were abandoned. Bag centers and their social services all but disappeared (Fernandez-Gimenez et al., 2015). Consequently, pastoralist families had to adapt by becoming increasingly self-reliant; they grew their livestock herds to raise income during favorable years and to be able to buffer losses during Dzud years. Livestock numbers increased from around 20-25 million during the socialist era, when livestock numbers were strongly regulated, to around 70 million today (Mongolian Statistical Information System, 2021), with grazing pressure growing accordingly (see Supplementary Information, Figure SI 1). In response to these dynamics and the devastating Dzud losses of the past decades, external actors, such as the Asian Development Bank, pushed the Mongolian government to privatize land ownership, in order to give exclusive land use rights to individual families and reduce overgrazing (Sneath, 2003). However, land privatization came at the cost of mobility, an essential element of pastoralism. Mobility is an adaptation to the low average productivity and high spatio-temporal variability of Mongolian rangelands (Sneath, 2003). 
Land privatization and reduced mobility led to a further deterioration of local pastures, adding to the vulnerability of the pastoralist families (Fernandez-Gimenez et al., 2015), (Wang et al., 2013). Thus, a number of compounding factors led to maladaptive measures and the loss of adaptive capacity and wellbeing: The response to extreme weather conditions and changing governance systems resulted in increased grazing pressure and may have provoked signals of environmental flickering leading to massive losses in livestock, while measures promoted by external actors aimed at tackling the loss of previous governance systems on the one hand, and the problem of overgrazing on the other hand, proved to be maladaptive. After the devastating Dzud of 2009/10, a number of donor-initiated community-based natural resource management organizations were initiated with the aim to establish community structures that resembled those that existed during the socialist regime, which had provided adaptive capacity at the community level. Others responded to the increasingly difficult conditions for maintaining rural lifestyles by migrating to urban centers. While the rural population has remained more or less stable over the past thirty years, the urban population has doubled (Mongolian Statistical Information System, 2021)(See Supplementary Information, Figure SI 4). For some, moving to an urban setting presented new opportunities for education and employment. For others, the transformation posed insurmountable challenges. Rapid urbanization has overwhelmed urban governance institutions, and the majority of urban migrants in Mongolia have inadequate access to basic services, such as water, sanitation and electricity (Terbish and Rawsthorne, 2016), and are unable to support their livelihoods once livestock herds are lost to Dzud or sold during the rural-urban migration process. For those that have remained in a (semi-) nomadic lifestyle, there is evidence of adaption to the highly variable (and potentially flickering) dynamics of weather conditions and pasture quality through new forms of communal governance (Fernandez-Gimenez et al., 2015), and private insurance (Ahearn, 2018). Under these highly unstable conditions, pastoralists have to invest into costly adaptation measures. A study examining adaptation strategies to climate impacts in Mongolia found more than 50 adaptation strategies applied in different combinations and with different frequencies (Wang et al., 2013). While land degradation has even been reversed in some areas of the Mongolian Plateau (Guo et al., 2021), many pastures of the Mongolian landscape remain heavily degraded. A combination of high mobility and collective social support structures made Mongolia's semi-nomadic society highly adapted to fluctuating weather conditions and productivity of grasslands. However, climate and governance change has tested the resilience of these pastoralist communities. While results for donor-incentivized community-based resource management have been mixed, lessons can be learned from successful cases. Herd size regulations and agreements to share grazing lands among herder families could help sustain the livelihoods of families with smaller herd sizes (Fernandez-Gimenez et al., 2015). 
When urbanization is the preferred adaptation/transformation strategy, governance mechanisms could be established that facilitate this transformation at an early stage, to avoid extended losses in utility (and wealth) resulting from continued adaptation to 'flickering' environmental conditions. However, given already insufficient urban infrastructures, sprawling (and under-served) urban settlements and overwhelmed city management in cities around the world, including Mongolia, governance actors should carefully consider what types of interventions will contribute to greater sustainability of local human-environment systems. Importantly, given the high cost of adaptation to flickering socio-environmental conditions, the cost and resulting loss of utility associated with (mal-)adaptive efforts must be accounted for when weighing options for when and how to intervene. Governance interventions into rural and urban livelihoods may create new interdependencies and dynamics between social and ecological systems and between rural and urban environments. ### Fisheries Fisheries support the income and food security of millions of people around the world (McClanahan et al., 2015). Fisheries are also being severely impacted by climate change: from increases in ocean temperatures impacting the structural integrity of coral reefs and the fisheries they support (Hoegh-Guldberg et al., 2017), to ocean acidity impacting early life stages of fish (Waldbusser et al., 2015), to species range shifts altering the spatial distribution of fishing effort (Pinsky and Fogarty, 2012), many fisheries are suffering, with subsequent adverse effects on coastal communities (Hollowed et al., 2013). Fishers have developed several ways of dealing with these changes, though. In particular, fishers can change where they go, tracking fish stocks as they shift with climate change (Selden et al., 2020), although the price of fuel greatly constrains where they can go. Switching fisheries and operating in numerous fisheries over the course of a year is also common. This is one way in which fishers "smooth" their income over the year. But it can be costly in time and money, and sometimes impossible given certain fisheries management institutions (e.g., permits are required to fish, but sometimes not available). Switching between fisheries also requires training, knowledge and different fishing gear. Exposure to an ecological regime shift can restructure the incentives for what species are targeted. For example, coral reefs around the world are experiencing more frequent and intense marine heatwaves that lead to coral bleaching and mass coral mortality events (Hoegh-Guldberg et al., 2017; Hughes et al., 2018). In addition to impacts on coral cover, bleaching events are altering reef fish assemblages, especially if reefs experience a shift towards an algal-dominated state (Richardson et al., 2018; Robinson et al., 2019). Return times between bleaching events are presently about six years, and because coral requires on the order of 10-15 years for the fastest species to recover (Gilmour et al., 2013), it is possible that reefs today are experiencing flickering between a high- and a low-coral state or have already transitioned into the latter. For coral reef fisheries, fishers reduce their sensitivity to climate change, including the impacts of flickering between high- and low-coral cover states, through livelihood diversification (Cinner et al., 2012). 
For example, in the Caribbean, alternative livelihoods among coastal fishers include agriculture, forestry, aquaculture, construction work, and ecotourism (Karlsson and Mclean, 2020). However, socio-economic barriers including poverty, a minimal social safety net, or a lack of access to capital can limit adaptive capacity and prevent this diversification (Cinner et al., 2012). Governance aimed at dismantling these barriers may also provide second-order benefits by helping to smooth the transition of socio-economic systems though the flickering stages of ecological regime shifts. The main concern suggested by our modeling is that if there is flickering between ecological/fishery states, then fishers might lose income by repeatedly adapting to different states. In addition to the costs associated with gaining the knowledge, the institutional costs such as attaining permits for a new fishery, and the sunk-costs associated with procuring necessary new fishing gear, may force many fishers to take drastic/transformative action, such as leaving fishing altogether. All of these factors indicate that a flickering transition in the underling ecosystem that a fishery is part of, is likely to cause a decline in wellbeing among fishers, potentially reducing the viability of certain fisheries in the future. ### Forest management and wildfire risk Forest management can be viewed as embedded within a socio-environmental system. The ecological dynamics of forests and the stochastic dynamics of wildfire are both coupled with management practices including timber harvesting, fire suppression, and prescribed fire (Luce et al., 2012; Steelman, 2016). Climate change has led to increasing frequency and severity of drought, as well as hotter peak summer temperatures in the forest ecosystems of the Western US (McKenzie et al., 2004). These changes have coincided with increasing tree density in western forests which was driven, in part, by a long-term management emphasis on fire suppression (Fellows and Goulden, 2008). Furthermore, there has been a dramatic increase in the extent of the wildland-urban interface (Radeloff et al., 2018), which increases the likelihood of ignition events and elevates the magnitude of damages that could result from a wildfire. These increasing stressors, driven by climate change, management policies and development, have combined to increase wildfire risk (Marlon et al., 2012) and raise the spectre of the collapse of these ecosystems and their transition to alternative states (Adams, 2013). Given inherent stochasticity in the ignition and spread of wildfires, forest ecosystems at the cusp of tipping points may exhibit flickering dynamics. In response to these entangled and increasing risks, the USDA Forest Service has developed a wildfire crisis strategy which centers on prescribed fire and other fuels treatments (US Forest Service, 2022). However, current climatic conditions, high fuel loads, and the vast extent of the wildland-urban interface will make the transition towards fire-resilient landscapes challenging. Our model suggests that a transition to a fire-resilient landscape that exhibits flickering could strain people's wellbeing, especially when adaptive capacity of management agencies, communities and individuals is low. This results from the time it takes for social practices to align with environmental states. 
Further, a flickering transition to a fire-resilient landscape may contain periods of time that resemble the historical ecosystem state, where the pressure to invest in adaptations to increase fire resilience may seem unnecessary. Given flickering, these intercalary chapters of unpredictable duration which resemble historical conditions may end with dramatic shifts marked by wildfire. The Forest Service wildfire crisis strategy aims to minimize the risk of these conflagrations by pairing prescribed fire with mechanical fuels treatments so that fire intensity stays low and wildfire risk is diminished. This approach may decrease the risk of a flickering transition, but our modeling results suggest that a complementary management focus on increasing people's adaptive capacity may blunt the negative impacts that a flickering transition can cause. For example, investments in programs which provide education about the risks of wildfire smoke (Wen and Burke, 2022; Burke et al., 2022) and resources for improving indoor air quality could help communities navigate the transition to a fire-resilient landscape with a decreased health burden from the impacts of air pollution. ## Discussion Global change is pushing socio-environmental systems to the brink of tipping points where further small changes to underlying conditions could lead to dramatic shifts in system states. Much work has focused on identifying early warning signals of tipping points and governance interventions that can prevent the collapse of the current socio-environmental state. However, as climate change and other anthropogenic impacts continue (e.g., converting Amazonian rainforests for agricultural and cattle use), we may see more social-ecological systems approach and pass through tipping points despite societies' best efforts to avoid them. In this case, in addition to efforts to avert a tipping point, it may be valuable to design governance interventions that minimize the loss of wellbeing that results from passing through a tipping point. Flickering, where a system switches among alternative stable states as a result of noise, can occur prior to a tipping point and serves as an early warning indicator (Scheffer et al., 2009). However, the highly unpredictable environmental dynamics that occur under flickering pose unique challenges for people's environmental adaptation. In socio-environmental systems, rather than being an early warning signal, flickering may be a primary hurdle to successfully navigating a tipping point. Policies that bolster people's adaptive capacity may be vital to ensuring that means are available to adapt (and thrive) in coming decades. Tools such as (parametric) insurance (Santos et al., 2021), climate clubs (Nordhaus, 2021) and risk pools (Watson et al., 2018; Tilman et al., 2018) are examples of mechanisms that could help people maintain adaptive capacity in the face of global change. A concerning possibility is that when people with low adaptive capacity are exposed to environmental flickering, the reduction in their wellbeing might further erode their adaptive capacity, for example by reducing their wealth or health. This could result in a vicious cycle wherein flickering induces a continuous reduction in the set of adaptation options open to people. Such a cycle could induce conditions that force people into adopting outside options, including urban or international migration. Human migration driven by adverse environmental conditions is a well-known consequence of climate change (Cattaneo et al., 2020). 
A term used in the context of sea-level rise to describe the inevitability of community reorganization in the face of environmental change is "managed retreat" (Alexander et al., 2012). The challenge then is to ensure that this retreat is indeed managed in a way that accounts for the unique impacts that flickering could have during a transition through a regime shift. For many people around the world some form of retreat might be inevitable, either in space (i.e., human migration) or in terms of job sector. Income diversification is among the primary tools that people use for dealing with environmental risks (Brouwer et al., 2007; Shah et al., 2021). Therefore, flickering could incentivize people to diversify or change the industries that secure their income. This is evident in fisheries, where working in numerous fisheries throughout the year can act as a natural means of buffering the stochasticity associated with harvest (Kasperski and Holland, 2013; Finkbeiner, 2015; Cline et al., 2017). Fishers are also known to work in multiple sectors, including jobs on land within the timber and agricultural sectors (Anderson et al., 2017). Environmental flickering in the oceans could lead to fishers redistributing their efforts over the set of fisheries available to them, and other industries that they work in. Analogously to migration, extreme environmental flickering might induce people to permanently leave one industry for another. While we have studied the impact passing through a single regime shift on people's wellbeing, in reality there may be multiple cascading environmental tipping points (Rocha et al., 2018). In this case of systemic environmental risk, the associated social and environmental flickering could co-occur across various dimensions of a person's income portfolio. This might mean fishers will be unable to adapt by moving sectors, and nomadic herders may no longer find community support structures to help them recover from devastating losses. These impacts could scale-up and result in global systemic risks (Centeno et al., 2015) due to the connected nature of our environment, our socio-technological, and our governance systems. Correlated risks among marine heatwaves at sea, droughts on land and economic volatility could interact to present large-scale challenges for communities. Early-warning signals of environmental regime shifts may help people manage their adaptation to impending change (Lenton, 2011). However, we find that some early-warning signals of environmental regime shifts describe socio-environmental dynamics that can already have negative impacts on people. In these cases, our results suggest three types of governance interventions may be warranted either individually, or in combination. First, investments in assuring that people have high adaptive capacity can mitigate the impacts of flickering on wellbeing by helping people remain well adapted to the rapidly changing environment. Second, facilitating transformational change, which partially decouples environmental adaptation and wellbeing, can result in greater wellbeing across a broad range of conditions. Lastly, when a wellbeing trough is unavoidable, interventions that facilitate people's transitions to different ways of life or migration to different places may be warranted. Critically, these investments appear to be most benefical if enacted well before a tipping point is crossed. This suggests that climate adaptation policies may need to be more anticipatory than originally thought. 
## Acknowledgements The findings and conclusions in this publication are those of the authors and should not be construed to represent any official USDA or U.S. Government determination or policy. Simulation code is available at [https://github.com/atilman/regimeshift_wellbeing](https://github.com/atilman/regimeshift_wellbeing). ## References * Adamo (2010) Adamo, S. B. (2010). Environmental migration and cities in the context of global environmental change. _Current Opinion in Environmental Sustainability_, 2(3):161-165. * Adams (2013) Adams, M. A. (2013). Mega-fires, tipping points and ecosystem services: Managing forests and woodlands in an uncertain future. _Forest Ecology and Management_, 294:250-261. * Ahearn (2018) Ahearn, A. (2018). Herders and hazards: Covariate dzud risk and the cost of risk management strategies in a Mongolian subdistrict. _Natural Hazards_, 95:165-181. * Alexander et al. (2012) Alexander, K. S., Ryan, A., and Measham, T. G. (2012). Managed retreat of coastal communities: understanding responses to projected sea level rise. _Journal of Environmental Planning and Management_, 55(4):409-433. * Anderson et al. (2017) Anderson, S. C., Ward, E. J., Shelton, A. O., Adkison, M. D., Beaudreau, A. H., Brenner, R. E., Haynie, A. C., Shriver, J. C., Watson, J. T., and Williams, B. C. (2017). Benefits and risks of diversification for individual fishers. _Proceedings of the National Academy of Sciences_, 114(40):10797-10802. * Bauch et al. (2016) Bauch, C. T., Sigdel, R., Pharaon, J., and Anand, M. (2016). Early warning signals of regime shifts in coupled human-environment systems. _Proceedings of the National Academy of Sciences_, 113(51):14560-14567. * Biggs et al. (2012) Biggs, R., Blencker, T., Folke, C., Gordon, L., Norstrom, A., Nystrom, M., and Peterson, G. (2012). Regime Shifts. In Hastings, A. and Gross, L. J., editors, _Encyclopedia of Theoretical Ecology_, page 609-616. University of California Press. * Brouwer et al. (2007) Brouwer, R., Akter, S., Brander, L., and Haque, E. (2007). Socioeconomic vulnerability and adaptation to environmental risk: A case study of climate change and flooding in Bangladesh. _Risk Analysis: An International Journal_, 27(2):313-326. * Brouwer et al. (2013) Burke, M., Heft-Neal, S., Li, J., Driscoll, A., Baylis, P., Stigler, M., Weill, J. A., Burney, J. A., Wen, J., Childs, M. L., et al. (2022). Exposures and behavioural responses to wildfire smoke. _Nature Human Behaviour_, 6(10):1351-1361. * Carlson et al. (2014) Carlson, S. M., Cunningham, C. J., and Westley, P. A. (2014). Evolutionary rescue in a changing world. _Trends in Ecology & Evolution_, 29(9):521-530. * Cattaneo et al. (2020) Cattaneo, C., Beine, M., Frohlich, C. J., Kniveton, D., Martinez-Zarzoso, I., Mastrorillo, M., Millock, K., Piguet, E., and Schraven, B. (2020). Human migration in the era of climate change. _Review of Environmental Economics and Policy_. * Centeno et al. (2015) Centeno, M. A., Nag, M., Patterson, T. S., Shaver, A., and Windawi, A. J. (2015). The emergence of global systemic risk. _Annual Review of Sociology_, 41(1):65-85. * Chen et al. (2015) Chen, J., John, R., Shao, C., Fan, Y., Zhang, Y., Amarjargal, A., Brown, D. G., Qi, J., Han, J., Lafortezza, R., et al. (2015). Policy shifts influence the functional changes of the CNH systems on the Mongolian plateau. _Environmental Research Letters_, 10(8):085003. * Cinner et al. (2012) Cinner, J., McClanahan, T., Graham, N., Daw, T., Maina, J., Stead, S., Wamukota, A., Brown, K., and Bodin, O. (2012). 
Vulnerability of coastal communities to key impacts of climate change on coral reef fisheries. _Global Environmental Change_, 22(1):12-20. * Cline et al. (2017) Cline, T. J., Schindler, D. E., and Hilborn, R. (2017). Fisheries portfolio diversification and turnover buffer alaskan fishing communities from abrupt resource and market changes. _Nature Communications_, 8(1):1-7. * Dakos et al. (2012) Dakos, V., Carpenter, S. R., Brock, W. A., Ellison, A. M., Guttal, V., Ives, A. R., Kefi, S., Livina, V., Seekell, D. A., van Nes, E. H., et al. (2012). Methods for detecting early warnings of critical transitions in time series illustrated using simulated ecological data. _PLOS One_, 7(7). * Enebish et al. (2020) Enebish, B., Dashkhuu, D., Renchin, M., Russell, M., and Singh, P. (2020). Impact of Climate on the NDVI of Northern Mongolia. _Journal of the Indian Society of Remote Sensing_, 48(2):333-340. * Fellows and Goulden (2008) Fellows, A. W. and Goulden, M. L. (2008). Has fire suppression increased the amount of carbon stored in western US forests? _Geophysical Research Letters_, 35(12). * Fernandez-Gimenez et al. (2015) Fernandez-Gimenez, M. E., Batkhishig, B., Batbuyan, B., and Ulambayar, T. (2015). Lessons from the dzud: Community-based rangeland management increases the adaptive capacity of Mongolian herders to winter disasters. _World Development_, 68:48-65. * Fonseca et al. (2015) Finkbeiner, E. M. (2015). The role of diversification in dynamic small-scale fisheries: lessons from Baja California Sur, Mexico. _Global Environmental Change_, 32:139-152. * Gatfaoui and De Peretti (2019) Gatfaoui, H. and De Peretti, P. (2019). Flickering in information spreading precedes critical transitions in financial markets. _Scientific Reports_, 9(1):1-11. * Gilmour et al. (2013) Gilmour, J. P., Smith, L. D., Heyward, A. J., Baird, A. H., and Pratchett, M. S. (2013). Recovery of an Isolated Coral Reef System Following Severe Disturbance. _Science_, 340(6128):69-71. * Guo et al. (2021) Guo, X., Chen, R., Thomas, D. S. G., Li, Q., Xia, Z., and Pan, Z. (2021). Divergent processes and trends of desertification in Inner Mongolia and Mongolia. _Land Degradation & Development_, 32:3684-3697. * Hoegh-Guldberg et al. (2017) Hoegh-Guldberg, O., Poloczanska, E. S., Skirving, W., and Dove, S. (2017). Coral Reef Ecosystems under Climate Change and Ocean Acidification. _Frontiers in Marine Science_, 4. * Holling (1959) Holling, C. S. (1959). The components of predation as revealed by a study of small-mammal predation of the european pine sawfly1. _The Canadian Entomologist_, 91(5):293-320. * Hollowed et al. (2013) Hollowed, A. B., Barange, M., Beamish, R. J., Brander, K., Cochrane, K., Drinkwater, K., Foreman, M. G., Hare, J. A., Holt, J., Ito, S.-i., et al. (2013). Projected impacts of climate change on marine fish and fisheries. _ICES Journal of Marine Science_, 70(5):1023-1037. * Hughes et al. (2018) Hughes, T. P., Anderson, K. D., Connolly, S. R., Heron, S. F., Kerry, J. T., Lough, J. M., Baird, A. H., Baum, J. K., Berumen, M. L., Bridge, T. C., Claar, D. C., Eakin, C. M., Gilmour, J. P., Graham, N. A. J., Harrison, H., Hobbs, J.-P. A., Hoey, A. S., Hoogenboom, M., Lowe, R. J., McCulloch, M. T., Pandolfi, J. M., Pratchett, M., Schoepf, V., Torda, G., and Wilson, S. K. (2018). Spatial and temporal patterns of mass bleaching of corals in the Anthropocene. _Science_, 359(6371):80-83. * Karlsson and Mclean (2020) Karlsson, M. and Mclean, E. L. (2020). 
Caribbean Small-Scale Fishers' Strategies for Extreme Weather Events: Lessons for Adaptive Capacity from the Dominican Republic and Belize. _Coastal Management_, 48(5):456-480. * Kasperski and Holland (2013) Kasperski, S. and Holland, D. S. (2013). Income diversification and risk for fishermen. _Proceedings of the National Academy of Sciences_, 110(6):2076-2081. * Katsukawa and Matsuda (2003) Katsukawa, T. and Matsuda, H. (2003). Simulated effects of target switching on yield and sustainability of fish stocks. _Fisheries Research_, 60(2-3):515-525. * Lenton (2011) Lenton, T. M. (2011). Early warning of climate tipping points. _Nature Climate Change_, 1(4):201-209. * Lenton et al. (2017) Lkhagvadorj, D., Hauck, M., Dulamusren, C., and Tsogtbaatar, J. (2013). Pastoral nomadism in the forest-steppe of the Mongolian Altai under a changing economy and a warming climate. _Journal of Arid Environments_, 88:82-89. * Luce et al. (2012) Luce, C., Morgan, P., Dwire, K., Isaak, D., Holden, Z., Rieman, B., Gresswell, R., Rinne, J., Neville, H. M., Gresswell, R., et al. (2012). Climate change, forests, fire, water, and fish: Building resilient landscapes, streams, and managers. General Technical Report RMRS-GTR-290, USDA Forest Service, Rocky Mountain Research Station. * Marlon et al. (2012) Marlon, J. R., Bartlein, P. J., Gavin, D. G., Long, C. J., Anderson, R. S., Briles, C. E., Brown, K. J., Colombaroli, D., Hallett, D. J., Power, M. J., et al. (2012). Long-term perspective on wildfires in the western USA. _Proceedings of the National Academy of Sciences_, 109(9):E535-E543. * May (1974) May, R. M. (1974). Biological populations with nonoverlapping generations: Stable points, stable cycles, and chaos. _Science_, 186(4164):645-647. * May (1977) May, R. M. (1977). Thresholds and breakpoints in ecosystems with a multiplicity of stable states. _Nature_, 269(5628):471-477. * McClanahan et al. (2015) McClanahan, T., Allison, E. H., and Cinner, J. E. (2015). Managing fisheries for human and food security. _Fish and Fisheries_, 16(1):78-103. * McKenzie et al. (2004) McKenzie, D., Gedalof, Z., Peterson, D. L., and Mote, P. (2004). Climatic change, wildfire, and conservation. _Conservation biology_, 18(4):890-902. * Mongolian Statistical Information System (2021) Mongolian Statistical Information System,. (2021). Mongolian statistical information database: www.1212.mn. * Mumby et al. (2007) Mumby, P. J., Hastings, A., and Edwards, H. J. (2007). Thresholds and the resilience of Caribbean coral reefs. _Nature_, 450(7166):98-101. * Nordhaus (2021) Nordhaus, W. (2021). Dynamic climate clubs: On the effectiveness of incentives in global climate agreements. _Proceedings of the National Academy of Sciences_, 118(45). * Pinsky and Fogarty (2012) Pinsky, M. L. and Fogarty, M. (2012). Lagged social-ecological responses to climate and range shifts in fisheries. _Climatic Change_, 115(3):883-891. * Radeloff et al. (2018) Radeloff, V. C., Helmers, D. P., Kramer, H. A., Mockrin, M. H., Alexandre, P. M., Bar-Massada, A., Butsic, V., Hawbaker, T. J., Martinuzzi, S., Syphard, A. D., et al. (2018). Rapid growth of the US wildland-urban interface raises wildfire risk. _Proceedings of the National Academy of Sciences_, 115(13):3314-3319. * Ruff et al. (2018) Rao, M. P., Dai, N. K., D'Arrigo, R. D., Skees, J., Nachin, B., Leland, C., Lyon, B., Wang, S.-Y., and Byambasuren, O. (2015). Dzuds, droughts, and livestock mortality in Mongolia. _Environmental Research Letters_, 10(074012). * Richardson et al. (2018) Richardson, L. 
E., Graham, N. A. J., Pratchett, M. S., Eurich, J. G., and Hoey, A. S. (2018). Mass coral bleaching causes biotic homogenization of reef fish assemblages. _Global Change Biology_, 24(7):3117-3129. * Rietkerk et al. (2004) Rietkerk, M., Dekker, S. C., de Ruiter, P. C., and van de Koppel, J. (2004). Self-organized patchiness and catastrophic shifts in ecosystems. _Science_, 305(5692):1926-1929. * Robinson et al. (2019) Robinson, J. P. W., Wilson, S. K., Jennings, S., and Graham, N. A. J. (2019). Thermal stress induces persistently altered coral reef fish assemblages. _Global Change Biology_, 25(8):2739-2750. * Rocha et al. (2018) Rocha, J. C., Peterson, G., Bodin, O., and Levin, S. (2018). Cascading regime shifts within and across scales. _Science_, 362(6421):1379-1383. * Santos et al. (2021) Santos, F. P., Pacheco, J. M., Santos, F. C., and Levin, S. A. (2021). Dynamics of informal risk sharing in collective index insurance. _Nature Sustainability_, 4(5):426-432. * Scheffer (1989) Scheffer, M. (1989). Alternative stable states in eutrophic, shallow freshwater systems: a minimal model. _Hydrobiological Bulletin_, 23(1):73-83. * Scheffer et al. (2009) Scheffer, M., Bascompte, J., Brock, W. A., Brovkin, V., Carpenter, S. R., Dakos, V., Held, H., Van Nes, E. H., Rietkerk, M., and Sugihara, G. (2009). Early-warning signals for critical transitions. _Nature_, 461(7260):53-59. * Schlenker and Roberts (2009) Schlenker, W. and Roberts, M. J. (2009). Nonlinear temperature effects indicate severe damages to US crop yields under climate change. _Proceedings of the National Academy of sciences_, 106(37):15594-15598. * Selden et al. (2020) Selden, R. L., Thorson, J. T., Samhouri, J. F., Bograd, S. J., Brodie, S., Carroll, G., Haltuch, M. A., Hazen, E. L., Holsman, K. K., Pinsky, M. L., et al. (2020). Coupled changes in biomass and distribution drive trends in availability of fish stocks to US West Coast ports. _ICES Journal of Marine Science_, 77(1):188-199. * Shah et al. (2021) Shah, A. A., Gong, Z., Khan, N. A., Khan, I., Ali, M., and Naqvi, S. A. A. (2021). Live-hood diversification in managing catastrophic risks: Evidence from flood-disaster regions of Khyber Pakhtunkhwa Province of Pakistan. _Environmental Science and Pollution Research_, 28(30):40844-40857. * Shah et al. (2018) Sneath, D. (2003). Land use, the environment and development in post-socialist Mongolia. _Oxford Development Studies_, 31(4):441-459. * Steelman (2016) Steelman, T. (2016). US wildfire governance as social-ecological problem. _Ecology and Society_, 21(4). * Taylor et al. (1993) Taylor, K. C., Lamorey, G., Doyle, G., Alley, R., Grootes, P., Mayewski, P. A., White, J., and Barlow, L. (1993). The 'flickering switch'of late Pleistocene climate change. _Nature_, 361(6411):432-436. * Terbish and Rawsthorne (2016) Terbish, B. and Rawsthorne, M. (2016). Social exclusion in Ulaanbaatar city Mongolia. _Asia Pacific Journal of Social Work and Development_, 26(2-3):88-101. * Tilman et al. (2018) Tilman, A. R., Levin, S., and Watson, J. R. (2018). Revenue-sharing clubs provide economic insurance and incentives for sustainability in common-pool resource systems. _Journal of Theoretical Biology_, 454:205-214. * US Forest Service (2022) US Forest Service,. (2022). Confronting the Wildfire Crisis: A Strategy for Protecting Communities and Improving Resilience in America's Forests. Technical Report FS-1187a, UDSA Forest Service. * Vitousek et al. (2017) Vitousek, S., Barnard, P. L., Fletcher, C. H., Frazer, N., Erikson, L., and Storlazzi, C. 
D. (2017). Doubling of coastal flooding frequency within decades due to sea-level rise. _Scientific Reports_, 7(1):1-9. * Waldbusser et al. (2015) Waldbusser, G. G., Hales, B., Langdon, C. J., Haley, B. A., Schrader, P., Brunner, E. L., Gray, M. W., Miller, C. A., and Gimenez, I. (2015). Saturation-state sensitivity of marine bivalve larvae to ocean acidification. _Nature Climate Change_, 5(3):273-280. * Wang et al. (2013) Wang, J., Brown, D. G., and Agrawal, A. (2013). Climate adaptation, local institutions, and rural livelihoods: A comparative study of herder communities in Mongolia and Inner Mongolia, China. _Global Environmental Change_, 23(6):1673-1683. * Wang et al. (2012) Wang, R., Dearing, J. A., Langdon, P. G., Zhang, E., Yang, X., Dakos, V., and Scheffer, M. (2012). Flickering gives early warning signals of a critical transition to a eutrophic lake state. _Nature_, 492(7429):419-422. * Watson et al. (2018) Watson, J. R., Armerin, F., Klinger, D. H., and Belton, B. (2018). Resilience through risk management: Cooperative insurance in small-holder aquaculture systems. _Heliyon_, 4(9):e00799. * Wen and Burke (2022) Wen, J. and Burke, M. (2022). Lower test scores from wildfire smoke exposure. _Nature Sustainability_, pages 1-9. * Wax and Wax (2015) _Supplementary Information for:_ Figure SI 2: Development of monetary income in Mongolian Tugrik. The drop in the mid-1990’s reflects the impacts of the 1997 Asian financial crisis. A gap emerges between urban and rural incomes from the late 2000’s. Data publicly available from Mongolian Statistical Information System (2021). Figure SI 3: Income/expenditure ratio for rural and urban households 1966-2020. Urban household income/expenditure ratios remain relatively steady until the 1997 and continue to fluctuate strongly until 2010, when they become more steady again. This could indicate the transition to an urban regime, as urbanization started to increase significantly in the late 1990’s. Rural income/expenditure ratios are more dynamic from year-to-year, and while they have followed urban dynamics more closely since 2010, there has been a large gap between urban and rural, with rural income/expenditure ratios being significantly lower that urban ratios. This indicates the costly toll on average utility resulting from the high costs of adaptation to the flickering regime for rural households. Data publicly available from Mongolian Statistical Information System (2021). Figure SI 4: Population development in Mongolia 1989-2021. Data publicly available from Mongolian Statistical Information System (2021).
2308.04889
NLLG Quarterly arXiv Report 06/23: What are the most influential current AI Papers?
The rapid growth of information in the field of Generative Artificial Intelligence (AI), particularly in the subfields of Natural Language Processing (NLP) and Machine Learning (ML), presents a significant challenge for researchers and practitioners to keep pace with the latest developments. To address the problem of information overload, this report by the Natural Language Learning Group at Bielefeld University focuses on identifying the most popular papers on arXiv, with a specific emphasis on NLP and ML. The objective is to offer a quick guide to the most relevant and widely discussed research, aiding both newcomers and established researchers in staying abreast of current trends. In particular, we compile a list of the 40 most popular papers based on normalized citation counts from the first half of 2023. We observe the dominance of papers related to Large Language Models (LLMs) and specifically ChatGPT during the first half of 2023, with the latter showing signs of declining popularity more recently, however. Further, NLP related papers are the most influential (around 60\% of top papers) even though there are twice as many ML related papers in our data. Core issues investigated in the most heavily cited papers are: LLM efficiency, evaluation techniques, ethical considerations, embodied agents, and problem-solving with LLMs. Additionally, we examine the characteristics of top papers in comparison to others outside the top-40 list (noticing the top paper's focus on LLM related issues and higher number of co-authors) and analyze the citation distributions in our dataset, among others.
Steffen Eger, Christoph Leiter, Jonas Belouadi, Ran Zhang, Aida Kostikova, Daniil Larionov, Yanran Chen, Vivian Fresen
2023-07-31T11:53:52Z
http://arxiv.org/abs/2308.04889v1
# NLLG Quarterly arXiv Report 06/23: ###### Abstract The rapid growth of information in the field of Generative Artificial Intelligence (AI), particularly in the subfields of Natural Language Processing (NLP) and Machine Learning (ML), presents a significant challenge for researchers and practitioners to keep pace with the latest developments. To address the problem of information overload, this report by the Natural Language Learning Group at Bielefeld University focuses on identifying the most popular papers on arXiv, with a specific emphasis on NLP and ML. The objective is to offer a quick guide to the most relevant and widely discussed research, aiding both newcomers and established researchers in staying abreast of current trends. In particular, we compile a list of the 40 most popular papers based on normalized citation counts from the first half of 2023. We observe the dominance of papers related to Large Language Models (LLMs) and specifically ChatGPT during the first half of 2023, with the latter showing signs of declining popularity more recently, however. Further, NLP related papers are the most influential (around 60% of top papers) even though there are twice as many ML related papers in our data. Core issues investigated in the most heavily cited papers are: LLM efficiency, evaluation techniques, ethical considerations, embodied agents, and problem-solving with LLMs. Additionally, we examine the characteristics of top papers in comparison to others outside the top-40 list (noticing the top paper's focus on LLM related issues and higher number of co-authors) and analyze the citation distributions in our dataset, among others. ## 1 Introduction In an era of ever-accelerating information flow, staying abreast of the overwhelming flood of data and research output is an intimidating task. This holds true especially in the context of the current large public interest (and even hype) surrounding Generative AI, with papers disseminated in ever shorter time intervals. This report, published by the Natural Language Learning Group ([https://nl2g.github.io/](https://nl2g.github.io/)) at Bielefeld University, aims to alleviate the information overload problem, even if only by a small extent, by identifying the currently most popular papers on the arXiv ([https://arxiv.org/](https://arxiv.org/)), especially focusing on the AI subfields natural language processing (NLP) and machine learning (ML) as some of the most vividly discussed research areas, including in mainstream media. Our intention is to give practitioners, incomers, and users of AI, from related and non-related fields (e.g., the social sciences or digital humanities) a quick guide on the most popular and presumably most relevant papers in order to better and (more) quickly grasp current developments. We place particular emphasis on exploring arXiv,1 given its status as a comprehensive and extremely popular pre-print repository. Notably, arXiv's expedited publication process provides a distinct advantage over traditional conferences and journals, ensuring that the latest research becomes readily available to the scientific community at a much faster pace. Footnote 1: Our report is similar to a ‘conference report’ as a popular form of science communication, e.g., [https://www.romanklinger.de/blog-assets/2023-05-12/eacl2023-conf-report.pdf](https://www.romanklinger.de/blog-assets/2023-05-12/eacl2023-conf-report.pdf). 
But instead of focusing on conferences, we focus on arXiv for multiple reasons: among others, (i) in an age of rapid developments, conferences and journals are too slow and often lagging behind recent developments; (ii) as everyone who regularly submits to NLP/ML conferences knows, conferences also suffer from low reviewing quality, with junior and non-expert reviewers abounding. Instead, we focus on citations (even though these are not unproblematic themselves) as a form of large-scale crowd voting. This report is structured as follows. In Section 2, we outline our methodology, which is entirely straightforward: we select papers from arXiv from the first half of the year 2023 and sort them by normalized citation counts. In Section 3, we show and discuss the list of the 40 most popular papers -- in terms of normalized citation counts -- from our arXiv dataset. In Section 4, we provide an analysis of our arXiv dataset relating to citation distributions, arXiv categories involved, characteristics of top papers, and popularity of 'hype' concepts such as ChatGPT and large language models (LLMs) over time. In Section 5, we conclude. Among our key findings are that: (i) NLP, once a niche area of research, is now considerably more influential than ML in terms of the citations it attracts: even though there are twice as many ML papers in our datasets, \(\sim\)60% of the most highly cited papers are from NLP; (ii) LLM and ChatGPT related papers have clearly dominated the first half of 2023, but especially ChatGPT is now on the decline; (iii) the efficient open-source model LLaMA from Meta AI is the relatively and absolutely most cited paper in our dataset, leaving behind the larger and proprietary ChatGPT and GPT-4. Our code and data are available from [https://github.com/NL2G/Quaterly-Arxiv](https://github.com/NL2G/Quaterly-Arxiv). ## 2 Methodology To identify the most influential papers from the AI subfields NLP and ML, we used the following methodology. 1. **Data Retrieval from arXiv**: We collect all papers from 01/01/2023 to 06/30/2023 belonging to the arXiv categories cs.CL (computation and language) and cs.LG (machine learning) using a Python arXiv API. Our retrieval time is **July, 29, 2023** (which is important, \begin{table} \begin{tabular}{c|c c c} \hline \hline **Dataset name** & **Size** & **Time period** & **\# Primary Categories** \\ \hline arxiv-0623 & 20,843 & 01/01/2023-06/30/2023 & 123 \\ arxiv-0623-top40 & 40 & 01/01/2023-06/30/2023 & 5 \\ \hline \hline \end{tabular} \end{table} Table 1: Elementary statistics on our two released datasets. Size is the number of papers in each dataset; the last column gives the number of distinct primary arXiv categories our papers are assigned to. because citation counts are constantly in flux). ArXiv papers can be updated anytime; we take the date of the first submission of a paper to arXiv as its publication date. 2. **z-score calculation:** For each paper, we extract its citation count, as a measure of popularity and arguably importance [1], from Semantic Scholar [https://www.semanticscholar.org/](https://www.semanticscholar.org/). Since papers published at different time points may naturally have different citation counts (e.g., older papers have a higher chance of being cited than very novel papers), we calculate a _normalized citation count_ by determining _how many standard deviations a paper is above the mean of citations of all papers published in the same week (Sunday-Saturday)_. (A short computational sketch of this normalization is given at the start of Section 4.)
This is the so-called z-score of Newman [23]: \[z_{t}=\frac{c_{t}-\textit{mean}(\textbf{c}(t))}{\textit{std}(\textbf{c}(t))}\] for a paper published in week \(t\) with citation count \(c_{t}\); \(\textbf{c}(t)\) is the list of citation counts of all papers published in week \(t\). If a paper lies several standard deviations above the mean (for all papers published in the same week), it can be considered excellent for its class. For example, in a normal distribution, only about 16% of data points lie one standard deviation above the mean value. As will be seen below, our top papers lie at least 9-12 standard deviations above the mean.4 Footnote 4: Our approach of identifying top papers in arXiv via the zscore is similar to [11]. 3. **Manual Evaluation** The published date on arXiv might differ from the actual first publication/release/submission date of a paper, e.g., when the authors upload the paper much later to arXiv. Thus, we conduct a manual evaluation to verify if a paper genuinely appeared the first time as indicated by its arXiv release time stamp. If the paper was available earlier, we remove it from consideration. Steps 1 and 2+3 above result in two distinct datasets that we release with this report. We refer to them as arxiv-0623 and arxiv-0623-top40, respectively. Table 1 gives elementary statistics on each of them. ## 3 Top \(N\) papers Table 2 showcases the top 20 papers extracted according to the methodology described in Section 2. We make several interesting observations: * 13 out of 20 (65%) of papers have cs.CL as their prime arXiv category (note that authors of papers may wish to indicate as many additional categories as they desire). cs.LG is the prime category 3 times, followed by cs.CV (computer vision; 2 times) and cs.CR (cryptography) and cs.AI (1 time each). * The absolute citation counts vary drastically, with 14 as lowest number in our top-20 list for a paper published in very late May (_Large Language Models are not Fair Evaluators_[30]) and 874 as highest numbers for the LLaMA paper [28] published in late February. The relative citation counts vary from 12 standard deviations above the mean to 28 standard deviations above the mean. \begin{table} \begin{tabular}{|c|l|l|l|l|l|l|} \hline No. & Title & Cat. & Link & Week & Cit & z-score \\ \hline 1 & LLaMA: Open and Efficient Foundation Language Models & cs.CL & [http://arxiv.org/abs/2302](http://arxiv.org/abs/2302). & 9 & 874 & 28.051 \\ 2 & GPT-4 Technical Report & cs.CL & [http://arxiv.org/abs/2303](http://arxiv.org/abs/2303). & 11 & 509 & 25.382 \\ 3 & PaLM 2 Technical Report & cs.CL & [http://arxiv.org/abs/2305](http://arxiv.org/abs/2305). & 20 & 82 & 25.182 \\ 4 & Sparks of Artificial General Intelligence: Early experiments with GPT-4 & cs.CL & [http://arxiv.org/abs/2303](http://arxiv.org/abs/2303). & 12 & 354 & 24.302 \\ 5 & PaLM-E: An Embodied Multimodal Language Model & cs.LG & [http://arxiv.org/abs/2303](http://arxiv.org/abs/2303). & 10 & 164 & 21.225 \\ 6 & QLoRA: Efficient Finetuning of Quantized LLMs & cs.LG & [http://arxiv.org/abs/2305](http://arxiv.org/abs/2305). & 21 & 30 & 19.944 \\ 7 & Segment Anything & cs.CV & [http://arxiv.org/abs/2304](http://arxiv.org/abs/2304). & 14 & 165 & 18.548 \\ 8 & Judging LLM-as-a-judge with MT-Bench and Chatbot Arena & cs.CL & [http://arxiv.org/abs/2306](http://arxiv.org/abs/2306). 
& 23 & 21 & 17.916 \\ 9 & A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity & cs.CL & [http://arxiv.org/abs/2302](http://arxiv.org/abs/2302). & 6 & 214 & 16.819 \\ 10 & A Survey of Large Language Models & cs.CL & [http://arxiv.org/abs/2303](http://arxiv.org/abs/2303). & 13 & 169 & 16.594 \\ 11 & Visual Instruction Tuning & cs.CV & [http://arxiv.org/abs/2304](http://arxiv.org/abs/2304). & 16 & 89 & 15.277 \\ 12 & Tree of Thoughts: Deliberate Problem & cs.CL & [http://arxiv.org/abs/2305](http://arxiv.org/abs/2305). & 20 & 49 & 14.968 \\ 13 & Voyager: An Open-Ended Embodied Agent with Large Language Models & cs.AI & [http://arxiv.org/abs/2305](http://arxiv.org/abs/2305). & 21 & 21 & 13.860 \\ 14 & Toolformer: Language Models Can Teach Themselves to Use Tools & cs.CL & [http://arxiv.org/abs/2302](http://arxiv.org/abs/2302). & 6 & 175 & 13.716 \\ 15 & How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection & cs.CL & [http://arxiv.org/abs/2301](http://arxiv.org/abs/2301). & 3 & 94 & 13.712 \\ 16 & Extracting Training Data from Diffusion Models & cs.CR & [http://arxiv.org/abs/2301](http://arxiv.org/abs/2301). & 5 & 97 & 13.596 \\ 17 & Large Language Models are not Fair Evaluators & cs.CL & [http://arxiv.org/abs/2305](http://arxiv.org/abs/2305). & 22 & 14 & 13.352 \\ 18 & HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face & cs.CL & [http://arxiv.org/abs/2303](http://arxiv.org/abs/2303). & 13 & 129 & 12.614 \\ 19 & A Watermark for Large Language Models & cs.LG & [http://arxiv.org/abs/2301](http://arxiv.org/abs/2301). & 4 & 76 & 12.481 \\ 20 & DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature & cs.CL & [http://arxiv.org/abs/2301](http://arxiv.org/abs/2301). & 4 & 76 & 12.481 \\ \hline \end{tabular} \end{table} Table 2: Papers, their prime category, arXiv link, week of first arXiv submission, citation count (as of 07/29/2023) and z-score. **Top 20 papers** according to z-score among all arxiv-0623 papers. \begin{table} \begin{tabular}{|c|l|l|l|l|l|l|} \hline No. & Title & Cat. & Link & Week & Cit & z-score \\ \hline 21 & Mastering Diverse Domains through World & cs.AI & [http://arxiv.org/abs/2301](http://arxiv.org/abs/2301). & 2 & 59 & 12.238 \\ & Models & & 04104v1 & & 04104v1 & \\ 22 & Augmented Language Models: a Survey & cs.CL & [http://arxiv.org/abs/2302](http://arxiv.org/abs/2302). & 7 & 79 & 12.079 \\ & & 07842v1 & & 07842v1 & & \\ 23 & A Comprehensive Survey on Pretrained & cs.AI & [http://arxiv.org/abs/2302](http://arxiv.org/abs/2302). & 7 & 79 & 12.079 \\ & Foundation Models: A History from BERT & & 09419v3 & & & \\ 24 & ImageBind: One Embedding Space To & cs.CV & [http://arxiv.org/abs/2305](http://arxiv.org/abs/2305). & 19 & 39 & 11.966 \\ & Bind Them All & cs.CV & 05665v2 & & 05665v2 & \\ 25 & Muse: Text-To-Image Generation via Masked Generative Transformers & cs.CV & [https://arxiv.org/abs/2301](https://arxiv.org/abs/2301). & 1 & 111 & 11.692 \\ 26 & T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models & cs.CV & [http://arxiv.org/abs/2302](http://arxiv.org/abs/2302). & 7 & 76 & 11.609 \\ 27 & Is ChatGPT a General-Purpose Natural Language Processing Task Solver? & cs.CL & [http://arxiv.org/abs/2302](http://arxiv.org/abs/2302). 
& 6 & 145 & 11.328 \\ 28 & SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (Multi-CoNER 2) & cs.CL & [http://arxiv.org/abs/2305](http://arxiv.org/abs/2305). & 19 & 36 & 11.024 \\ 29 & Mathematical Capabilities of ChatGPT & cs.LG & [http://arxiv.org/abs/2301](http://arxiv.org/abs/2301). & 5 & 79 & 11.016 \\ 30 & The Flan Collection: Designing Data and Methods for Effective Instruction Tuning & cs.AI & [http://arxiv.org/abs/2301](http://arxiv.org/abs/2301). & 5 & 78 & 10.873 \\ 31 & The False Promise of Imitating Property LLMs & cs.CL & 15717v1 & & 15.688v2 \\ 32 & The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only & cs.CL & [http://arxiv.org/abs/2305](http://arxiv.org/abs/2305). & 21 & 16 & 10.480 \\ 33 & Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes & cs.CL & 15717v1 & & 10.421 \\ 34 & Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding & cs.CV & [http://arxiv.org/abs/2305](http://arxiv.org/abs/2305). & 19 & 33 & 10.083 \\ 35 & InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning & cs.CL & [http://arxiv.org/abs/2305](http://arxiv.org/abs/2305). & 21 & 15 & 9.804 \\ 36 & PandaGPT: One Model To Instruction-Follow Them All & cs.LG & [http://arxiv.org/abs/2301](http://arxiv.org/abs/2301). & 2 & 46 & 9.459 \\ 37 & ChatGPT is not all you need. A State of the Art Review of large Generative AI models & cs.CL & [http://arxiv.org/abs/2302](http://arxiv.org/abs/2302). & 5 & 68 & 9.440 \\ 38 & Theory of Mind May Have Spontaneously Emerged in Large Language Models & cs.CL & [http://arxiv.org/abs/2302](http://arxiv.org/abs/2302). & 5 & 68 & 9.440 \\ 39 & mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality & cs.CL & [http://arxiv.org/abs/2304](http://arxiv.org/abs/2304). & 17 & 34 & 9.377 \\ 40 & Otter: A Multi-Modal Model with Context Instruction Tuning & cs.CV & [http://arxiv.org/abs/2305](http://arxiv.org/abs/2305). & 18 & 23 & 9.146 \\ \hline \end{tabular} \end{table} Table 3: Papers, their prime category, arXiv link, week of first arXiv submission, citation count (as of 07/29/2023) and z-score. **Papers 21-40** according to z-score among all arxiv-0623 papers. * The four dominating papers can be seen as technical reports on **LLM foundations models**, including LLMa [28] (the paper with the highest z-score), PaLM 2 [2], and GPT4 (represented twice; once as an OpenAI publication without dedicated authors focusing on technical details [24] and once by a group of Microsoft researchers focused on extensive evaluation [6], both published at around the same time). A "Survey of Large Language Models" [32] (rank 10 in our list) published in late March and already updated 11 times further indicates the popularity of diverse LLMs. * While not all being technical reports or surveys, the vast majority of top papers are centered around LLMs (at least 18 out of 20, i.e., 90%). Exceptions are two papers from the computer vision domain (ranks 7 and 13). * It is interesting that LLaMA [28], a set of **efficient** (and open-source) foundation language models, dominates overall. This hints at the importance of efficiency for LLMs in general, both from an environmental perspective but possibly even more so from a practical perspective, as the LLaMA models can still be fine-tuned even by researchers with a'modest' GPU endowment [19]. 
Efficiency is further represented by QLoRA [8], submitted to arXiv in late May, which discusses efficient fine-tuning of quantized LLMs. * Three top papers [3, 16, 27] (ranks 9, 15 and 18) are specifically centered around **ChatGPT** (arguably as the originator of the new LLM hype [20]) and particularly discuss its _evaluation_ including failure cases. The paper [27] uses ChatGPT to solve AI tasks by querying huggingface. * Two further top papers (ranks 12 and 14) explore **problem solving with LLMs**, one using external tools [25] and one using reasoning strategies [31]. * Using **LLMs for evaluation** is discussed in the two papers [30, 33] (ranks 8 and 17), one for evaluating open-ended dialogue and one discussing biases of evaluation with LLMs. Both papers are much more recent, being published in late May and early June. * Two papers [9, 29] (ranks 5 and 13) discuss **embodied agents** that can interact with the real world, making use of LLMs. * Two papers [17, 22] (ranks 19 and 20) can be seen as particularly discussing the **ethical aspects** of detecting LLM generated text (e.g., for spotting misleading AI generated content or to detect cheating in educational contexts) and watermarking AI generated text, i.e., embedding signals in automatically generated text that allow its algorithmic detection. Both papers were published early on, in late January. * Finally, the exceptions in our top 20 list are two computer vision papers. The _Segment Anything_ paper [18] by Meta AI Research provides a dataset for image segmentation. The paper [7] discusses privacy of image diffusion models such as DALL-E 2 (which can be considered the analogues of LLMs in the computer vision domain). A further computer vision paper introduces a multimodal framework called LLaVA [21], building on top of GPT4. * Recently, there has been a debate whether AI/NLP has become more negative, i.e., whether papers tend to report more negatively regarding ongoing research (e.g., outline limitations and failure cases) [5, 4]. In our top-20 list, only two papers (10%) could be considered critique papers, namely [30], which focuses on and uncovers biases in LLMs as evaluation models, and [7], which criticizes lack of privacy of diffusion models, allowing to retrieve private information from the training data. In the top-40 list, there are two additional negative papers, i.e., [12] which disputes the mathematical capabilities of ChatGPT, and [15], which challenges whether distillation in which a smaller student LLM is trained on the outputs of a larger properetary LLM such as ChatGPT is really effective. A few papers are partly negative, highlighting some limitations, such as [3]. Overall, the most popular papers are (currently) thus positive regarding the development and abilities of recent LLMs. Table 3 gives analogous papers with rank 21 to 40. We refrain from an in-depth analysis as above. The papers have a similar scope, however, with 11 out of 20 (55%) having cs.CL as primary category and 13 out of 20 (65%) having a variant of LLM in their title (language models, ChatGPT, GPT, etc.). Interestingly, the list of papers with ranks 21-40 contain quite a few **multimodal** approaches such as text-to-image generation models, and relatively more so than the list of papers with ranks 1-20. ## 4 Analysis We now briefly perform a few further analyses on our corpus (not only arxiv-0623-top40 but also arxiv-0623) in order to better understand recent developments. 
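Before turning to the per-week statistics, the weekly normalization described in Section 2 can be illustrated with a minimal sketch. The toy records, the helper names, and the choice of the population standard deviation below are illustrative assumptions, not the authors' released pipeline (which is available in the repository linked above):

```python
from datetime import date
from collections import defaultdict
from statistics import mean, pstdev

# Toy records: (arXiv first-submission date, citation count). Illustrative only.
papers = [
    (date(2023, 2, 27), 874),   # e.g., a highly cited late-February paper
    (date(2023, 2, 28), 12),
    (date(2023, 3, 1), 3),
    (date(2023, 3, 14), 509),
    (date(2023, 3, 15), 7),
    (date(2023, 3, 16), 1),
]

def week_key(d: date) -> date:
    """Map a date to the Sunday that starts its Sunday-Saturday week."""
    # Python's weekday(): Monday=0 ... Sunday=6, hence the shift by (weekday+1) % 7.
    return date.fromordinal(d.toordinal() - (d.weekday() + 1) % 7)

# Group citation counts by submission week.
by_week = defaultdict(list)
for d, c in papers:
    by_week[week_key(d)].append(c)

def z_score(d: date, c: int) -> float:
    """Standard deviations above the mean of all papers submitted in the same week."""
    counts = by_week[week_key(d)]
    std = pstdev(counts)          # population std; the paper does not state which flavor is used
    return (c - mean(counts)) / std if std > 0 else 0.0

ranked = sorted(papers, key=lambda p: z_score(*p), reverse=True)
for d, c in ranked:
    print(d, c, round(z_score(d, c), 2))
```

Sorting by this z-score rather than by raw counts is what allows a very recent paper with only a handful of absolute citations to rank alongside an older paper with several hundred.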
How many citations and standard deviations are there per week?Figure 1 gives the mean citation counts of papers belonging to three primary categories (cs.CL, cs.LG, and all others) over time. We observe that: * citations tend to decrease over time (as is expected; more recent papers cannot yet have been cited so frequently), with, on average, decisively fewer than 2 citations per paper starting from May for all three arXiv categories * cs.CL attracts (considerably) more citations than cs.LG and the aggregation of all other involved primary categories * February has been the month with the most impactful papers in cs.CL, especially week 6 (e.g., Toolformer [25] and ChatGPT analysis [3] submitted to arXiv) and week 9 (e.g., LLaMA [28] submitted) Detailed results including overall standard deviations are also give in Table 4. Standard deviations are particularly large in weeks 1, 6, 8-13. How many arXiv categories (scientific subfields) are involved?Our dataset arxiv-0623 comprises 20,843 papers submitted to arXiv between 01/01/2023 and 06/31/2023 with at least one of the indicated categories given as cs.CL or cs.LG. As NLP and ML affect all aspects of life nowadays, we would expect that these papers do not only originate from either ML or NLP. Indeed, we find that our 20,843 papers are assigned to 123 different primary arXiv categories. We give detailed statistics on those 19 primary categories occurring at least 100 times in Table 5. Overall, the most frequent 19 primary categories are made up of 5 top level categories, namely: cs (computer science), stat (statistics), eess (electrical engineering and systems science), math (mathematics) \begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Week Number** & **Week Date** & **Mean** & **Std** & **Mean cs.CL** & **Mean cs.LG** & **Mean Rest** \\ \hline [MISSING_PAGE_POST] \hline \hline \end{tabular} \end{table} Table 4: Mean number of citations, over all papers including standard deviations, and for the primary categories cs.CL, cs.LG and the remaining categories. and quant-ph (quantum physics).5 The five most frequent fine-grained categories are cs.LG, cs.CL, cs.CV (computer vision), stat.ML (statistics, machine learning) and cs.AI (artificial intelligence). Footnote 5: ArXiv does unfortunately not include the humanities or social sciences directly. A pie chart of the distribution of primary categories is shown in Figure 2. cs.LG is the largest category, almost 40% of papers have it as its primary category. cs.CL is only about half the size (but dominates the top-40 papers as discussed above). Other primary categories (outside of the top 5 categories) are about the same size as cs.CL. What distinguishes top papers from other papers?We use the tool of [13] based on the log-likelihood ratio test [10] to determine unusually frequent words in our top-40 papers arxiv-0623-top40 vs. all other papers. Among the top-10 most distinctive unigrams are _chatgpt, gpt-4, modalities, visual, zero-shot_. Among the top bigrams are _language models, large language, models (llms), wide range_. The singular most important trigram is _large language models_. Conversely, words that characterize papers outside the top-40 the best are jargon referring to an older deep learning era such as _learning, neural, deep, network, neural network, machine learning_, etc. While this characterization is very simplistic (it certainly does not satisfy to publish a paper on LLMs to obtain high citation numbers), it is nonetheless insightful. 
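The distinctive-word comparison rests on the log-likelihood ratio test [10]. The following is a generic sketch of that statistic, not the exact tool of [13]; the token lists and function names are illustrative only:

```python
import math
from collections import Counter

def log_likelihood(word, counts_a, counts_b):
    """Dunning's G2 statistic for one word in corpus A vs. corpus B."""
    a, b = counts_a[word], counts_b[word]
    n_a, n_b = sum(counts_a.values()), sum(counts_b.values())
    e_a = n_a * (a + b) / (n_a + n_b)   # expected count in A under the null hypothesis
    e_b = n_b * (a + b) / (n_a + n_b)   # expected count in B under the null hypothesis
    g2 = 0.0
    if a > 0:
        g2 += a * math.log(a / e_a)
    if b > 0:
        g2 += b * math.log(b / e_b)
    return 2.0 * g2

# Toy token streams standing in for the top-40 abstracts vs. the remaining papers.
top40_tokens = "chatgpt gpt-4 llms large language models visual zero-shot".split()
rest_tokens = "neural network deep learning machine learning models".split()

counts_top, counts_rest = Counter(top40_tokens), Counter(rest_tokens)
vocab = set(counts_top) | set(counts_rest)
ranked = sorted(vocab, key=lambda w: log_likelihood(w, counts_top, counts_rest), reverse=True)
print(ranked[:5])
```

A large G2 value flags a word whose frequency differs strongly between the two corpora; whether it is over-represented in the top-40 set or in the remainder can be read off by comparing the relative frequencies.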
Top-40 papers also have way more authors on average (11.8, with a standard deviation of 19.5) compared to the remaining papers (4.5 with a standard deviation of 3.2). Part of the effect could be trivial: more authors can increase self-citation counts (an arguably at least partly unethical Figure 1: Mean number of citations over weeks for different arXiv categories. practice [26]). On the other hand, more fundamental research may require a larger author list and industry may also produce papers with a higher number of authors. What are the most important key words of the top-40 papers?We plot a wordcloud of the top-40 papers (see Figure 3). To do so, we use KeyBERT [14] to identify the 5 most important tri-grams from the title and abstract of each paper. Then we filter out a manually selected list of unimportant words and lemmatize each word. Finally, we use the python library _wordcloud6_ for plotting. Here the focus of current research into ever larger models becomes apparent again, with phrases such as _trillion token_, _175b_, _large scale_ and _large language model_. The keywords _publicly available_ also show a focus on non-proprietary data and models. Footnote 6: [https://github.com/amueller/word_cloud](https://github.com/amueller/word_cloud) How popular are LLMs over time in our arXiv dataset?While we have seen that LLMs are the dominating theme in the top-40 paper list, we wonder how the popularity of LLMs and ChatGPT have developed over time in our complete arXiv dataset arxiv-0623. To this end, we query the keywords "LLMs" and "ChatGPT" in our dataset over time and flag a paper as relevant if it contains the keywords in its title or abstract.7 Footnote 7: We lowercase abstracts and titles, and we look for the keywords “llm(s)” and “large language model(s)” for LLM; for ChatGPT, we look for “chatgpt” and “chat-gpt”. Figure 4 shows the results. Both keywords were not very relevant in early 2023, less than 2% of papers contained them in January. The ChatGPT curve increases until late March (6% of all papers). Starting from mid-April, LLMs become the more popular keyword. ChatGPT \begin{table} \begin{tabular}{c|r} \hline \hline **Category** & **Occurrences** \\ \hline cs.LG & 8127 \\ cs.CL & 4966 \\ cs.CV & 1670 \\ stat.ML & 859 \\ cs.AI & 455 \\ eess.IV & 414 \\ cs.CR & 304 \\ cs.IR & 288 \\ cs.RO & 285 \\ cs.SD & 265 \\ math.OC & 214 \\ eess.AS & 212 \\ eess.SP & 201 \\ cs.HC & 148 \\ cs.NE & 143 \\ eess.SY & 134 \\ cs.SE & 127 \\ quant-ph & 125 \\ cs.CY & 111 \\ \hline \hline \end{tabular} \end{table} Table 5: All primary categories given in our arXiv dataset whose occurrence exceeds 100. ArXiv categories are described here: [https://arxiv.org/category_taxonomy](https://arxiv.org/category_taxonomy). as a keyword declines since then, while LLMs spike in the week of 05/21 (which marks 2023's camera-ready submission deadline for the popular NLP conference ACL [https://www.aclweb.org/portal/content/acl-2023-call-papers](https://www.aclweb.org/portal/content/acl-2023-call-papers)) with almost 12% of papers containing it; we assume that many accepted ACL papers (with LLMs as a topic) were posted to arXiv right after the camera-ready deadline. Since then, LLMs seem to be declining as a keyword, also -- even though this could just be an artefact of the conference deadline. ## 5 Conclusion We have examined arXiv papers related to the categories cs.CL and cs.LG over the first half of 2023. 
First, we sorted papers according to their normalized citation counts, finding that LLM related papers clearly dominate. Within LLMs, the most popular current issues center around: efficiency, LLM based evaluation, ethical aspects, embodied agents and problem solving with LLMs (only slightly less prominent are multimodal approaches encompassing language and other modalities such as images, with at least 8 papers within the top-40). We have also looked at, among others: (i) what characteristics top papers have relative to papers outside the top-40 list in terms of number of authors and vocabulary, (ii) the distributions of citations in our dataset, and (iii) the popularity of ChatGPT, which 'caused' the current hype surrounding LLMs in late 2022, and LLMs over time. We hope that our investigation is beneficial not only to newcomers and outsiders to the field of NLP and ML (of which there are seemingly very many nowadays, given how popular the fields have Figure 2: Pie chart of distribution of main categories in our dataset. become [34]), providing quick links to useful starting literature, but also to established researchers and their doctoral students. In the future, we want to regularly update the current report to see how tastes shift over time, examine our arXiv datasets arxiv-0623 and arxiv-0623-top40 in much more depth, and include further arXiv categories related to AI fields (e.g., cs.CV, stat.ML, cs.AI) into our datasets, among others. ## Limitations Limitations of our approach include the following. First of all, science tools like SemanticScholar or GoogleScholar make quite a few mistakes in correctly attributing citations. While we did not study this in depth, we note for example that LLaMA (our top paper) has 874 citations according to SemanticScholar (July 29, 2023) but only 710 citations according to GoogleScholar, a relative difference of \(\frac{164}{874}=18.7\%\). The paper with fewest citations in our top 20 list [30] has 14 citations (July 29, 2023) according to SemanticScholar but only 9 citations according to GoogleScholar, a relative difference of \(\frac{5}{14}=35.7\%\). While we do think that our rankings are relatively reliable, such deviations may naturally bias our selection of papers, assumedly with higher uncertainty for low citation papers. Secondly, focusing particularly on highly cited papers may induce a bias towards these papers similar to that of a self-fulfilling prophecy or preferential attachment. Thirdly, our focus Figure 3: Wordcloud based on the top-40 papers. on weekly citation averages may have unexpected effects: for example, a younger paper with more citations could be ranked below an older paper with fewer citations, for example, if that older paper was published in a week with fewer average citations (e.g., in the early weeks of January where research, and other human activity, is typically less productive, at least in relevant parts of the world, due to preceding holiday activities). Finally, some authors and research groups, potentially more traditional ones, may refrain from submitting their papers to arXiv, despite its otherwise high popularity particularly in the computer science community (see exponential submission growth rates of arXiv submission numbers in the last decades [https://info.arxiv.org/help/stats/2021_by_area/index.html](https://info.arxiv.org/help/stats/2021_by_area/index.html)). Papers from such authors or groups will not be part of our dataset and analysis. 
Our limitations must be kept in mind when interpreting our results. ## Acknowledgements The NLLG group gratefully acknowledges support from the Federal Ministry of Education and Research (BMBF) via the interdisciplinary AI research grant "Metrics4NLG". Steffen Eger is further supported by the DFG Heisenberg grant EG 375/5-1. We thank Andreas "Max Power" Ruckle for thoughtful discussions. Figure 4: Popularity of ChatGPT and LLMs (in percentage of papers having the words in their abstracts or titles) over time in our dataset.
2309.15475
Effects of coronal mass ejection orientation on its propagation in the heliosphere
Context. In the scope of space weather forecasting, it is crucial to be able to more reliably predict the arrival time, speed, and magnetic field configuration of coronal mass ejections (CMEs). From the time a CME is launched, the dominant factor influencing all of the above is the interaction of the interplanetary CME (ICME) with the ambient plasma and interplanetary magnetic field. Aims. Due to a generally anisotropic heliosphere, differently oriented ICMEs may interact differently with the ambient plasma and interplanetary magnetic field, even when the initial eruption conditions are similar. For this, we examined the possible link between the orientation of an ICME and its propagation in the heliosphere (up to 1 AU). Methods. We investigated 31 CME-ICME associations in the period from 1997 to 2018. The CME orientation in the near-Sun environment was determined using an ellipse-fitting technique applied to single-spacecraft data from SOHO/LASCO C2 and C3 coronagraphs. In the near-Earth environment, we obtained the orientation of the corresponding ICME using in situ plasma and magnetic field data. The shock orientation and nonradial flows in the sheath region for differently oriented ICMEs were investigated. In addition, we calculated the ICME transit time to Earth and drag parameter to probe the overall drag force for differently oriented ICMEs. The drag parameter was calculated using the reverse modeling procedure with the drag-based model. Results. We found a significant difference in nonradial flows for differently oriented ICMEs, whereas a significant difference in drag for differently oriented ICMEs was not found.
K. Martinic, M. Dumbovic, J. Calogovic, B. Vrsnak, N. Al-Haddad, M. Temmer
2023-09-27T08:17:18Z
http://arxiv.org/abs/2309.15475v1
# Effects of coronal mass ejection orientation on its propagation in the heliosphere ###### Abstract Context:In the scope of space weather forecasting, it is crucial to be able to more reliably predict the arrival time, speed, and magnetic field configuration of coronal mass ejections (CMEs). From the time a CME is launched, the dominant factor influencing all of the above is the interaction of the interplanetary CME (ICME) with the ambient plasma and interplanetary magnetic field. Aims:Due to a generally anisotropic heliosphere, differently oriented ICMEs may interact differently with the ambient plasma and interplanetary magnetic field, even when the initial eruption conditions are similar. For this, we examined the possible link between the orientation of an ICME and its propagation in the heliosphere (up to 1 AU). Methods:We investigated 31 CME-ICME associations in the period from 1997 to 2018. The CME orientation in the near-Sun environment was determined using an ellipse-fitting technique applied to single-spacecraft data from SOHO/LASCO C2 and C3 coronagraphs. In the near-Earth environment, we obtained the orientation of the corresponding ICME using in situ plasma and magnetic field data. The shock orientation and nonradial flows in the sheath region for differently oriented ICMEs were investigated. In addition, we calculated the ICME transit time to Earth and drag parameter to probe the overall drag force for differently oriented ICMEs. The drag parameter was calculated using the reverse modeling procedure with the drag-based model. Results:We found a significant difference in nonradial flows for differently oriented ICMEs, whereas a significant difference in drag for differently oriented ICMEs was not found. Conclusions: ## 1 Introduction A coronal mass ejection (CME) is a large-scale ejection of plasma and magnetic field from the solar corona into the interplanetary medium. When it reaches Earth, it can cause large disturbances in the near-Earth environment (i.e., it can trigger geomagnetic storms). It is relatively widely accepted that CMEs consist of a so-called flux rope (FR) structure (Chen, 1996; Bothmer & Schwenn, 1998; Moore et al., 2001) that may drive sheaths and shocks. An FR, in its simplest form, is a cylindrical structure in which a poloidal magnetic field component rotates about an axial magnetic field component that follows the central axis of the cylinder (Lundquist, 1950). Coronal mass ejections have been observed remotely with white-light coronagraphs. A CME FR reconstruction can be performed using stereoscopic coronagraph images. Therismien et al. (2006) developed a 3D model for CME FR reconstruction, referred to as the graduated cylindrical shell (GCS) model, in which an FR is represented as a "hollow croissant" consisting of two conical legs and a curved front. One of the six main parameters to fully describe the FR in the GCS reconstruction is tilt. The tilt of an FR is defined as the angle between the solar equator and the central axis of the FR. It is measured from solar west to solar north (positive values) and from solar west to solar south (negative values). Defined in this way, the tilt essentially gives the inclination of the CME with respect to the solar equator. Another way to determine the inclination of a CME is based on a 2D CME reconstruction, first proposed by Chen et al. (1997), where the observed CME front is represented with an ellipse. 
In this model, changing the position of the ellipse, the length of the axes, and the inclination of the major axis of the ellipse can account for the angular width and inclination of the CME (Krall & St. Cyr, 2006; Byrne et al., 2009; Martinic et al., 2022). Martinic et al. (2022) showed that GCS and ellipse fitting give comparable results for the inclination of CMEs when using remote data from coronagraphs aboard the SOHO and STEREO spacecraft for 22 Earth-directed events. Commonly, there is a distinction between the CMEs observed remotely in the corona and the interplanetary CMEs, or ICMEs, measured in situ by spacecraft. Recently, however, in situ measurements of CMEs in the upper corona and innermost heliosphere taken with the Parker Solar Probe and Solar Orbiter have caused this traditional distinction between CMEs and ICMEs to become less clear. In this study, we use the term "ICME" in the context of in situ measurements and interplanetary interaction with the ambient; for the rest, the "CME" term is used. Typically, the three-part structure (the shock, the sheath, and the magnetic obstacle) can be well-measured as the spacecraft passes an ICME. First, a fast-forward shock front is usually detected, characterized by an abrupt increase in magnetic field, solar wind speed, and temperature. After the shock front, a so-called ICME sheath region is measured. This is a special case of plasma sheaths where both expansion and propagation properties are observed (Siscoe & Odstrcil, 2008). The ICME sheaths are turbulent and compressed, as evidenced by elevated values and strong fluctuations of the magnetic field, density, velocity, and plasma beta parameter (Kilpua et al., 2017). After the sheath is the driver, the FR part of the ICME, that is, the magnetic obstacle (MO). A subset of well-defined MOs is called a magnetic cloud (MC), which is characterized by a smoothly rotating magnetic field, decreased plasma beta parameter, and decreased temperature (Burlaga, 1991). As a first approximation, and based on their chirality and orientation, ICMEs can be classified into eight basic types, as described in Bothmer & Schwenn (1998), Mulligan et al. (1998), and recently by Palmerio et al. (2018). Four of these eight types are low-inclined ICMEs, and the remaining four are high-inclined ICMEs. Three forces are active during different CME propagation phases. In the early acceleration phase, the Lorentz and gravitational forces compete with each other. Later, the magnetohydrodynamic (MHD) drag force from the solar wind acts on the CME. Observations have shown that CMEs faster than the solar wind slow down, while CMEs slower than the solar wind accelerate (Sheeley et al., 1999; Gopalswamy et al., 2000; Vrsnak et al., 2004; Manoharan, 2006). Drag in interplanetary space (MHD drag) is not primarily caused by viscosity and particle collisions but is rather related to the interaction of the ICME with the surrounding magnetic field, such as MHD waves (Cargill et al., 1996) and magnetic field draping (Gosling & McComas, 1987), as described in Martinic et al. (2022). Interplanetary CMEs interact with the surrounding plasma and magnetic field as they propagate in the heliosphere. For fast ICMEs embedded in the slow ambient plasma, accelerations and deflections of the ambient plasma occur in front of the ICME FR part. Due to the high electrical conductivity, the ambient solar wind cannot easily penetrate the magnetized ICME structure, but it is accelerated and deflected around the obstacle.
This occurs in an ICME sheath region and is particularly pronounced near the ICME FR part. A direct consequence of this plasma motion is the draping of the IMF around the ICME FR. Apart from the relative velocity between the ICME and the surrounding solar wind, the draping pattern depends strongly on the size and shape of the ICME and on the configuration of the surrounding magnetic field (Gosling & McComas, 1987; McComas et al., 1988; McComas et al., 1989). Consequently, for differently oriented ICMEs, even if embedded in similar configurations of the ambient magnetic field and solar wind, one might expect a different plasma flow and consequently a different draping pattern, as theorized by Martinic et al. (2022). Figure 1 shows a low-inclination ICME in panel (a) and a high-inclination ICME embedded in the surrounding magnetic field in panel (b). Only the meridional plane, the xz-plane of the Geocentric Solar Ecliptic (GSE) coordinate system, is shown in Figure 1, and one should consider the Parker spiral (i.e., the Parker spiral configuration of the magnetic field in the xy-plane). In the case of ICMEs with high inclination, more draping occurs due to the interaction with the broader extent of the ICME front. The blue arrows in Figure 1 schematically represent the plasma flows in front of the obstacle. Due to the larger pressure gradient associated with the pileup of the magnetized solar wind, the ambient plasma is expected to pass the obstacle more easily in the direction in which the extent of the obstacle is smaller. Thus, in an ICME with low inclination, the plasma flow in the xz-plane of the GSE coordinate system is more pronounced than in an ICME with high inclination. In contrast, for an ICME with high inclination, one would expect more pronounced plasma flows in the yz-plane (into and out of the plane shown in Figure 1). The ambient field that is draped eventually slides past the obstacle. This process should be more efficient for an ICME with a low inclination since the expansion in the xz-plane is smaller, and the ICME can push the draped field around the obstacle more easily than an ICME with high inclination. Vandas et al. (1995) and Vandas et al. (1996) studied the propagation of two MCs, one low inclined and one high inclined, represented by Lundquist's cylindrical force-free solution (Lundquist, 1950) in the inner heliosphere using the 2.5D MHD model. Details of this model can be found in Wu et al. (1979) (2D) and Wu et al. (1983) (2.5D). They found that the propagation of these MCs does not depend on the inclination of their axes with respect to the ecliptic plane (one lies in the ecliptic, and the other has an axis perpendicular to it). The MHD model used in these studies was confined to the solar equatorial plane and therefore does not provide a complete 3D MHD representation. In order to provide a better forecast of ICME arrivals, the influence of field line draping and associated nonradial flows (NRFs) on the ICME propagation from the observational perspective needs to be investigated on a statistically relevant sample of events. To our knowledge, this influence was first studied by observation in Martinic et al. (2022). In this present study, we extend the Figure 1: Idealized IMF in the meridional plane, xz-plane of GSE coordinate system, and its interaction with embedded ICME with low inclination (upper panel) and high inclination (bottom panel). 
The NRF is shown with blue arrows where its width and length suggest the pronouncement of the plasma flows in front of the embedded ICME. The figure is adapted from Martinic et al. (2022). data sample to provide better statistical coverage and investigate the effects of NRFs and field line draping on the propagation behavior of the CME. In Section 2, we describe the method by expanding on the study by Martinic et al. (2022). We highlight several dynamical features used to study the interaction between differently oriented ICMEs and the environment. In terms of the plasma flows in front of the ICME FR, we studied NRFs and shock orientation; and in terms of the overall drag, we studied drag parameter and ICME transit time. The main findings are presented in Section 3, and our conclusions are in Section 4. ## 2 Data and method We searched for associated CME-ICME pairs from 1996 to 2020. The lists we used to create our sample can be found in the following studies: Nitta & Mulligan Skov 2017 (abbr. NM), Palmerio et al. 2018 (abbr. P), Temmer et al. 2021 (abbr. T), and Xie et al. 2021 (abbr. X). In total, 113 CME-ICME pairs were found, but only 31 were used in our analysis. Most events were excluded for two reasons: insufficiently developed sheath region (32 excluded) and unclear MO boundary determination (30 excluded). The former relates to missing signatures of a clear sheath region ahead of the MO (for a discussion of CMEs with and without sheath regions, see Salman et al. 2020). As highlighted in Kilpua et al. (2017), the sheath thickness depends on the velocity and physical properties of the driving MO and the ambient solar wind, but sheath thickness has also been shown to increase from the nose toward the flanks. Unclear MO boundary determination is related to the subjectivity in determining the boundaries of the MO. There are some MO examples where there are clearly multiple rotations of the same or different magnetic field components, and in such cases, it is not straightforward to establish the MO boundaries and associate the example with a simple FR categorization of eight types. Other reasons why some of the events were excluded are as follows: faint CME front and multiple eruptions within the LASCO field of view (11 excluded); possible ICME interactions with other ICMEs or high-speed streams (4 excluded); no clear magnetic field rotation, that is ejecta-ICME, (1 excluded); no in situ data (1 excluded); possible incorrect CME-ICME association (1 excluded); and inconsistent dominant inclination derived from remote observations and in situ measurements (2 excluded). Ultimately, 31 CME-ICME pairs in the period from 1997 to 2018 with clear MO signatures were left. ### Dominant inclination determination We derived the dominant inclination for the CME-ICME pairs from both the remote and in situ data. For the remote data, we used SOHO/LASCO (Brueckner et al. 1995) coronagraph images and performed an ellipse fit. This method assumes that the outer edge of the (partial) halo CME can be represented by an ellipse whose major axis inclination indicates the dominant inclination of the CME. An example of the application of the ellipse-fitting technique to event number eight is shown in Figure 3. The top row shows running difference images in the LASCO-C2 and LASCO-C3 field of view (FOV). In the bottom row, the ellipse fitting is overlaid with a red line. In situ data was obtained from the WIND and ACE space probes, available through the OMNI database (King & Papitashvili 2005). 
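As an illustration of how a tilt value can be read off such an ellipse fit, the following sketch fits a general conic to synthetic leading-edge points and extracts the orientation of the major axis. This is a generic least-squares fit in assumed plane-of-sky coordinates, not the authors' actual fitting routine, and all coordinates are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for points outlined on the CME leading edge:
# an ellipse with semi-axes (5, 2), tilted 30 deg, sampled on a partial arc with noise.
true_tilt = np.radians(30.0)
t = np.linspace(0.3, 2.4, 25)                       # partial arc only
xe, ye = 5.0 * np.cos(t), 2.0 * np.sin(t)
x = xe * np.cos(true_tilt) - ye * np.sin(true_tilt) + rng.normal(0, 0.05, t.size)
y = xe * np.sin(true_tilt) + ye * np.cos(true_tilt) + rng.normal(0, 0.05, t.size)

# Least-squares fit of a general conic A x^2 + B xy + C y^2 + D x + E y + F = 0:
# take the right singular vector belonging to the smallest singular value.
D_mat = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
A, B, C, D, E, F = np.linalg.svd(D_mat)[2][-1]

# Axis directions come from the quadratic part; for an ellipse v^T M v = const > 0,
# the eigenvector with the smaller (positive) eigenvalue points along the major axis.
M = np.array([[A, B / 2.0], [B / 2.0, C]])
eigval, eigvec = np.linalg.eigh(M)
if np.trace(M) < 0:                                 # normalize the overall sign of the conic
    eigval = -eigval
major = eigvec[:, np.argmin(eigval)]
tilt = np.degrees(np.arctan2(major[1], major[0]))
tilt = (tilt + 90.0) % 180.0 - 90.0                 # fold into (-90, 90], as for the tilt convention
print(f"recovered tilt: {tilt:.1f} deg (true value: 30 deg)")
```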
The dominant inclination from the in situ data was derived from the rotation of the magnetic field components in the MO part of the ICME using the GSE system. If the rotation of the \(B_{z}\) component was observed to change sign but the \(B_{y}\) component retained its sign, we considered the event to be a dominantly low-inclined event (see Figure 2). On the other hand, if a sign change was observed in the \(B_{y}\) component but the \(B_{z}\) component remained the same throughout the MO, the event was considered to be dominantly high inclined. We divided all events into eight basic categories. Four of these eight categories are dominantly high inclined (ESW, ENW, WSE, and WNE), and the other four are dominantly low inclined (SWN, NWS, SEN, and NES). Here, E stands for east, W for west, N for north, and S for south. The ESW type has an axis directed toward the south and a helical field rotating from east to west. The ENW type has the same helical field rotation, but the axial field is directed toward the north. The same applies to the others. The results of the classification are shown in Table 2. Al-Haddad et al. (2013) found that FR reconstruction shows different inclinations for different FR reconstruction techniques, and this varies greatly with the MO boundary set. This is the reason why we only distinguish between dominantly high- and dominantly low-inclined events, rather than deriving the exact inclination for each event (see Martinic et al. 2022). In summary, we divided all events into two groups: events with predominantly low inclination and those with predominantly high inclination. Events with predominantly low inclination are those with an inclination of less than 40\({}^{\circ}\), as determined from the ellipse fit, and with a rotation in the \(B_{z}\) magnetic field component (SWN, NWS, SEN, and NES), as observed in situ. Events with predominantly high inclination are those with an inclination greater than 45\({}^{\circ}\), as determined from the ellipse fit, and with rotation in the \(B_{y}\) magnetic field component (ESW, ENW, WSE, and WNE), as seen in situ. We considered the events with an inclination between 40\({}^{\circ}\) and 45\({}^{\circ}\) to be intermediate inclination events and did not include them in the analysis. For two CME-ICME pairs that were excluded, we found inconsistencies in the dominant inclination inferred from the in situ and remote data. Xie et al. (2021) showed that 25% of the events studied had a rotation of more than 40\({}^{\circ}\) from the near-Sun to L1. They also showed that 56% of these events exhibited rotation in the STEREO/SECCHI-COR2 FOV (i.e., in the mid-corona). Isavnin et al. (2013) showed that about one-third of the events studied showed a change in inclination from predominantly low to high, or vice versa. In our sample of 33 events, we found only two events where this was true. This could be due to the fact that we excluded over 30 CME-ICME pairs because of ambiguous rotation of the magnetic field components within the MO part of the ICME. Of the remaining 31 events, 19 are dominantly low inclined, while 12 are dominantly high inclined. These 31 CMEs are listed in Table 1, and their interplanetary counterparts, ICMEs, are listed in Table 2. The first column of Table 1 shows the event number accompanied by an abbreviation indicating from which study the CME-ICME association was taken.
The second column shows the first C2 appearance time as reported in the SOHO/LASCO CME catalog.1 The third and fourth columns show the time at which the ellipse fit reconstruction was performed in the LASCO-C2 and LASCO-C3 FOV, respectively. This is followed by the columns showing the obtained tilt, in LASCO-C2 FOV and LASCO-C3 FOV, respectively. The last column shows whether the event is dominantly high or dominantly low inclined, as obtained from the ellipse fit in the LASCO-C2 and LASCO-C3 FOV. The letter "L" indicates that the event is dominantly low inclined and that the average of the absolute tilt values obtained from the ellipse fit reconstruction in LASCO-C2 and LASCO-C3 FOV is less than 40\({}^{\circ}\). The letter "H" indicates that the event is dominantly high inclined. Analogously, such events are those whose average absolute tilt values are higher than 45\({}^{\circ}\). In Table 2, one can see that the inclination derived from LASCO-C2 may differ from the inclination derived from the LASCO-C3 coronagraphic images. The CME evolves through the entire FOV of C2 and C3, and by marking slightly different leading edges (green crosses in Figure 3) at different times, we can infer slightly different inclinations for the same event. We note that this is not necessarily related to strong rotations and deflections in the LASCO-C2 or LASCO-C3 FOV (Yurchyshyn et al. 2009; Vourlidas et al. 2011; Kay et al. 2017) but to simple ambiguities inherent in the measurements. This is also visible in Figure 3, where in LASCO-C3 FOV the ellipse is slightly less inclined than in the LASCO-C2 FOV. This is one of the reasons why we focus only on the dominant inclination. ### Sheath region nonradial flows and shock orientation The boundaries of the MO and sheath region were determined manually for each event. We note that the selection of ICME boundaries involves a degree of uncertainty. In the first instance, the boundaries of the MO were chosen to cover the entire magnetic field rotation. When this was not possible due to the rotation of several magnetic field components, the events were excluded. As mentioned earlier, there were 30 events where this was the case. From left to right, the columns in Table 2 show the event number, the date of the MO onset, shock-clear sheath occurrence time \(SH_{\rm start}\), clear sheath end time \(SH_{\rm end}\), the MO onset time, the MO end time, the derived FR type, the NRF ratio, the shock orientation \(\theta_{B}\), the observed transit time TT, and \(\gamma\) parameter. The sheath region was divided into two parts in some cases. The first part is the region where only clear sheath signatures can be seen (i.e., a strongly fluctuating magnetic field and plasma with increased density, temperature, and plasma beta). The second part of the envelope has fewer Figure 3: Coronal mass ejection that occurred on 15 March 2002. The upper panels show the running difference images in LASCO-C2 (left) and LASCO-C3 (right). The bottom panels show the corresponding ellipse fitting. The ellipse is indicated with a red line, whereas green crosses mark the points outlined on the CME front used to obtain the fit. Figure 2: Interplanetary CME measured in situ on 10 January 1997 (left panels) and 3 November 2000 (right panels). 
From top to bottom, the following parameters are shown: Magnetic field magnitude in black and magnetic field fluctuations in gray (right scale); GSE magnetic field components (red, \(B_{x}\); blue, \(B_{y}\); green, \(B_{z}\)); proton density in black, temperature in red, and expected temperature in blue; solar wind speed in black and plasma beta parameter in gray; GSE velocity components (blue, \(B_{y}\); green, \(B_{z}\)). From left to right, the vertical magenta lines mark the shock arrival, the end of the clear sheath, and the MO end time. In the right panels, the end of the clear sheath part does not coincide with the MO onset time, and there is an additional vertical magenta line present. high plasma parameters and/or a not as strongly fluctuating magnetic field. This part shows no clear sheath and no clear MO properties. We identified this second part in 14 out of 31 events, as shown in Table 2 (see column \(SH_{\rm end}\)). In these 14 events, the end of the clear sheath region does not correspond to the beginning of the MO part. This part between the clear sheath and the clear MO was studied by Kilpua et al. (2013), who recognized it as the disturbed front part of the FR known as the MO front region. More recently, Temmer & Bothmer (2022) recognized this as compressed ambient solar wind and noted it as a leading edge structure. An example of a sheath with clear sheath properties is shown in the left panels of Figure 2, while an example of a more complex sheath where the clear sheath is observed after the shock but then toward the MO part of the ICME one can also see a region with both sheath and MO properties is shown in the right panels of Figure 2. There, one can observe a region that shows a stronger magnetic field with fewer fluctuations than in the clear sheath part. The density and plasma beta parameter show a further increase accompanied by a decrease in the temperature. Interplanetary CMEs are usually associated with NRFs in (1) the sheath region and (2) the expanding magnetic ejecta part. The first association is due to the plasma motion of the ambient solar wind escaping around the ICME ejecta part, and the second is related to the expansion of the magnetic ejecta in the nonradial direction, as described in Al-Haddad et al. (2022). The NRF in the sheath region was previously studied by Gosling & McComas (1987). They discovered a westward flow related to the magnetic stress of the Parker spiral acting on ICMEs. Later, Owens & Cargill (2004) showed that the NRF in the sheath region can be used as an indicator of the local axis orientation of ICMEs and the point at which spacecraft and ICMEs meet. Additionally, Liu et al. (2008) investigated whether NRFs in the sheath could relate to the curvature of the MO. Similarly, Martinic et al. (2022) showed how differently oriented ICMEs may have different NRFs. We calculated the NRF ratio between the plasma flow in the \(y\) and \(z\) directions of the GSE coordinate system. The NRF flow is defined as the average of the absolute flow of the plasma in the \(y\) or \(z\) direction in GSE. The NRF ratio for each event is given in Table 2, column 8. We emphasize that the NRF ratio was determined from the part of the sheath where we observed only unique sheath features. For the 14 events mentioned above with complex sheath structures, this means that only the first part of the sheath was considered. In addition to the NRF in the sheath region, the shock orientation \begin{table} \begin{tabular}{l c c c c c c} Nr. 
& First C2 Appearance & Ellipse Fit in C2 & Ellipse Fit in C3 & Tilt C2 [\({}^{\circ}\)] & Tilt C3 [\({}^{\circ}\)] & Inclination \\ \hline \hline 1\({}^{X}\) & 1997-01-06 15:10 & no data & 1997-01-07 01:59 & & 3 & L \\ [MISSING_PAGE_POST] ,X}\) & 2013-01-13 12:00 & 2013-01-13 15:54 & faint LE & -6 & 0 & L \\ 24\({}^{P,T,X}\) & 2013-04-11 07:24 & 2013-04-11 08:24 & 2013-04-11 10:30 & 84 & 90 & H \\ 25\({}^{NM,X}\) & 2013-06-23 22:36 & 2013-06-24 02:48 & faint LE & 59 & & H \\ 26\({}^{P,T,X}\) & 2013-07-09 15:12 & 2013-07-09 16:24 & faint LE & 12 & & L \\ 27\({}^{P,T,X}\) & 2014-08-15 17:48 & 2014-08-15 20:24 & faint LE & -52 & & H \\ 28\({}^{X}\) & 2015-11-04 14:48 & 2015-11-04 15:24 & 2015-11-04 17:30 & 23 & 37 & L \\ 29\({}^{X}\) & 2016-10-09 02:24 & 2016-10-09 06:24 & 2016-10-09 10:18 & -15 & -35 & L \\ 30\({}^{X}\) & 2017-05-23 05:00 & 2017-05-23 08:24 & 2017-05-23 13:29 & 15 & -3 & L \\ 31\({}^{X}\) & 2018-03-06 01:25 & 2018-03-06 03:48 & faint LE & 20 & & L \\ \hline \end{tabular} \end{table} Table 1: Remote features of the observed CMEs. The first column is the event number with the indication of where the CME-ICME association was taken from and is followed by the CME’s first C2 appearance time. The third column corresponds to the time the ellipse fit was performed in LASCO-C2 FOV, and the fourth column is the time the ellipse fit was performed in LASCO-C3 FOV. The fifth and sixth columns show the tilt results derived from LASCO-C2 and LASCO-C3, respectively. The last column shows the dominant inclination obtained from Tilt C2 and Tilt C3 values (see text for details); “L” stands for low inclination, “H” stands for high inclination, and “LE” stands for the leading edge. \(\theta_{B}\), that is, the angle between the shock normal vector \(\hat{n}\) and the upstream magnetic field \(B_{up}\): \[\theta_{B}=\frac{180^{\circ}}{\pi}\arccos\Big{(}\frac{|B_{up}\cdot\hat{n}|}{\|B _{up}\|\,\|\hat{n}\|}\Big{)}. \tag{1}\] The shock normal vector \(\hat{n}\) was calculated by the mixed-mode method Abraham-Shrauner & Yun (1976), and in the cases where the data gap of velocity components was present, magnetic coplanarity from Colburn & Sonett (1966) was used. (For more detail on the \(\hat{n}\) calculation, we refer the reader to the database of interplanetary shocks from which the \(\theta_{B}\) were obtained.2). The shock orientation \(\theta_{B}\) values are given in Table 2. One can notice that not all events from Table 2.2 have a corresponding \(\theta_{B}\). These events (3, 12, 14, 23, and 31) do not meet the shock criterion given in the database of interplanetary shock documentation. However, they have a sheath developed enough to compute NRFs, as indicated above. Footnote 2: [http://ipshocks.fi/database](http://ipshocks.fi/database) ### Transit time The transit time (TT) was calculated as the time difference between the time of onset of the ICME MO in the in situ data and the CME start time at 20 R\({}_{s}\) (solar radii). We note that this transit time is not the same as the one typically given in databases that corresponds to the arrival time of the shock. The CME start time at a starting radial distance of 20 R\({}_{s}\) was taken from the second order fit of the altitude-time measurements provided by SOHO/LASCO CME catalog.3 When measurements were only available for starting radial distances less than 20 R\({}_{s}\), an interpolation was performed using the acceleration corresponding to the same second order fit. 
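Returning to the sheath diagnostics of Sect. 2.2, a compact sketch of the shock orientation of Eq. (1), together with a coplanarity-based shock normal and the NRF ratio, is given below. This is a minimal illustration assuming GSE components; the field and velocity values and the helper names are hypothetical and not taken from the event list:

```python
import numpy as np

def shock_angle_deg(b_up, n_hat):
    """Theta_B from Eq. (1): angle between the upstream field and the shock normal."""
    b_up, n_hat = np.asarray(b_up, float), np.asarray(n_hat, float)
    cosang = abs(b_up @ n_hat) / (np.linalg.norm(b_up) * np.linalg.norm(n_hat))
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

def coplanarity_normal(b_up, b_down):
    """Magnetic coplanarity normal (Colburn & Sonett 1966):
    n ~ (B_down x B_up) x (B_down - B_up), returned as a unit vector."""
    b_up, b_down = np.asarray(b_up, float), np.asarray(b_down, float)
    n = np.cross(np.cross(b_down, b_up), b_down - b_up)
    return n / np.linalg.norm(n)

def nrf_ratio(vy, vz):
    """Nonradial-flow ratio as described in the text: mean |v_y| over mean |v_z|
    within the clear sheath interval."""
    return np.mean(np.abs(vy)) / np.mean(np.abs(vz))

# Hypothetical upstream/downstream magnetic fields in GSE (nT), illustrative numbers only.
b_up, b_down = [4.0, -3.0, 2.0], [8.0, -9.0, 5.0]
n_hat = coplanarity_normal(b_up, b_down)
print(f"theta_B = {shock_angle_deg(b_up, n_hat):.1f} deg")

# Hypothetical sheath velocity components (km/s).
vy = [-30.0, -25.0, -40.0, -35.0]
vz = [15.0, 20.0, 10.0, 18.0]
print(f"NRF ratio = {nrf_ratio(vy, vz):.2f}")
```

The mixed-mode normal used for most events in the shock database additionally involves the upstream and downstream velocity jumps; the magnetic coplanarity variant above is the fallback mentioned in the text for cases with velocity data gaps.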
Footnote 3: [https://cdaw.gsfc.nasa.gov/CME_list/](https://cdaw.gsfc.nasa.gov/CME_list/) ### Drag-based model and \(\gamma\) parameter determination Observational studies have derived that drag force dominates ICME propagation after a certain distance in the heliosphere. Results from these studies have formed the basis of numerous drag-based CME models (Vrsnak et al., 2013; Hess & Zhang, 2015; Mostl et al., 2015; Kay & Gopalswamy, 2018), which apply the simple analytical equation: \[F_{d}=\gamma(v-w)|v-w|, \tag{2}\] where \(v\) is the CME velocity, \(w\) is the solar wind velocity, and \(\gamma\) is the so-called drag parameter given by the following equation (Vrsnak et al., 2013): \begin{table} \begin{tabular}{r r r r r r r r r r} Nr. & In Situ Date & \(SH_{\rm start}\) & \(SH_{\rm end}\) & \(MO_{\rm start}\) & \(MO_{\rm end}\) & FR type & NRF ratio & \(\theta_{B}\)[\({}^{\circ}\)] & TT[h] & \(\gamma\)[10\({}^{-7}\) km\({}^{-1}\)] \\ \hline \hline 1 & 1997-01-10 & 10.04 & & 10.21 & 11.14 & SWN & 0.56 & 51 & 46.46 & 0.096 \\ 2 & 1997-10-10 & 283.68 & 283.92 & 284.15 & 285 & SWN & 0.88 & 89 & 98.33 & 8.901 \\ 3 & 1997-11-07 & 310.95 & 311.26 & 311.68 & 312.57 & WNE & 1.85 & no data & 78.4 & 0.431 \\ 4 & 1998-01-07 & 6.58 & 6.98 & 7.11 & 8.4 & ENW & 1.28 & 59 & 90.31 & 0.418 \\ 5 & 2000-08-12 & 224.8 & & 225.25 & 226.25 & SEN & 1.02 & 64 & 59.92 & 0.125 \\ 6 & 2000-11-06 & 311.4 & 311.55 & 311.95 & 312.65 & SEN & 1.06 & 46 & 68.3 & 1.141 \\ 7 & 2001-04-28 & 118.2 & 118.48 & 119.08 & 119.6 & SEN & 1 & 48 & 58.34 & 0.460 \\ 8 & 2002-03-19 & 77.55 & & 78.24 & 79.52 & WNE & 0.96 & 39 & 75.59 & 0.355 \\ 9 & 2002-04-17 & 107.47 & 107.7 & 108.02 & 109.15 & SWN & 0.92 & 66 & 64.23 & 0.137 \\ 10 & 2003-08-18 & 229.58 & & 230.12 & 231.25 & ESW & 1.13 & 62 & 53.9 & 2.332 \\ 11 & 2005-05-15 & 135.12 & 135.26 & 135.4 & 136.1 & ENW & 2.39 & 62 & 38.58 & 0.180 \\ 12 & 2008-12-17 & 351.5 & & 352.2 & 352.8 & NWS & 1.22 & no data & 102.34 & 4.782 \\ 13 & 2010-04-05 & 95.35 & 95.48 & 95.53 & 96.57 & NWS & 0.43 & 54 & 45.31 & \\ [MISSING_PAGE_POST] 69 & 69.8 & SWN & 0.42 & no data & 54.89 & 0.164 \\ \hline \end{tabular} \end{table} Table 2: In-situ derived features of ICMEs, shock angle \(\theta\), and \(\gamma\) parameter obtained with the reverse modelling procedure. First column shows the event number. Next is the date of MO onset followed by sheath onset time (\(SH_{\rm start}\)); sheath end time (\(MO_{\rm start}\)); and MO end time (\(MO_{\rm end}\)), all given in day of the year (DOY). The following columns show the FR type, NRF ratio, shock orientation \(\theta_{B}\), observed transit time \[\gamma=C_{d}\frac{A\rho_{w}}{M+M_{V}}. \tag{3}\] Here, A is the cross-sectional area of the CME, \(\rho_{w}\) is the solar wind density, \(M\) is the CME mass, \(M_{V}\) is the mass corresponding to the volume of the fluid displaced by the movement of the body (the so-called virtual mass), and \(C_{d}\) is the dimensionless drag coefficient. We emphasize that \(C_{d}\) is usually taken as one and as a constant during the propagation of the ICME. However, Cargill (2004) has shown that the value of \(C_{d}\) depends on the relative density and velocity of the CME with respect to the density and velocity of the solar wind. Cargill also showed that the value of \(C_{d}\) increases from one for dense CMEs to as high as three for low-density CMEs and that \(C_{d}\) has a significant radial dependence for the latter. 
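Reading Eq. (2) as the drag acceleration acting on the CME (with a minus sign for a CME faster than the ambient wind), the transit time to 1 au can be estimated by direct integration. The sketch below is our own minimal forward model under simplifying assumptions (constant \(w\) and \(\gamma\), launch at 20 R\({}_{s}\)); it is not the DBEM implementation.

```python
R_SUN_KM = 6.957e5
AU_KM = 1.496e8

def dbm_transit_time(v0, w, gamma, r0=20 * R_SUN_KM, r_target=AU_KM, dt=60.0):
    """Integrate dv/dt = -gamma*(v-w)*|v-w| (Eq. 2, drag-only kinematics).

    v0, w in km/s, gamma in km^-1, r0 and r_target in km, dt in s.
    Returns (transit time in hours, arrival speed in km/s)."""
    r, v, t = r0, v0, 0.0
    while r < r_target:
        v += -gamma * (v - w) * abs(v - w) * dt
        r += v * dt
        t += dt
    return t / 3600.0, v

# Illustrative run: a 900 km/s CME into a 400 km/s wind with gamma = 0.2e-7 km^-1
tt, v_arr = dbm_transit_time(v0=900.0, w=400.0, gamma=0.2e-7)
print(f"transit time ~{tt:.1f} h, arrival speed ~{v_arr:.0f} km/s")
```

In practice the drag-based model has a closed-form solution (Vrsnak et al. 2013), so no numerical integration is required; the explicit loop above simply makes the roles of \(\gamma\) and \(w\) visible.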
The drag parameter \(\gamma\) is central to the drag force acting on a CME. Due to its dependence on the CME cross section, mass, virtual mass, and solar wind density, obtaining \(\gamma\) through direct measurements is currently unreliable (see e.g. Vrsnak et al. 2013; Dumbovic et al. 2021). To derive the most reliable \(\gamma\) value for our data sample, we used a reverse modeling method with the drag-based ensemble version v3 tool (DBEMv3 tool; Calogovic et al. 2021). In DBEMv3, input parameters (CME start time, CME source region longitude, CME half-width, solar wind speed, starting speed of CME, and \(\gamma\) parameter) with their uncertainties follow a normal distribution, with the observation input value set as the mean and three standard deviations as the uncertainty. The DBEMv3 tool creates 100,000 ensemble members from these input parameters and performs a single DBM run for each of them. For more detail on the creation of ensemble members using the DBEMv3 tool, the reader is referred to Calogovic et al. (2021), and for a comprehensive description of the basic DBM and later developed versions, such as this ensemble version, to Dumbovic et al. (2021). The reverse modeling method with DBEM has also been used by Paouris et al. (2021) to find the optimal \(\gamma\) parameters and solar wind speed for a different subset of CME-ICME pairs. For this particular study, the input parameters of CME start time, CME source region longitude, and CME half-width were set without uncertainties. These values are given in Table 3. The derivation of the CME start time is described in Sect. 2.3. The CME source region was determined from low coronal signatures: post-flare loops, coronal dimmings, sigmoids, flare ribbons, and filament eruptions. For this, we used the JHelioviewer (Muller et al. 2017) visualization tool. We analyzed 171, 211, 193, and 304 Å filtergrams from SDO/AIA (Lemen et al. 2012) and magnetogram data from SDO/HMI (Scherrer et al. 2012). When these data were not available, we used SOHO/EIT (Delaboudiniere et al. 1995) filtergrams and SOHO/MDI (Scherrer et al. 1995) magnetogram data. The CME half-width, \(\lambda\), was set to 89\({}^{\circ}\) because all events were (partial) halo events as seen in the LASCO-C2 and LASCO-C3 FOV. The solar wind speed \(w\) and the starting speed of CME \(v_{0}\) follow a normal distribution, with the mean value being an observed value given in Table 3. The solar wind speed was obtained from in situ plasma measurements provided by the OMNI database (King & Papitashvili 2005), and it was determined as the mean velocity of the solar wind over an undisturbed period of several hours prior to the arrival of the CME shock. The CME start speed was taken as a second order speed given in the SOHO/LASCO CME catalog.4 The uncertainty (i.e., 3\(\sigma\) value) for both the CME start speed and solar wind speed was set to 10% of the mean value. For the purpose of reverse modeling with DBEMv3, we set the allowed \(\gamma\) range to 0.01-10 \(\times 10^{-7}\) km\({}^{-1}\), with equal probability for all \(\gamma\) values in this range (i.e., the \(\gamma\) parameter followed a uniform distribution over this range). As part of the reverse modeling procedure, we searched for the optimal \(\gamma\) parameters for which the forecast transit time is within one hour of the observed transit time. The median values of these obtained \(\gamma\) parameters are listed in Table 2. 
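The reverse-modeling step can be imitated with a brute-force ensemble: draw the uncertain inputs from their assumed distributions, draw \(\gamma\) uniformly over the allowed range, keep the members whose forecast transit time falls within one hour of the observed one, and take the median \(\gamma\) of the accepted members. The sketch below (which repeats the simple drag-only integrator from the previous example so that it runs on its own) is our own illustration, not the DBEMv3 code, and the values in the call are made up.

```python
import numpy as np

R_SUN_KM, AU_KM = 6.957e5, 1.496e8

def dbm_transit_time(v0, w, gamma, r0=20 * R_SUN_KM, dt=300.0):
    """Drag-only kinematics as in the previous sketch (Eq. 2); transit time to 1 au in hours."""
    r, v, t = r0, v0, 0.0
    while r < AU_KM:
        v += -gamma * (v - w) * abs(v - w) * dt
        r += v * dt
        t += dt
    return t / 3600.0

def reverse_model_gamma(v0_obs, w_obs, tt_obs_h, n=2000, rel_sigma=0.10 / 3.0, seed=1):
    """Median gamma of ensemble members whose forecast transit time is within 1 h of the observed one."""
    rng = np.random.default_rng(seed)
    v0 = rng.normal(v0_obs, rel_sigma * v0_obs, n)   # 3-sigma uncertainty = 10% of the mean
    w = rng.normal(w_obs, rel_sigma * w_obs, n)
    gamma = rng.uniform(0.01e-7, 10e-7, n)           # allowed range, km^-1
    accepted = [g for v, ww, g in zip(v0, w, gamma)
                if abs(dbm_transit_time(v, ww, g) - tt_obs_h) < 1.0]
    return np.median(accepted) if accepted else np.nan

# Illustrative call (values are invented, not an event from Table 2):
print(reverse_model_gamma(v0_obs=900.0, w_obs=400.0, tt_obs_h=70.0))
```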
Footnote 4: [https://cdaw.gsfc.nasa.gov/CME_list/](https://cdaw.gsfc.nasa.gov/CME_list/) Events 1, 10, 26, 27, 29, and 31 in Table 3 are marked with an asterisk. For these events, the original DBEMv3 input was changed because there were no transit times matching the observed transit time within one hour (i.e., no \(\gamma\) parameters were found). We studied those events in more detail, and we found that for events 1, 10, 29, and 31, the radial takeoff distance needed to be changed. For events 26 and 27, the takeoff speed and speed uncertainty needed to be increased. The height at which the drag force begins to dominate is not universal and varies greatly from event to event (Vrsnak 2001; Sachdeva et al. 2015; Sachdeva et al. 2017). For events 1, 10, 29, and 31, we found that a starting radial distance of 20\(R_{s}\) is not suitable as a DBEM input because the CME is still accelerating at this distance, and its propagation is therefore not dominated by the drag force. To improve our input for these events, the starting distance was increased by the trial-and-error method until a suitable initial distance was found that provided a "perfect transit time" (similar to Sachdeva et al. 2015). For events 1, 10, and 31, this distance was found to be 70 \(R_{s}\), and we found it to be 50 \(R_{s}\) for event 29. For events 26 and 27, we found that the initial CME speed at 20 \(R_{s}\) may be underestimated. This speed underestimation might come from the use of the second order fit of the height-time measurements. The second order fit shows a very small deceleration in the LASCO FOV. A linear fit yielded slightly different velocity estimates that provided physical solutions to find an optimal \(\gamma\) with DBEM for event 26. The uncertainties of the CME launch speed were also increased to 20% in order to better compensate for the initial underestimation of velocity. For event 27, even after considering the linear speed and after increasing the uncertainties of the initial velocity, the optimal \(\gamma\) parameter was not found. It could be that the DBM does not capture the physics of this event well. The same is true for event 13. This CME was launched on 3 April 2010 and is a well-studied event (Rodari et al. 2018; Zhou et al. 2014; Rollett et al. 2012; Temmer et al. 2011; Liu et al. 2011). Temmer et al. (2011) reported quite complex CME dynamics in the LASCO FOV and later in the heliosphere. This CME initially strongly accelerated up to 1100 km s\({}^{-1}\) and then had an abrupt deceleration down to 800 km s\({}^{-1}\) (all below 20 \(R_{s}\)). Later, the CME again accelerated and decelerated in the heliosphere, possibly due to a high-speed stream crossing. Due to its complex dynamics, this event is not suitable for reverse modeling with the DBEM or DBM in general. We find that it is also important to emphasize that even more sophisticated 3D MHD models such as ENLIL were not able to correctly represent the propagation of this CME (Temmer et al. 2011). We note that some of the obtained \(\gamma\) values lay outside of an expected range, 0.2-2 10\({}^{-7}\) km\({}^{-1}\), as given by Vrsnak et al. (2013). This is most prominent for events 2, 12, 14, and 23 (see Table 2.2). We also emphasize that such high \(\gamma\) values might be unreal, but testing such an assumption is beyond the scope of this paper. This would require meticulous analysis of the pre-eruption state of the heliosphere as well as detailed eruption analysis (see Zic et al. 2015 and Temmer et al. 2012). 
We also highlight that from a theoretical point of view (see Equation 2), for cases when the CME launch speed is close to the solar wind speed, the corresponding optimal \(\gamma\) obtained by the reverse modeling with drag-based models can easily take on very large values that may not be physically plausible. However, we also note that the reverse modeling procedure gave results close to the expected range of values for the majority of events, (i.e., for 25 out of 31 events). direction, and therefore the NRF ratio is smaller for ICMEs with low inclination. In contrast, the extent of the ICME with high inclination is smaller in the \(\pm\gamma\) direction, so the plasma flows mainly in this direction. A sketch of the various NRFs in terms of the different inclinations of CMEs is shown in Martinic et al. (2022). The result of Welch's test for the \(\gamma\) parameter is that the null hypothesis should not be rejected (i.e., the \(\gamma\) parameter for high- and low-inclination events comes from populations with equal means). Welch's test is based on the normality assumption, which is hardly satisfied for \(\gamma\) values (see histogram in Figure 4, panel d). The Kolmogorov-Smirnov test and Mann-Whitney U-test, as nonparametric significance tests, were also performed. However, we note that both tests confirmed the results from Welch's test at the same confidence interval (95%), which is not the case for the \(\gamma\) parameter. \begin{table} \begin{tabular}{|r|c c c|c c c c c|} \hline & \multicolumn{4}{c|}{LOW INCLINATION} & \multicolumn{4}{c|}{HIGH INCLINATION} \\ \hline \hline & NRF ratio & \(\theta_{B}\)[\({}^{\circ}\)] & TT[h] & \(\gamma\)[\(\times 10^{-7}km^{-1}\)] & NRF ratio & \(\theta_{B}\)[\({}^{\circ}\)] & TT[h] & \(\gamma\)[\(\times 10^{-7}km^{-1}\)] \\ MEAN & 0.98 & 62.67 & 72.7 & 1.63 & 1.5 & 60.09 & 72.8 & 0.65 \\ MEDIAN & 1.00 & 64 & 68.3 & 0.22 & 1.24 & 62 & 76.99 & 0.43 \\ STD & 0.37 & 19.31 & 18.63 & 2.80 & 0.61 & 15.64 & 15.02 & 0.60 \\ PERC[5,95] & [0.42,1.76] & [37,87.6] & [46.35,101.22] & [0.08,7.93] & [0.78,2.44] & [36,79.5] & [47.00,90.44] & [0.14,1.72] \\ \hline \end{tabular} \end{table} Table 4: Statistical results. Mmean, median, standard deviation, and 5. and 95. percentiles for low- and high-inclination events (reported separately). Figure 4: Distributions for NRF ratio, transit time (TT), shock orientation (\(\theta_{B}\)), and drag parameter \(\gamma\) for high-inclination events (orange) and low-inclination events (blue). meaning that there is no significant difference between low- and high-inclination events regarding \(\gamma\) values. For shock orientation and transit time, the F-test confirmed similar variances for low- and high-inclination samples. Thus, instead of Welch's test, the student t-test was performed under the assumption that (1) the shock orientation/transit time for high- and low-inclination events are independent, (2) the shock orientation/transit time distributions for low- and high-inclination samples are normal, and (3) the shock orientation/transit time variances for low-inclination and high-inclination events are similar (according to the F-test). The t-test confirmed the null hypothesis at the 95% significance level, meaning that the samples of shock inclination and transit time for low- and high-inclination events come from populations with equal means. In other words, there is no statistically significant difference between low- and high-inclination groups of events. 
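The two-sample comparisons above (Welch's t-test, the Kolmogorov-Smirnov and Mann-Whitney U tests, and an F-type variance check) are standard; a minimal sketch with SciPy, using placeholder arrays rather than the per-event values in Table 2, could read:

```python
import numpy as np
from scipy import stats

# Placeholder samples standing in for a quantity (e.g., the NRF ratio) split by inclination
low_incl = np.array([0.56, 0.88, 1.28, 1.02, 0.96, 0.92, 1.13, 0.43, 1.22])
high_incl = np.array([1.85, 1.06, 1.00, 2.39, 1.24, 0.78, 1.50, 2.10])

welch = stats.ttest_ind(low_incl, high_incl, equal_var=False)  # Welch's t-test
ks = stats.ks_2samp(low_incl, high_incl)                       # Kolmogorov-Smirnov
mwu = stats.mannwhitneyu(low_incl, high_incl)                  # Mann-Whitney U

# Simple F-type variance-ratio check used to choose between Welch's and Student's t-test
f_stat = np.var(low_incl, ddof=1) / np.var(high_incl, ddof=1)

for name, res in [("Welch", welch), ("KS", ks), ("MWU", mwu)]:
    print(f"{name}: statistic={res.statistic:.3f}, p={res.pvalue:.3f}")
print(f"variance ratio F = {f_stat:.2f}")
```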
The fact that there is no difference in the \(\gamma\) parameter and transit time for differently oriented CMEs suggests that the orientation of the CME does not affect the overall drag of the CME. However, we note that the drag depends primarily on the difference between the velocity of the CME and the ambient solar wind speed. In addition, the \(\gamma\) parameter depends on the CME cross section, the ambient solar wind density, the mass of the CME, and the virtual mass. It is possible that the effect of inclination is small enough to be "masked" by all these contributions, even though we selected the sample in order to minimize them. As described in Martinic et al. (2022), the inclination effect on the drag should be most pronounced at the minimum of the solar cycle, where the configuration of the IMF most closely matches that of a simple magnetic dipole. While our sample of events includes some that occurred near the minimum of solar activity (event numbers 11,12,13,14, and 31), the majority of events correspond to the maximum, when the IMF configuration is very complex. Due to the very small sample of events at the minimum of solar activity, no analysis of the difference between events at the minimum and maximum of activity was performed. Except for inclination influence, Vandas et al. (1995) and Vandas et al. (1996) also emphasized the importance of the chirality of the CME for its propagation, which is not captured by our study. This was later tackled by Chane et al. (2006), who studied the propagation of two CMEs: one in which the initial magnetic field and the background magnetic field had the same polarity and another where they had opposite polarities. Their simulations showed that the initial magnetic polarity significantly affects the evolution of CMEs. We note here that the study of Chane et al. (2006) did not examine the effects of CME inclination but rather the effects of initial chirality on propagation in the inner heliosphere. More recently, Shen et al. (2021) studied the effects of different initial CME densities, masses, sizes, and magnetic field configurations on simulation results for observers near Earth and Mars. Nevertheless, to our knowledge, there are no 3D MHD studies aimed specifically at investigating the effects of (I)CME inclination and its interaction with the environment, such as IMF draping and plasma flows ahead of the ICME. Such a study could beneficially complement our findings based on observations. ## 4 Summary and conclusions Altogether, 31 Earth-directed CME-ICME pairs with distinct magnetic obstacle (MO) properties and pronounced sheath regions during the period from 1997 to 2018 were studied. We inferred the dominant inclination from the ellipse fitting of LASCO-C2 and LASCO-C3 coronagraphic images. The dominant inclination was also derived from in situ data of the rotation of magnetic field components in the MO part of the ICME. Of the 31 CME-ICME pairs, 19 are low-inclination events, and 12 are high-inclination events. Some basic features of the ICME propagation in terms of the inclination of the event were analyzed. We investigated the NRFs in the sheath region along with the shock orientation, transit time, and \(\gamma\) parameter. We found a significant difference in NRFs for differently oriented ICMEs. Low-inclination events were found to have lower NFR ratios, while high-inclination events were found to have higher NFR ratios. 
This implies that low-inclination events are more likely to have ambient plasma escape via the meridional plane, while high-inclination events are more likely to have plasma escape via the ecliptic plane (see Martinic et al. 2022). The plasma deflection at the fast-forward shock could also contribute to the measured NRF ratios. To confirm that the above-stated difference between low- and high-inclination events is indeed due to the deflection of the plasma around the obstacle (the ICME FR part) and not due to the deflection of the plasma by the shock front, we examined the dependence of the NRF ratios on the shock orientation. We found no differences in the NRF occurrence frequency with respect to the shock orientation, thus confirming the result stated above. No significant difference was found in the transit time and \(\gamma\) parameter for differently oriented ICMEs. This suggests that the predominant inclination of the ICME has no effect on the drag due to the interaction with the ambient solar wind and IMF. We note that by inclination we mean tilt, that is, the angle between the ecliptic plane and the ICME flux rope axis, not the magnetic field orientation. We also emphasize that most of the studied events occurred near solar maximum, when the IMF has a very complex configuration. It is also possible that the influence of the inclination on the drag force is much smaller than the contributions of other features, such as the difference between the speed of the CME and the solar wind, the CME mass, the CME cross section, and the ambient density, and therefore the inclination effect is very difficult to decipher. ###### Acknowledgements. We acknowledge the support by the Croatian Science Foundation under the project IP-2020-02-9893 (ICHODSS). K.M. acknowledges support by the Croatian Science Foundation in the scope of the Young Researchers' Career Development Project Training New Doctoral Students. N.A. acknowledges grants NSF AGS1954983 and NASA-ECID 80NSSC21K0463. We also acknowledge the support from the Austrian-Croatian Bilateral Scientific Projects "Comparison of ALMA observations with MHD-simulations of coronal waves interacting with coronal holes" and "Multi-Wavelength Analysis of Solar Rotation Profile". This paper uses data from the Heliospheric Shock Database, generated and maintained at the University of Helsinki. The SOHO/LASCO data used here are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut für Aeronomie (Germany), Laboratoire d'Astronomie (France), and the University of Birmingham (UK). SOHO is a project of international cooperation between ESA and NASA. We acknowledge use of NASA/GSFC's Space Physics Data Facility's OMNIWeb (or CDAWeb or ftp) service, and OMNI data.
2309.05666
Stellar Cruise Control: Weakened Magnetic Braking Leads to Sustained Rapid Rotation of Old Stars
Despite a growing sample of precisely measured stellar rotation periods and ages, the strength of magnetic braking and the degree of departure from standard (Skumanich-like) spindown have remained persistent questions, particularly for stars more evolved than the Sun. Rotation periods can be measured for stars older than the Sun by leveraging asteroseismology, enabling models to be tested against a larger sample of old field stars. Because asteroseismic measurements of rotation do not depend on starspot modulation, they avoid potential biases introduced by the need for a stellar dynamo to drive starspot production. Using a neural network trained on a grid of stellar evolution models and a hierarchical model-fitting approach, we constrain the onset of weakened magnetic braking. We find that a sample of stars with asteroseismically-measured rotation periods and ages is consistent with models that depart from standard spindown prior to reaching the evolutionary stage of the Sun. We test our approach using neural networks trained on model grids produced by separate stellar evolution codes with differing physical assumptions and find that the choices of grid physics can influence the inferred properties of the braking law. We identify the normalized critical Rossby number ${\rm Ro}_{\rm crit}/{\rm Ro}_\odot = 0.91\pm0.03$ as the threshold for the departure from standard rotational evolution. This suggests that weakened magnetic braking poses challenges to gyrochronology for roughly half of the main sequence lifetime of sun-like stars.
Nicholas Saunders, Jennifer L. van Saders, Alexander J. Lyttle, Travis S. Metcalfe, Tanda Li, Guy R. Davies, Oliver J. Hall, Warrick H. Ball, Richard Townsend, Orlagh Creevey, Curt Dodds
2023-09-11T17:59:25Z
http://arxiv.org/abs/2309.05666v1
# Stellar Cruise Control: Weakened Magnetic Braking Leads to Sustained Rapid Rotation of Old Stars ###### Abstract Despite a growing sample of precisely measured stellar rotation periods and ages, the strength of magnetic braking and the degree of departure from standard (Skumanich-like) spindown have remained persistent questions, particularly for stars more evolved than the Sun. Rotation periods can be measured for stars older than the Sun by leveraging asteroseismology, enabling models to be tested against a larger sample of old field stars. Because asteroseismic measurements of rotation do not depend on starspot modulation, they avoid potential biases introduced by the need for a stellar dynamo to drive starspot production. Using a neural network trained on a grid of stellar evolution models and a hierarchical model-fitting approach, we constrain the onset of weakened magnetic braking. We find that a sample of stars with asteroseismically-measured rotation periods and ages is consistent with models that depart from standard spindown prior to reaching the evolutionary stage of the Sun. We test our approach using neural networks trained on model grids produced by separate stellar evolution codes with differing physical assumptions and find that the choices of grid physics can influence the inferred properties of the braking law. We identify the normalized critical Rossby number \(\rm{Ro_{crit}/Ro_{\odot}}=0.91\pm 0.03\) as the threshold for the departure from standard rotational evolution. This suggests that weakened magnetic braking poses challenges to gyrochronology for roughly half of the main sequence lifetime of sun-like stars. Over their main sequence lifetimes, low-mass stars gradually lose angular momentum and slow their rotation due to magnetic braking (Weber and Davis, 1967; Skumanich, 1972). This angular momentum loss results from the interaction between a star's dynamo-generated field and stellar winds (Parker, 1958; Kawaler, 1988; Barnes, 2007). The method of leveraging stellar rotation periods to estimate age, called _gyrochronology_(Barnes, 2010; Epstein and Pinsonneault, 2013), can provide constraints on age with \(\sim\)10% precision for sun-like stars in some age ranges (Meibom et al., 2015). Numerous studies have provided prescriptions for angular momentum loss (Kawaler, 1988; Krishnamurthi et al., 1997; Sills et al., 2000; Barnes, 2010; Denissenkov et al., 2010; Reiners and Mohanty, 2012; Epstein and Pinsonneault, 2013; Gallet and Bouvier, 2013, 2015; Matt et al., 2015; van Saders et al., 2016), which can be empirically calibrated to observations. The relationship between rotation period and age has been well characterized for young and intermediate-age clusters (Barnes, 2007, 2010; Mamajek and Hillenbrand, 2008; Meibom et al., 2011; Gallet and Bouvier, 2015; Meibom et al., 2015; Angus et al., 2019; Dungee et al., 2022), where both properties can be constrained with adequate precision. In essentially all of these calibrators, rotation rates are measured by observing spot modulation due to dark starspots rotating in and out of view. The high photometric precision of the _Kepler_ Space Telescope (Borucki et al., 2010), and the subsequent _K2_ mission (Howell et al., 2014), enabled predictions for magnetic braking to be tested on a wealth of open clusters and associations (see Cody et al., 2018) as well as a population of older field stars (McQuillan et al., 2014; Santos et al., 2021). 
In addition to starspot modulation used to detect rotation, brightness modulations due to stellar oscillations are measurable in the high-precision, long-baseline _Kepler_ time series photometry (Huber et al., 2011). Asteroseismology--the study of these oscillations--provides valuable information about the internal structure and evolution of stars. Specifically, stellar rotation rates can be measured from the mode frequencies (Nielsen et al., 2015; Davies et al., 2015; Hall et al., 2021) and ages can be inferred by comparisons with stellar models (Metcalfe et al., 2014, 2016; Silva Aguirre et al., 2015; Creevey et al., 2017). When the ages of older, sun-like field stars were asteroseismically measured with _Kepler_ data, they were found to maintain surprisingly rapid rotation late into their main sequence lifetimes (Angus et al., 2015). To explain this sustained rapid rotation, it was proposed that stars diverge from the "standard spindown" model and enter a phase of "weakened magnetic braking" (WMB; van Saders et al., 2016, 2019). When stellar rotation was measured using asteroseismology rather than spot modulation, the observed rotation periods were consistently faster than predicted by the standard spindown model and evidence for WMB strengthened (Hall et al., 2021). Asteroseismology measures internal rotation rates in the stellar envelope, making it insensitive to surface differential rotation (Nielsen et al., 2015) and stellar inclination (Davies et al., 2015); additionally, asteroseismology can measure rotation rates for stars with weak surface magnetic activity and therefore undetectable spot modulation signals (Chaplin et al., 2011). These features allow asteroseismic rotation periods to avoid potential biases present in measurements from spot detection. Careful analysis of pileups in the temperature-period distribution of sun-like stars also supported the WMB model. Studies of rotation rates in the _Kepler_ field identified an upper envelope in stellar mass versus rotation period that matched a gyrochrone at \(\sim\)4 Gyr (Matt et al., 2015). An upper edge to the distribution could be caused by either a magnetic transition or detection bias in spot modulation (van Saders et al., 2019). Forward modeling of the _Kepler_ field predicted a pileup of rotation periods in the weakened braking scenario that was not seen in the data, but van Saders et al. (2019) argued that errors in the measured effective temperatures were obscuring the feature. With refined measurements of stellar effective temperature, the predicted pileup in the temperature-period distribution was identified (David et al., 2022). A study of sun-like stars with projected rotation periods measured from spectroscopic line broadening found them to be inconsistent with the Skumanich relation beyond \(\sim\)2 Gyr (dos Santos et al., 2016), supporting a departure from standard spindown. This sample was later revisited (Lorenzo-Oliveira et al., 2019), and the analysis suggested that the smooth rotational evolution scenario was favored, and if weakened braking takes place, it occurs at later times (\(\gtrsim\) 5.3 Gyr). However, these measurements faced biases introduced by an uncertain distribution of inclinations, which can inflate rotation periods measured spectroscopically. 
The physical mechanism that would lead to WMB remains uncertain, though some have proposed that a transition in the complexity of the magnetic field could reduce magnetic braking efficiency (Reville et al., 2015; Garraffo et al., 2016; van Saders et al., 2016; Metcalfe et al., 2016, 2019). Because the transition may to be rooted in the strength and morphology of the magnetic field, it is challenging to test with surface rotation rates measured through spot modulation, which require active stellar dynamos to drive starspot production (Matt et al., 2015; Reinhold et al., 2020). To effectively use gyrochronology to estimate stellar ages, it is essential to understand when the transition to weakened braking occurs. Previous studies have provided estimates for the onset of WMB (van Saders et al., 2016, 2019; David et al., 2022), but fully hierarchical modeling for the braking law has not been previously performed. As the departure from standard spindown depends on the dimensionless Rossby number and is predicted to be shared between all stars (van Saders et al., 2016), the problem is inherently hierarchical. Here, we provide new constraints on the evolutionary phase at which stars undergo weakened braking. We build on previous efforts (e.g. Hall et al., 2021) by modeling the rotational evolution of each star individually. We apply a Hierarchical Bayesian Model (HBM) to constrain the population-level parameters for a WMB model. The use of an HBM has been shown to increase the precision of inferred stellar properties for high-dimensional models (Lyttle et al., 2021). Here, we model the weakened braking parameters as global properties shared by all stars, while simultaneously fitting individual stellar properties. We test the results of our fit using multiple model grids, and compare the performance of a WMB model to standard spindown. By comparing results between multiple model grids, we provide the first constraints on biases introduced by the choices of grid physics when modeling stellar rotational evolution. We find that weakened braking likely occurs before stars reach the evolutionary phase of the Sun. ## 2 Data We fit our rotational model to open clusters, the Sun, and _Kepler_ field stars with asteroseismic measurements to ensure that we capture the early rotational evolution prior to the onset of weakened braking in addition to the behavior on the latter half of the main sequence. The seismic sample that best probes braking generally lies within 0.2 M\({}_{\odot}\) of the Sun and covers a wide range of ages. Stars hotter than 6250 K (\(\sim\)1.2 M\({}_{\odot}\)) lack deep convective envelopes on the main sequence, and do not undergo significant magnetic braking, and the seismic signals of stars cooler than 5000 K (\(\sim\)0.8 M\({}_{\odot}\)) have low pulsation amplitudes and are challenging to measure. We describe our calibrator sources in the following section. ### Open Clusters We included stars from the following open clusters: 23 stars in Praesepe (\(0.67\pm 0.134\) Gyr; [Fe/H] \(=0.15\pm 0.1\) dex; Rebull et al., 2017), 45 stars in NGC 6811 (\(1.0\pm 0.2\) Gyr; [Fe/H] \(=0.0\pm 0.04\) dex; Meibom et al., 2011; Curtis et al., 2019), and 17 stars in NGC 6819 (\(2.5\pm 0.5\) Gyr; [Fe/H] \(=0.10\pm 0.03\) dex; Meibom et al., 2015). We select stars within the T\({}_{\rm eff}\) range of our asteroseismic sample (5200 K \(\leq\) T\({}_{\rm eff}\)\(\leq\) 6200 K), using values for T\({}_{\rm eff}\) reported in Curtis et al. (2020). 
Ages and metallicities were taken from the corresponding cluster reference, and were used to define priors in our fitting. The Hertzsprung-Russell diagram positions of the open cluster members can be seen in panel (a) of Figure 1. ### Asteroseismic Sample We also included a sample of _Kepler_ field stars with asteroseismically-measured rotation rates and ages from Hall et al. (2021, hereafter Hall21). Rotation rates for main sequence stars can be challenging to measure with starspot modulation, particularly for older and less active stars, due to long rotation periods and diminished stellar activity. However, the rotational splitting of asteroseismic oscillation frequencies can be observed for stars in the end stages of the main sequence, and provides invaluable benchmarks for WMB. Hall et al. (2021) used asteroseismic mode splitting to measure rotation periods for 91 _Kepler_ dwarfs. We augmented the Hall21 sample with two additional stars with asteroseismic rotation measurements in the wide binary system HD 176465 (KIC 10124866; White et al., 2017). The A and B components of this system are sometimes referred to by their nicknames _Luke_ and _Leia_, respectively. The rotation periods reported in White et al. (2017) were derived by fitting asteroseismic mode splitting, following the same approach as Hall21. Figure 1: **(a)** Hertzsprung-Russell diagram showing our sample of calibrators in open clusters. Model tracks generated by our emulator are shown as gray lines. **(b)** Hertzsprung-Russell diagram of our asteroseismic sample from Hall21. We derived the stellar properties shown here with asteroseismic modeling. **(c)** Observed rotation period plotted as a function of stellar age. We color points by their effective temperature. Asteroseismic stars are shown as circles and open cluster members are marked by triangles. We performed asteroseismic modeling for Luke & Leia and 47 stars from the Hall21 sample that fall within our desired mass range using version 2.0 of the Asteroseismic Modeling Portal1 (AMP; Metcalfe et al., 2009; Woitaszek et al., 2009; Metcalfe et al., 2023). This optimization method couples a parallel genetic algorithm (Metcalfe & Charbonneau, 2003) with MESA stellar evolution models (Paxton et al., 2019) and the GYRE pulsation code (Townsend & Teitler, 2013) to determine the stellar properties that most closely reproduce the observed oscillation frequencies and spectroscopic constraints for each star. The choices of input physics are nearly all the default choices in MESA release 12778, and the models include gravitational settling of helium and heavy elements (Thoul et al., 1994) as well as the two-term correction for surface effects proposed by Ball & Gizon (2014). The resulting asteroseismic sample is shown in panel (b) of Figure 1, while the stellar properties and rotation periods can be found in Table 1, which includes maximum-likelihood estimates of the age, mass, composition, and mixing-length from our AMP modeling. Footnote 1: github.com/travismetcalfe/amp2 With masses derived from asteroseismic modeling, we made mass cuts (\(0.8~{}\rm M_{\odot}\leq M\leq 1.2~{}\rm M_{\odot}\)) to ensure our sample would fall within the bounds of our model grids. 
Previous studies have indicated that rotation periods in field stars \(<7\) days are likely due to non-eclipsing short-period binaries (Simonian et al., 2019, 2020), and we therefore remove three stars (KIC 6603624, KIC 8760414, KIC 8938364) from the sample that showed rotation \(<7\) days at ages \(>8\) Gyr that we suspect are inconsistent with single star evolution. Panel (c) of Figure 1 shows the rotation periods and ages for our full sample of open clusters and asteroseismic field stars. ## 3 Methods We produced model grids for rotational evolution using two stellar evolution codes--Modules for Experiments in Stellar Astrophysics (MESA; Paxton et al., 2010, 2013, 2015, 2018, 2019) and Yale Rotating Stellar Evolution Code (YREC; Pinsonneault et al., 1989; Demarque et al., 2008). The ranges of stellar properties covered by our grid are detailed in Table 2, and we describe the model physics used to generate each grid in the following sections. ### Mesa Model Grid We construct our MESA grid with identical input physics to the models used for asteroseismic inference (described in SS2.2) in order to avoid biases introduced by the modeling (see Tayar et al., 2020). Our models used initial elemental abundances from Grevesse & Sauval (1998) and an atmospheric temperature structure following an Eddington \(T(\uptau)\) relation with fixed opacity. We smoothly ramp diffusion from fully modeled at \(\rm M\leq 1.1~{}\rm M_{\odot}\) to no diffusion at \(\rm M\geq 1.2~{}\rm M_{\odot}\). We do not include core or envelope overshoot. We varied the mass \(M\), metallicity [Fe/H], initial Helium abundance \(Y_{\rm init}\), and mixing length parameter \(\alpha_{\rm MLT}\). We calculated rotational evolution histories (as described in SS3.3) for each combination of stellar properties and appended them to our grid. By default, MESA models do not output the necessary stellar parameters to perform rotational evolution, and it was necessary to adapt the outputs included in the grid. The additional parameters we include for each star were the total moment of inertia \(I_{\rm tot}\), the moment of inertia of the convective envelope \(I_{\rm env}\), the photospheric pressure \(P_{\rm phot}\), and the \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline KIC & Age (Gyr) & \(P_{\rm rot}\) (days) & \(M\) (\(M_{\odot}\)) & T\({}_{\rm eff}\) (K) & [Fe/H] (dex) & Y\({}_{\rm init}\) & \(\alpha_{\rm MLT}\) \\ \hline [MISSING_PAGE_POST] convective overturn timescale \(\tau_{\rm cz}\). We define \(\tau_{\rm cz}\) as \[\tau_{\rm cz}=\frac{H_{P}}{v_{\rm conv}}\] where \(H_{P}\) is the pressure scale height at the convective zone boundary and \(v_{\rm conv}\) is the convective velocity one pressure scale height above the base of the convective zone. Stellar interiors in MESA models are divided into shells and the parameters are evaluated at a finite number of points. We identified the precise location of the base of the convective zone as a function of the star's mass fraction using the Schwarzschild criterion, and then interpolated between the values calculated at each shell boundary to more precisely identify the values of our desired parameters at each time step. ### YREC Model Grid We construct our YREC grid following the settings laid out in van Saders and Pinsonneault (2013) and Metcalfe et al. (2020). We use the mixing length theory of convection (Vitense, 1953; Cox and Giuli, 1968) with the 2006 OPAL equation of state (Rogers et al., 1996; Rogers and Nayfonov, 2002). 
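The additional grid quantities can be extracted from each model profile in a short post-processing step: locate the base of the surface convection zone with the Schwarzschild criterion, interpolate between shells, and evaluate \(\tau_{\rm cz}=H_{P}/v_{\rm conv}\) with \(H_{P}\) at the boundary and \(v_{\rm conv}\) one scale height above it. The sketch below is a schematic of that step; the array names and the tiny synthetic profile are ours and do not correspond to an actual MESA output format.

```python
import numpy as np

def convective_overturn_time(radius, grad_rad, grad_ad, pressure_scale_height, v_conv):
    """tau_cz = H_P / v_conv, with H_P at the convective-zone base and v_conv one
    pressure scale height above it. Arrays are ordered surface -> center; the base of
    the surface convection zone is located with the Schwarzschild criterion
    (grad_rad > grad_ad) and refined by linear interpolation between shells."""
    convective = grad_rad > grad_ad
    i = np.argmax(~convective)              # first radiative shell below the surface CZ
    x1 = grad_rad[i - 1] - grad_ad[i - 1]   # > 0, last convective shell
    x2 = grad_rad[i] - grad_ad[i]           # < 0, first radiative shell
    frac = x1 / (x1 - x2)
    r_base = radius[i - 1] + frac * (radius[i] - radius[i - 1])
    h_p_base = pressure_scale_height[i - 1] + frac * (
        pressure_scale_height[i] - pressure_scale_height[i - 1])
    r_eval = r_base + h_p_base              # one pressure scale height above the base
    v = np.interp(r_eval, radius[::-1], v_conv[::-1])   # np.interp needs ascending x
    return h_p_base / v if v > 0 else np.inf

# Tiny synthetic profile (surface -> center); numbers are purely illustrative
r = np.array([7.0e10, 6.0e10, 5.2e10, 4.5e10, 3.0e10])   # cm
grad_rad = np.array([1.2, 0.9, 0.5, 0.3, 0.2])
grad_ad = np.array([0.4, 0.4, 0.4, 0.4, 0.4])
h_p = np.array([1.0e9, 3.0e9, 5.0e9, 7.0e9, 9.0e9])      # cm
v_conv = np.array([2.0e4, 1.0e4, 5.0e3, 1.0e2, 0.0])     # cm/s

print(convective_overturn_time(r, grad_rad, grad_ad, h_p, v_conv) / 86400.0, "days")
```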
Abundances were taken from Grevesse and Sauval (1998) and opacities from the Opacity Project (Mendoza et al., 2007). We define atmosphere and boundary conditions from Kurucz (1997). Nuclear reaction rates were drawn from Adelberger et al. (2011). \(Y_{\rm init}\) was fixed to a linear Helium-enrichment law anchored to the Sun with a slope of \(\left(\frac{{\rm dY}}{{\rm dZ}}\right)_{\odot}=1.296\) (see SS5.4). We varied the same parameters as we did for the MESA grid, with the exception of \(Y_{\rm init}\). As with the MESA grid, we trace additional parameters to evaluate the angular momentum loss law. For each model at each timestep, we calculate the moment of inertia of both the star and its convective envelope, the photospheric pressure, and the convective overturn timescale. ### Magnetic Braking Model Prescriptions for magnetic braking often incorporate the dimensionless Rossby Number (Ro), defined as the ratio between the rotation period, \(P\), and convective overturn timescale within the stellar envelope, \(\tau_{\rm cz}\), as a means to estimate magnetism across stars of different masses. We use the Rossby number in our rotation model due to its utility as a tracer for both the mass and composition dependence of spindown and magnetic field strength. We invoke a Rossby threshold, \({\rm Ro}_{\rm crit}\), beyond which point stars depart from a simple power law spindown and conserve angular momentum (van Saders et al., 2016). We adopt the Matt et al. (2012) modification to the Kawaler (1988) braking law. We assume, as in van Saders and Pinsonneault (2013), that the magnetic field strength \begin{table} \begin{tabular}{l l l l} \hline \hline Parameter & & MESA Bounds & YREC Bounds \\ \hline Mass & \(M\) (\(M_{\odot}\)) & [0.8, 1.2] & [0.8, 1.2] \\ Mixing Length Parameter & \(\alpha_{\rm MLT}\) & [1.4, 2.0] & [1.4, 2.0] \\ Metallicity & [Fe/H] (dex) & [-0.3, 0.3] & [-0.3, 0.3] \\ Initial Helium Abundance & \(Y_{\rm init}\) & [0.22, 0.28] & _not varied_ \\ Braking Law Strength & \(f_{K}\) & [4.0, 11.0] & [4.0, 11.0] \\ Critical Rossby Number & \({\rm Ro}_{\rm crit}\) & [1.0, 4.5] & [1.0, 4.5] \\ \hline \hline \end{tabular} \end{table} Table 2: Parameter boundaries of the MESA and YREC grids. scales as \(B\propto\mathrm{P}_{\mathrm{phot}}^{1/2}\mathrm{Ro}^{-1}\), where \(\mathrm{P}_{\mathrm{phot}}\) is the photospheric pressure, and that mass loss \(\dot{M}\) scales as \(\dot{M}\propto L_{X}\propto L_{\mathrm{bol}}\mathrm{Ro}^{-2}\), where \(L_{X}\) is the x-ray luminosity and \(L_{\mathrm{bol}}\) is the bolometric luminosity. 
Our full model for rotational evolution is described by \[\frac{\mathrm{d}J}{\mathrm{d}t}=\begin{cases}f_{K}K_{M}\omega\left(\frac{ \omega_{\mathrm{sat}}}{\omega_{\odot}}\right)^{2},&\omega_{\mathrm{sat}}\leq \omega\frac{\tau_{\mathrm{cz}}}{\tau_{\mathrm{cz},\odot}},\mathrm{Ro}\leq \mathrm{Ro}_{\mathrm{crit}}\\ f_{K}K_{M}\omega\left(\frac{\omega\tau_{\mathrm{cz}}}{\omega_{\odot\tau_{ \mathrm{cz},\odot}}}\right)^{2},&\omega_{\mathrm{sat}}>\omega\frac{\tau_{ \mathrm{cz}}}{\tau_{\mathrm{cz},\odot}},\mathrm{Ro}\leq\mathrm{Ro}_{\mathrm{ crit}}\\ 0,&\mathrm{Ro}>\mathrm{Ro}_{\mathrm{crit}}\end{cases}\] where \(\mathrm{Ro}\) is defined as \[\mathrm{Ro}=\frac{P}{\tau_{\mathrm{cz}}},\] \(f_{K}\) is the scaling factor for the strength of angular momentum loss during classical spindown, \(\omega_{\mathrm{sat}}\) is the threshold at which angular momentum loss saturates for young stars, and with \[\frac{K_{M}}{K_{M,\odot}}=c(\omega)\left(\frac{R}{R_{\odot}}\right)^{3.1} \left(\frac{M}{M_{\odot}}\right)^{-0.22}\left(\frac{L}{L_{\odot}}\right)^{0.56} \left(\frac{P_{\mathrm{phot}}}{P_{\mathrm{phot}},\odot}\right)^{0.44}.\] The term \(c(\omega)\) is the centrifugal correction from Matt et al. (2012), and we assume \(c(\omega)=1\), which is appropriate for slowly rotating stars. To calculate the rotation histories for our grid, we take the outputs of non-rotating MESA and YREC models, and compute rotation periods with the rotevol code (van Saders & Pinsonneault, 2013; Somers et al., 2017). We focus only on \(f_{K}\) and \(\mathrm{Ro}_{\mathrm{crit}}\) as they will be the most dominant parameters of a WMB law for the stars in our sample, which are old enough to have converged onto tight rotation sequences (Epstein & Pinsonneault, 2013; Gallet & Bouvier, 2015). We assume a disk locking period of 8.13 days and disk lifetime of 0.28 Myr, setting the initial rotation rates of our models (van Saders & Pinsonneault, 2013). We fix \(\omega_{\mathrm{sat}}\) to 3.863 \(\times 10^{-5}\) rad/s. Each of these parameters will be important at early (\(<100\) Myr) times, but will have negligible effects by the time stars reach the ages in our sample. We assume solid body rotation in our models, since the epoch of radial differential rotation in this mass range is again limited to young stars (Denissenkov et al., 2010; Gallet & Bouvier, 2015; Spada & Lanzafame, 2020). ### Model Grid Emulator With rotationally evolved model grids, we construct an emulator for rapid stellar evolution modeling. The general approach to this type of optimization problem is simple interpolation between tracks in a high-dimensional model grid (e.g. Berger et al., 2020). However, due to the size of the grid, number of parameters (4-5 per star and cluster, with 2 additional global braking law parameters), and large sample of potential targets, this approach becomes computationally expensive, particularly in the application of Bayesian inference through sampling the model. We therefore opt to train an artificial neural network (ANN) to map the stellar parameters of the grid to observable parameters of stars in our sample. We define our MESA ANN with seven input parameters and four output parameters. Our inputs represent fundamental stellar properties: age, mass, metallicity, initial Helium abundance, mixing length parameter, braking law strength, and critical Rossby number. The ANN outputs are observable quantities: effective temperature, radius, surface metallicity, and rotation period. 
The YREC ANN has the above input parameters with \(Y_{\text{init}}\) excluded, and identical output parameters. The remainder of this section describes the training and characterization of the MESA ANN. The process for training the YREC ANN is identical, and we compare the results when using different grids in SS5.4. Our model structure results in a neural network that acts as a stellar evolution emulator. Given some set of input stellar properties, the model will output the corresponding observable quantities. Because the emulation is rapid, the model can also be used to calculate likelihoods to infer input parameters--given some set of observed properties, we can sample prior distributions for the underlying stellar properties and retrieve posterior distributions, providing estimates for these values with uncertainties. We construct an ANN with 6 hidden layers comprised of 128 neurons each (following the tuning process of Lyttle et al., 2021). Each hidden layer used an Exponential Linear Unit (ELU) activation function. Using TensorFlow(Abadi et al., 2016), we trained the model on an NVidia Tesla V100 graphics processing unit (GPU) for 10,000 epochs using an _Adam_ optimizer (Kingma and Ba, 2017) with a learning rate of \(10^{-5}\). We trained the ANN in \(\sim\)8,000 batches of \(\sim\)16,000 points. The full model architecture is detailed in Appendix A. Prior to training the ANN, we remove the pre-main sequence from the tracks in our grid, defined as the threshold at which the luminosity from nuclear burning exceeds 99% of the total stellar luminosity. We allow the tracks to begin evolving across the subgiant branch, as our sample includes stars at or approaching this evolutionary stage, but remove tracks that exceed a rotation period of 150 days. In order to ensure that the mapping performed by the neural network does not introduce significant uncertainty to the inferred parameters, we divide the grid data into a training set and a validation set. The training set is composed of 80% of the models in the grid, drawn at random, and is used to generate the connections between the input model parameters and observed stellar properties. The remaining 20% of the grid is then used as a validation set to predict the observed parameters based on the provided input parameters, allowing us to characterize the neural network's ability to successfully predict well understood values. When compared to the measurement uncertainties associated with these parameters, the error introduced by the ANN is negligible, with typical fractional uncertainties of \(\sim\)\(10^{-3}\) in the recovery of our validation set (see Figure 2). We also find negligible systematic offset for parameters in our validation set, indicating that the ANN is not introducing significant bias. ### Statistical Modelling In order to efficiently optimize the braking law model parameters, we construct a hierarchical Bayesian model (HBM). The application of a similar HBM for constraining the distribution of \(Y_{\text{init}}\) and \(\alpha_{\text{MLT}}\) has been demonstrated by Lyttle et al. (2021). 
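Schematically, the emulator described above is a dense network mapping the seven grid inputs onto the four observables. A minimal construction with the TensorFlow Keras API (layer count, width, activation, and learning rate as quoted above; the stand-in training arrays and all other settings are our own simplification) might look like:

```python
import numpy as np
import tensorflow as tf

def build_emulator(n_hidden=6, width=128, n_outputs=4):
    """Dense emulator: (age, M, [Fe/H], Y_init, alpha_MLT, f_K, Ro_crit) -> (Teff, R, [Fe/H]_surf, P_rot)."""
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(width, activation="elu") for _ in range(n_hidden)]
        + [tf.keras.layers.Dense(n_outputs)]
    )
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5), loss="mse")
    return model

# Illustrative training call on stand-in arrays (the real training set is the model grid)
x = np.random.rand(1024, 7).astype("float32")
y = np.random.rand(1024, 4).astype("float32")
emulator = build_emulator()
emulator.fit(x, y, epochs=2, batch_size=256, validation_split=0.2, verbose=0)
print(emulator.predict(x[:1]))
```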
We begin the construction of our model with Bayes' theorem--the posterior probability of our model parameters \(\boldsymbol{\theta}_{i}\) given some set of observed data \(\mathbf{d}_{i}\) is \[p(\boldsymbol{\theta}_{i}|\mathbf{d}_{i})\propto p(\boldsymbol{\theta}_{i})p( \mathbf{d}_{i}|\boldsymbol{\theta}_{i})\] where \(p(\boldsymbol{\theta}_{i})\) is the prior on the model parameter \(\boldsymbol{\theta}_{i}\) (for \(i\) parameters) and \(p(\mathbf{d}_{i}|\boldsymbol{\theta}_{i})\) is the likelihood of the data given the model. We use our trained ANN to sample the prior distribution \(p(\boldsymbol{\theta}_{i})\) for each parameter and evaluate an instance of the model \(\boldsymbol{\mu}_{i}=\boldsymbol{\lambda}_{i}(\boldsymbol{\theta}_{i})\), where \(\boldsymbol{\lambda}_{i}\) represents the ANN model. From this, we can represent the likelihood of each observation \(\mathbf{d}_{i}\) with uncertainty \(\boldsymbol{\sigma}_{i}\) given the model evaluation \(\boldsymbol{\mu}_{i}\) as the normal distribution \[p(\mathbf{d}_{i}|\boldsymbol{\theta}_{i})=\prod_{n=1}^{N}\frac{1}{\sigma_{n,i} \sqrt{2\pi}}\exp\left[-\frac{(d_{n,i}-\mu_{n,i})^{2}}{2\sigma_{n,i}^{2}}\right]\] given \(N\) observed variables. The hierarchical structure of our model allows us to prescribe various levels of pooling to different parameters. The WMB model parameters \(f_{K}\) and \(\mathrm{Ro}_{\mathrm{crit}}\), for example, are assumed to be the same for all stars in our sample. For the ANNs trained on both the the MESA and YREC grids, we define the prior for \(\mathrm{Ro}_{\mathrm{crit}}\) as \[\mathrm{Ro}_{\mathrm{crit}}\sim\mathcal{U}(1.0,4.5)\] and the prior for \(f_{K}\) as \[f_{K}\sim\mathcal{U}(4.0,11.0)\] where \(\theta\sim X\) represents a parameter \(\theta\) being randomly drawn from a distribution \(X\), and \(\mathcal{U}(a,b)\) is a uniform distribution bounded between \(a\) and \(b\). The values of \(\mathrm{Ro}_{\mathrm{crit}}\) and \(f_{K}\) drawn from these Figure 2: Uncertainty introduced by the MESA ANN emulator. The histograms for P, R, and T\({}_{\mathrm{eff}}\) show the (predicted\(-\)truth)/truth value for our training set, and the bottom right panel shows predicted\(-\)truth for the surface metallicity to account for points where \([\mathrm{Fe}/\mathrm{H}]_{\mathrm{surface,truth}}\approx 0\). The median \(\mu\) and standard deviation \(\sigma\) of these distributions are shown in the top right corner for each parameter, and \(\mu\) is marked by the solid vertical line. The error incurred by the ANN is negligible compared to the uncertainty on the observed values. uniform distributions are used to calculate the full set of model evaluations \(\boldsymbol{\mu}_{i}\) for that step. The bounds for \(\text{Ro}_{\text{crit}}\) and \(f_{K}\) were centered near the solar Rossby number derived for our grids (for MESA: \(\text{Ro}_{\odot}\approx 2.05\), \(f_{K,\odot}\approx 5.89\); for YREC: \(\text{Ro}_{\odot}\approx 2.33\), \(f_{K,\odot}\approx 7.52\)). Other parameters are assumed to be unique to each star. For the YREC ANN, we constrain the mass, metallicity, mixing length parameter, and age. We constrain the same parameters for the MESA ANN with the addition of the initial Helium abundance. 
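The pooled structure described here can be sketched in PyMC3. The example below replaces the trained ANN with a smooth toy mapping so that it runs on its own, keeps only a per-star age as the star-level parameter, and adopts the uniform priors on \(\mathrm{Ro}_{\mathrm{crit}}\) and \(f_{K}\) quoted above; the truncated-normal star-level priors and the sampler settings are specified next in the text and appear here only in simplified form.

```python
import numpy as np
import pymc3 as pm

# Toy "observations": rotation periods (days) and ages (Gyr) for a handful of stars
p_obs = np.array([18.0, 24.0, 26.0, 30.0])
p_err = np.array([1.0, 1.5, 1.2, 2.0])
age_obs = np.array([2.5, 4.0, 5.5, 8.0])
age_err = np.array([0.4, 0.5, 0.6, 0.9])

with pm.Model() as hbm:
    # Population-level (fully pooled) braking parameters, as in the text
    ro_crit = pm.Uniform("Ro_crit", 1.0, 4.5)
    f_k = pm.Uniform("f_K", 4.0, 11.0)

    # Star-level parameters with truncated-normal priors centered on the measurements
    age = pm.TruncatedNormal("age", mu=age_obs, sigma=age_err,
                             lower=0.5, upper=14.0, shape=len(p_obs))

    # Stand-in for the ANN emulator: a smooth, differentiable toy mapping
    p_model = 10.0 + 3.0 * age * (f_k / 6.0) * pm.math.sigmoid(
        5.0 * (ro_crit - 0.08 * age * f_k))

    # Gaussian likelihood, as in the equation above
    pm.Normal("P_rot", mu=p_model, sigma=p_err, observed=p_obs)

    trace = pm.sample(draws=1000, tune=1000, chains=2,
                      target_accept=0.9, return_inferencedata=True)

print(pm.summary(trace, var_names=["Ro_crit", "f_K"]))
```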
The prior distributions for these parameters are defined as truncated normal distributions, given by \[p(\boldsymbol{\theta})\sim\mathcal{N}_{[a,b]}(\mu,\sigma)\] where \(\mathcal{N}\) is the normal distribution, \(a\) and \(b\) are the lower and upper bounds, respectively, \(\mu\) is the median and \(\sigma\) is the standard deviation. Here, \(\mu\) and \(\sigma\) are taken from the observational constraints on the parameters and their uncertainties. For stars in clusters, we define a prior centered on the value reported in the corresponding reference (see SS2.1) with a width set to the measurement uncertainty for age, metallicity, mixing length parameter, and rotation period (with the inclusion of \(Y_{\text{init}}\) for the MESA grid). For the masses of cluster stars, we use a homology scaling relationship with T\({}_{\text{eff}}\) and set a broad prior (\(\sigma_{M}=\)0.25 M\({}_{\odot}\)), and for the mixing length parameter and initial Helium abundance we use uniform priors. For asteroseismic stars in our sample, all of the above properties are constrained by the asteroseismic fitting, and we use this asteroseismic value and its uncertainty as the center and width of the prior distributions, respectively. Our truncated distributions for all stars are bounded by the grid limits described in Table 2. Finally, we include a third class of prior distributions in our model which are shared by some stars but not all. Each star within the same cluster is assumed to have the same age, metallicity, and initial Helium abundance, while these parameters should be fully independent for each target in the asteroseismic sample and for the Sun. These prior distributions share the same truncated normal form as the independent parameters, but can be selectively applied to specific subsets of the data. With our priors and likelihoods defined, we sampled the model parameters. The ANN is compatible with automatic differentiation, allowing us to utilize No-U-Turn Sampling (NUTS; Hoffman & Gelman, 2014). We constructed a probabilistic model with PyMC3 (Salvatier et al., 2016), then calculated the maximum a posteriori estimate as our starting point and sampled 4 chains for 5,000 draws with 1,000 tuning steps. We sampled chains long enough to ensure that the Gelman-Rubin \(\hat{R}\) statistic (Gelman & Rubin, 1992) was lower than 1.01 for all parameters indicating model convergence. The residuals from our fit, as well as an example of our model fit to the Sun, are shown in Appendix B. ## 4 Results We optimize the parameters of our model under two different assumptions--standard spindown and WMB. In the standard spindown framework, we assume stars follow a Skumanich-like angular momentum loss law, where \(\dot{J}\propto\omega^{3}\) at late times. Under the WMB assumption, stars lose angular momentum to magnetized stellar winds with the same relation as the standard spindown law until they reach a critical Rossby number \(\text{Ro}_{\text{crit}}\), at which point angular momentum is conserved. We use the MESA ANN as our primary emulator as its grid physics match the models used in the asteroseismic parameter estimates. In the standard spindown case, we only optimize for \(f_{K}\), and retrieve a constraint of \(f_{K}=6.11\pm 0.73\). For the WMB model, we report \(f_{K}=5.46\pm 0.51\) and \(\text{Ro}_{\text{crit}}/\text{Ro}_{\odot}=0.91\pm 0.03\). Figure 3 shows the distribution of rotation periods predicted by our WMB model. 
We have divided the sample into equal-size bins in \(\mathrm{T_{eff}}\) because temperature captures the effects of both a star's mass and metallicity on its rotational evolution. The red shaded regions show the density of stars drawn from a simulated population of 1,000,000 stars under the best-fit WMB assumptions, generated with stellar properties drawn from uniform distributions for each parameter bounded by the edges of our sample using our MESA emulator. The width of the distribution is caused by the range of masses, metallicities, Helium abundances, and mixing length parameters within each \(\mathrm{T_{eff}}\) bin. Stars in clusters can be seen as groups with discrete, well-constrained ages below 2.5 Gyr, and are valuable calibrators for the early angular momentum loss \(\dot{J}\). In our model, this early \(\dot{J}\) is captured by the braking law strength parameter, \(f_{K}\). Stars in our asteroseismic sample span a wide range of ages, particularly on the second half of the main sequence, and provide the constraint on \(\mathrm{Ro_{crit}}\). In Figure 4, we show the comparison between the rotation periods predicted by both the standard spindown and WMB models (in blue and red, respectively). Each shaded region represents the density of points in a population of 100,000 simulated stars from our MESA emulator. The standard spindown model was fit to the full sample, without altering angular momentum loss beyond a Rossby threshold. The models produce similar constraints on \(f_{K}\), as the early rate of \(\dot{J}\) is well-constrained by the clusters in both models. At older ages, the standard spindown model significantly overpredicts the rotation periods of stars in our asteroseismic sample. The WMB model results in a smaller average deviation from the observed rotation periods. Figure 5 shows the the difference between predicted and observed rotation periods for our sample. The colored points show the uncertainty-weighted median within a 0.2 \(t/t_{\mathrm{MS}}\) bin. On average, the standard Figure 3: Stellar rotation period versus age, shown in three bins each spanning 300 K in \(\mathrm{T_{eff}}\). Asteroseismic measurements and cluster stars are shown by points—black points represent rotation periods with fractional uncertainties \(\sigma_{P}/P\leq 25\%\) and gray points show \(\sigma_{P}/P>25\%\). The Sun is marked by the \(\odot\) symbol. Red contours represent the distribution of rotation periods within a given \(\mathrm{T_{eff}}\) bin predicted by our MESA emulator model, produced from a sample of one million emulated stars with stellar properties randomly drawn from uniform distributions bounded by our sample, and \(f_{K}\) and \(\mathrm{Ro_{crit}}\) fixed to the median values of the posterior distributions. spindown model overpredicts rotation periods by 0.72 days for the full sample and 6.00 days for stars beyond the first half of the main sequence (\(t/t_{\rm MS}\geq 0.5\)). Conversely, WMB underpredicts rotation periods by 0.31 days for the full sample and 3.18 days for stars past \(0.5t/t_{\rm MS}\). Isolating only the asteroseismic sample (at all ages), standard spindown overpredicts \(P_{\rm rot}\) by 4.66 days on average, and WMB underpredicts by 2.02 days. The corresponding fractional deviations for the asteroseismic sample are \(+17.73\%\) for standard spindown and \(-9.09\%\) for WMB. We perform a reduced chi-squared test to determine the goodness-of-fit for our models, and we find \(\chi^{2}_{\nu,{\rm WMB}}=1.07\) and \(\chi^{2}_{\nu,{\rm standard}}=14.02\). 
Because \(\chi^{2}_{\nu,{\rm WMB}}\ll\chi^{2}_{\nu,{\rm standard}}\), we conclude that the WMB model provides a better fit to the data. Figure 5 shows the difference between predicted and observed rotation periods as a function of fraction of main sequence lifetime. For the first half of the main sequence, the standard spindown and WMB models both describe the observed rotation periods well. However, at roughly halfway through the main sequence (\(0.5t/t_{\rm MS}\)), the standard spindown model deviates from the observed distribution and begins overpredicting rotation periods. Both models are consistent with the cluster data, which follow a tight spindown sequence that is nearly identical for the two models (see Figure 4). ## 5 Discussion We have provided refined probabilistic estimates for the onset of WMB, described by the parameter \(\rm Ro_{\rm crit}\). Our model indicates that stars enter a phase of weakened braking before reaching the Rossby number of the Sun (\(\rm Ro_{\rm crit}=0.91\pm 0.03~{}Ro_{\odot}\)). This result supports constraints by David et al. (2022), which found a sub-solar \(\rm Ro_{\rm crit}\) when examining the pileup in the temperature-period distribution of _Kepler_ stars. van Saders et al. (2016, 2019) found that a critical Rossby number of \(\rm Ro_{\rm crit}\approx Ro_{\odot}\) provided the best fit to the observed rotation periods, which agrees with our results within \(2\sigma\).

Figure 4: Same as Figure 3, with the additional comparison to the standard spindown model. The contours represent the distribution of predicted rotation periods within a given \(\rm T_{\rm eff}\) bin, with red showing our WMB model and blue showing a standard Skumanich-like spindown model, both generated with our MESA emulator. The value of \(f_{K}\) for the standard spindown model is the median value from the posterior of a fit to our full sample with no \(\rm Ro_{\rm crit}\) constraint.

The new constraints on weakened braking parameters provided here can be used as guidelines for where gyrochronology is likely to be accurate. Beyond \(\mathrm{Ro_{crit}}\), rotation evolves only slowly with the changing moment of inertia, and stars can be observed with the same rotation period for Gyr timescales, challenging any gyrochronological estimate. We show that gyrochronological ages should be precise until \(\sim\)\(\mathrm{Ro_{\odot}}\), corresponding to an age of \(\sim\)4 Gyr for sun-like stars. After the onset of WMB, age estimates should have significantly larger uncertainties due to the slowly evolving rotation on the second half of the main sequence. ### WMB Model Performance Towards the end of the main sequence, our model for weakened braking begins to underestimate rotation periods. This likely reflects our overly simple implementation of the transition from standard to weakened braking.

Figure 5: Difference between predicted and observed rotation periods for all stars in our sample (shown as gray points) as a function of fraction of main sequence lifetime \(t/t_{\mathrm{MS}}\). The blue and red points represent the uncertainty-weighted median of \(\delta P\) within a 0.2 \(t/t_{\mathrm{MS}}\) bin for the standard and WMB models, respectively. Main sequence lifetime was estimated by identifying the age of core-H exhaustion in MESA models generated for each star. Roughly halfway through the main sequence lifetime, the standard spindown model begins significantly over-predicting rotation periods. The WMB model is consistent with the observed distribution until near the end of the main sequence, at which point it underestimates rotation periods.
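A minimal sketch of the binned, uncertainty-weighted medians plotted in Figure 5, assuming inverse-variance weights; the bin width matches the 0.2 \(t/t_{\mathrm{MS}}\) bins described above, and the data arrays are placeholders.

```python
# Binned, uncertainty-weighted median of rotation-period residuals.
import numpy as np

def weighted_median(values, weights):
    """Median of `values` with each point weighted by `weights`."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def binned_weighted_median(t_frac, delta_p, sigma_p, bin_width=0.2):
    """Weighted median of residuals delta_p in bins of t/t_MS, inverse-variance weighted."""
    edges = np.arange(0.0, t_frac.max() + bin_width, bin_width)
    medians = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (t_frac >= lo) & (t_frac < hi)
        medians.append(weighted_median(delta_p[sel], 1.0 / sigma_p[sel] ** 2) if sel.any() else np.nan)
    return edges, np.array(medians)

rng = np.random.default_rng(1)
t_frac = rng.uniform(0.0, 1.0, 500)                     # placeholder t/t_MS values
delta_p = rng.normal(0.0, 1.0, 500) + 5.0 * t_frac**3   # placeholder residuals (days)
sigma_p = rng.uniform(0.5, 2.0, 500)                    # placeholder uncertainties
edges, med = binned_weighted_median(t_frac, delta_p, sigma_p)
```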
The immediate shutdown of angular momentum loss beyond \(\mathrm{Ro}_{\mathrm{crit}}\) is the simplest model which introduces the fewest new parameters. Given the limited sample of reliable calibrators spanning a wide range of \(\mathrm{T}_{\mathrm{eff}}\) near the onset of WMB, any parameterization of a possible gradual transition, or a transition that does not completely shut down magnetic braking, is not well constrained. As more seismic constraints are placed on the ages and rotation periods near \(\mathrm{Ro}_{\mathrm{crit}}\), additional parameters that lead to a gradual transition, or \(\dot{J}\neq 0\) beyond the transition, can be tested. The deviation between the WMB model and observed rotation periods could additionally be partially explained by small deviations in inferred model ages. At the end of a star's main sequence lifetime, even in the WMB framework when angular momentum is conserved, the rotation period increases steeply due to the changing stellar moment of inertia as the star's radius expands. Model rotation periods increase over short time spans along nearly parallel vertical tracks in rotation-age space as stars traverse the subgiant branch, with small separations between stars of different \(\mathrm{T}_{\mathrm{eff}}\). Improved asteroseismic modeling, or a larger sample of stars with asteroseismic parameter constraints, could better distinguish between these effects at the end of the main sequence. ### Assessing the Asteroseismic Constraint To illustrate the impact of the asteroseismic sample on our ability to constrain \(\mathrm{Ro}_{\mathrm{crit}}\), we fit our model to two subsets of the data: one comprised of only clusters and the Sun, capturing the early rotational evolution, and one that adds the asteroseismic stars. Figure 6 shows a Kernel Density Estimate (KDE) of the sampled marginal posterior distributions for \(\mathrm{Ro}_{\mathrm{crit}}\) when fit to each of these samples. When fit to only clusters and the Sun, \(\mathrm{Ro}_{\mathrm{crit}}\) has little to no likelihood below the solar value, and is unconstrained beyond the solar value. This aligns with our expectations, as the young cluster sample has repeatedly been shown to follow standard braking (Barnes, 2007, 2010; Mamajek and Hillenbrand, 2008; Gallet and Bouvier, 2015; Meibom et al., 2011, 2015). When the asteroseismic sample is included, the posterior becomes tightly constrained near the solar value. This exercise clearly demonstrates why the effects of WMB were not identified until a large enough sample of stars with precise rotation periods and ages spanning the main sequence was available. ### Consistency with Solar Twins A recent study by Lorenzo-Oliveira et al. (2019) proposed tension between the weakened magnetic braking model and an observed population of "solar twins." The stars in this sample have typical masses within \(\pm 0.05\)\(\mathrm{M}_{\odot}\) of solar and metallicities within \(\pm 0.04\) dex of solar. Rotation periods were not directly measured for the majority of stars in this sample; instead, the projected rotational velocity \(v\sin i\) of each star was estimated from spectral line broadening. This was converted to a projected rotation period, \(P_{\mathrm{rot}}/\sin i\), using stellar properties derived from _Gaia_ DR2 (Gaia Collaboration et al., 2018) and ground-based spectroscopic data.
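A short sketch of the projection just introduced: equatorial periods are converted to \(P_{\mathrm{rot}}/\sin i\) for spin axes drawn uniformly in \(\cos i\) (isotropic orientations). The model periods below are placeholders rather than emulator output.

```python
# Projected rotation periods for an isotropically oriented population.
import numpy as np

rng = np.random.default_rng(42)
p_rot = rng.uniform(20.0, 35.0, size=100_000)     # placeholder equatorial periods (days)
cos_i = rng.uniform(0.0, 1.0, size=p_rot.size)    # isotropic spin axes: uniform in cos(i)
sin_i = np.sqrt(1.0 - cos_i**2)                   # cos_i < 1, so sin_i stays positive
p_projected = p_rot / sin_i                       # inclination only ever increases P/sin(i)
```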
If a system is observed directly edge on (\(i=90^{\circ}\)), the projected rotation period will match that measured from photometric spot modulation or asteroseismic mode splitting. The primary effect of rotation axis inclination away from \(90^{\circ}\) is to shift the projected rotation period to a higher value (see panel (a) of Figure 7). Lorenzo-Oliveira et al. (2019) undergo a selection process of simulating projected rotation periods given some random orientation between 0 and \(90^{\circ}\), comparing their measured population against these simulations, and reducing their sample to stars they found most likely to be seen edge on based on the agreement (see SS2 of Lorenzo-Oliveira et al. 2019 for a full description of their approach). As only a fraction of the observed sample is likely to be observed directly edge on, the fastest rotation periods in the solar twins sample represent a lower envelope to the true distribution of rotation periods of the sample. We test the standard spindown and WMB models against the solar twins sample, seen in panels (b) and (c) of Figure 7. We calculate \(P_{\rm rot}/\sin i\) for our MESA emulator model tracks, drawing inclinations randomly from a uniform distribution between 0 and 1 in \(\cos i\). The stellar properties of our model grid were drawn from uniform distributions bounded by the parameter cuts described in Lorenzo-Oliveira et al. (2019)--mass and metallicity were bounded by 0.8 M\({}_{\odot}\leq\) M \(\leq 1.2\) M\({}_{\odot}\) and \(-0.04\leq\) [Fe/H] \(\leq\) +0.04, and unconstrained parameters were given broad uniform priors (\(0.22\leq Y_{\rm init}\leq 0.28\), \(1.4\leq\alpha_{\rm MLT}\leq 2.0\)). We note that fixing \(Y_{\rm init}\) and \(\alpha_{\rm MLT}\) to solar-calibrated values has negligible impact on the model fit. We find that the standard spindown model overpredicts projected rotation periods beyond the age of the Sun. The WMB model predicts the observed population with minor deviations from entirely edge-on inclinations. We find that the WMB model reasonably reproduces the behavior observed in the solar twins, and does so better than the standard spindown model. ### Accounting for Grid Bias We test our model fit using neural networks trained on grids of models generated by two stellar evolution codes, MESA and YREC. This provides an opportunity to independently validate our results as well as test for any bias introduced by the choice of grid. To date, most investigations of WMB have used ages and rotational evolution that were inferred using reasonable, but different, underlying stellar evolution models. Our MESA grid was constructed with input physics matching the asteroseismic modeling, avoiding the cross-grid bias when fitting the MESA-trained neural network to the asteroseismic observations. While we have matched the physics in the seismic and rotational models, we have not performed the fits simultaneously, which we reserve for future work. Figure 6: Comparison between posterior distributions for Ro\({}_{\rm crit}\) from models fit to different subsets of the data. When fit to only clusters and the Sun (shown in red), Ro\({}_{\rm crit}\) is unconstrained beyond the solar Rossby number. With the inclusion of the asteroseismic sample (shown in blue), Ro\({}_{\rm crit}\) is tightly constrained just below the solar Rossby number. The y-axis has been arbitrarily scaled for clarity. 
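The marginal posterior comparison shown in Figure 6 amounts to a kernel density estimate over posterior samples; a minimal sketch follows, in which the two sample arrays are placeholders with roughly the qualitative shapes described above, not our actual chains.

```python
# Kernel density estimates of Ro_crit posterior samples from two fits (placeholders).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
ro_clusters_sun = rng.normal(1.4, 0.35, size=20_000)   # placeholder: clusters + Sun only (broad)
ro_with_seismic = rng.normal(0.91, 0.03, size=20_000)  # placeholder: full sample (tight)

grid = np.linspace(0.7, 2.0, 500)
kde_clusters = gaussian_kde(ro_clusters_sun)(grid)
kde_seismic = gaussian_kde(ro_with_seismic)(grid)       # sharply peaked near 0.91 Ro_sun
```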
The primary difference between the construction of the grids was to vary \(\mathrm{Y_{init}}\) as an additional dimension of the MESA grid, while calculating it with a fixed He-enrichment law in the YREC grid. For a model in the YREC grid, we computed \(\mathrm{Y_{init}}\) from its metallicity [Fe/H] using the relation \[Y_{\mathrm{init}}=Y_{P}+\frac{(1-Y_{P})\left(\frac{\mathrm{d}Y}{\mathrm{d}Z}\right)_{\odot}}{\left(\frac{\mathrm{d}Y}{\mathrm{d}Z}\right)_{\odot}+\left(\frac{Z}{X}\right)_{\odot}^{-1}10^{-[\mathrm{Fe/H}]}+1}\] where \(\mathrm{Y_{P}}\) is the primordial Helium abundance, the slope of the Helium enrichment law that matches the solar value is \(\left(\frac{\mathrm{d}Y}{\mathrm{d}Z}\right)_{\odot}=1.296\), and the solar metal fraction is \(\left(\frac{Z}{X}\right)_{\odot}=0.02289\) (Grevesse & Sauval, 1998). The ANN for the YREC grid was trained identically to the process for the MESA grid described in §3.4, and we constructed the probabilistic model following the process described in §3.5. For the YREC ANN, the value of \(Y_{\mathrm{init}}\) fit by our asteroseismic modeling with MESA was not used as a constraint on the model likelihood, while it was for the MESA ANN. The choice to include \(Y_{\mathrm{init}}\) as a free parameter, as well as the differences between how different stellar evolution codes calculate quantities used in our modeling, have the potential to introduce systematic biases in the resulting model fits. Here, we compare between the results inferred by emulators trained on different model grids. Most braking laws include a strong Ro dependence, and thus a dependence on the convective overturn timescale \(\mathrm{\tau_{cz}}\), and there is no single agreed upon means of calculating this value (see Kim & Demarque, 1996). Furthermore, changes in grid physics can result in different values of \(\mathrm{\tau_{cz}}\), even in solar-calibrated models. To account for this, we normalize \(\mathrm{Ro_{crit}}\) by a grid-dependent solar Rossby number \(\mathrm{Ro}_{\odot}\). To calculate \(\mathrm{Ro}_{\odot}\) for each grid, we produced solar-calibrated stellar evolution tracks and computed the Rossby number at the age of the Sun.

Figure 7: **(a)** Projected rotation period, \(P_{\mathrm{rot}}/\sin i\), of the solar twins sample versus age. The colored lines show tracks from our MESA emulator for a solar-calibrated model with a range of stellar inclinations, evolved under WMB assumptions. Models that are not observed edge on have their projected periods shifted to higher values. **(b)** The solar twins sample compared to a standard spindown model with a range of stellar inclinations. We generated a population of 1,000,000 stars with parameters drawn from uniform distributions within \(\pm 0.05\)\(\mathrm{M_{\odot}}\) of solar for M, \(\pm 0.04\) dex of solar for [Fe/H], and inclinations, \(i\), drawn from a uniform distribution in \(\cos i\). \(Y_{\mathrm{init}}\) and \(\alpha_{\mathrm{MLT}}\) were drawn from uniform distributions covering our model grid. **(c)** Same as panel (b), but with the WMB model. \(f_{K}\) and \(\mathrm{Ro_{crit}}\) were fixed to values fit to our full sample. The standard model overpredicts rotation periods of the solar twins sample beyond the age of the Sun, while they are consistent with WMB when accounting for inclinations.
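The helium-enrichment relation above can be evaluated directly; a small sketch follows, in which the function name is ours and the primordial abundance \(Y_{P}=0.2485\) is an assumed value (the text does not fix it).

```python
# Direct transcription of the helium-enrichment relation given above.
import numpy as np

Y_P = 0.2485        # primordial helium abundance (assumed value, not from the text)
DYDZ_SUN = 1.296    # solar-calibrated enrichment slope, from the text
ZX_SUN = 0.02289    # solar (Z/X), Grevesse & Sauval (1998)

def y_init(feh):
    """Initial helium abundance for a YREC-style model of metallicity [Fe/H]."""
    denom = DYDZ_SUN + ZX_SUN**-1 * 10.0**(-feh) + 1.0
    return Y_P + (1.0 - Y_P) * DYDZ_SUN / denom

print(y_init(np.array([-0.3, 0.0, 0.3])))   # ~0.26 to ~0.28 across this metallicity range
```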
For each model grid, we also compute the value of \(f_{K}\) that reproduced solar rotation at solar age under the standard spindown assumption, and apply this as a normalization factor when comparing the inferred values of \(f_{K}\) in our WMB models. We notate this solar-normalized braking law strength as \(f_{K}^{\prime}\). These normalization factors allow us to compare directly between the braking law parameters inferred from the ANN trained on each model grid. The left panel of Figure 8 shows the marginal and joint posterior distributions for the braking law parameters when fit with the MESA and YREC ANNs. The black dashed line shows the solar Rossby number, \(\mathrm{Ro}_{\odot}\). Both MESA and YREC return values of \(\mathrm{Ro}_{\mathrm{crit}}\) below \(\mathrm{Ro}_{\odot}\), indicating that the onset of WMB occurs before the age of the Sun for a solar analog. The inferred braking law parameters have slight offsets, but agree within \(1\sigma\). To assess the impact of leaving the \(Y_{\mathrm{init}}\) parameter free, we also performed probabilistic modeling with the MESA ANN with \(Y_{\mathrm{init}}\) set to the He-enrichment law described above. We show the updated posterior distributions for this fit compared to the YREC ANN in the right panel of Figure 8. Figure 8: **(a)** Corner plot showing the marginal and joint posterior distributions for the global parameters of our WMB model. Blue shows the samples from the fit using a neural net trained on a grid of MESA models, and red shows the samples from a fit using the YREC-trained neural net. The solar Rossby number \(\mathrm{Ro}_{\odot}\) is shown as a dashed black line. The median values of each distribution are shown as dashed lines in their respective colors in the top and right panels. **(b)** The same posterior distributions, now with the He-enrichment law in the MESA probabilistic model fixed to the relation used when generating the YREC grid. The primary difference between the grids used to train the emulator models is the varied Helium abundance \(Y_{\mathrm{init}}\) in the MESA grid. When fixed to the YREC enrichment law, the constraints on WMB global parameters are in closer agreement. Using the YREC emulator model, we retrieve constraints on the braking law parameters of \(f^{\prime}_{K}=0.86\pm 0.07\) and \(\mathrm{Ro}_{\mathrm{crit}}=0.94\pm 0.04\). When \(Y_{\mathrm{init}}\) is left as a free parameter, the MESA emulator model returns \(f^{\prime}_{K}=0.77\pm 0.07\) and \(\mathrm{Ro}_{\mathrm{crit}}=0.91\pm 0.03\). When we fix the Helium enrichment law to that used in the YREC grid, the MESA emulator model reports \(f^{\prime}_{K}=0.80\pm 0.07\) and \(\mathrm{Ro}_{\mathrm{crit}}=0.94\pm 0.03\). We note that all models consistently return a value of \(\mathrm{Ro}_{\mathrm{crit}}\) below the solar Rossby number. When holding \(Y_{\mathrm{init}}\) fixed to the YREC He-enrichment law, we find closer agreement between the braking law parameters inferred by our model fitting, with \(\mathrm{Ro}_{\mathrm{crit}}\) in near-perfect agreement. This implies that \(Y_{\mathrm{init}}\) provides additional constraints on the braking law parameters, and its inclusion as a grid dimension can influence the result. \(Y_{\mathrm{init}}\) is a challenging property to measure for sun-like stars, and yet affects our inferred value of \(\mathrm{Ro}_{\mathrm{crit}}\) at the \(\sim\)1\(\sigma\)-level. 
We conclude that uncertainty in the Helium enrichment law should be treated as a systematic uncertainty in the inference of \(\mathrm{Ro}_{\mathrm{crit}}\). ### Future Applications In this study, we focus only on \(f_{K}\) and \(\mathrm{Ro}_{\mathrm{crit}}\) due to the age distribution of our sample. In the future, the same approach described here could be applied to a sample of targets which span earlier phases of evolution (i.e. young open clusters), at which time braking law assumptions, such as the disk-locking timescale, disk lifetime, \(\omega_{\mathrm{sat}}\), and internal angular momentum transport must be treated more carefully. We limited the range of our input model grid to cover the parameters of our sample in order to reduce the computational time required for model generation and neural network training. The framework for the ANN emulator could easily be applied to a grid spanning a wider range of stellar properties, and would provide a useful tool for quickly evaluating stellar evolution tracks or simulating stellar populations. To reduce training time, the grid resolution could be selectively increased to reach a precision threshold. Scutt et al. (2023) suggested that parameter spacing can be modified in different regions of the grid to improve ANN precision. Asteroseismic pulsation frequencies are often generated alongside stellar models using tools such as GYRE(Townsend and Teitler, 2013). These pulsation frequencies, particularly the large frequency spacing (\(\Delta\nu\)), can be included in the grid dimensions (e.g. Lyttle et al., 2021) and applied as further likelihood constraints for models. Ideally, some combination of the above additions could be implemented to produce a broadly applicable stellar evolution emulator that does not require generating or interpolating large model grids. ## 6 Conclusions In summary, our primary conclusions are: 1. We present evidence for weakened magnetic braking in old stars. Using a neural network as a stellar evolution emulator, we perform probabilistic modeling to produce posterior distributions for the parameters of the weakened braking model. We find that the weakened braking model provides the best fit to the observed distribution of rotation periods. 2. We show that the most likely weakened braking scenario diverges from standard spindown at a slightly earlier evolutionary phase than the Sun (\(\mathrm{Ro}_{\mathrm{crit}}/\mathrm{Ro}_{\odot}=0.91\pm 0.03\)). We caution that our WMB model is a simplified case in which angular momentum loss is fully switched off at a critical Rossby number, and likely does not fully capture the time evolution of the stellar dynamo. The relatively sparse calibrator sample near Ro\({}_{\rm crit}\) means that it remains challenging to infer the precise onset of WMB relative to the Sun's evolution. 3. Our method for emulating stellar evolution with a neural network enables rapid evaluation of stellar models, making it possible to fit braking law parameters while properly accounting for the uncertainties in the stellar parameters of our calibrator sample. By modifying the braking law used to generate our training set, we could test other effects at early times, such as the impact of internal angular momentum transport or disk-locking. 4. We report mild disagreement between the constraints on WMB parameters when using different underlying model grids. This indicates that the choice of grid physics and which parameters are varied in the model can impact the inferred model parameters. 
For our choices, the impact is at the 1\(\sigma\) level. 5. The WMB model appears compatible with the solar twins sample. The standard spindown model predicts slower rotation than observed in the solar twin stars during the second half of the main sequence, while their rotation periods can be described by the WMB model with modest deviations from a fully edge-on population. 6. Our constraint on the Ro\({}_{\rm crit}\) at which stars enter a phase of weakened braking suggests that gyrochronology faces challenges when estimating stellar ages for much of the main sequence lifetime. For sun-like stars, gyrochronological age estimates are likely unreliable beyond an age of \(\sim\)4 Gyr. For more massive stars (\(\gtrsim 1.1\)M\({}_{\odot}\)), gyrochronology relations appear to break down even earlier, at an age of \(\sim\)2.5 Gyr. Even after a star has entered the weakened braking phase, a reasonable range for its age can be estimated from its rotation period, and our constraint on Ro\({}_{\rm crit}\) enables gyrochronological modeling that will provide a realistic uncertainty on the stellar age. The growing population of stars with precisely measured ages and rotation periods from asteroseismology is shedding essential light on the evolution of stellar rotation. Improved direct observations of magnetic field strength can add additional constraints on the braking law parameters. As more stars are added to this sample, the transition to WMB can be constrained to higher precision. ## Acknowledgements N.S. acknowledges support by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1842402. J.v.S. and N.S. acknowledge support from the Research Corporation for Science Advancement through Scialog award #39436, in partnership with the Heising-Simons Foundation. J.v.S. also acknowledges support from the National Science Foundation grant AST-2205888. This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (CartographY GA. 804752). T.S.M. acknowledges support from NASA grant 80NSSC22K0475. Computational time at the Texas Advanced Computing Center was provided through XSEDE allocation TG-AST090107. A.J.L. acknowledges the support of the Science and Technology Facilities Council. R.H.D.T. acknowledges support from NASA grant 80NSSC20K0515.
2309.11613
Block eccentricity, a radius bound, and an application to the Randić index
We propose a framework for thinking about eccentricity in terms of blocks. We extend the familiar definitions of radius and center to blocks and verify that a central block contains all central points. We classify graphs into two types depending upon the relationship between block radius and vertex radius and between central blocks and central vertices; from this we derive a new lower bound on diameter in terms of the diameter of the central block. We also identify a subgraph which respects the block structure of the original graph and realizes the same vertex radius, and we use it to verify that cactus graphs satisfy a conjectured bound between vertex radius and the Randic index, an invariant from mathematical chemistry.
Margaret I. Doig
2023-09-20T19:57:39Z
http://arxiv.org/abs/2309.11613v1
# Block eccentricity, a radius bound, and an application to the Randic index ###### Abstract. We propose a framework for thinking about eccentricity in terms of blocks. We extend the familiar definitions of radius and center to blocks and verify that a central block contains all central points. We classify graphs into two types depending upon the relationship between block radius and vertex radius and between central blocks and central vertices; from this we derive a new lower bound on diameter in terms of the diameter of the central block. We also identify a subgraph which respects the block structure of the original graph and realizes the same vertex radius, and we use it to verify that cactus graphs satisfy a conjectured bound between vertex radius and the Randic index, an invariant from mathematical chemistry. Key words and phrases:eccentricity; center; radius; blocks; Randic index 2010 Mathematics Subject Classification: Primary 05C12, Secondary 05C69, 05C09 Supported by CURAS Summer Faculty Research Fund ## 1. Introduction We employ a modification of a traditional concept in graph theory (the extension of eccentricity to blocks) to derive new results on classic questions regarding the center of a graph and the relationship between radius and diameter. ### Origins of Centrality The idea of centrality in a graph traces back to the nineteenth century and the work of Jordan. To aid in his classification of small connected graphs up to isomorphism, including their symmetry groups, he defined the version of centrality that we now call a centroid. Only well into the next century did the concept of centrality begin to be embraced as a way to identify vertices which are somehow medial in the graph, with numerous variations crafted for different types of applications, for example, betweenness centrality as a measure of the influence a node has on information flow in a network, or various forms of eigenvector centrality and their use in internet search algorithms. The structure of the central points is not obvious. Jordan [14, Section 2] showed that a tree's centroid consists of either one vertex or two adjacent vertices. Some time later, a parallel theorem emerged concerning the center of a tree, and, in 1953, Harary and Norman showed further that the center of any graph lies in a single block [15, Lemma 1]. (Note the theorem on the center of a tree was erroneously attributed to Jordan in his 1869 paper by, e.g., Buckley and Norman [1, Theorem 2.1], but Jordan defines "center" the way we now define "centroid"). ### Eccentricity and Centrality We refine these results on the relationship between eccentricity and the block structure of a graph. In Section 2, we extend the concept of eccentricity to a block and define block versions of center and radius. In Section 3, we utilize the relationship among articulation points, geodesics, and distance to prove Theorem 3.3, that central blocks are essentially the same as Harary and Norman's blocks containing all central points, and further that graphs fall into two types, A and B, based on whether there is a unique central block (equivalently, based on whether the block radius is equal to the vertex radius). We also note in Theorem 3.6 the location of the periphery with respect to the central block(s). ### Radius vs. 
Diameter In Section 4, we apply our concept of block eccentricity to derive a refinement of the traditional bound \(\operatorname{r}\leq\operatorname{d}\leq 2\operatorname{r}\): we show that our type A graphs are exactly those which attain the upper bound, and we give a new (sometimes sharper) lower bound for a type B graph in terms of the diameter of its central block. ### Radius and the Randic index In Section 5, we use our concept of a central block to identify a subgraph which realizes the radius of the original graph and whose structure reflects the block structure of the original graph (specifically, the BC-tree of the subgraph is a subgraph of the BC-tree). In Section 6, we apply this subgraph to a question from mathematical chemistry: The Randic index is an invariant studying the branching of a graph, and a long-standing conjecture states that it is bounded below by the radius (with the trivial exception of an even path). We find a recipe for the Randic index when decomposing a graph into articulation components and verify the conjecture for cactus graphs. ## 2. Definitions For a graph \(G\), we let \(V(G)\) be the vertex set and \(E(G)\) the edge set. We write the edge between adjacent vertices \(u\) and \(v\) as \([u,v]\). Unless otherwise stated, all graphs are connected. ### Blocks and the BC-tree A special type of vertex we will use extensively is an _articulation point_, also known as a _separating vertex_ or _cut vertex_. This is a vertex whose removal would increase the number of components. A graph without articulation points is _nonseparable_ or _2-connected_, and a maximal nonseparable subgraph is a _block_ (sometimes called a _2-connected component_); note each articulation point is in all incident blocks. We denote the set of blocks \(B(G)\). To describe the block structure, we may draw a new graph \(\mathfrak{G}\), called the _block-cutpoint tree_, often shortened to _BC-tree_. We assign one vertex \(\mathfrak{a}\) for each articulation point \(a\in V(G)\) and one vertex \(\mathfrak{B}\) for each block \(B\in B(G)\). If \(a\in B\), we further assign an edge \([\mathfrak{a},\mathfrak{B}]\). If \(H\) is a subgraph of \(G\) whose BC-tree is \(\mathfrak{H}\), then \(\mathfrak{H}\) is a subgraph of \(\mathfrak{G}\), and we call \(H\) the _shadow_ of \(\mathfrak{H}\). We will regularly use the _articulation components_ or _branches_ at an articulation point \(a\), the maximal subgraphs which do not contain \(a\) as an articulation point; equivalently, they are the shadows in \(G\) of the connected components of \(\mathfrak{G}-\mathfrak{a}\) (note that, if we allow disconnected graphs, then it is traditional to require that the articulation component contain \(a\), i.e., it must come from the same component as \(a\)). We extend this definition to a block: if \(B\) is a block, the _articulation components at \(B\)_ are the maximal connected subgraphs which do not contain \(B\) and do not have any vertex of \(B\) as an articulation point; equivalently, they are the shadows in \(G\) of the connected components of \(\mathfrak{G}-\mathfrak{B}\). Note that there may be a complicated relationship between the two types of articulation components. If \(a\) is an articulation point on block \(B\), then the set of articulation components at \(a\) contains one subgraph for each of the blocks incident to \(a\).
In contrast, the set of articulation components at \(B\) splits apart the single component at \(a\) which contains \(B\) (and removes \(B\) from each of its pieces), while it combines the remaining components into one. ### Eccentricity and its extension to blocks Let \(\operatorname{dist}(v_{1},v_{2})\) be the standard distance metric on a graph, in other words, the minimum number of edges in a path between the vertices \(v_{1}\) and \(v_{2}\); such a shortest path may be called a _geodesic_. We extend this idea of distance to blocks: for a vertex \(v\in V(G)\) and a block \(B\in B(G)\), \[\operatorname{dist}(v,B)=\operatorname{dist}(B,v)=\min_{u\in V(B)} \operatorname{dist}(u,v)\] and, for two blocks \(B_{1},B_{2}\in B(G)\), \[\operatorname{dist}(B_{1},B_{2})=\min_{v_{i}\in B_{i}}\operatorname{dist}(v_{ 1},v_{2}).\] We may also borrow the traditional idea of _eccentricity_, or how far away the graph tends from a given vertex \(v\), \[\operatorname{ecc}(v)=\max_{u\in V(G)}\operatorname{dist}(u,v).\] A vertex of maximal distance from a given vertex \(v\) is called an _eccentric vertex_ of \(v\). We may extend the concept of eccentricity to blocks (and other subgraphs) as well, as Buckley and Harary did [1, Section 2.4]: \[\operatorname{ecc}(B)=\max_{v\in V(G)}\operatorname{dist}(B,v).\] Since \(\operatorname{ecc}(B)\) ranges over all possible vertices in \(B\), the eccentricity of the block is bounded above by the eccentricity of its vertices, though it may not be realized by any given vertex (for example, consider a 4-cycle with a leaf attached at each vertex: the cycle has block eccentricity 1 while its vertices have eccentricity 3). **Proposition 2.1**.: _A block's eccentricity is bounded by its vertices' eccentricity:_ \[\operatorname{ecc}(B)\leq\min_{v\in V(B)}\operatorname{ecc}(v).\] Traditionally, a vertex with minimal eccentricity is called a _central point_ and the set of them the _center_, and its eccentricity is named the _radius_, \[\operatorname{r}(G)=\min_{v\in V(G)}\operatorname{ecc}(v).\] We extend such ideas to blocks. If a block realizes the minimum block eccentricity, we call it a _central block_, and we declare its eccentricity to be the _block radius_: \[\operatorname{br}(G)=\min_{B\in B(G)}\operatorname{ecc}(B).\] This value is frequently much smaller than the radius (e.g., the 4-cycle with a leaf at each vertex has the cycle as the central block with block eccentricity 1, while the pendants have block eccentricity 3; the articulation points inherit the maximum of their incident block eccentricities, or 3). As a direct result of Proposition 2.1, **Proposition 2.2**.: _Block radius is bounded by vertex radius:_ \[\operatorname{br}(G)\leq\operatorname{r}(G).\] Graphs divide into two different groups by behavior which are related to the relationship between \(\operatorname{r}(G)\) and \(\operatorname{br}(G)\). We define _type A_ to be the graphs where the invariants are equal and _type B_ where they are not. Theorem 3.3 will show that a type A graph is one with multiple central blocks and a type B graph one with a unique central block. We would be remiss not to revisit maximal eccentricity as well. 
The maximal eccentricity of a vertex is the _diameter_, and a vertex realizing it is a _peripheral vertex_ (the set of all such is the _periphery_), \[\operatorname{d}(G)=\max_{v\in V(G)}\operatorname{ecc}(v).\] It is tempting to define block diameter as maximal block eccentricity, but such a definition would not be compatible with vertex diameter, as the block periphery would not necessarily contain the vertex periphery. To the contrary, we could define a peripheral block to be a block containing a peripheral vertex; further motivation for such a definition would be that we could define an upper distance \[\operatorname{dist}^{\prime}(B,v)=\max_{u\in V(B)}\operatorname{dist}(u,v)\] and an upper eccentricity \[\operatorname{ecc}^{\prime}(B)=\max_{v\in V(B)}\operatorname{ecc}(v),\] in which case the block with the maximal upper eccentricity would be exactly a block containing a peripheral vertex, and the maximum such eccentricity would be equal to the traditional vertex diameter. We will not explore these concepts further as they would not contribute significantly to our investigations. ### Block eccentricity vs. eccentricity within the BC-tree We must take care not to confuse these concepts of eccentricity, radius, and diameter for blocks with their synonymous versions within the BC-tree itself. For example, if \(G\) is any graph with two blocks \(B_{1}\) and \(B_{2}\) and articulation point \(a\), then \(\mathfrak{B}_{\mathtt{1}}\) and \(\mathfrak{B}_{\mathtt{2}}\) will necessarily have the same vertex eccentricity in \(\mathfrak{G}\), while \(B_{1}\) and \(B_{2}\) may have very different block eccentricities in \(G\), depending on their relative sizes. ## 3. Block eccentricity and centrality We now review the impact of separating structures on geodesics and distance measures before we arrive at our first main result in Theorem 3.3, a characterization of type A and type B graphs in terms of the number and properties of their central points and central blocks as well as the relation between block and vertex radius. We will conclude with an application to self-centered graphs and a remark on the locations of eccentric vertices with respect to the central block. ### Separating structures and distance measures The rationale behind the name _separating vertex_ as a synonym for articulation point is that such a vertex \(a\) may be said to _separate_ two other vertices \(v_{1}\) and \(v_{2}\) if all paths between \(v_{1}\) and \(v_{2}\) must pass through \(a\); we may likewise say that one vertex or block _separates_ any two other vertices or blocks (although we will forbid the improper case of a block separating two non-articulation points inside it, or a non-articulation point inside it from anything else; this will prevent some obstreporous cases which would violate Proposition 3.1 and later results). If a vertex or block is separating in this fashion, it has immediate consequences for distance measurements. 
**Proposition 3.1**.: _If a vertex \(a\) separates two other vertices \(v_{1}\) and \(v_{2}\), then:_ \[\operatorname{dist}(v_{1},v_{2})=\operatorname{dist}(v_{1},a)+\operatorname{dist}(a,v_{2}).\] _If \(a\) separates two blocks \(B_{1}\) and \(B_{2}\):_ \[\operatorname{dist}(B_{1},B_{2})=\operatorname{dist}(B_{1},a)+\operatorname{dist}(a,B_{2}).\] _If \(a\) separates vertex \(v_{1}\) and block \(B_{2}\):_ \[\operatorname{dist}(v_{1},B_{2})=\operatorname{dist}(v_{1},a)+\operatorname{dist}(a,B_{2}).\] _If a block \(B\) separates two vertices \(v_{1}\) and \(v_{2}\), and if \(a_{i}\) is the articulation point in \(B\) which separates it from \(v_{i}\), then_ \[\operatorname{dist}(v_{1},v_{2})=\operatorname{dist}(v_{1},B)+\operatorname{dist}(a_{1},a_{2})+\operatorname{dist}(B,v_{2}).\] _If \(B\) separates two other blocks \(B_{1}\) and \(B_{2}\) and \(a_{i}\) is the articulation point in \(B\) separating it from \(B_{i}\), then_ \[\operatorname{dist}(B_{1},B_{2})=\operatorname{dist}(B_{1},B)+\operatorname{dist}(a_{1},a_{2})+\operatorname{dist}(B,B_{2}).\] _Finally, if \(B\) separates vertex \(v_{1}\) and block \(B_{2}\) where \(a_{1}\) and \(a_{2}\) are the articulation points in \(B\) separating it from \(v_{1}\) and \(B_{2}\) respectively, then_ \[\operatorname{dist}(v_{1},B_{2})=\operatorname{dist}(v_{1},B)+\operatorname{dist}(a_{1},a_{2})+\operatorname{dist}(B,B_{2}).\] ### Central blocks vs. central points We begin by exploring how our concept of central block relates to the traditional concept of central point. For context, we review the motivating 1953 result of Harary and Norman [13] and rephrase the proof in terms of our concept of block eccentricity. (Note "Husimi trees" in the title of the original manuscript refers to cactus graphs, and the lemma statement calls a block a "star.") **Lemma 3.2**.: _[_13_, Lemma 1]_ _There is a block which contains all central points._ Proof.: Say the graph \(G\) has two central points \(v_{1}\) and \(v_{2}\) which are not in the same block. Then there must be some articulation point \(a\) separating them. Consider the articulation components at \(a\), and let \(G_{i}\) be the component containing \(v_{i}\) and \(G_{0}\) be the graph union of the remaining components. In other words, \(G_{1}\) is \(a\) and everything that \(a\) does not separate from \(v_{1}\); \(G_{2}\) is the same for \(v_{2}\); and \(G_{0}\) is \(a\) and everything else. By Proposition 3.1, the vertices in \(G_{0}\) and \(G_{1}\) are strictly closer to \(a\) than they are to \(v_{2}\), so their distance to \(a\) is less than \(\operatorname{ecc}(v_{2})=\operatorname{r}(G)\); similarly, the vertices in \(G_{2}\) (and again \(G_{0}\)) are closer to \(a\) than to \(v_{1}\), so their distance to \(a\) also is less than \(\operatorname{ecc}(v_{1})=\operatorname{r}(G)\). Thus, \(a\) has a lower eccentricity than \(\operatorname{r}(G)\), which is a contradiction. This idea of a block which contains all the central points corresponds exactly to our new definition of a central block, and the two graph types, A and B, correspond to the properties of the central block(s). **Theorem 3.3**.: _A block is central iff it contains every central point. Further, the graph falls into one of two types:_ 1. _Type A: There is a unique central point which is an articulation point, all blocks containing it are central blocks, and_ \[\operatorname{br}(G)=\operatorname{r}(G).\] 2.
_Type B: There is a unique central block_ \(B\) _which contains every central point, and_ \[\operatorname{br}(G)<\operatorname{r}(G).\] **Example 3.4**.: Note that a type B graph may have a single central point, even one which is an articulation point: consider a \(6\)-cycle with vertices labelled, in order, \(v_{1},v_{2},\cdots,v_{6}\). Add a pendant at \(v_{2}\) and \(v_{4}\) and a length \(2\) path at \(v_{6}\). Now \(v_{6}\) is the unique center with vertex eccentricity \(3\), and the \(6\)-cycle is the unique central block with block eccentricity \(2\). See Corollary 4.3 for a complementary result to Theorem 3.3(2), that \[\operatorname{r}(G)-\operatorname{d}(B)\leq\operatorname{br}(G).\] It is important to note that a comparable result is not true for peripheral blocks: for example, for a \(6\)-cycle and a \(4\)-cycle joined at a single vertex, the \(4\)-cycle is the only peripheral block (with block eccentricity \(3\), as opposed to the \(6\)-cycle at \(2\)), but the diameter is realized by a point from each cycle. Proof of Theorem 3.3.: First, we prove that every central block contains every central point by repeated applications of Proposition 3.1. This will also imply that either there is a unique central block or else there are multiple central blocks intersecting at a unique central point. Say there is a central block \(B\) and a central point \(v\not\in B\). Let \(a\) be any articulation point separating \(v\) from \(B\) (in particular, \(a\neq v\), although \(a\) may be in \(B\)). Consider the articulation components at \(a\): let \(G_{B}\) be the articulation component containing \(B\) (i.e., everything \(a\) does not separate from \(B\)), let \(G_{v}\) be the component containing \(v\) (i.e., everything \(a\) does not separate from \(v\)), and let \(G_{0}\) be the union of the remaining components (or, if there are none, set \(G_{0}=\{a\}\)). These subgraphs cover \(G\) and intersect pairwise at \(a\). Now let \(u\) be an eccentric vertex of \(a\), that is, \[\operatorname{dist}(a,u)=\operatorname{ecc}(a)\geq\operatorname{r}(G).\] This vertex must be in \(G_{v}\). If it were not, then \(a\) would separate \(u\) from \(v\), so \[\operatorname{dist}(u,v)=\operatorname{dist}(u,a)+\operatorname{dist}(a,v)\geq 1+\operatorname{r}(G),\] yet \(\operatorname{ecc}(v)=\operatorname{r}(G)\) (note our assumptions implied \(G\) is not an isolated vertex, so \(u\neq a\)). Thus \(u\not\in G_{B}\), or \(a\) separates \(u\) from \(B\), so \[\operatorname{dist}(B,u)=\operatorname{dist}(B,a)+\operatorname{dist}(a,u)\geq\operatorname{r}(G).\] We also know \[\operatorname{dist}(B,u)\leq\operatorname{ecc}(B)=\operatorname{br}(G)\leq\operatorname{r}(G).\] Therefore, the inequalities are equalities, which means that \(\operatorname{dist}(B,a)=0\), so \(a\in B\); that \(\operatorname{dist}(a,u)=\operatorname{r}(G)\), so \(a\) is a central point; and that \(\operatorname{br}(G)=\operatorname{r}(G)\). We now show that these three conclusions together lead to a contradiction. Since \(a\) was an arbitrary vertex separating \(v\) from \(B\), then \(a\in B\) implies that \(v\) must be in a block adjacent to \(B\), call it \(B_{v}\), and \(B\) and \(B_{v}\) share \(a\) as an articulation point.
If we calculate \(\operatorname{ecc}(B_{v})\), it will be too small: Say \(u\not\in G_{v}\), that is, \(u\) is separated from the central point \(v\) by \(a\), so \[\operatorname{dist}(u,B_{v})=\operatorname{dist}(u,a)<\operatorname{dist}(u,v)=\operatorname{r}(G).\] Additionally, if \(u\in G_{v}\), either \(u\in B_{v}\) and \(\operatorname{dist}(u,B_{v})=0\), or else \(B_{v}\) separates \(u\) from \(a\) (since \(a\in B_{v}\) and \(a\) is not an articulation point in \(G_{v}\)), in other words, \[\operatorname{dist}(u,B_{v})<\operatorname{dist}(u,a)=\operatorname{r}(G).\] Therefore, \(\operatorname{ecc}(B_{v})<\operatorname{r}(G)\), which is not compatible with \(\operatorname{br}(G)=\operatorname{r}(G)\). To see that any block containing the center is central, and to check the remaining conditions, we consider two separate cases. Say \(G\) has multiple central blocks. Their intersection must contain the set of central points, so there must be a unique central point \(a\) which is an articulation point. We need to show that \(\operatorname{br}(G)=\operatorname{r}(G)\), so that the graph is type A, and also that every block containing \(a\) is indeed a central block. Consider the set of articulation components at \(a\), call them \(\{G_{i}\}\), and let \(B_{i}\) be the block in each \(G_{i}\) which contains \(a\). Then \(a\) has some eccentric vertex \(v\); without loss of generality, \(v\in G_{1}\) and (since there are multiple central blocks) \(B_{2}\) is a central block. Since \(a\) separates \(v\) from \(B_{2}\), \[\operatorname{dist}(B_{2},v)=\operatorname{dist}(a,v)=\operatorname{r}(G),\] that is, \(\operatorname{br}(G)=\operatorname{ecc}(B_{2})\geq\operatorname{r}(G)\). In fact, no \(B_{i}\) has greater eccentricity: since \(a\) is not an articulation point in any \(G_{i}\), it is separated from \(G_{i}-B_{i}\) by \(B_{i}\), that is, \[\operatorname{dist}(u,B_{i})<\operatorname{dist}(u,a)\leq\operatorname{r}(G)\] for all \(u\in G_{i}\), and \(B_{i}\) is separated from any other articulation component by \(a\), so \[\operatorname{dist}(u,B_{i})=\operatorname{dist}(u,a)\leq\operatorname{r}(G)\] for any \(u\not\in G_{i}\). Say \(G\) has a unique central block \(B\). We need only verify \(\operatorname{br}(G)<\operatorname{r}(G)\). Let the articulation components of the graph at \(B\) be called \(\{B\cup G_{i}\}\), let \(a_{i}\) be the articulation point shared by \(B\) and \(G_{i}\), and let \(B_{i}\) be the block in each \(G_{i}\) containing \(a_{i}\). Recall Proposition 2.2 that \(\operatorname{br}(G)\leq\operatorname{r}(G)\) and assume \(\operatorname{br}(G)=\operatorname{r}(G)\); then there is some vertex \(v\) of distance \(\operatorname{r}(G)\) from \(B\), say \(v\in G_{1}\), in which case \[\operatorname{dist}(a_{1},v)=\operatorname{dist}(B,v)=\operatorname{r}(G).\] If \(u\) is any vertex not in \(G_{1}\), then \(a_{1}\) separates \(u\) from \(v\), so \[\operatorname{dist}(u,v)>\operatorname{dist}(a_{1},v)=\operatorname{r}(G),\] and \(u\) is not a central point. In particular, no vertex in \(B\) other than \(a_{1}\) may be central, but \(B\) must contain a non-trivial center, so \(a_{1}\) is a central vertex. Additionally, since \(a_{1}\in B_{1}\), Proposition 2.1 says \[\operatorname{ecc}(B_{1})\leq\operatorname{ecc}(a_{1})=\operatorname{r}(G),\] and \(B_{1}\) is also a central block, which is a contradiction.
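The dichotomy of Theorem 3.3 can be checked computationally on small graphs; the following brute-force sketch (helper names are ours) uses networkx to classify the graph of Example 3.4 and confirms that its unique central block is the 6-cycle.

```python
# Brute-force companion to Theorem 3.3 and Example 3.4.
import networkx as nx

def block_eccentricity(G, block, dist):
    """ecc(B) = max over vertices v of min over u in B of dist(u, v)."""
    return max(min(dist[u][v] for u in block) for v in G.nodes)

def classify(G):
    """Return (type, block radius, vertex radius, list of central blocks)."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    blocks = [frozenset(b) for b in nx.biconnected_components(G)]
    ecc = {b: block_eccentricity(G, b, dist) for b in blocks}
    br, r = min(ecc.values()), nx.radius(G)
    central = [b for b in blocks if ecc[b] == br]
    return ("A" if br == r else "B"), br, r, central

# Example 3.4: a 6-cycle v1..v6 with pendants at v2 and v4 and a length-2 path at v6
G = nx.cycle_graph([1, 2, 3, 4, 5, 6])
G.add_edges_from([(2, 7), (4, 8), (6, 9), (9, 10)])
kind, br, r, central = classify(G)
print(kind, br, r)           # B 2 3
print(sorted(central[0]))    # [1, 2, 3, 4, 5, 6]: the 6-cycle is the unique central block
print(nx.center(G))          # [6]: the single central point lies in that block
```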
As an immediate corollary, we can also find a necessary condition for a graph to have minimal diameter. These are known as _self-centered graphs_ because every vertex has the same eccentricity and so is a central vertex. **Corollary 3.5**.: _If \(\operatorname{r}(G)=\operatorname{d}(G)\), then \(G\) is non-separable._ ### Eccentric vertices We now investigate the implications of central blocks for eccentric vertices and geodesics realizing eccentricity. **Theorem 3.6**.: _Let \(G\) be a graph._ 1. _If_ \(G\) _is type A with_ \(a\) _the central point and_ \(\{G_{i}\}\) _the articulation components at_ \(a\)_, then any_ \(v\in G_{i}\) _has at least one eccentric vertex outside of_ \(G_{i}\)_. In particular,_ \(a\) _itself has eccentric vertices in at least two different articulation components_ \(G_{i}\)_._ 2. _If_ \(G\) _is type B with_ \(B\) _the central block and_ \(\{G_{i}\}\) _the articulation components at_ \(B\)_, then any_ \(v\in G_{i}\) _has all its eccentric vertices outside_ \(G_{i}\)_._ _In particular, any \(v\) has an eccentric vertex \(u\) so that the geodesic from \(v\) to \(u\) passes through at least a central block._ **Example 3.7**.: Note that Theorem 3.6(1) does not prevent \(v\) from having eccentric vertices in its own \(G_{i}\). Consider a square pyramid, i.e., a 4-cycle with another vertex at the peak joined to each of the cycle's vertices by an edge. Duplicate the pyramid and identify the peaks to make a vertex \(a\): then \(a\) is the unique center and an articulation point with 2 articulation components and eccentricity 1, realized by the other 4 vertices in each component. Any other vertex has eccentricity 2 with 5 eccentric vertices, all 4 vertices other than \(a\) in the other articulation component and the non-adjacent vertex from its own component. **Example 3.8**.: Similarly, Theorem 3.6(2) does not specify whether \(v\)'s eccentric vertices are in \(B\) or another \(G_{i}\). Consider a 4-cycle and a pair of 3-cycles attached at adjacent vertices. The 4-cycle is the central block \(B\) with block eccentricity 1, and both of its articulation points are central points with eccentricity 2 (realized at the opposite vertex on the 4-cycle and at the pair of non-articulation points on the 3-cycle it does not touch). A non-articulation point on one of the 3-cycles has 3 eccentric vertices, one on the 4-cycle and the other 2 on the other 3-cycle. Proof of Theorem 3.6.: Say \(G\) is type A. Let \(B_{i}\) be the block in \(G_{i}\) which contains the central point \(a\). By Theorem 3.3(1), \(B_{i}\) is central and has eccentricity \(\operatorname{r}(G)\); since it is strictly closer to all vertices in \(G_{i}\) than \(a\) is, its block eccentricity is not realized in \(G_{i}\). That is, for any chosen \(G_{i}\), there is some vertex \(v_{j}\) in another \(G_{j}\) of distance \(\operatorname{r}(G)\) from \(B_{i}\), i.e., \(\operatorname{dist}(a,v_{j})=\operatorname{r}(G)\). 
Now consider any vertex \(u_{i}\in G_{i}\): its distance to any \(v_{i}\in G_{i}\) is \[\operatorname{dist}(u_{i},v_{i})\leq\operatorname{dist}(u_{i},a_{i})+ \operatorname{dist}(a_{i},v_{i})\leq\operatorname{dist}(u_{i},a_{i})+ \operatorname{r}(G);\] however, it is separated from \(v_{j}\) by \(a\), and so its distance to \(v_{j}\) is \[\operatorname{dist}(u_{i},v_{j})=\operatorname{dist}(u_{i},a)+ \operatorname{dist}(a,v_{j})=\operatorname{dist}(u_{i},a)+\operatorname{r}(G).\] In other words, while the eccentricity of \(u_{i}\) may be realized inside its own \(G_{i}\), it is definitely realized in another \(G_{j}\). Say \(G\) is type B, and let \(a_{i}\) be the articulation point shared by \(G_{i}\) and the central block \(B\). Theorem 3.3(2) says \(\operatorname{ecc}(B)<\operatorname{r}(G)\), but \(\operatorname{ecc}(a_{i})\geq\operatorname{r}(G)\), so the eccentricity of any \(a_{i}\) is realized in \(B\) or another \(G_{j}\), i.e., there is some \(v_{j}\in G_{j}\) with \(\operatorname{dist}(a_{i},v_{j})\geq\operatorname{r}(G)\). For any other \(u\in G_{i}\), note that \(v_{i}\in G_{i}\) obeys \[\operatorname{dist}(u,v_{i})\leq\operatorname{dist}(u,a_{i})+ \operatorname{dist}(a_{i},v_{i})<dist(u,a_{i})+\operatorname{r}(G)\] while \(v_{j}\) obeys \[\operatorname{dist}(u,v_{j})=\operatorname{dist}(u,a_{i})+\operatorname{dist}( a_{i},v_{j})\geq\operatorname{dist}(u,a_{i})+\operatorname{r}(G).\] That is, the eccentricity of \(u\) is not realized in \(G_{i}\). ## 4. Application: Bounding radius and diameter Recall the classical bound \[\operatorname{r}(G)\leq\operatorname{d}(G)\leq 2\operatorname{r}(G).\] A cycle with an even number of edges realizes the lower bound, a path with an even number of edges realizes the upper bound, and the intermediate values are realized by, for example, the minimal order graphs of Ostrand [1]. While the classical bound is sharp, it still leaves some opportunities for further refinement; for example, we verify that graphs of type A all satisfy the upper bound, and graphs of type B may be given a sharper range using the diameter of the central block. **Theorem 4.1**.: _If \(G\) is type A,_ \[\operatorname{d}(G)=2\operatorname{r}(G).\] _If \(G\) is type B with central block \(B\),_ \[2\operatorname{r}(G)-\operatorname{d}(B)\leq\operatorname{d}(G)\leq 2 \operatorname{r}(G).\] The lower bound is an improvement of the classical one when \(\operatorname{d}(B)<\operatorname{r}(G)\), which is not rare in graphs with a significant number of articulation points or where several blocks are of comparable size, for example, in the wedge of two cycles of similar (but not the same) size. The proof below also demonstrates a new upper bound, \[\operatorname{d}(G)\leq 2\operatorname{br}(G)+\operatorname{d}(B),\] which is an improvement upon the classical one when \(\frac{1}{2}\operatorname{d}(B)<\operatorname{r}(G)-\operatorname{br}(G)\), which is rarer but still possible. Let us examine a situation where the lower bound is sharp, which will motivate the proof for the general case: **Example 4.2**.: Let \(G\) be a graph consisting of a \(2n\)-cycle \(B\) with a path of length \(l\) attached at each of its vertices. Then the diameter is realized by the endpoints of two paths attached at opposite points on the cycle, that is, \(\operatorname{d}(G)=2l+n\). 
Similarly, each vertex on the cycle itself is central, and the most distant vertex is the endpoint of the path immediately opposite it, so \(\operatorname{r}(G)=l+n.\) Since \(\operatorname{d}(B)=n\), \(\operatorname{d}(G)=2\operatorname{r}(G)-\operatorname{d}(B)\). Proof of Theorem 4.1.: For type A graphs, if \(a\) is the unique central point and \(G_{i}\) are the articulation components at \(a\) with \(B_{i}\) the block in \(G_{i}\) containing \(a\), then we verified in Theorem 3.3 that \(\operatorname{br}(G)=\operatorname{r}(G)\) and the \(B_{i}\) are central, i.e., \(\operatorname{ecc}(B_{i})=\operatorname{ecc}(a)\), and there are at least two \(G_{i}\) with eccentric vertices of \(a\), say \(v_{1}\in G_{1}\) and \(v_{2}\in G_{2}\). Then \(\operatorname{dist}(v_{1},v_{2})=2\operatorname{r}(G)\). For type B, for the lower bound, let \(B\) be the central block with articulation points \(a_{i}\) and articulation components \(G_{i}\) (where \(a_{i}\in G_{i}\)). If \(\operatorname{d}(B)\geq\operatorname{r}(G)\), then \(2\operatorname{r}(G)-\operatorname{d}(B)\leq\operatorname{r}(G)\), and the bound is no stronger than the classical one. Assume \(\operatorname{d}(B)<\operatorname{r}(G)\), in which case no vertex in \(B\) can have an eccentric vertex in \(B\). Select any vertex \(v\in B\) and select one of its eccentric vertices, without loss of generality, \(u_{1}\in G_{1}\). Then \[\operatorname{dist}(a_{1},u_{1})=\operatorname{dist}(v,u_{1})-\operatorname{ dist}(v,a_{1})\geq\operatorname{r}(G)-\operatorname{d}(B).\] Now \(a_{1}\) has its own eccentric vertices, including one we will name \(u_{2}\), which is not in \(G_{1}\) by Theorem 3.6. Then \[\operatorname{dist}(u_{1},u_{2})=\operatorname{dist}(u_{1},a_{1})+\operatorname{ dist}(a_{1},u_{2})\geq 2\operatorname{r}(G)-\operatorname{d}(B).\] The inequality is attained, as shown by Example 4.2. For the upper bound on type B, consider two vertices of maximal distance, say \(u_{1}\) and \(u_{2}\). By Theorem 3.6, they cannot be located in the same \(G_{i}\), i.e., any path between them must pass through \(B\). Say \(u_{1}\in G_{1}\) and \(u_{2}\in G_{2}\): then \[\operatorname{dist}(u_{1},u_{2})=\operatorname{dist}(u_{1},a_{1})+\operatorname {dist}(a_{1},a_{2})+\operatorname{dist}(a_{2},u_{2})\leq 2\operatorname{ecc}(B)+ \operatorname{d}(B).\] If \(u_{1}\in G_{1}\) and \(u_{2}\in B\), the equation is even simpler: \[\operatorname{dist}(u_{1},u_{2})=\operatorname{dist}(u_{1},a_{1})+\operatorname {dist}(a_{1},u_{2})\leq\operatorname{ecc}(B)+\operatorname{d}(B).\] This theorem allows us to prove a complementary lower bound to Theorem 3.3(2). **Corollary 4.3**.: _For a graph \(G\) of type B with central block \(B\),_ \[\operatorname{r}(G)-\operatorname{br}(G)\leq\operatorname{d}(B).\] Proof.: Theorem 4.1 says \[2\operatorname{r}(G)-\operatorname{d}(B)\leq 2\operatorname{br}(G)+ \operatorname{d}(B),\] and the result follows. ## 5. Application: A subgraph realizing maximal eccentricity We propose a subgraph which can be used to study both the vertex and block eccentricity. As suggested by the proofs of Lemma 3.2 and Theorem 3.3, much of the block structure of a graph is redundant: there is a unique central block or central articulation point), and the graph branches out in articulation components from this central block/vertex. We may reduce the complexity of the graph by pruning down the number and size of such components. **Construction 5.1**.: Let \(G\) be a graph. 1. 
Say \(G\) is type A with \(a\) the central point and \(\{G_{i}\}\) the articulation components at \(a\). For each \(i\), identify a vertex in \(G_{i}\) of maximal distance from \(a\). Retain a geodesic from this vertex to \(a\), along with all blocks from which it takes at least one edge. Delete the rest of \(G_{i}\). 2. Say \(G\) is type B with \(B\) the central block and \(\{G_{i}\}\) the articulation components at \(B\). For each \(i\), identify a vertex in \(G_{i}\) of maximal distance from \(B\). Retain a geodesic from this vertex to \(B\), along with all blocks from which it takes at least one edge. Delete the rest of \(G_{i}\). **Theorem 5.2**.: _Let \(G\) be a connected graph and \(G^{\prime}\) be the altered graph given by Construction 5.1:_ 1. \(G^{\prime}\) _is a connected subgraph of_ \(G\)_._ 2. \(\mathfrak{G}^{\prime}\) _is a subgraph of_ \(\mathfrak{G}\)_._ 3. \(\mathfrak{G}^{\prime}\) _is a path or starlike tree._ 4. \(G^{\prime}\) _has the same central points and blocks and the same vertex and block radius as_ \(G\) _._ 5. _The peripheral vertices of_ \(G^{\prime}\) _are a subset of the peripheral vertices of_ \(G\)_, and it has the same vertex diameter as_ \(G\)_._ Recall that a _starlike_ tree has exactly one vertex of degree greater than \(2\). Proof.: We may examine the construction in the BC-tree rather than in \(G\): for a graph of type A (respectively, type B), we pick a vertex \(v\) in \(G_{i}\) of maximal distance from \(a\) (resp., \(B\)), say \(v\) is in a block called \(D\), and find a path in \(\mathfrak{G}\) from \(\mathfrak{D}\) to \(\mathfrak{a}\) (resp., \(\mathfrak{B}\)). Alter \(\mathfrak{G}\) one step at a time by deleting leaves and their accompanying pendant edges, reflecting each deletion in \(G\) as we go by removing a block. Deleting a leaf \(\mathfrak{D}\) in \(\mathfrak{G}\) will correspond to deleting a block \(D\) in \(G\) which was connected to a single articulation point \(a_{D}\), which will reduce the size and order of the graph but not disconnect it. That is, \(G^{\prime}\) is connected, and \(G^{\prime}\) and \(\mathfrak{G}^{\prime}\) are subgraphs of \(G\) and \(\mathfrak{G}\), respectively. Additionally, as long as \(\deg\mathfrak{a}_{\mathfrak{D}}\geq 3\), removing \(D\) reduces the number of blocks intersecting at \(a_{D}\) but leaves it an articulation point, so \(\mathfrak{G}^{\prime}\) is the BC-tree of \(G^{\prime}\). On the other hand, if \(\deg\mathfrak{a}_{\mathfrak{D}}=2\), then \(a_{D}\) is an articulation point between two blocks, and removing \(D\) renders it a non-articulation point and converts \(\mathfrak{a}_{\mathfrak{D}}\) to a leaf itself, so \(\mathfrak{G}^{\prime}\) is not a BC-tree of anything until we also remove \(\mathfrak{a}_{\mathfrak{D}}\) and its pendant edge. By construction, all central blocks and so also all central points are retained in \(G^{\prime}\). Additionally, any \(u\) and \(v\) which survive to \(G^{\prime}\) will have the same distance there which they did in \(G\): removing blocks which correspond to leaves in the BC-tree will not interfere with geodesics between surviving vertices or affect the relative distance between them; in particular, the eccentricity of any vertex or block will not increase from \(G\) to \(G^{\prime}\). Similarly, the vertex eccentricity of \(a\) or the block eccentricity of \(B\) with respect to each \(G_{i}\) will remain the same. We next show that the radius does not change. 
If \(G\) is type A, then the eccentricity of \(a\) will not change: if \(v_{i}\) is a vertex in \(G_{i}\) of maximal distance from \(a\), then either \(v_{i}\) or some other vertex of equal distance in \(G_{i}\) will survive to \(G^{\prime}\). Additionally, if vertex \(v\in G_{i}\) survives to \(G^{\prime}\), then its eccentricity also does not change by Theorem 3.6 since it is realized by at least one vertex in some other \(G_{j}\), and this vertex is separated from \(v\) by \(a\), so either it or another vertex of equal distance in \(G_{j}\) will survive. Similarly, if \(G\) is type B, then the eccentricities of all the vertices in \(B\) will also remain the same because the distance between vertices within \(B\) will be unchanged, and the eccentricity of \(B\) and so all its vertices with respect to any given \(G_{i}\) will be the same from \(G\) to \(G^{\prime}\). Additionally, the eccentricity of no vertex outside \(B\) will change: by Theorem 3.6, the eccentricity of a vertex in \(G_{i}\) is attained only by some vertex from which it is separated by \(a_{i}\), and that vertex is either in \(B\) (so will survive) or is in some other \(G_{j}\) (in which case it or another vertex of the same distance will survive). We finally show that the diameter does not change. A type A graph remains type A because its central blocks remain central, and its radius is preserved, so its diameter is also preserved. For a type B graph, let \(u\) and \(v\) be vertices of maximal distance. Their geodesic must pass through \(B\) by Theorem 3.6, and any portion of it in \(B\) is preserved. If it contains any portion in some \(G_{i}\), then it passes through some \(a_{i}\), and, even if that portion does not itself survive, some other geodesic in \(G_{i}\) of the same length will. Note that this construction could be optimized further. For example, if we were interested exclusively in diameter or in type A graphs, we could select two vertices of maximal distance in different \(G_{i}\) and preserve only the path between them (which would retain the central block if type B and central point if type A); even if we wish only to study radius in type B graphs, we may still first select a set of all eccentric points of the central points along with geodesics back to their central points and retain the blocks with edges in these geodesics. ## 6. Application: The Randic index of a cactus graph We now turn towards an invariant from mathematical chemistry which is used to study the branching of a graph. It was originally defined by Milan Randic in 1975 in an attempt to mathematically characterize branching in a way consistent with boiling point and other structure-related properties such as enthalpy of formation of alkanes and the relationship of vapor pressure to temperature [11]. It has been experimentally verified to be associated with the boiling points and reactivity of hydrocarbons and has now become a standard tool for evaluating molecular structure in quantitative structure-activity relationship (QSAR) models, that is, regression models that predict biological activity, physicochemical properties, and toxicological responses of chemical compounds based on their molecular structure (see, for example, [12, 13, 14, 15]). Randic originally formulated his invariant in terms of the graph adjacency matrix, although the common formulation today is due to Balaban [1].
First, define the _weight_ of an edge \([u,v]\) to be: \[w[u,v]=\frac{1}{\sqrt{\deg u\,\deg v}}\] and then define the Randic index of \(G\) as: \[\mathrm{R}(G)=\sum_{[u,v]\in E(G)}w[u,v]\] although an alternate formulation due to Caporossi, Gutman, Hansen, and Pavlovic in 2003 is [1]: \[\mathrm{R}(G)=\frac{\#V(G)-n_{0}}{2}-\sum_{[u,v]\in E(G)}w^{*}[u,v]\] where \(n_{0}\) is the number of isolated vertices and \(w^{*}[u,v]\) is a measure of asymmetry of edge weights, that is, \[w^{*}[u,v]=\frac{1}{2}\left(\frac{1}{\sqrt{\deg u}}-\frac{1}{\sqrt{\deg v}} \right)^{2},\] which is sometimes easier to engage with as it shows, e.g., that \(\mathrm{R}(G)\leq\frac{\#V(G)}{2}\). The Randic index is worth studying from a graph theoretic point of view, in particular because it identifies a type of branching not easily encapsulated by other invariants. The Graffiti computer prediction program first identified a possible link to graph radius, although it has been resistant to repeated efforts to prove it. There have since been similar investigations into diameter: **Conjecture 6.1**.: _Let \(G\) be a graph._ 1. _[_1_]_ _If_ \(G\) _is not a path with an even number of vertices,_ \(\mathrm{R}(G)\geq\mathrm{r}(G)\) _._ 2. _[_1_, Theorem 1.3]_ _If_ \(G\) _is not an even path,_ \[\mathrm{R}(G)\geq\mathrm{r}(G).\] 3. _[_1_, Theorem 5]_ _If_ \(\#V(G)\geq 3\)_,_ \[\mathrm{R}(G)\geq\mathrm{d}(G)+\sqrt{2}-\frac{\#V(G)+1}{2}.\] We will apply Construction 5.1 to a generic cactus graph and then induct from the base case of chemical graphs, so we will first need a lemma about deleting blocks: **Lemma 6.4**.: _Let \(G\) be a graph with \(a\) an articulation point and \(\{G_{i}\}\) the articulation components at \(a\). Then:_ \[\mathrm{R}(G)\geq\sum\mathrm{R}(G_{i})+\sqrt{\deg a}-\sum\sqrt{\deg a_{i}}.\] _In particular, \(\mathrm{R}(G)\geq\mathrm{R}(G_{i})\) for any \(i\)._ Proof.: Consider \(G_{i}\) as a graph and not a subgraph of \(G\). Let \(a_{i}\) be the copy of \(a\) in \(G_{i}\) and \(N(a_{i})\) all its adjacent vertices. All edges have the same weight in \(G\) and \(G_{i}\) except for those incident to \(a_{i}\). Let \(S(i)\) be the sum of the weights of all edges in \(G_{i}\) incident to \(a_{i}\), or \[S(i)=\sum_{w\in N(a_{i})}\frac{1}{\sqrt{\deg w}\sqrt{\deg a_{i}}}\leq\sum_{w\in N (a_{i})}\frac{1}{\sqrt{\deg a_{i}}}=\sqrt{\deg a_{i}}.\] Now: \[\mathrm{R}(G)-\sum\mathrm{R}(G_{i})=\sum-S(i)+S(i)\frac{\sqrt{ \deg a_{i}}}{\sqrt{\deg a}}\\ \geq\sum-\sqrt{\deg a_{i}}+\frac{\deg a_{i}}{\sqrt{\deg a}}=\sqrt {\deg a}-\sum\sqrt{\deg a_{i}}.\] Finally, a result of Bollobas and Erdos [1, Theorem 3] shows that the minimal Randic index over graphs of a fixed order is realized by the star, or \(\mathrm{R}(G)\geq\sqrt{\#V(G)-1}\), so \(\mathrm{R}(G_{i})\geq\sqrt{\deg a_{i}}\). As an immediate consequence of Theorem 5.2, we have **Theorem 6.5**.: _If \(G\) is a cactus,_ \[\mathrm{R}(G) \geq\mathrm{r}(G)+\sqrt{2}-\frac{3}{2}.\] \[\mathrm{R}(G) \geq\mathrm{d}(G)+\sqrt{2}-\frac{\#V(G)+1}{2}.\] _In particular, Conjecture 6.1 is true for cactus graphs._ Proof.: Consider a subgraph \(G^{\prime}\) given by Construction 5.1. Its BC-tree is a path or a starlike tree. Since \(G\) is a cactus, any block is either a cycle or a bridge, and each articulation point connects exactly two blocks, so an articulation point between bridges has degree \(2\); between a bridge and a cycle degree \(3\); between two cycles degree \(4\). Therefore, \(G^{\prime}\) is a chemical graph, and the result holds for \(G^{\prime}\) by the results referenced in Theorem 6.3.
By Lemma 6.4, \(G^{\prime}\) has smaller Randic index than \(G\); by Theorem 5.2, it has the same radius and diameter.
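As a quick illustration of the quantities involved, the following minimal Python sketch (our addition, not part of the paper) computes the Randic index of a small cactus with networkx and checks the two bounds of Theorem 6.5; the example graph is an arbitrary choice.

```python
# Illustrative check of Theorem 6.5 on a small cactus (example chosen arbitrarily):
# R(G) >= r(G) + sqrt(2) - 3/2  and  R(G) >= d(G) + sqrt(2) - (#V(G)+1)/2.
import math
import networkx as nx

def randic_index(G):
    """Sum of 1/sqrt(deg(u)*deg(v)) over the edges of G."""
    return sum(1.0 / math.sqrt(G.degree(u) * G.degree(v)) for u, v in G.edges())

# A small cactus: a 4-cycle with a pendant path of length 3 attached at vertex 0.
G = nx.cycle_graph(4)
nx.add_path(G, [0, 4, 5, 6])

R, r, d, n = randic_index(G), nx.radius(G), nx.diameter(G), G.number_of_nodes()
print(f"R(G) = {R:.4f}, r(G) = {r}, d(G) = {d}")
assert R >= r + math.sqrt(2) - 1.5
assert R >= d + math.sqrt(2) - (n + 1) / 2
```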
2302.14419
Quasi-periodic relativistic shells in reflecting boundaries: How likely are black holes to form?
A system of two gravitating bodies floating around a restricted region of strong gravitational field is investigated. We consider two concentric spherically symmetric timelike shells spatially constrained by a perfectly reflecting inner and outer boundary. It is shown numerically that even when the gravitational radius of a contracting shell is larger than the radius of the inner boundary, energy transfer occurs due to the intersection with the other expanding shell before the contracting shell becomes a black hole, resulting nonlinearly stable motion. The system appears to be in a permanently stable periodic motion due to the repetition of forward and reverse energy transfer. The larger the specific energy of a shell, the more stable the motion is. In addition, the motion of the null shell as the fastest limit of the timelike shell is also investigated. Unlike the timelike shell, the motion of the two null shells reduces to exact recurrence equations. By analyzing the recurrence equations, we find the null shells also allow stable motions. Using the algebraic computation of the recurrence equations, we show numerical integration is not necessary for the nonlinear dynamics of the null shells in confined geometry.
Takafumi Kokubu
2023-02-28T08:51:36Z
http://arxiv.org/abs/2302.14419v2
# Quasi-periodic relativistic shells in reflecting boundaries: ###### Abstract A system of two gravitating bodies floating around a restricted region of strong gravitational field is investigated. We consider two concentric spherically symmetric timelike shells spatially constrained by a perfectly reflecting inner and outer boundary. It is shown numerically that even when the gravitational radius of a contracting shell is larger than the radius of the inner boundary, energy transfer occurs due to the intersection with the other expanding shell before the contracting shell becomes a black hole, resulting nonlinearly stable motion. The system appears to be in a permanently stable periodic motion due to the repetition of forward and reverse energy transfer. The larger the specific energy of a shell, the more stable the motion is. In addition, the motion of the null shell as the fastest limit of the timelike shell is also investigated. Unlike the timelike shell, the motion of the two null shells reduces to exact recurrence equations. By analyzing the recurrence equations, we find the null shells also allow stable motions. Using the algebraic computation of the recurrence equations, we show numerical integration is not necessary for the nonlinear dynamics of the null shells in confined geometry. Introduction Gravitational wave astronomy starting with the recently observed gravitational wave binary black hole merger showed that general relativity is still valid in strong gravitational fields [1]. Undoubtedly, the most attractive applications of general relativity are the phenomena associated with black holes (BHs). The model of a gravitating body in motion due to self-gravity has been known for a long time [2]. Research in recent decades has shown that when self-gravitating bodies or fields are spatially constrained, their behaviors become non-trivial. In confining geometries, it is highly non-trivial whether the final fate of the bodies/fields are a BH or stable periodic motion [3; 4; 5; 6; 7; 8]. It is therefore important to investigate nonlinear gravitational phenomena in confined geometries to understand the nature of gravity. We try in this paper to answer the question of how likely BHs are to form by considering the dynamics of spatially bounded gravitating bodies in a strong gravitational field. In general, however, the nonlinear time evolutions of gravitational fields cannot be solved analytically due to gravitational wave emission from gravitational sources, and requires powerful numerical calculations. "A gravitating shell" successfully avoids such numerical difficulties. The notion of a shell, an infinitely thin matter-layer, is a simple and idealized gravitating matter. Israel formulated the full general relativistic behavior of the single shell incorporating self-gravity [9]. Shell's nonlinear behavior have solved fundamental problems on nonlinear gravitational physics [10; 11; 12]. A few decades ago, an interesting method was developed for describing the nonlinear evolution of self-gravitating spherically symmetric concentric "two shells" [13; 14; 15; 16]. This is a method to follow the time evolution after a crossing of the two shells, by assuming that each shell interacts only gravitationally at the crossing event. Hereafter we call this a two-shell system. 
The motion of two gravitating shells, which can only gravitationally interact with each other, when spatially constrained, is described by a set of first-order ordinary differential equations and hence does not require powerful numerical methods for its numerical integration. Despite this simplification, it has been reported that this two-shell system captures nonlinear gravitational phenomena, e.g., BH critical behavior [17; 18]. Thus, the two-shell system is a good way to capture the nonlinear nature of gravity. It is also known that spatially bounded two-shells generally exhibit chaotic properties [19; 20; 21; 16]. Now, to answer our question, we consider the following situation: Consider two concentric spherically symmetric dust shells spatially bounded by a perfectly reflecting rigid inner and outer boundary around a strong gravitational field. Assuming that the shells interact gravitationally, do they collapse into a BH or can they become nonlinearly stable? It is obvious that if the gravitational radius of both shells is sufficiently smaller than the inner boundary, they will not become a BH. On the other hand, it is non-trivial when the gravitational radius and the radius of the inner boundary are comparable. In fact, we will later show that the shells exhibit non-trivial behavior in such comparable situations. We comment on the relevance of this study to ultra compact objects. The situation we consider is a strong gravitational field, where the inner reflective boundary is slightly larger than the Schwarzschild gravitational radius \(2M\) (\(M\): gravitational mass). In other words, the inner boundary can be interpreted as a horizonless ultra compact object. The gravastar is one of the horizonless ultra compact objects, which is connected to the inner de-Sitter spacetime and the outer Schwarzschild spacetime by a matter layer of finite thickness [22]. A model connected by an infinitesimally-thin shell has also been constructed [23]. Since the gravastar is known to reflect almost all gravitational waves when the waves hit the gravastar's surface [24; 25], it may be possible to interpret our inner boundary as the surface of the gravastar. If such an interpretation is possible, our study can also be interpreted as a nonlinear analysis of the system consisting of the massive shell and the gravastar. The organization of the paper is as follows. In Sec. II, we set up a shell model. In Sec. III, we find the motion of two timelike shells in our confined system by numerical integration. It is numerically observed that if the shells are moving very fast, the shells do not collapse, but undergo a quasi-periodic motion. In Sec. IV, we consider the motion of two null shells, and show that the expression for the shell's equation of motion is simplified by taking the light-speed limit of the timelike shell. We also show that the motion of the null shell is reduced to the analysis of "exact recurrence equations". The null shell analysis gives us a helpful hint for the stable periodic motion observed in timelike shells. Section V is devoted to summary and discussion. We take the gravitational constant \(G\) and the speed of light \(c\) to be unity. ## II Setup In this section, we set up crossings of two concentric dust thin-shells in the Schwarzschild spacetime. The line element of the Schwarzschild spacetime is given by \[\mathrm{d}s^{2}=-f(R)\mathrm{d}t^{2}+f(R)^{-1}\mathrm{d}R^{2}+R^{2}(\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\phi^{2}),\quad f(R)=1-\frac{2M}{R}.
\tag{2.1}\] \(M\) is the gravitational energy of the spacetime. ### Single dust shell Let us first introduce a single shell as a timelike hypersurface which partitions the spacetime into the inner and the outer region. On the hypersurface the line element is given by \(\mathrm{d}s^{2}_{\Sigma}=-\mathrm{d}\tau^{2}+r(\tau)^{2}(\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\phi^{2})\) with the shell's radius \(r\) and the shell's proper time \(\tau\). By introducing Israel's junction conditions, Einstein equations for the shell are written as \[8\pi S^{a}_{\ b}=-[K^{a}_{\ b}]+[K]\delta^{a}_{\ b}, \tag{2.2}\] where \(K_{ab}\) is the extrinsic curvature and \(S_{ab}\) is the stress-energy tensor of the dust, \(S^{a}_{b}=\mathrm{diag}(-\rho,0,0)\) with the surface energy density \(\rho\). The Latin indices run over \(\tau,\theta\), and \(\phi\). We define the jump of a quantity \(X\) across the hypersurface, \([X]:=(X_{+}-X_{-})\). Subscript \(+\)\((-)\) denotes quantities in the outer (inner) region. When the spacetime is given by Eq. (2.1), the non-zero components of the extrinsic curvature are \(K^{\tau}_{\tau\pm}=\dot{\beta}_{\pm}/\dot{r}\) and \(K^{\theta}_{\theta\pm}=K^{\phi}_{\phi\pm}=\beta_{\pm}/r,\) where \(\dot{x}:=\partial x/\partial\tau\) and \(\beta_{\pm}:=\sqrt{f_{\pm}(r)+\dot{r}^{2}}.\) Junction condition Eq. (2.2) now reduces to \[-4\pi\rho =(\beta_{+}-\beta_{-})/r, \tag{2.3}\] \[0 =(\dot{\beta}_{+}-\dot{\beta}_{-})/\dot{r}+(\beta_{+}-\beta_{-})/r. \tag{2.4}\] Also, from the first fundamental form, one obtains \[\dot{t}_{\pm}:=\frac{\beta_{\pm}}{f_{\pm}(r)}. \tag{2.5}\] Because we consider a shell made of dust fluid, the following quantity is constant. \[m:=4\pi r^{2}\rho. \tag{2.6}\] \(m\) denotes the shell's rest mass. We assume \(m>0\) throughout this paper. By squaring Eq. (2.3), the energy equation for the dust shell in the Schwarzschild spacetime with Eq. (2.1) is given by \[\dot{r}^{2}+V(r)=0,\quad V(r)=1-E^{2}-\frac{M_{+}+M_{-}}{r}-\frac{m^{2}}{4r^{2}} \tag{2.7}\] with the shell's specific energy \(E\) and gravitational energy \(\varepsilon\), \[\varepsilon:=M_{+}-M_{-},\quad E:=\varepsilon/m. \tag{2.8}\] Since \(V(r\rightarrow\infty)=1-E^{2}\), the shell marginally reaches \(r=\infty\) with vanishing velocity for \(E=1\) (marginally bound case). For \(E>1\), the shell has a non-vanishing velocity at infinity (unbound case). For \(E<1\), the shell cannot reach infinity and is bounded within a finite radius (bound case). ### Two shells and crossings To make a two-shell system with confined geometry, we introduce the second dust shell outside of the first one. In addition, we place perfectly reflective inner and outer concentric boundaries in the spacetime. Shells are confined between the boundaries and will be purely reflected (\(\dot{r}\rightarrow-\dot{r}\)) when they hit the boundaries. Thus, the presence of the double shell inevitably divides the spacetime into four regions having distinct gravitational mass \(M_{I}\) and time coordinates \(t_{I}\) (\(I=1,2,3,4\)). Fig. 1 explains our confined geometry and crossing of shells. The equation of the inner shell with radius \(r_{1}\) is obtained by taking \(M_{+}=M_{2}\) and \(M_{-}=M_{1}\) in Eq. (2.7), and similarly, the equation of the outer shell with radius \(r_{2}\) is obtained by setting \(M_{+}=M_{3}\) and \(M_{-}=M_{2}\). We assume for simplicity that both shells have an equal rest mass, \(m\).
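As a small numerical illustration of Eq. (2.7) (our addition, with arbitrarily chosen masses), the sketch below evaluates the shell potential \(V(r)\); motion is allowed where \(V(r)\leq 0\), and the sign of \(1-E^{2}\) distinguishes the bound, marginally bound, and unbound cases.

```python
# Illustrative sketch (not from the paper): the dust-shell potential of Eq. (2.7),
# V(r) = 1 - E^2 - (M_+ + M_-)/r - m^2/(4 r^2), with E = (M_+ - M_-)/m.
# The parameter values below are arbitrary examples.
import numpy as np

def shell_potential(r, M_minus, M_plus, m):
    E = (M_plus - M_minus) / m
    return 1.0 - E**2 - (M_plus + M_minus) / r - m**2 / (4.0 * r**2)

M_minus, M_plus, m = 1.0, 1.25, 0.25        # E = 1: marginally bound shell
r = np.linspace(2.6, 50.0, 500)
V = shell_potential(r, M_minus, M_plus, m)
print("E =", (M_plus - M_minus) / m, " V < 0 everywhere:", bool(np.all(V < 0)))
```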
To follow the time evolution of the shells after the crossing, at the crossing point we impose the transparent condition, i.e., the four-velocity and the rest mass of each shell are invariant during the crossing [13; 14]. A possible change at the crossing is the gravitational energy \(M_{2}\) in the region between shells. \(M_{2}\) discontinuously varies at the crossing radius \(r=R\) as \(M_{2}\to M_{4}\), where \[M_{4}= M_{3}-M_{2}+M_{1}+\frac{1}{Rf_{2}}\left(M_{2}-M_{1}-\frac{m^{2}}{2R} \right)\left(M_{3}-M_{2}+\frac{m^{2}}{2R}\right)\] \[-\text{sgn}\left(\frac{\text{d}r_{1}}{\text{d}\tau_{1}}\right) \text{sgn}\left(\frac{\text{d}r_{2}}{\text{d}\tau_{2}}\right)\frac{1}{Rf_{2}} \sqrt{\left(M_{2}-M_{1}-\frac{m^{2}}{2R}\right)^{2}-m^{2}f_{2}}\sqrt{\left(M_{ 3}-M_{2}+\frac{m^{2}}{2R}\right)^{2}-m^{2}f_{2}} \tag{2.9}\] with \(f_{2}:=1-2M_{2}/R\). \(sgn(x)\) is the sign function. The right hand side of Eq. (2.9) is evaluated at \(r=R\). After the crossing, Eq. (2.7) is again applied to follow dynamics of each shell merely by replacing \(M_{2}\) with \(M_{4}\), but shell 1 (initially inner shell) becomes a new outer shell and shell 2 (initially outer) is a new inner shell. To follow general relativistic multiple bodies, a common time coordinate must be adopted because the proper time for each body is in general different. In the present case, to use a common time coordinate we take \(t_{2}\), a time measured between shells. By multiplying Eq. (2.7) by Eq. (2.5), under the common time coordinate we have the following energy equation for the shell, labeled as \(i\) (\(i=1,2\)), \[\left(\frac{\text{d}r_{i}}{\text{d}t_{2}}\right)^{2}+\frac{f_{2}(r_{i})^{2}V(r _{i})}{f_{2}(r_{i})-V(r_{i})}=0. \tag{2.10}\] Let us discuss the energy of shells. Shells can transport energy to each other by crossing. If shell \(i\) has gravitational energy \(\varepsilon_{i}\), then the energy \(\tilde{\varepsilon}_{i}\) changed by crossing can be written using the energy transfer \(\Delta\varepsilon\) as follows [14]. \[\tilde{\varepsilon}_{1}=\varepsilon_{1}-\Delta\varepsilon\quad\text{and}\quad \tilde{\varepsilon}_{2}=\varepsilon_{2}+\Delta\varepsilon, \tag{2.11}\] Figure 1: (a) Two shells (thick circles) between the inner and the outer boundaries (dashed circles), dark gray region with \(M_{1}\), light gray region with \(M_{2}\) and the outer white region with \(M_{3}\). (b) Schematic picture of shell crossing. The inner shell (shell 1) and the outer shell (shell 2) cross at a crossing point. where \[\Delta\varepsilon:=\gamma m^{2}/R,\qquad\gamma:=-f_{2}\left(\frac{ \mathrm{d}t_{1}}{\mathrm{d}\tau_{1}}\right)\left(\frac{\mathrm{d}t_{2}}{\mathrm{d }\tau_{2}}\right)+f_{2}^{-1}\left(\frac{\mathrm{d}r_{1}}{\mathrm{d}\tau_{1}} \right)\left(\frac{\mathrm{d}r_{2}}{\mathrm{d}\tau_{2}}\right). \tag{12}\] \(\gamma\) is evaluated at the crossing radius \(r=R\) and denotes the Lorentz factor of the relative velocity between the shells. Eq. (12) means that \(\Delta\varepsilon\) is always positive. This fact guarantees that inner shell always releases its energy to the outer shell. We also note that from Eq. (11) there is energy conservation, \[\tilde{\varepsilon}_{1}+\tilde{\varepsilon}_{2}=\varepsilon_{1}+ \varepsilon_{2}. \tag{13}\] ## III Timelike shells: numerical observations In this section, we follow the time evolution of two timelike shells by numerically integrating Eq. (10). As we can see from Eq. (10), the dynamics of each shell is described by a first-order ODE. 
Two shells cross each other many times because they are sandwiched by reflective boundaries, and the gravitational energy between the shells changes discontinuously at each cross. When enough energy is accumulated in one of the shells, that shell can collapse into a BH. We are interested in the motion of two shells, sandwiched between boundaries, starting from the same radius at the initial time in opposite directions to each other. Let the radii of the inner and outer boundaries be \(r_{b1}\) and \(r_{b2}\), respectively, and let \(2M_{1}<r_{b1}<r_{b2}\). In particular, the radius of the inner boundary is taken as \[r_{b1}=(1+\epsilon)2M_{1}, \tag{14}\] where \(\epsilon\) is positive constant. When \(1\gg\epsilon>0\), the inner boundary may be interpreted as the radius of the gravastar (See introduction). We discuss here the relation between the gravitational radius of the shell and the boundary: When the gravitational radius of the inner and outer shell, given by \(r_{g1}=2M_{2}\) and \(r_{g2}=2M_{3}\), satisfies the relation \(r_{g1}<r_{g2}<r_{b1}<r_{b2}\), the system is clearly stable. When \(r_{b1}<r_{g1}<r_{g2}<r_{b2}\), the contracting shell (shell 1) immediately collapses into a BH. Therefore, nontrivial choice for possibly stable motions is limited to \[r_{g1}<r_{b1}<r_{g2}<r_{b2}. \tag{15}\] Note that the relation of Eq. (15) means the gravitational radius of the outer shell is larger than the radius of the inner boundary. In this paper, we investigate the motion starting from the initial relation of Eq. (15). There are still many free parameters left in this setup, so to simplify the discussion, we choose the parameters as follows. \[M_{1}=1,\ M_{2}=1+0.5\delta,\ M_{3}=1+\delta,\ m=\delta/A\quad(A=2,8,16). \tag{16}\] \(A\) determines \(E\). When \(A=2,8,16\), then \(E=1,2,4\). The larger \(E\), the greater the velocity at infinity and the closer to the speed of light. The \(E=2,4\) correspond to an unbound case while \(E=1\) is a marginally bound case. Since \(M_{2}\) changes at each crossing, \(M_{2}\) in Eq. (18) represents the initial value. From the above choice of \(M_{1,2,3}\), the gravitational energy of each shell is the same at the initial time, i.e. \(\varepsilon_{1}=M_{3}-M_{2}=\delta/2,\varepsilon_{2}=M_{2}-M_{1}=\delta/2\). We also restrict the initial data to the following three types. * Type1 \((E=2):A=8,\epsilon=0.5,r_{b2}=20,r_{0}=7\). * Type2 \((E=4):A=16,\epsilon=0.5,r_{b2}=20,r_{0}=7\). * Type3 \((E=1):A=2,\epsilon=0.1,r_{b2}=3.5,r_{0}=2.5\). \(r_{0}\) is the shell's initial radius. It is of course possible to choose different initial data, but the above conditions are general enough. In other words, changing the initial conditions slightly different from those above does not qualitatively change the behavior of the motion. ### Type 1 initial data: \(E=2\) Under this initial data, the inner boundary is at \(r_{b1}=3\). For \(\delta<0.5\), BH formation is not possible in principle, thus the two-shell system is trivially stable. On the other hand, for \(\delta\geq 0.5\), the gravitational radius of the outer shell is larger than or equal the radius of the inner boundary, yielding that the outer shell immediately forms a BH _if the inner shell is absent_ (for \(\delta\geq 2.5\), the gravitational radius of the outer shell is larger than the initial radius). Numerical integration of Eq. (10) with Type 1 initial data reveals the number of crossings until BH formation, as a function of \(\delta\). See Fig. 21. 
The gray areas in the figure correspond to stable solutions (stable means that the evolution does not collapse into a BH at least up to the integration time \(t=2000\)). We find that the nontrivial values of \(\delta\) corresponding to stable solutions are \[0.683\lesssim\delta\lesssim 0.832. \tag{19}\] Footnote 1: In the calculations of this paper we used Mathematica; both AccuracyGoal and PrecisionGoal were chosen to be 12 digits, which provides sufficient accuracy. It is noteworthy that the stable region extends over a considerable range of \(\delta>0.5\). This means that fine tuning is not necessary to obtain stable motion. When \(0.61\lesssim\delta\lesssim 0.64\), one may notice that the number of crossings is higher than for other unstable solutions. For these values of \(\delta\), the number of crossings changes sensitively with energy, but the evolution ultimately results in a BH. We show an example where the evolution eventually forms a BH. Fig. 3(a) represents the BH formation when \(\delta=0.84\). The upper panel represents the motion of each shell. Red/Blue shells are launched outwardly/inwardly. The black horizontal line is the radius of the inner boundary, and \(r_{b1}=3\) for the Type 1 initial data. The red dashed line represents the gravitational radius of the red shell and the blue dashed line is that of the blue shell. The position of these gravitational radii changes discontinuously at each crossing. The middle panel shows \(\min\{1-2M_{2}/r_{1},1-2M_{3}/r_{2}\}\), and the point at which this value becomes zero represents the BH formation. After the seventh crossing, the red shell becomes a BH at \(t_{2}\simeq 281\). The lower panel shows the change in the gravitational energy of each shell. Since the total energy of the two shells is conserved (Eq. (13)), the two energies are symmetric with respect to each other. As can be seen from the figure, the energy changes gradually with each crossing, and after the seventh crossing the red shell gains enough energy to become a BH. Meanwhile, let us look at a stable solution with \(\delta>0.5\). The solution for \(\delta=0.75\) is shown in Fig. 3(b). This is a "stable quasi-periodic motion". As can be seen from the upper panel of Fig. 3(b), the first crossing occurs before the outer shell (red shell) becomes a BH, and the decrease in the gravitational radius (dashed red lines) prevents the new inner shell (initially outer shell) from becoming a BH. Such an "inner-shell sabotage" continues throughout the whole evolution. Clearly, the presence of the second shell is preventing the first shell from becoming a BH. The lower panel of Fig. 3(b) shows the gravitational energy \(\varepsilon\), and it is observed that the energy transfer is not one-way but periodic as a result of repeating forward- and reverse-transfer. When \(0.5\lesssim\delta\lesssim 0.683\), BH formation again occurs. Fig. 4(a) shows BH formation when \(\delta=0.6\). As a result of multiple crossings, energy accumulates in the blue shell, and after the twelfth crossing, the blue shell finally collapses into a BH. The qualitative difference between BH formation in the large delta region (i.e., \(\delta\geq 0.832\)) and the small delta region (\(0.5\lesssim\delta\lesssim 0.683\)) is the following: In the small delta region, typically (but not always) one shell has so little energy that it cannot reach the outer boundary. This is because the total energy of the two shells is relatively small, so when energy is concentrated in the other shell due to energy transfer, the remaining shell will have less energy. Fig.
4(a) shows that this energy imbalance breaks the periodicity. Also, although not shown in the figure, all motions with \(\delta\lesssim 0.683\) always exhibit non-periodic or "chaotic" behavior (as far as we have confirmed numerically). Figure 2: The number of crossings until BH formation, as a function of \(\delta\). The gray areas in the figure correspond to stable solutions. \(\delta<0.5\) is trivially stable because BH cannot form in principle. Stable solutions exist in the non-trivial region of \(0.683\lesssim\delta\lesssim 0.832\). Now let us look at the motion when the delta is extremely small. Fig. 4(b) represents the motion with \(\delta=0.01\). Here we also see a clear periodicity in the motion. This periodicity is due to the fact that the shell behaves almost as a test shell due to the small gravitational energy. Since there is almost no energy that can be transported, the shells have motions that are almost independent of each other. As a result, the crossing of the shells is restricted to only _two_ specific radii (see appendix for the reason). Thus, the periodic motion with particularly small \(\delta\) is qualitatively different from other periodic motions with larger \(\delta\). Figure 4: (a) BH formation with \(\delta=0.6\) for Type 1 initial data. As a result of multiple crossings, energy accumulates in the blue shell, and after the twelfth crossing, the blue shell finally collapses into a BH. (b) Stable quasi-periodic evolution with \(\delta=0.01\) for Type 1 initial data. Two shells almost behave as test shells. Figure 3: (a) **Upper panel:** Time evolution of two timelike shells with \(\delta=0.84\) for Type 1 initial data. After the seventh crossing, the red shell becomes a BH. **Middle panel:** Lower value of \(1-2M_{2}/r_{1}\) and \(1-2M_{3}/r_{2}\). The zeroth of the value indicates a BH formation. **Lower panel:** Shell’s gravitational energy\(\varepsilon\). The energy of the red shell eventually becomes larger than the energy of the blue shell and finally becomes a BH. (b) Stable quasi-periodic evolution with \(\delta=0.75\) for Type 1 initial data. It behaves periodically by repeating energy forward and reverse transfer. ### Type 2 initial data: \(E=4\) In this type, the inner boundary is at \(r_{b1}=3\). There may exist nontrivial stable motion when \(\delta\geq 0.5\). On the other hand, \(\delta<0.5\) does not allow BH in principle. We numerically integrated the shell's equation of motion by continuously changing delta and found stable motion in the following range of \(\delta\), \[\delta\lesssim 0.821. \tag{10}\] It can be seen that Type 2 has a wider distribution of stable regions than Type 1. We plot the stable motion with \(\delta=0.6\) in Fig. 5(a). It can be seen that it is qualitatively the same as stable motions in Type 1. Comparing Types 1 and 2, we see that the larger \(E\) is, the more stable the region becomes. This implies that stable motion is more likely when the shell has relativistic speed. When \(\delta\) is made extremely small (\(\delta\ll 1\)), the shell behaves like a test shell, as observed in Type 1. ### Type 3 initial data: \(E=1\) Type 3 is the initial data that restricts shells near the gravitational radius of the central body. At this parameter, the inner boundary is at \(r_{b1}=2.2\). For \(\delta<0.1\), it is trivially stable, i.e., the gravitational radius of the outer shell is smaller than the inner boundary, thus BH is not possible in principle. Therefore \(\delta\geq 0.1\) corresponds to the possibility of nontrivial motion. 
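The trivial-stability thresholds quoted for the three types follow directly from the inner-boundary radius \(r_{b1}=(1+\epsilon)2M_{1}\) and the mass parametrization \(M_{3}=1+\delta\); the short sketch below (our addition, not from the paper) reproduces them.

```python
# Illustrative check: with M1 = 1 and M3 = 1 + delta, the outer shell's
# gravitational radius 2*M3 exceeds the inner boundary r_b1 = (1 + eps)*2*M1
# exactly when delta >= eps, which reproduces the thresholds delta = 0.5
# (Types 1 and 2) and delta = 0.1 (Type 3) below which BH formation is
# impossible in principle.
types = {"Type 1": 0.5, "Type 2": 0.5, "Type 3": 0.1}   # eps for each type
M1 = 1.0
for name, eps in types.items():
    rb1 = (1.0 + eps) * 2.0 * M1
    delta_min = rb1 / (2.0 * M1) - 1.0    # smallest delta with 2*M3 >= r_b1
    print(f"{name}: r_b1 = {rb1}, nontrivial motion possible for delta >= {delta_min}")
```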
So far, we have focused our observations only on the motion of shells with large specific energy \(E>1\). These shells are sufficiently relativistic at infinity. On the other hand, Type 3 corresponds to marginally bound shells with zero velocity at infinity, meaning that the shells are energetically weak. However, even this "weak" shell can be relativistic near the gravitational radius of the central body because the shell's kinetic energy becomes large due to the strong gravity of the central body. Figure 5: (a) Stable quasi-periodic evolution with \(\delta=0.6\) for Type 2 initial data. (b) Stable quasi-periodic evolution with \(\delta=0.135\) for Type 3 initial data. Numerical integration with continuously varying delta shows that stable motion exists, albeit within a relatively narrow range, \[0.132\lesssim\delta\lesssim 0.138. \tag{10}\] Fig. 5(b) shows the time evolution of shells with \(\delta=0.135\) for Type 3. When the delta is extremely small (\(\delta\ll 1\)), the shells behave like test shells, similar to the other types of initial data. ## IV Null shells: exact algebraic analysis In this section we analyze null shells as the light-speed limit of timelike shells. Null shells can be interpreted as a rough approximation of gravitational shock waves [10]. As can be seen from the discussion in the previous section, the evolution of two timelike shells can be thought of as a "problem of finding the crossing points of two waves whose frequencies discontinuously vary with each crossing". It is generally difficult (or simply impossible) to solve this problem in an analytic sense. In this section we consider the most relativistic situation where the motion is accelerated to the speed of light. As we will see below, the analysis becomes surprisingly simple in this extremal situation. More specifically, the motion of null shells can be integrated in the \((t_{2},r)\) coordinate, and furthermore, the motion of the two shells between the boundaries is reduced to "exact simultaneous recurrence equations". This means that numerical integration of the equations of motion is not necessary to analyze the nonlinear dynamics of the null shells under consideration. We will discuss periodic motion and BH formation by analyzing the recurrence equations. It will be explained how the analysis of null shells can help us understand the stable periodic motion of timelike shells. ### Exact recurrence equations The energy equation for the null shell is obtained by taking the light-speed limit of the timelike equation Eq. (2.7). For this purpose, we formally take the limit \(m\to 0\) in Eq. (2.7). This limit diverges in the expression of proper time, but in the time coordinate between shells \((t_{2})\), the potential \(\tilde{V}\) is finite. Formally taking the null limit for timelike shells of Eq. (2.10), the potential takes the following form [15], \[\tilde{V}_{null}:=\lim_{m\to 0}\tilde{V}=-f_{2}(r)^{2}. \tag{4.1}\] Thus, the energy equation becomes \[\left(\frac{\mathrm{d}r_{i}}{\mathrm{d}t_{2}}\right)^{2}+\tilde{V}_{null}(r_{i})=0\qquad(i=1,2). \tag{4.2}\] This equation can be easily integrated with respect to \(r_{i}\). The trajectory of shell \(i\) starting from the initial radius \(R_{0}\) is given by \[t_{2}=\sigma_{i}\left(r_{i}-R_{0}+2M_{2}\log\left|\frac{r_{i}-2M_{2}}{R_{0}-2M_{2}}\right|\right), \tag{4.3}\] where \(\sigma_{i}\) takes \(\pm 1\), denoting an expanding shell for \(+1\) and a contracting shell for \(-1\).
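As a small illustration (our addition, with arbitrary numbers), Eq. (4.3) can be evaluated directly:

```python
# Minimal sketch (not from the paper): the null-shell trajectory of Eq. (4.3),
# giving the coordinate time t2 at which a shell launched from R0 reaches r.
import numpy as np

def t2_of_r(r, R0, M2, sigma):
    """Eq. (4.3): sigma = +1 for an expanding shell, -1 for a contracting one."""
    return sigma * (r - R0 + 2.0 * M2 * np.log(np.abs((r - 2.0 * M2) / (R0 - 2.0 * M2))))

# Example (arbitrary values): an expanding null shell launched at R0 = 7 outside
# a region of mass M2 = 1.3 reaches r = 20 at
print(t2_of_r(20.0, 7.0, 1.3, +1))
```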
Now, what we are interested in is the motion of two shells, sandwiched between two boundaries, starting from the same radius in opposite directions. The inner/outer shells going in opposite directions reflect off the inner/outer boundary, and change its direction of motion, eventually approaching each other. After that, the two shells cross at a certain time. Let us find the crossing radius, say, \(R_{1}\). Since shell 1 moves inward and shell 2 moves outward, \(\sigma_{1}=-1,\sigma_{2}=+1\) at the initial time. \(\sigma_{i}\) changes its sign each time a shell reflects off a boundary. We assume that the shells do not form a BH by the time they crosses. We also assume that \(2M_{2}<r_{b1}\leq r_{1}\leq r_{2}\leq r_{b2}\). Shell 1 departing inwardly reaches the crossing radius \(R_{1}\) after being reflected by the inner boundary \(r_{b1}\). Let \(T\) denote the time taken for this process. Similarly, shell 2 departing outwardly reaches \(R_{1}\) after being reflected by the outer boundary \(r_{b2}\). Since the time taken for this process is also \(T\), the following equation follows from Eq. (4.3). \[R_{1}-R_{0}-4r_{b1} +2M_{2}\log\left|\frac{(R_{1}-2M_{2})(R_{0}-2M_{2})}{(r_{b1}-2M_ {2})^{2}}\right|\] \[=-R_{1}-R_{0}+4r_{b2}-2M_{2}\log\left|\frac{(R_{1}-2M_{2})(R_{0}- 2M_{2})}{(r_{b2}-2M_{2})^{2}}\right|. \tag{4.4}\] Solving this equation for \(R_{1}\) yields \[R_{1}=2M_{2}\left(1+W\left(\frac{x_{b1}e^{x_{b1}}x_{b2}e^{x_{b2}}}{x_{0}e^{x_{0 }}}\right)\right),\quad x_{b1,b2}:=\frac{r_{b1,b2}}{2M_{2}}-1,\quad x_{0}:= \frac{R_{0}}{2M_{2}}-1, \tag{4.5}\] where \(W(x)\) is the positive branch of Lambert's \(W\) function [28]. Since we are assuming periodic motion here, there is also a second crossing, which is the intersection of the two shells in opposite direction starting at \(R_{1}\). Obviously, the above calculation can be repeated. It is important to note that the gravitational energy between the shells is no longer \(M_{2}\) after the first cross; the first crossing has changed the value of this energy. If we take the null limit, the new energy value, \(\tilde{M}_{2}\), is determined by the following relation [15]. \[\left(1-\frac{2\tilde{M}_{2}}{R_{1}}\right)\left(1-\frac{2M_{2}}{R_{1}} \right)=\left(1-\frac{2M_{1}}{R_{1}}\right)\left(1-\frac{2M_{3}}{R_{1}}\right). \tag{4.6}\] This relation is well known the Dray-'t Hooft-Redmount relation (DTR relation). See Refs.[10, 11] for detailed discussions on the DTR relation. Obviously, the same procedure can be repeated to find the second and subsequent crossing radius. Eventually, the \((n+1)\)th crossing radius \(R_{(n+1)}\) is obtained using the \(n\)th crossing radius \(R_{(n)}\) and the gravitational energy \(M_{2(n)}\) as follows. \[R_{(n+1)}=2M_{2(n)}\left(1+W\left(\frac{y_{b1}e^{y_{b1}}y_{b2}e^{y_{b2}}}{y_{n} e^{y_{n}}}\right)\right),\quad y_{b1,b2}:=\frac{r_{b1,b2}}{2M_{2(n)}}-1,\quad y _{n}:=\frac{R_{(n)}}{2M_{2(n)}}-1, \tag{4.7}\] where \(n=0,1,2,3,\dots\) and \(R_{(0)},M_{2(0)}\) denote the radius and the gravitational energy between the shells at the initial time. Also, the \((n+1)\)th gravitational energy \(M_{2(n+1)}\) can be easily deduced from the DTR relation as \[\left(1-\frac{2M_{2(n+1)}}{R_{(n+1)}}\right)\left(1-\frac{2M_{2(n)}}{R_{(n+1) }}\right)=\left(1-\frac{2M_{1}}{R_{(n+1)}}\right)\left(1-\frac{2M_{3}}{R_{(n+1 )}}\right). \tag{4.8}\] Or equivalently, solving the above relation with respect to \(M_{2(n+1)}\), we have \[M_{2(n+1)}=\frac{(M_{1}+M_{3}-M_{2(n)})R_{(n+1)}-2M_{1}M_{3}}{R_{(n+1)}-2M_{2(n)}}. 
\tag{4.9}\] The set of Eq. (4.7) and Eq. (4.9) is a simultaneous recurrence equations for two variables \(R_{(n)}\) and \(M_{2(n)}\). Given initial values \(R_{(0)}\) and \(M_{2(0)}\), we can _algorithmically_ obtain \(R_{(n)}\) and \(M_{2(n)}\) at any \(n\) by Eq. (4.7) and Eq. (4.9) _without directly performing numerical integration of equations of motion_. Thus, the dynamics of null shells that cross multiple times is reduced to the problem of dealing with the simultaneous recurrence equations. Let us now discuss the conditions of BH formation. The condition for the formation of a BH after the \(n\)th crossing is that the inner shell or outer shell contracts to its gravitational radius. In terms of the recursion relation, this BH criterion reduces to \[2M_{2(n)}\geq r_{b1}\quad\text{or}\quad R_{(n+1)}\leq 2M_{3}. \tag{4.10}\] ### Evolutions of confined null shells We solve the time evolution of the null shells using the recursion relations Eq. (4.7) and Eq. (4.9). Although we have obtained the exact recursion relations, it still seems to be difficult to obtain the general term because of the nonlinearity of the recursion relations. However, we can demonstrate iteration of the recursion relations to check whether a given initial data result in BH formation or periodic motion. When a shell forms a BH during its evolution, \(R_{(n)}\) or \(M_{2(n)}\) meets the BH criterion Eq. (4.10) while increasing \(n\). On the other hand, when the solution is not a BH but an oscillating solution, \(R_{(n)}\) or \(M_{2(n)}\) does not meet the BH criterion by increasing \(n\). To compare with the evolution of the timelike shells, we consider the following initial conditions. * \(\epsilon=0.5,r_{b2}=20,r_{0}=7\) With these initial parameters the inner boundary is at \(r_{b1}=3\). For \(\delta<0.5\), BH is not possible in principle. On the other hand, for \(\delta\geq 0.5\), there are two possibilities of BH or stable motion. After continuously varying \(\delta\) and iterating the recurrence equations Eq. (4.7) and Eq. (4.9) up to \(n=1000\), we found that the evolution exhibits stable periodic motion in the following range. \[\delta\lesssim 0.817. \tag{4.11}\] It is noteworthy that this value is quite close to the stable region of the timelike shell in Type 2 (Eq. (3.5)). In fact, shells with Type 2 is quite fast since it is a motion with \(E=4\). The stable evolution of the null shell with \(\delta=0.6\) is shown in Fig. 6 (here, we did numerical integration of equation of motion just to compare with the stable motion of the timelike shell). As can be seen from this figure, it is very similar to the timelike shell motion with the same delta in Type 2 (Fig. 5 (a)). When the shell oscillates stably, the evolution of \(R_{(n)}\) varies considerably with the value of the initial energy \(\delta\) and exhibits a rich behavior. Fig. 7 shows crossing radius \(R_{(n)}\) as a function of number of crossings \(n\) for \(\delta=0.8,0.79,0.78\) and \(0.6\), with the help of the recursion relations of Eq. (4.7) and Eq. (4.9). The first \(500\) crossings are shown. These figures indicate sensitive behavior of \(R_{(n)}\) (\(M_{2(n)}\) shows qualitatively same behaviors as \(R_{(n)}\), although not explicitly plotted in this paper). For certain values of \(\delta\), \(R_{(n)}\) (and also \(M_{2(n)}\)) may transit almost continuously. This means that the gravitational energy \(\varepsilon\) of each shell has effectively continuous transitions (see \(\delta=0.6,0.79\) in Fig. 7). 
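A minimal sketch of this iteration is given below (our illustration, not the authors' code); it assumes the same mass parametrization as in Sec. III (\(M_{1}=1\), \(M_{2}=1+0.5\delta\), \(M_{3}=1+\delta\)), uses the principal branch of the Lambert \(W\) function from SciPy, and stops either when the BH criterion Eq. (4.10) is met or after a fixed number of crossings.

```python
# Minimal sketch (illustrative only): iterate the exact recurrence relations
# Eq. (4.7) and Eq. (4.9) for two confined null shells and test Eq. (4.10).
import numpy as np
from scipy.special import lambertw

def iterate_null_shells(delta, M1=1.0, rb1=3.0, rb2=20.0, R0=7.0, n_max=1000):
    M3 = M1 + delta
    M2 = M1 + 0.5 * delta          # initial between-shell mass M2_(0)
    R = R0                         # initial common radius R_(0)
    for n in range(n_max):
        if 2 * M2 >= rb1:                       # first part of Eq. (4.10)
            return n, "black hole"
        x1, x2, x0 = rb1 / (2 * M2) - 1, rb2 / (2 * M2) - 1, R / (2 * M2) - 1
        # Eq. (4.7): next crossing radius
        R = 2 * M2 * (1 + np.real(lambertw(x1 * np.exp(x1) * x2 * np.exp(x2)
                                           / (x0 * np.exp(x0)))))
        if R <= 2 * M3:                         # second part of Eq. (4.10)
            return n + 1, "black hole"
        # Eq. (4.9): new between-shell mass from the DTR relation
        M2 = ((M1 + M3 - M2) * R - 2 * M1 * M3) / (R - 2 * M2)
    return n_max, "stable (no BH up to n_max crossings)"

for delta in (0.6, 0.75, 0.9):
    print(delta, iterate_null_shells(delta))
```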
On the other hand, for another choice of \(\delta\), \(R_{(n)}\) behaves strangely, staying within a limited and very narrow range (see \(\delta=0.8\) in Fig. 7). Such a rich structure of crossing radii as a function of all stable \(\delta\) is comprehensively seen in Fig. 8. All plotted trajectories correspond to stable motions [27]. Figure 6: Stable quasi-periodic evolution of two null shells with \(\delta=0.6\). Figure 7: Crossing radius \(R_{(n)}\) as a function of the number of crossings \(n\) for \(\delta=0.8,0.79,0.78\) and \(0.6\). These figures indicate sensitive behavior of \(R_{(n)}\). As a final analysis, let us see how the trajectory \((R_{(n)},M_{2(n)})\) of the recurrence equations changes from stable periodic motion to BH formation in the \(R_{(n)}-M_{2(n)}\) plane. Fig. 9 represents the trajectory of the recurrence equations (Eq. (4.7) and Eq. (4.9)) for each delta. The horizontal straight line is the gravitational radius of the outer shell, \(R_{(n)}=2M_{3}\). The vertical straight line is \(M_{2(n)}=r_{b1}/2\). When either \(M_{2(n)}\) or \(R_{(n)}\) crosses these lines, a BH forms (Eq. (4.10)). When \(\delta=0\), the recurrence describes the motion of test shells, and in this case the crossing points are restricted to two specific radii determined by the initial values. See Appendix for the exact solution of the test shell's recurrence equations. For larger \(\delta\), the orbit is circular. As \(\delta\) is further increased, the circular orbit approaches the two straight lines of the BH criterion. At the threshold value (\(\delta=0.81744\)), the circular orbit is disrupted and a cusp is formed in a part of the orbit (see the right panel in Fig. 9 for \(\delta=0.81744\)). At \(\delta\) above the threshold, for a finite \(n\), the orbit meets the BH formation criterion and exceeds the lines. This signals BH formation. Figure 8: Crossing radii as a function of \(\delta\) in stable periodic evolutions. Values of \(\delta\) corresponding to stable motions spread over \(0\leq\delta\lesssim 0.817\). Nontrivial stable motions correspond to \(0.5<\delta\lesssim 0.817\). ## V Summary and discussion The motion of two timelike shells confined between boundaries in strong gravity was numerically investigated. Non-trivial stable motions of the shells were found with initial values for which a BH could form. It was explicitly shown that stable motions are more likely to occur when the shells move at a high speed close to the speed of light. In stable motions, by energy transfer, energy is accumulated in one shell at first, but after a while, the transfer in the opposite direction begins, and the energy begins to accumulate in the other shell. This energy transfer and reverse transfer are alternately repeated. As a result, neither shell becomes energetic enough to become a BH. On the other hand, for initial values which eventually lead to a BH, energy gradually accumulates in one shell, eventually promoting BH formation. To a large extent, the quasi-periodic motion appears to be almost pure periodic motion. Considering the fact that two-shell systems are generally associated with chaotic properties [19; 20; 21], the existence of such non-chaotic oscillations is somewhat surprising. Although two-shell systems generally exhibit chaotic behavior even in Newtonian gravity, our study shows that this chaotic nature disappears when the speed of the shells is highly relativistic. The motion of a null shell as the null limit of a timelike shell is investigated using "exact" recurrence equations.
Even for null shells as the fastest limit of timelike shells, the relation between initial conditions and stable motion was found to be non-trivial, presenting us with rich behaviors. As we have seen, the recurrence equations exhibit a rich behavior depending on the initial parameters, so it is difficult to solve the general term. Since the general term is not known, it seems impossible to determine analytically the end state of the motion (BH or stable) from the initial data. Figure 9: Left panel: \(R_{(n)}-M_{2(n)}\) plane for each \(\delta\). A BH forms when the point \((R_{(n)},M_{2(n)})\) at the \(n\)-th crossing crosses either the horizontal (\(R_{(n)}=3\)) or vertical straight line (\(M_{2(n)}=1.5\)). Right panel: zoom-in picture of the left panel. However, since the obtained recurrence equations are exact, it is possible to investigate the end state of the system algorithmically by applying the equations to the given initial values in sequence with arbitrary precision. All of the stable quasi-periodic motions of timelike shells in this study were found only when the shells were quite fast. Needless to say, a fast timelike shell can be regarded as almost a null shell, so we presume that the periodic motion of timelike shells is _essentially the same_ as the stable periodic motion of null shells. In fact, as an example, the \(R_{(n)}-M_{2(n)}\) plane orbit of the timelike shell is shown in Fig. 10, which has the same circular-orbit characteristics as that of the null shell (Fig. 9). Finally, we discuss the validity of the boundaries. In our setup, we have placed a rigid inner and outer boundary, which is somewhat artificial. A more physical model that provides an outer wall is to consider an asymptotically anti-de Sitter spacetime. In this spacetime, infinity can be reached in finite time, providing a natural outer wall. On the other hand, there are several physical factors that form an inner wall. A charged shell moving around a charged central body is repelled from the center due to an inner potential barrier. Matter with angular momentum can also form an inner barrier. Shells composed of collisionless particles with angular momentum effectively form an inner barrier. The BH formation/stable motion of two confined shells in these more physical situations is beyond the scope of this paper. ###### Acknowledgements. The author is grateful to C. Yoo, Y. Koga and T. Harada for fruitful discussions on the early stage of the investigation. This work was supported by JSPS KAKENHI Grant No. JP20H05853 from the Japan Society for the Promotion of Science. Figure 10: Development of \((R_{(n)},M_{2(n)})\) of timelike shells with \(\delta=0.6\) in Type 2, forming a circular orbit as in the null shells. ## Appendix A Test shell limit of the recurrence equations The simultaneous recurrence equations of confined null shells are given by Eq. (4.7) and Eq. (4.9). It seems difficult to find the general term of this relation, but under the test shell limit, the general term can be exactly obtained. The test null shell limit is obtained by setting \(M_{2(n)}=M_{1}=M_{3}\). In this case, \(M_{2}\) becomes a constant, and the DTR relation Eq. (4.9) becomes trivial. The remaining equation Eq. (4.7) becomes \[R_{(n+1)}=2M_{1}\left(1+W\left(\chi_{n}\right)\right),\quad\chi_{n}:=c_{1} \left(\frac{R_{(n)}}{2M_{1}}-1\right)^{-1}\exp\left(c_{2}-\frac{R_{(n)}}{2M_{1 }}\right), \tag{42}\] where \(c_{1,2}\) are constants representing the \(R_{(n)}\)-independent part of the argument of \(W\) in Eq. (4.7).
Introducing \(a_{n}=R_{(n)}/(2M_{1})-1\), Eq. (42) reduces to \[a_{n+1}=W\!\left(\frac{c_{3}}{a_{n}e^{a_{n}}}\right), \tag{43}\] where \(c_{3}=c_{1}e^{c_{2}}\) is constant. The inverse solution of the above equation is \[\frac{c_{3}}{a_{n}e^{a_{n}}}=W^{-1}(a_{n+1})=a_{n+1}e^{a_{n+1}}. \tag{44}\] The last equality used \(W^{-1}(x)=xe^{x}\). If we let \(b_{n}=a_{n}e^{a_{n}}\), we arrive at \[b_{n+1}=\frac{c_{3}}{b_{n}}. \tag{45}\] Eq. (45) generally represents a simple oscillating solution with the two values \(b_{0},b_{1},b_{0},b_{1},\cdots\). A special case is when the first term is \(b_{0}=\sqrt{c_{3}}\), resulting in \(b_{n}=\sqrt{c_{3}}\). This is a fixed point of the map, and the shells intersect at the same radius each time. From the above, we proved that test null shells generally intersect at two specific radii.
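The period-two behavior can also be seen directly from the recurrence Eq. (4.7) with \(M_{2(n)}\) held fixed; the following sketch (our addition, with arbitrary parameters) iterates it and prints the alternating crossing radii.

```python
# Minimal sketch (not from the paper): in the test-shell limit M2_(n) = M1 = M3
# the map b_{n+1} = c3 / b_n of Eq. (45) oscillates between two values, so the
# crossings alternate between two radii.  Parameters are illustrative.
import numpy as np
from scipy.special import lambertw

M1, rb1, rb2 = 1.0, 3.0, 20.0
R = 7.0                                              # R_(0)
x1, x2 = rb1 / (2 * M1) - 1, rb2 / (2 * M1) - 1
c3 = x1 * np.exp(x1) * x2 * np.exp(x2)               # R-independent part of Eq. (42)

radii = []
for n in range(6):
    x0 = R / (2 * M1) - 1
    R = 2 * M1 * (1 + np.real(lambertw(c3 / (x0 * np.exp(x0)))))
    radii.append(round(R, 6))
print(radii)   # two radii repeat: R_(1), R_(2), R_(1), R_(2), ...
```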
2309.08342
Achievable Rate of a STAR-RIS Assisted Massive MIMO System Under Spatially-Correlated Channels
Reconfigurable intelligent surfaces (RIS)-assisted massive multiple-input multiple-output (mMIMO) is a promising technology for applications in next-generation networks. However, reflecting-only RIS provides limited coverage compared to a simultaneously transmitting and reflecting RIS (STAR-RIS). Hence, in this paper, we focus on the downlink achievable rate and its optimization of a STAR-RIS-assisted mMIMO system. Contrary to previous works on STAR-RIS, we consider mMIMO, correlated fading, and multiple user equipments (UEs) at both sides of the RIS. In particular, we introduce an estimation approach of the aggregated channel with the main benefit of reduced overhead links instead of estimating the individual channels. {Next, leveraging channel hardening in mMIMO and the use-and-forget bounding technique, we obtain an achievable rate in closed-form that only depends on statistical channel state information (CSI). To optimize the amplitudes and phase shifts of the STAR-RIS, we employ a projected gradient ascent method (PGAM) that simultaneously adjusts the amplitudes and phase shifts for both energy splitting (ES) and mode switching (MS) STAR-RIS operation protocols.} By considering large-scale fading, the proposed optimization can be performed every several coherence intervals, which can significantly reduce overhead. Considering that STAR-RIS has twice the number of controllable parameters compared to conventional reflecting-only RIS, this accomplishment offers substantial practical benefits. Simulations are carried out to verify the analytical results, reveal the interplay of the achievable rate with fundamental parameters, and show the superiority of STAR-RIS regarding its achievable rate compared to its reflecting-only counterpart.
Anastasios Papazafeiropoulos, Le-Nam Tran, Zaid Abdullah, Pandelis Kourtessis, Symeon Chatzinotas
2023-09-15T11:53:55Z
http://arxiv.org/abs/2309.08342v1
# Achievable Rate of a STAR-RIS Assisted Massive MIMO System Under Spatially-Correlated Channels ###### Abstract Reconfigurable intelligent surfaces (RIS)-assisted massive multiple-input multiple-output (mMIMO) is a promising technology for applications in next-generation networks. However, reflecting-only RIS provides limited coverage compared to a simultaneously transmitting and reflecting RIS (STAR-RIS). Hence, in this paper, we focus on the downlink achievable rate and its optimization of a STAR-RIS-assisted mMIMO system. Contrary to previous works on STAR-RIS, we consider mMIMO, correlated fading, and multiple user equipments (UEs) at both sides of the RIS. In particular, we introduce an estimation approach of the aggregated channel with the main benefit of reduced overhead links instead of estimating the individual channels. Next, leveraging channel hardening in mMIMO and the use-and-forget bounding technique, we obtain an achievable rate in closed-form that only depends on statistical channel state information (CSI). To optimize the amplitudes and phase shifts of the STAR-RIS, we employ a projected gradient ascent method (PGAM) that simultaneously adjusts the amplitudes and phase shifts for both energy splitting (ES) and mode switching (MS) STAR-RIS operation protocols. By considering large-scale fading, the proposed optimization can be performed every several coherence intervals, which can significantly reduce overhead. Considering that STAR-RIS has twice the number of controllable parameters compared to conventional reflecting-only RIS, this accomplishment offers substantial practical benefits. Simulations are carried out to verify the analytical results, reveal the interplay of the achievable rate with fundamental parameters, and show the superiority of STAR-RIS regarding its achievable rate compared to its reflecting-only counterpart. Index Terms: STAR-RIS (simultaneously transmitting and reflecting RIS), correlated Rayleigh fading, imperfect CSI, achievable rate, 6G networks. ## I Introduction Reconfigurable intelligent surfaces (RIS) have emerged as a promising technology to meet the requirements of sixth-generation (6G) networks such as a 1000-fold capacity increase together with increased connectivity among billions of devices [1, 2, 3]. A RIS consists of a metamaterial layer of low-cost controllable elements. Among its significant benefits is that its control signals can be dynamically adjusted to steer the impinging waves in specific directions and shape the propagation environment while providing uninterrupted service not only with low hardware cost, but also with low power consumption due to the absence of any power amplifiers. Most of the existing works on RIS have assumed that both the transmitter and the receiver are found on the same side of the surface, i.e., only reflection takes place [1, 2, 4, 5, 6, 7]. However, practical applications might include user equipments (UEs) on both sides of the RIS, which cover the spaces in front of and behind the surface. Recently, advancements in programmable metamaterials have enabled the technology of simultaneously transmitting and reflecting RIS (STAR-RIS).1 Hence, STAR-RIS has been proposed as a technology to satisfy this demand, i.e., it provides full space coverage by changing the amplitudes and phases of the impinging waves [8, 9, 10, 11, 12]. For instance, in [8], the authors provided a general hardware model and two channel models corresponding to the near-field region and the far-field region of STAR-RIS with only two UEs.
Also, they showed that the coverage and diversity gain are greater than reflecting-only/conventional RIS-assisted systems. Furthermore, in [9], three operating protocols for adjusting the transmission and reflection coefficients of the transmitted and reflected signals were suggested, namely, energy splitting (ES), mode switching (MS), and time switching (TS). Footnote 1: We note that the word “transmitting” does not correspond to active transmission but implies coverage of the UEs at the other side of the RIS. In particular, most existing works on RIS-aided systems have assumed perfect CSI, but this is a highly unrealistic assumption since practical systems have imperfect CSI. The accuracy of the channel state information (CSI) at the transmitter side is crucial to achieving a high beamforming gain of RIS, which includes channels between the transmitter and the UEs [13]. However, the acquisition of CSI is challenging because of the following reasons. First, RIS, in general, consists of passive elements to perform the desired reflecting operation, which makes any active transmission or reception infeasible, i.e., it cannot perform any sampling or processing of the pilots [1]. For this reason, an alternative method is the estimation of the aggregated transmitter-RIS-receiver channel by sending appropriate pilot symbols [14]. Second, RIS are generally large and consist of a large number of elements, and thus, induce high training overhead for channel estimation (CE), which results in spectral efficiency (SE) reduction [15]. Various CE schemes have been proposed to address this issue [16, 17, 18, 19, 20]. For example, in [17], an ON/OFF CE method was proposed, where the estimates of all RIS-assisted channels for a single-user MISO system are obtained one-by-one. Note that in the case of multi-user systems, this model was extended, assuming all RIS elements to be active during training, but the number of sub-phases is required to be at least equal to the number of RIS elements [20]. Although that method provides better CE as the number of sub-phases increases, the achievable rate decreases because the data transmission phase takes a smaller fraction of the coherence time due to excessive training overhead. Also, that method computes the estimates of the channels of the individual RIS elements but the covariance of the channel vector from all RIS elements to a specific UE is unknown. Especially, in the case of STAR-RIS, CE becomes more challenging because UEs are located in both transmission and reflection regions, which requires different passive beamforming matrices (PBMs). In [11], a CE scheme was presented but did not account for multiple antennas at the BS, multiple UEs, and correlated fading. In parallel, many early works on conventional RIS assumed independent Rayleigh fading such as [1], but recently, it was shown that RIS correlation should be considered because it is unavoidable in practical systems [4]. To this end, several works on conventional RIS have taken into account the impact of RIS correlation [5, 6], but only [21] has considered fading correlation on a STAR-RIS assisted system. Furthermore, except [9, 10, 12], all other works have assumed a single-antenna transmitter. Also, all previous studies on STAR-RIS have only considered a single UE on each side of the STAR-RIS. In this paper, we consider a more general case where multiple UEs are present on each side of the STAR-RIS. 
_Contributions_: The observations above indicate the topic of this work, which concerns the study and design of a STAR-RIS assisted mMIMO system under the realistic conditions of imperfect CSI and correlated fading. These realistic assumptions, together with the consideration of multiple UEs at each side of the STAR-RIS, make the derivation of the achievable rate and the resulting optimization of the amplitudes and phase shifts of the STAR-RIS particularly challenging. Our main contributions are summarized as follows: * Aiming to characterize the potentials of STAR-RIS under realistic assumptions, we include the effect of spatially correlated fading at both the BS and the STAR-RIS.2 In particular, we consider a massive multiple-input multiple-output (mMIMO) system with a BS having a large but finite number of antennas. Under this general setup, we derive the downlink achievable spectral efficiency (SE) of a STAR-RIS-assisted mMIMO system with imperfect CSI and correlated fading in closed form, depending only on large-scale statistics, which has not been known previously. Moreover, we achieve this by a unified analysis of the channel estimation and data transmission phases for UEs located in either the \(t\) or \(r\) regions, which distinguishes our work from previous research. Footnote 2: In the case of the active beamforming, being MRT in this work, it is designed based on the instantaneous channel, which depends on the correlation of the aggregated channel described by (6). In the case of the passive beamforming, it is designed based on statistical CSI in terms of path loss and correlation. Specifically, the sum-rate expression depends only on these large-scale statistics, which vary every several coherence intervals. Hence, passive beamforming can be optimized at every several coherence intervals. Moreover, given that we rely on the statistical CSI approach, if no correlation is considered, the covariance of the aggregated channel will not depend on the phase shifts, which means that the sum rate cannot be optimized with respect to the phase shifts. * Contrary to [8, 9, 10, 11, 12], which have assumed a single UE at each side of the STAR-RIS, we consider multiple UEs at each side of the surface, which are served in the same time-frequency resources. * We apply the linear minimum mean square error (LMMSE) method to perform CE, and obtain closed-form expressions with lower overhead than other CE methods suggested for RIS-assisted systems. Specifically, we demonstrate that LMMSE can be applied without the need for a tailored design for STAR-RIS under conditions of statistical CSI. Note that previous works do not provide analytical expressions and/or do not take into account the spatial correlation at the RIS [8, 9, 10, 11, 12]. * Our analysis relies on statistical CSI, meaning that our closed-form expressions depend only on large-scale fading that changes at every several coherence intervals. Thus, the proposed optimization of the STAR-RIS can take place at every several coherence intervals, which saves significant overhead. On the contrary, previous studies, which are based on instantaneous CSI changing at each coherence interval, might not be feasible in practice due to inherently large overheads.3 Footnote 3: In this work, we have followed the two-timescale transmission protocol approach as in [22], where a maximisation of the achievable sum rate of a RIS-assisted multi-user multi-input single-output (MU-MISO) system took place. 
According to this approach, the precoding is designed in terms of instantaneous CSI, while the RIS phase shifts is optimized by using statistical CSI. Notably, all works, which are based on statistical CSI, have relied on the two-timescale protocol. Examples are the study of the impact of hardware impairments on the sum rate and the minimum rate in [6] and [23], respectively. * We formulate the problem of finding the amplitudes and phase shifts of the STAR-RIS to maximize the achievable sum SE. Our optimization framework considers multiple users at each side of the STAR-RIS in a unified manner. Despite its non-convexity, we derive an iterative efficient method based on the projected gradient ascent method in which both amplitudes and phase shifts of the STAR-RIS are updated simultaneously at each iteration. To the best of our knowledge, we are the first to optimize simultaneously the amplitudes and the phase shifts of the PBM in a STAR-RIS system. This is a significant contribution since other works optimize only the phase shifts or optimize both the amplitudes and the phase shifts in an alternating optimization manner. Moreover, as large-scale fading is considered, our optimization has very lower overhead in terms of complexity, training, and feedback compared to other works which rely on instantaneous CSI such as [9]. Notably, this property is important for STAR-RIS applications, which have twice the number of optimization variables compared to reflecting-only RIS. We also remark that the beamforming optimization based on statistical CSI for STAR-RIS has not been investigated previously. * Simulations and analytical results are provided to shed light on the impact of various parameters and to show the superiority of STAR-RIS over conventional RIS. For example, we find that the system performance decreases as the RIS correlation increases. _Paper Outline_: The remainder of this paper is organized as follows. Section II presents the system model of a STAR-RIS-assisted mMIMO system with correlated Rayleigh fading. Section III provides the CE. Section IV presents the downlink data transmission with the derived downlink sum SE. Section V provides the simultaneous amplitudes and phase-shifts configuration concerning both the PBMs for the transmission and reflection regions. The numerical results are placed in Section VI, and Section VII concludes the paper. _Notation_: Vectors and matrices are denoted by boldface lower and upper case symbols, respectively. The notations \((\cdot)^{\mathsf{r}}\), \((\cdot)^{\mathsf{n}}\), and \(\mathrm{tr}(\cdot)\) describe the transpose, Hermitian transpose, and trace operators, respectively. Moreover, the notations \(\arg\left(\cdot\right)\), \(\mathbb{E}\left[\cdot\right]\), and \(\mathrm{Var}(\cdot)\) express the argument function, the expectation, and variance operators, respectively. The notation \(\mathrm{diag}\left(\mathbf{A}\right)\) describes a vector with elements equal to the diagonal elements of \(\mathbf{A}\), the notation \(\mathrm{diag}\left(\mathbf{x}\right)\) describes a diagonal matrix whose elements are \(\mathbf{x}\), while \(\mathbf{b}\sim\mathcal{CN}(\mathbf{0},\mathbf{\Sigma})\) describes a circularly symmetric complex Gaussian vector with zero mean and a covariance matrix \(\mathbf{\Sigma}\). ## II System Model We consider a STAR-RIS-aided system, where a BS with an \(M\)-element uniform linear array (ULA) serves simultaneously \(K\) single-antenna UEs that are distributed on both sides of the STAR-RIS, as illustrated in Fig. 1. 
Specifically, \(\mathcal{K}_{t}=\{1,\ldots,K_{t}\}\) UEs are located in the transmission region \((t)\) and \(\mathcal{K}_{r}=\{1,\ldots,K_{r}\}\) UEs are located in the reflection region \((r)\), respectively, where \(K_{t}+K_{r}=K\). Also, we denote by \(\mathcal{W}=\{w_{1},w_{2},\ldots,w_{K}\}\) the set that defines the RIS operation mode for each of the \(K\) UEs. In particular, if the \(k\)th UE is located behind the STAR-RIS (i.e., \(k\in\mathcal{K}_{t}\)), then \(w_{k}=t\), while \(w_{k}=r\) when the \(k\)th UE is facing the STAR-RIS (i.e., \(k\in\mathcal{K}_{r}\)). Moreover, we assume direct links between the BS and the UEs. The RIS consists of a uniform planar array (UPA) composed of \(N_{\mathrm{h}}\) horizontally and \(N_{\mathrm{v}}\) vertically arranged passive elements, which belong to the set \(\mathcal{N}=\{1,\ldots,N\}\), where \(N=N_{\mathrm{h}}\times N_{\mathrm{v}}\) is the total number of RIS elements. The STAR-RIS is able to configure the transmitted (\(t\)) and reflected (\(r\)) signals by two independent coefficients. In particular, let \(t_{n}=(\beta_{n}^{t}e^{j\phi_{n}^{t}})s_{n}\) and \(r_{n}=(\beta_{n}^{r}e^{j\phi_{n}^{r}})s_{n}\) denote the transmitted and reflected signal by the \(n\)th STAR-RIS element, respectively.4 The amplitude and phase parameters \(\beta_{n}^{w_{k}}\in[0,1]\) and \(\phi_{n}^{w_{k}}\in[0,2\pi)\), where \(w_{k}\) indicates the region of the \(k\)th UE and thus also the RIS mode, i.e., transmission (\(t\)) or reflection (\(r\)) [8], are independent. This model suggests that \(\phi_{n}^{t}\) and \(\phi_{n}^{r}\) can be chosen independently, but the choice of the amplitudes is based on the relationship expressed by the law of energy conservation as Footnote 4: Note that here, we use \(\beta_{i}^{w_{k}}\), instead of \(\sqrt{\beta_{i}^{w_{k}}}\) as in [8], to denote the amplitude of the \(i\)th RIS element in mode \(w_{k}\). The reason for this change will become clear when we present our proposed algorithm in Section V. \[(\beta_{n}^{t})^{2}+(\beta_{n}^{r})^{2}=1,\forall n\in\mathcal{N}. \tag{1}\] Henceforth, for the sake of exposition, we denote \(\theta_{n}^{w_{k}}=e^{j\phi_{n}^{w_{k}}}\). ### _Operation Protocols_ Our analysis is dedicated to the ES/MS protocols, which were presented in [9]. Herein, we outline them by providing their main points. #### II-A1 ES protocol All RIS elements serve simultaneously all UEs in both \(t\) and \(r\) regions. Especially, the PBM for the \(k\)th UE is expressed as \(\mathbf{\Phi}_{w_{k}}^{\mathrm{ES}}=\mathrm{diag}(\beta_{1}^{w_{k}}\theta_{1}^{w_{k}},\ldots,\beta_{N}^{w_{k}}\theta_{N}^{w_{k}})\in\mathbb{C}^{N\times N}\), where \(\beta_{n}^{w_{k}}\geq 0\), \((\beta_{n}^{t})^{2}+(\beta_{n}^{r})^{2}=1\), and \(|\theta_{n}^{w_{k}}|=1,\forall n\in\mathcal{N}\). #### II-A2 MS protocol The RIS elements are partitioned into two groups of \(N_{t}\) and \(N_{r}\) elements that serve UEs in the \(t\) and \(r\) regions, respectively. In other words, \(N_{t}+N_{r}=N\). The PBM for \(k\in\mathcal{K}_{t}\) or \(k\in\mathcal{K}_{r}\) is given by \(\mathbf{\Phi}_{w_{k}}^{\mathrm{MS}}=\mathrm{diag}(\beta_{1}^{w_{k}}\theta_{1}^{w_{k}},\ldots,\beta_{N}^{w_{k}}\theta_{N}^{w_{k}})\in\mathbb{C}^{N\times N}\), where \(\beta_{n}^{w_{k}}\in\{0,1\}\), \((\beta_{n}^{t})^{2}+(\beta_{n}^{r})^{2}=1\), and \(|\theta_{n}^{w_{k}}|=1,\forall n\in\mathcal{N}\). 
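To make the two protocols concrete, the following minimal numpy sketch (all function and variable names are ours, not from the paper) builds the diagonal PBMs for the \(t\) and \(r\) modes, enforcing the energy-conservation constraint (1) for ES and the binary amplitudes of MS.

```python
import numpy as np

def es_pbm(beta_t, phi_t, phi_r):
    """Energy-splitting PBMs: amplitudes satisfy (beta_t)^2 + (beta_r)^2 = 1, Eq. (1)."""
    beta_t = np.clip(beta_t, 0.0, 1.0)
    beta_r = np.sqrt(1.0 - beta_t**2)                 # energy conservation
    Phi_t = np.diag(beta_t * np.exp(1j * phi_t))
    Phi_r = np.diag(beta_r * np.exp(1j * phi_r))
    return Phi_t, Phi_r

def ms_pbm(beta_t, phi_t, phi_r):
    """Mode-switching PBMs: amplitudes rounded to {0,1}, one mode active per element."""
    b_t = (np.asarray(beta_t) >= 0.5).astype(float)   # binary split of the N elements
    b_r = 1.0 - b_t
    Phi_t = np.diag(b_t * np.exp(1j * phi_t))
    Phi_r = np.diag(b_r * np.exp(1j * phi_r))
    return Phi_t, Phi_r

# toy usage with N = 8 elements
N = 8
rng = np.random.default_rng(0)
Phi_t, Phi_r = es_pbm(rng.uniform(0, 1, N),
                      rng.uniform(0, 2 * np.pi, N),
                      rng.uniform(0, 2 * np.pi, N))
assert np.allclose(np.abs(np.diag(Phi_t))**2 + np.abs(np.diag(Phi_r))**2, 1.0)
```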
As can be seen from its definition, the MS protocol is a special case of the ES protocol, where the amplitude coefficients for transmission and reflection are restricted to binary values. As a result, the MS protocol is inferior to its ES counterpart since it cannot achieve the full-dimension transmission and reflection beamforming gain. Despite this performance degradation, it brings the advantage of lower computational complexity regarding the PBM design. ### _Channel Model_ We assume narrowband quasi-static block fading channels with each block having a duration of \(\tau_{c}\) channel uses. We adopt the standard time-division-duplex (TDD) protocol, which is preferable in mMIMO systems. Within TDD, we assume that each block includes \(\tau\) channel uses for the uplink training phase and \(\tau_{c}-\tau\) channel uses for the downlink data transmission phase. Notably, contrary to other works, we aim to achieve a unified analysis regarding the channel estimation and data transmission phases that applies to a UE found in any of the \(t\) or \(r\) regions. Let \(\mathbf{G}=[\mathbf{g}_{1},\ldots,\mathbf{g}_{N}]\in\mathbb{C}^{M\times N}\) be the channel between the BS and the STAR-RIS with \(\mathbf{g}_{i}\in\mathbb{C}^{M\times 1}\) for \(i\in\mathcal{N}\). Also, \(\mathbf{q}_{k}\in\mathbb{C}^{N\times 1}\) denotes the channel between the STAR-RIS and UE \(k\) that can be found on either side. The direct link between the BS and UE \(k\) is denoted as \(\mathbf{d}_{k}\).

Fig. 1: A mMIMO STAR-RIS assisted system with multiple UEs at transmission and reflection regions.

On this ground, we assume that all links are subject to correlated Rayleigh fading, which is normally the case in practice [4].5 In particular, we have Footnote 5: The consideration of correlated Rician fading, which includes an LoS component, is the topic of future work. \[\mathbf{G}=\sqrt{\bar{\beta}_{g}}\mathbf{R}_{\mathrm{BS}}^{1/2}\mathbf{D}\mathbf{R}_{\mathrm{RIS}}^{1/2}, \tag{2}\] \[\mathbf{q}_{k}=\sqrt{\bar{\beta}_{k}}\mathbf{R}_{\mathrm{RIS}}^{1/2}\mathbf{c}_{k}, \tag{3}\] \[\mathbf{d}_{k}=\sqrt{\tilde{\beta}_{k}}\mathbf{R}_{\mathrm{BS}}^{1/2}\bar{\mathbf{c}}_{k}, \tag{4}\] where \(\mathbf{R}_{\mathrm{BS}}\in\mathbb{C}^{M\times M}\) and \(\mathbf{R}_{\mathrm{RIS}}\in\mathbb{C}^{N\times N}\), assumed to be known by the network, express the deterministic Hermitian-symmetric positive semi-definite correlation matrices at the BS and the RIS, respectively.6 Regarding \(\mathbf{R}_{\mathrm{BS}}\), it can be modeled, e.g., as in [24], and \(\mathbf{R}_{\mathrm{RIS}}\) is modeled as in [4]. Moreover, \(\bar{\beta}_{g}\), \(\tilde{\beta}_{k}\), and \(\bar{\beta}_{k}\) express the path-losses of the BS-RIS link, the direct BS-UE \(k\) link, and the RIS-UE \(k\) link in the \(t\) or \(r\) region, respectively. Also, \(\mathrm{vec}(\mathbf{D})\sim\mathcal{CN}\left(\mathbf{0},\mathbf{I}_{MN}\right)\), \(\mathbf{c}_{k}\sim\mathcal{CN}\left(\mathbf{0},\mathbf{I}_{N}\right)\), and \(\bar{\mathbf{c}}_{k}\sim\mathcal{CN}\left(\mathbf{0},\mathbf{I}_{M}\right)\) express the corresponding fast-fading components. Footnote 6: Many previous works have assumed that the channel between the BS and the RIS is deterministic, expressing a line-of-sight (LoS) component [6, 20], while the analysis here is more general since we assume that all links are correlated Rayleigh fading distributed. In particular, \(\mathbf{G}\), as expressed in (2), is based on the Kronecker channel model. 
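A short sketch of how one realization of the channels in (2)-(4) could be drawn is given below; it assumes the correlation matrices \(\mathbf{R}_{\mathrm{BS}}\) and \(\mathbf{R}_{\mathrm{RIS}}\) are supplied (e.g., built as in [24] and [4]), and all function and variable names are illustrative.

```python
import numpy as np

def sqrtm_psd(R):
    """Matrix square root of a Hermitian PSD correlation matrix via eigendecomposition."""
    w, V = np.linalg.eigh(R)
    return (V * np.sqrt(np.maximum(w, 0.0))) @ V.conj().T

def draw_channels(R_bs, R_ris, beta_g, beta_d_k, beta_q_k, rng):
    """One realization of G (Eq. 2), q_k (Eq. 3), and d_k (Eq. 4)."""
    M, N = R_bs.shape[0], R_ris.shape[0]
    Rb, Rr = sqrtm_psd(R_bs), sqrtm_psd(R_ris)
    D = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    c_k = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    cb_k = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    G = np.sqrt(beta_g) * Rb @ D @ Rr          # BS-RIS channel
    q_k = np.sqrt(beta_q_k) * Rr @ c_k         # RIS-UE k channel
    d_k = np.sqrt(beta_d_k) * Rb @ cb_k        # direct BS-UE k channel
    return G, q_k, d_k
```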
We note that the correlation matrices \(\mathbf{R}_{\mathrm{RIS}}\) and \(\mathbf{R}_{\mathrm{BS}}\) can be assumed to be known by the network since they can be obtained by existing estimation methods [25, 26]. Alternatively, we can calculate the covariance matrices for both \(\mathbf{R}_{\mathrm{RIS}}\) and \(\mathbf{R}_{\mathrm{BS}}\) in practice, despite the fact that the RIS is passive. Especially, the expressions for these covariance matrices depend on the distances among the RIS elements and among the BS antennas, respectively, as well as the angles between them. The distances are known from the construction of the RIS and the BS, and the angles can be calculated when the locations are given. Hence, the covariance matrices can be considered to be known. Given the PBM, the aggregated channel vector for UE \(k\), \(\mathbf{h}_{k}=\mathbf{d}_{k}+\mathbf{G}\mathbf{\Phi}_{w_{k}}\mathbf{q}_{k}\), has a covariance matrix \(\mathbf{R}_{k}=\mathbb{E}\{\mathbf{h}_{k}\mathbf{h}_{k}^{\mathsf{H}}\}\) given by \[\mathbf{R}_{k}=\tilde{\beta}_{k}\mathbf{R}_{\mathrm{BS}}+\hat{\beta}_{k}\,\mathrm{tr}\big(\mathbf{R}_{\mathrm{RIS}}\mathbf{\Phi}_{w_{k}}\mathbf{R}_{\mathrm{RIS}}\mathbf{\Phi}_{w_{k}}^{\mathsf{H}}\big)\mathbf{R}_{\mathrm{BS}}, \tag{5}\] where we have used the independence between \(\mathbf{G}\) and \(\mathbf{q}_{k}\), \(\hat{\beta}_{k}=\bar{\beta}_{g}\bar{\beta}_{k}\), \(\mathbb{E}\{\mathbf{q}_{k}\mathbf{q}_{k}^{\mathsf{H}}\}=\bar{\beta}_{k}\mathbf{R}_{\mathrm{RIS}}\), and \(\mathbb{E}\{\mathbf{V}\mathbf{U}\mathbf{V}^{\mathsf{H}}\}=\mathrm{tr}(\mathbf{U})\mathbf{I}_{M}\) with \(\mathbf{U}\) being a deterministic square matrix, and \(\mathbf{V}\) being any matrix with independent and identically distributed (i.i.d.) entries of zero mean and unit variance. Notably, when \(\mathbf{R}_{\mathrm{RIS}}=\mathbf{I}_{N}\), \(\mathbf{R}_{k}\) does not depend on the phase shifts but only on the amplitudes, as also observed in [7]. **Remark 1**: _As shown in (5), when independent Rayleigh fading is assumed, i.e., \(\mathbf{R}_{\mathrm{RIS}}=\mathbf{I}_{N}\) and \(\mathbf{R}_{\mathrm{BS}}=\mathbf{I}_{M}\), the covariance matrix of the aggregated channel becomes \(\mathbf{R}_{k}=(\tilde{\beta}_{k}+\hat{\beta}_{k}\sum_{i=1}^{N}(\beta_{i}^{w_{k}})^{2})\mathbf{I}_{M}\), which is independent of the phase shifts. This reduces significantly the capability of the RIS in forming narrow beams and thus the performance is degraded accordingly. Therefore, it is not possible to optimize the achievable rate with respect to the phase shifts under independent Rayleigh fading conditions.7 However, in practice, correlated fading is unavoidable, which enables the optimization of the surface in terms of the phase shifts._ Footnote 7: It is important to note that the recent works based on statistical CSI, such as [5, 6, 7, 22, 23], have shown a similar observation, i.e., in the case of no RIS correlation, the covariance matrix of the aggregated channel \(\mathbf{R}_{k}\) does not depend on the phase shifts. ## III Channel Estimation In practical systems, perfect CSI cannot be obtained. Especially, in mMIMO systems, the TDD protocol is adopted and channels are estimated by an uplink training phase with pilot symbols [27]. However, a RIS, being implemented by nearly passive elements without any RF chains, cannot process received pilots or estimate any channels itself. Also, it cannot transmit any pilot sequences to the BS for channel estimation. 
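Before describing the estimation scheme, we note for reference that the closed-form covariance in (5) can be evaluated directly from the correlation matrices and the PBM, as in the following sketch (names are ours; `beta_d_k` and `beta_hat_k` stand for \(\tilde{\beta}_{k}\) and \(\hat{\beta}_{k}=\bar{\beta}_{g}\bar{\beta}_{k}\)).

```python
import numpy as np

def aggregated_covariance(R_bs, R_ris, Phi, beta_d_k, beta_hat_k):
    """Closed-form covariance of h_k = d_k + G @ Phi @ q_k, cf. Eq. (5)."""
    # tr(R_RIS Phi R_RIS Phi^H) scales the RIS-assisted part of the covariance
    scale = np.trace(R_ris @ Phi @ R_ris @ Phi.conj().T).real
    return beta_d_k * R_bs + beta_hat_k * scale * R_bs
```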
In general, there are two approaches to channel estimation for RIS-aided communication systems, one focusing on the estimation of the individual channels such as [11, 13, 14], and the other obtaining the estimated aggregated channel such as [6, 20, 28]. The first benefit of the latter approach is that its implementation does not require any extra hardware and power cost. Also, the estimated aggregated BS-RIS-UE channel is sufficient for the transmit beamforming design for the RIS-related links. It is easy to see that the BS-RIS channel has a large dimension in the considered system, which results in a prohibitively high pilot overhead if individual channels need to be estimated. This issue motivates us to employ the second approach in this paper, which has lower overhead and allows estimated channels to be expressed in closed form. We will now provide the details of the adopted channel estimation method. We assume that all UEs, either in the \(t\) or the \(r\) region, send orthogonal pilot sequences. Specifically, we denote by \(\mathbf{x}_{k}=[x_{k,1},\ldots,x_{k,\tau}]^{\mathsf{T}}\in\mathbb{C}^{\tau\times 1}\) the pilot sequence of UE \(k\) that can be found in any of the two regions, since the duration of the uplink training phase is \(\tau\) channel uses. Note that \(\mathbf{x}_{k}^{\mathsf{H}}\mathbf{x}_{l}=0\ \forall k\neq l\) and \(\mathbf{x}_{k}^{\mathsf{H}}\mathbf{x}_{k}=\tau P\) joules with \(P=|x_{k,i}|^{2},\ \forall k,i\), i.e., it is assumed that all UEs use the same average transmit power during the training phase. The received signal by the BS for the whole uplink training period is written as \[\mathbf{Y}^{\mathrm{tr}}=\sum_{i=1}^{K}\mathbf{h}_{i}\mathbf{x}_{i}^{\mathsf{H}}+\mathbf{Z}^{\mathrm{tr}}, \tag{6}\] where \(\mathbf{Z}^{\mathrm{tr}}\in\mathbb{C}^{M\times\tau}\) is the received AWGN matrix having independent columns with each one distributed as \(\mathcal{CN}\left(\mathbf{0},\sigma^{2}\mathbf{I}_{M}\right)\). Obviously, in (6), there is a contribution from UEs of both regions. Multiplying (6) by \(\mathbf{x}_{k}/(\tau P)\) removes the interference from the other UEs, which can be found in the same or in the opposite region, and gives \[\mathbf{r}_{k}=\mathbf{h}_{k}+\frac{\mathbf{z}_{k}}{\tau P}, \tag{7}\] where \(\mathbf{z}_{k}=\mathbf{Z}^{\mathrm{tr}}\mathbf{x}_{k}\). **Lemma 1**: _The LMMSE estimate of the aggregated channel \(\mathbf{h}_{k}\) between the BS and UE \(k\) is given by_ \[\hat{\mathbf{h}}_{k}=\mathbf{R}_{k}\mathbf{Q}_{k}\mathbf{r}_{k}, \tag{8}\] _where \(\mathbf{Q}_{k}=\left(\mathbf{R}_{k}+\frac{\sigma^{2}}{\tau P}\mathbf{I}_{M}\right)^{-1}\), and \(\mathbf{r}_{k}\) is the noisy channel given by (7)._ Proof:: Please see Appendix A. The orthogonality property of LMMSE estimation gives the overall perfect channel in terms of the estimated channel \(\hat{\mathbf{h}}_{k}\) and the estimation error vector \(\tilde{\mathbf{h}}_{k}\) as \[\mathbf{h}_{k}=\hat{\mathbf{h}}_{k}+\tilde{\mathbf{h}}_{k}. \tag{9}\] Both \(\hat{\mathbf{h}}_{k}\) and \(\tilde{\mathbf{h}}_{k}\) have zero mean, and have covariances (cf. (40)) \[\mathbf{\Psi}_{k} = \mathbf{R}_{k}\mathbf{Q}_{k}\mathbf{R}_{k}, \tag{10}\] \[\tilde{\mathbf{\Psi}}_{k} = \mathbf{R}_{k}-\mathbf{\Psi}_{k}, \tag{11}\] respectively. Given that \(\mathbf{h}_{k}\) is not Gaussian, \(\hat{\mathbf{h}}_{k}\) and \(\tilde{\mathbf{h}}_{k}\) are not independent, but they are uncorrelated and each of them has zero mean [27]. 
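A minimal sketch of the estimator in Lemma 1, together with the covariances (10)-(11), is given below, assuming the processed observation \(\mathbf{r}_{k}\) of (7) is available (function and variable names are ours).

```python
import numpy as np

def lmmse_estimate(r_k, R_k, sigma2, tau, P):
    """LMMSE estimate of the aggregated channel, Eq. (8), with covariances (10)-(11)."""
    M = R_k.shape[0]
    Q_k = np.linalg.inv(R_k + (sigma2 / (tau * P)) * np.eye(M))
    h_hat = R_k @ Q_k @ r_k
    Psi_k = R_k @ Q_k @ R_k          # covariance of the estimate, Eq. (10)
    Psi_err = R_k - Psi_k            # covariance of the estimation error, Eq. (11)
    return h_hat, Psi_k, Psi_err
```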
It is clear from the above derivations that we indeed follow a conventional channel estimation method from standard mMIMO systems, where only the aggregated instantaneous BS-UE channel \(\mathbf{h}_{k}\) is estimated. In this way, the minimum pilot sequence length is \(K\), which is independent of the dimensions \(M\) and \(N\). We note that if individual channels need to be estimated, the required complexity increases with \(M\) and \(N\), which are large in the considered system. Thus, the presented method of estimating aggregated channels offers significant overhead reduction. It is important to note that this channel estimation method is sufficient for the design of the precoder at the BS in statistical CSI-based approaches. We also remark that the method of estimating aggregated channels presented above has not been previously applied to STAR-RIS-aided systems based on the two-timescale method, which adds to the novelty of our paper. Specifically, we have demonstrated that the same expression for estimated channels can be used for users in either \(t\) or \(r\) regions, which has not been reported in previous papers studying STAR-RIS. Note also that the proposed two-timescale transmission approach has a channel estimation phase that does not depend on \(N\), and thus, it is applicable to both ES and MS protocols. However, the ES protocol has twice the number of optimizable variables and thus requires higher complexity compared to the MS protocol. An advantage of the two-timescale approach is that the surface needs to be redesigned only when the statistical CSI changes. In contrast, instantaneous CSI-based schemes require beamforming calculations and information feedback in every channel coherence interval, leading to high computational complexity, power consumption, and feedback overhead. For such schemes, the ES protocol is not practically appealing, and thus, our proposed two-timescale approach is certainly more viable. **Remark 2**: _Our analysis presented above relies on large-scale statistics for a given PBM, which is obtained at every several coherence intervals. Thus, the optimization of the PBM that will be studied in the sequel is more practically appealing. Note that our method provides the estimated aggregated channel vector in closed-form. Other methods in the RIS literature such as [18] do not result in analytical expressions, and do not capture the correlation effect since they obtain the estimated channel per RIS element [20]. Moreover, in the case of STAR-RIS, the only work on channel estimation is [11] but it does not consider practical effects such as correlation and multiple antennas at the BS._ ## IV Downlink Data Transmission The downlink data transmission from the BS to UE \(k\) in \(t\) or \(r\) region relies on TDD, which exploits channel reciprocity, i.e., the downlink channel equals the Hermitian transpose of the uplink channel. Hence, the received signal by UE \(k\) is expressed as \[r_{k}=\mathbf{h}_{k}^{\mathsf{s}}\mathbf{s}+z_{k}, \tag{12}\] where \(\mathbf{s}=\sqrt{\lambda}\sum_{i=1}^{K}\sqrt{p_{i}}\mathbf{f}_{i}l_{i}\) expresses the transmit signal vector by the BS, \(p_{i}\) is the power allocated to UE \(i\), and \(\lambda\) is a constant which is found such that \(\mathbb{E}[\mathbf{s}^{\mathsf{s}}\mathbf{s}]=\rho\), where \(\rho\) is the total average power budget. Also, \(z_{k}\sim\mathcal{CN}(0,\sigma^{2})\) is the additive white complex Gaussian noise at UE \(k\). 
Moreover, \(\mathbf{f}_{i}\in\mathbb{C}^{M\times 1}\) is the linear precoding vector and \(l_{i}\) is the corresponding data symbol with \(\mathbb{E}\{|l_{i}|^{2}\}=1\). In this paper we adopt equal power allocation among all UEs, as usually happens in the mMIMO literature, i.e., \(p_{i}=\rho/K\) [24]. Thus, \(\lambda\) is found to ensure \(\mathbb{E}[\mathbf{s}^{\mathsf{H}}\mathbf{s}]=\rho\), which gives \(\lambda=\frac{K}{\mathbb{E}[\mathrm{tr}(\mathbf{F}\mathbf{F}^{\mathsf{H}})]}\), where \(\mathbf{F}=[\mathbf{f}_{1},\ldots,\mathbf{f}_{K}]\in\mathbb{C}^{M\times K}\). According to the technique in [29], and by exploiting that UEs do not have instantaneous CSI but are aware of only statistical CSI, the received signal by UE \(k\) can be written as \[r_{k}=\sqrt{\frac{\lambda\rho}{K}}\Big(\mathbb{E}\{\mathbf{h}_{k}^{\mathsf{H}}\mathbf{f}_{k}\}l_{k}+\mathbf{h}_{k}^{\mathsf{H}}\mathbf{f}_{k}l_{k}-\mathbb{E}\{\mathbf{h}_{k}^{\mathsf{H}}\mathbf{f}_{k}\}l_{k}+\sum_{i\neq k}^{K}\mathbf{h}_{k}^{\mathsf{H}}\mathbf{f}_{i}l_{i}\Big)+z_{k}. \tag{13}\] Now, by using the use-and-then-forget bound [27], which relies on the common assumption of the worst-case uncorrelated additive noise for the inter-user interference, we obtain a lower bound on the downlink average SE in bps/Hz. We remark that this lower bound is tight for mMIMO as the number of antennas is very large. Specifically, the achievable sum SE is given by \[\mathrm{SE}=\frac{\tau_{\mathrm{c}}-\tau}{\tau_{\mathrm{c}}}\sum_{k=1}^{K}\log_{2}{(1+\gamma_{k})}, \tag{14}\] where \(\gamma_{k}\) is the downlink signal-to-interference-plus-noise ratio (SINR), and the pre-log fraction corresponds to the percentage of samples per coherence block used for downlink data transmission. Note that according to the use-and-forget bounding technique, the downlink SINR is given by \[\gamma_{k}=\frac{S_{k}}{I_{k}}, \tag{15}\] where \[S_{k}=|\mathbb{E}\{\mathbf{h}_{k}^{\mathsf{H}}\mathbf{f}_{k}\}|^{2} \tag{16}\] \[I_{k}=\mathbb{E}\big\{\big|\mathbf{h}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}-\mathbb{E}\big\{\mathbf{h}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}\big\}\big|^{2}\big\}+\sum_{i\neq k}^{K}\mathbb{E}\{|\mathbf{h}_{k}^{\mathsf{H}}\mathbf{f}_{i}|^{2}\}+\frac{K\sigma^{2}}{\rho\lambda}. \tag{17}\] It is obvious that the final expressions for \(S_{k}\) and \(I_{k}\) depend on the choice of the precoder and the derivation of the SINR. In this regard, we note that maximum ratio transmission (MRT) and regularized zero-forcing (RZF) precoders are common options in the mMIMO literature. Herein, we select MRT for the sake of simplicity, while RZF will be investigated in future work. Since the aggregated channels between the BS and the UEs involve the indirect link through the STAR-RIS, it is challenging to evaluate (16) and (17) in closed form. In the following proposition we present tight approximations of (16) and (17), which are then used to optimize the phase shifts. **Proposition 1**: _Let \(\mathbf{f}_{k}=\hat{\mathbf{h}}_{k}\), i.e., MRT precoding is used. Then a tight approximation of the downlink achievable SINR of UE \(k\) for a given PBM \(\Phi_{w_{k}}\) in a STAR-RIS assisted mMIMO system, accounting for imperfect CSI, is given by_ \[\gamma_{k}\approx\frac{S_{k}}{\tilde{I}_{k}}, \tag{18}\] _where_ \[S_{k}=\mathrm{tr}^{2}\left(\mathbf{\Psi}_{k}\right), \tag{19}\] \[\tilde{I}_{k}=\sum_{i=1}^{K}\mathrm{tr}(\mathbf{R}_{k}\mathbf{\Psi}_{i})-\mathrm{tr}\left(\mathbf{\Psi}_{k}^{2}\right)+\frac{K\sigma^{2}}{\rho}\sum_{i=1}^{K}\mathrm{tr}(\mathbf{\Psi}_{i}). 
\tag{20}\] Proof:: Please see Appendix B. ## V Simultaneous Amplitudes and Phase Shifts Configuration It is critical to find the PBM to optimize a performance measure of STAR-RIS assisted systems. In this paper, assuming infinite-resolution phase shifters, we formulate the optimization problem for maximizing the sum SE with imperfect CSI and correlated fading. As mentioned in the preceding section, there are two operation protocols: ES protocol and MS protocol. In the following two subsections we deal with these two protocols. ### _Optimization of Amplitudes and Phase Shifts for ES protocol_ For the ES protocol the formulated problem reads \[\max_{\mathbf{\theta},\mathbf{\beta}} f(\mathbf{\theta},\mathbf{\beta})\triangleq\sum_{k=1}^{K}\log_{2}(1+ \frac{S_{k}}{\tilde{I}_{k}})\] \[\mathrm{s.t} (\beta_{n}^{t})^{2}+(\beta_{n}^{r})^{2}=1,\forall n\in\mathcal{N}\] \[\beta_{n}^{t}\geq 0,\beta_{n}^{r}\geq 0,\ \forall n\in\mathcal{N}\] \[|\theta_{n}^{t}|=|\theta_{n}^{r}|=1,\ \forall n\in\mathcal{N}\] where \(\mathbf{\theta}=[(\mathbf{\theta}^{t})^{\mathrm{ T}},(\mathbf{\theta}^{r})^{ \mathrm{ T}}]^{\mathrm{ T}}\) and \(\mathbf{\beta}=[(\mathbf{\beta}^{t})^{\mathrm{ T}},(\mathbf{\beta}^{r})^{ \mathrm{ T}}]^{\mathrm{ T}}\). Note that to achieve a compact description we have vertically stacked \(\mathbf{\theta}^{t}\) and \(\mathbf{\theta}^{r}\) into a single vector \(\mathbf{\theta}\), and \(\mathbf{\beta}^{t}\) and \(\mathbf{\beta}^{r}\) into a single vector \(\mathbf{\beta}\), respectively. Also note that in the above problem formulation we have used the tight approximation of the SINR given in Proposition 1 to maximize the approximate sum SE, denoted by \(f(\mathbf{\theta},\mathbf{\beta})\). For ease of exposition, we define two sets: \(\Theta=\{\mathbf{\theta}\ |\ |\theta_{i}^{t}|=|\theta_{i}^{r}|=1,i=1,2,\ldots N\}\), and \(\mathcal{B}=\{\mathbf{\beta}\ |\ (\beta_{i}^{t})^{2}+(\beta_{i}^{r})^{2}=1,\beta_{i}^{t} \geq 0,\beta_{i}^{r}\geq 0,i=1,2,\ldots N\}\), which in fact together describe the feasible set of (\(\mathcal{P}1\)). Notably, the introduction of STAR-RIS imposes new challenges. In particular, the first constraint is not simple but includes the two types of passive beamforming, namely transmission and reflection beamforming, to be optimized, which are coupled with each other due to the energy conservation law. The problem (\(\mathcal{P}1\)) is non-convex and is coupled among the optimization variables, which are the amplitudes and the phase shifts for transmission and reflection. For the development of an efficient algorithm to solve (\(\mathcal{P}1\)) we remark that the sets \(\Theta\) and \(\mathcal{B}\) are simple in the sense that their projection operators can be done in closed-form. This motivates us to apply the projected gradient ascent method (PGAM) [30, Ch. 2] to optimize \(\mathbf{\theta}\) and \(\mathbf{\beta}\), which is described next. However, in the case of independent Rayleigh fading, \(\mathrm{SE}\) does not depend on \(\mathbf{\theta}\), which means that optimization can take place only with respect to \(\mathbf{\beta}\). The proposed PGAM consists of the following iterations \[\mathbf{\theta}^{n+1} =P_{\Theta}(\mathbf{\theta}^{n}+\mu_{n}\nabla_{\mathbf{\theta}}f(\mathbf{ \theta}^{n},\mathbf{\beta}^{n})), \tag{21a}\] \[\mathbf{\beta}^{n+1} =P_{\mathcal{B}}(\mathbf{\beta}^{n}+\mu_{n}\nabla_{\mathbf{\beta}}f(\mathbf{ \theta}^{n},\mathbf{\beta}^{n})). \tag{21b}\] In the above equations, the superscript denotes the iteration count. 
From the current iterate \((\mathbf{\theta}^{n},\mathbf{\beta}^{n})\) we move along the gradient direction to increase the objective. In (21), \(\mu_{n}\) is the step size for both \(\mathbf{\theta}\) and \(\mathbf{\beta}\). Also, in (21), \(P_{\Theta}(\cdot)\) and \(P_{\mathcal{B}}(\cdot)\) are the projections onto \(\Theta\) and \(\mathcal{B}\), respectively. The choice of the step size in (21a) and (21b) is important to make the proposed PGAM converge. The ideal step size should be inversely proportional to the Lipschitz constant of the corresponding gradient but this is difficult to find for the considered problem. For this reason, we apply the Armijo-Goldstein backtracking line search to find the step size at each iteration. To this end, we define a quadratic approximation of \(f(\mathbf{\theta},\mathbf{\beta})\) as \[Q_{\mu}(\mathbf{\theta},\mathbf{\beta};\mathbf{x},\mathbf{y})=f(\mathbf{ \theta},\mathbf{\beta})+\langle\nabla_{\mathbf{\theta}}f(\mathbf{\theta},\mathbf{\beta}), \mathbf{x}-\mathbf{\theta}\rangle\] \[-\frac{1}{\mu}\|\mathbf{x}-\mathbf{\theta}\|_{2}^{2}+\langle\nabla_{ \mathbf{\beta}}f(\mathbf{\theta},\mathbf{\beta}),\mathbf{y}-\mathbf{\beta}\rangle-\frac{1}{\mu} \|\mathbf{y}-\mathbf{\beta}\|_{2}^{2}. \tag{22}\] Note that in this paper we define \(\langle\mathbf{x},\mathbf{y}\rangle=2\,\mathrm{Re}\{\mathbf{x}^{u}\mathbf{y}\}\) for complex-valued \(\mathbf{x}\) and \(\mathbf{y}\) and \(\langle\mathbf{x},\mathbf{y}\rangle=\mathbf{x}^{\mathrm{ T}}\mathbf{y}\) for non complex-valued \(\mathbf{x}\) and \(\mathbf{y}\). Now, let \(L_{n}>0\), and \(\kappa\in(0,1)\). Then the step size \(\mu_{n}\) in (21) can be found as \(\mu_{n}=L_{n}\kappa^{m_{n}}\), where \(m_{n}\) is the smallest nonnegative integer satisfying \[f(\mathbf{\theta}^{n+1},\mathbf{\beta}^{n+1})\geq Q_{L_{n}\kappa^{m_{n}}}(\mathbf{\theta}^{n },\mathbf{\beta}^{n};\mathbf{\theta}^{n+1},\mathbf{\beta}^{n+1}), \tag{23}\] which can be done by an iterative procedure. In the proposed PGAM, we use the step size at iteration \(n\) as the initial step size at iteration \(n+1\). The proposed PGAM is summarized in Algorithm 1. We present the complex-valued gradients in the following lemma. 
**Lemma 2**: _The complex gradients \(\nabla_{\mathbf{\theta}}f(\mathbf{\theta},\mathbf{\beta})\) and \(\nabla_{\mathbf{\beta}}f(\mathbf{\theta},\mathbf{\beta})\) are given in closed-forms by \[\nabla_{\mathbf{\theta}}f(\mathbf{\theta},\mathbf{\beta}) =[\nabla_{\mathbf{\theta}^{\ast}}f(\mathbf{\theta},\mathbf{\beta})^{\mathrm{T}}, \nabla_{\mathbf{\theta}^{\ast}}f(\mathbf{\theta},\mathbf{\beta}^{n}))\] \[\nabla_{\mathbf{\theta}^{\ast}}f(\mathbf{\theta},\mathbf{\beta}) =\frac{\tau_{\mathrm{c}}-\tau}{\tau_{\mathrm{c}}\log 2}\sum_{k=1}^{K} \frac{\tilde{I}_{k}\nabla_{\mathbf{\theta}^{\ast}}S_{k}-S_{k}\nabla_{\mathbf{\theta}^{ \ast}}\tilde{I}_{k}}{(1+\gamma_{k})\tilde{I}_{k}{}^{2}}, \tag{24b}\] \[\nabla_{\mathbf{\theta}^{\ast}}f(\mathbf{\theta},\mathbf{\beta}) =\frac{\tau_{\mathrm{c}}-\tau}{\tau_{\mathrm{c}}\log 2}\sum_{k=1}^{K} \frac{\tilde{I}_{k}\nabla_{\mathbf{\theta}^{\ast}}S_{k}-S_{k}\nabla_{\mathbf{\theta}^ {\ast}}\tilde{I}_{k}}{(1+\gamma_{k})\tilde{I}_{k}{}^{2}}, \tag{24c}\] where \[\nabla_{\mathbf{\theta}^{\ast}}S_{k} =\begin{cases}\nu_{k}\text{diag}\big{(}\mathbf{A}_{\mathrm{r}}\text{ diag}(\mathbf{\beta}^{t})\big{)}&w_{k}=t\\ 0&w_{k}=r\end{cases} \tag{25a}\] \[\nabla_{\mathbf{\theta}^{\ast}}S_{k} =\begin{cases}\nu_{k}\text{diag}\big{(}\mathbf{A}_{\mathrm{r}}\text{ diag}(\mathbf{\beta}^{\ast})\big{)}&w_{k}=r\\ 0&w_{k}=t\end{cases}\] (25b) \[\nabla_{\mathbf{\theta}^{\ast}}\tilde{I}_{k} =\text{diag}\big{(}\tilde{\mathbf{A}}_{k\text{r}}\text{diag}(\mathbf{\beta} ^{t})\big{)}\] (25c) \[\nabla_{\mathbf{\theta}^{\ast}}\tilde{I}_{k} =\text{diag}\big{(}\tilde{\mathbf{A}}_{k\text{r}}\text{diag}(\mathbf{\beta} ^{r})\big{)} \tag{25d}\] with \(\mathbf{\mathrm{A}}_{w_{k}}=\mathbf{\mathrm{R}}_{\mathrm{R}\mathrm{I}\mathrm{S}}\mathbf{ \Phi}_{w_{k}}\mathbf{\mathrm{R}\mathrm{I}\mathrm{S}}\) for \(w_{k}\in\{t,r\}\), \(\nu_{k}=2\hat{\beta}_{k}\operatorname{tr}\left(\mathbf{\Psi}_{k}\right) \operatorname{tr}((\mathbf{\mathrm{Q}}_{k}\mathbf{\mathrm{R}}_{k}+\mathbf{\mathrm{R}}_{k} \mathbf{\mathrm{Q}}_{k}-\mathbf{\mathrm{Q}}_{k}\mathbf{\mathrm{R}}_{k}\mathbf{\mathrm{Q}}_{k} )\mathbf{\mathrm{R}\mathrm{S}}_{\mathrm{S}})\), \[\tilde{\mathbf{\mathrm{A}}}_{ku}=\begin{cases}(\tilde{\nu}_{k}+\sum_{i\in\mathcal{ X}_{e}}^{K}\tilde{\nu}_{ki})\mathbf{\mathrm{A}}_{u}&w_{k}=u\\ \sum_{i\in\mathcal{X}_{e}}^{K}\tilde{\nu}_{ki}\mathbf{\mathrm{A}}_{u}&w_{k}\neq u,\end{cases} \tag{26}\] \(u\in\{t,r\}\), \(\tilde{\nu}_{k}=\hat{\beta}_{k}\operatorname{tr}\big{(}\tilde{\mathbf{\Psi}}_{k} \mathbf{\mathrm{R}\mathrm{S}}_{\mathrm{S}}\big{)}\), \(\tilde{\nu}_{ki}=\hat{\beta}_{k}\operatorname{tr}\big{(}\tilde{\mathbf{\mathrm{R}} }_{ki}\mathbf{\mathrm{R}\mathrm{S}}_{\mathrm{S}}\big{)}\), \(\tilde{\mathbf{\Psi}}_{k}=\mathbf{\Psi}-2(\mathbf{\mathrm{Q}}_{k}\mathbf{\mathrm{R}}_{k}\mathbf{ \mathrm{\Psi}}_{k}+\mathbf{\Psi}_{k}\mathbf{\mathrm{R}}_{k}\mathbf{\mathrm{Q}}_{k}-\mathbf{ \mathrm{Q}}_{k}\mathbf{\mathrm{R}}_{k}\mathbf{\mathrm{R}}_{k}\mathbf{\mathrm{R}}_{k}\mathbf{ \mathrm{Q}}_{k})\), \(\mathbf{\Psi}=\sum_{i=1}^{K}\mathbf{\Psi}_{i}\), \(\tilde{\mathbf{\mathrm{R}}}_{ki}=\mathbf{\mathrm{Q}}_{k}\mathbf{\mathrm{R}}_{i}\tilde{\mathbf{ \mathrm{R}}}_{k}-\mathbf{\mathrm{Q}}_{k}\mathbf{\mathrm{R}}_{i}\tilde{\mathbf{\mathrm{R}}}_{ k}\mathbf{\mathrm{R}}_{i}\mathbf{\mathrm{Q}}_{i}+\tilde{\mathbf{\mathrm{R}}}_{k}\mathbf{ \mathrm{R}}_{i}\mathbf{\mathrm{Q}}_{i}\), and \(\tilde{\mathbf{\mathrm{R}}}_{k}=\mathbf{\mathrm{R}}_{k}+\frac{K\sigma^{2}}{\rho}\mathbf{ \mathrm{I}}_{M}\). 
Similarly, the gradient \(\nabla_{\mathbf{\beta}}f(\mathbf{\theta},\mathbf{\beta})\) is given by \[\nabla_{\mathbf{\beta}}f(\mathbf{\theta},\mathbf{\beta})=[\nabla_{\mathbf{\beta}^{t}}f(\mathbf{\theta},\mathbf{\beta})^{\mathrm{T}},\nabla_{\mathbf{\beta}^{r}}f(\mathbf{\theta},\mathbf{\beta})^{\mathrm{T}}]^{\mathrm{T}}, \tag{27a}\] \[\nabla_{\mathbf{\beta}^{t}}f(\mathbf{\theta},\mathbf{\beta})=\frac{\tau_{\mathrm{c}}-\tau}{\tau_{\mathrm{c}}\log 2}\sum_{k=1}^{K}\frac{\tilde{I}_{k}\nabla_{\mathbf{\beta}^{t}}S_{k}-S_{k}\nabla_{\mathbf{\beta}^{t}}\tilde{I}_{k}}{(1+\gamma_{k})\tilde{I}_{k}^{2}}, \tag{27b}\] \[\nabla_{\mathbf{\beta}^{r}}f(\mathbf{\theta},\mathbf{\beta})=\frac{\tau_{\mathrm{c}}-\tau}{\tau_{\mathrm{c}}\log 2}\sum_{k=1}^{K}\frac{\tilde{I}_{k}\nabla_{\mathbf{\beta}^{r}}S_{k}-S_{k}\nabla_{\mathbf{\beta}^{r}}\tilde{I}_{k}}{(1+\gamma_{k})\tilde{I}_{k}^{2}}, \tag{27c}\] where \[\nabla_{\mathbf{\beta}^{t}}S_{k}=\begin{cases}2\nu_{k}\operatorname{Re}\big\{\text{diag}\big(\mathbf{A}_{t}^{\mathsf{H}}\text{diag}(\mathbf{\theta}^{t})\big)\big\}&w_{k}=t\\ 0&w_{k}=r\end{cases} \tag{28a}\] \[\nabla_{\mathbf{\beta}^{r}}S_{k}=\begin{cases}2\nu_{k}\operatorname{Re}\big\{\text{diag}\big(\mathbf{A}_{r}^{\mathsf{H}}\text{diag}(\mathbf{\theta}^{r})\big)\big\}&w_{k}=r\\ 0&w_{k}=t\end{cases} \tag{28b}\] \[\nabla_{\mathbf{\beta}^{t}}\tilde{I}_{k}=2\operatorname{Re}\big\{\text{diag}\big(\tilde{\mathbf{A}}_{kt}^{\mathsf{H}}\text{diag}(\mathbf{\theta}^{t})\big)\big\} \tag{28c}\] \[\nabla_{\mathbf{\beta}^{r}}\tilde{I}_{k}=2\operatorname{Re}\big\{\text{diag}\big(\tilde{\mathbf{A}}_{kr}^{\mathsf{H}}\text{diag}(\mathbf{\theta}^{r})\big)\big\}. \tag{28d}\] Note that \(\nabla_{\mathbf{\beta}}f(\mathbf{\theta},\mathbf{\beta})\) is real-valued. **Remark 3**: _As mentioned earlier, we use \(\beta_{i}^{w_{k}}\), instead of \(\sqrt{\beta_{i}^{w_{k}}}\) as in [8], to denote the amplitude of the \(i\)th RIS element in mode \(w_{k}\). The purpose of that maneuver is now clear. In fact, if \(\sqrt{\beta_{i}^{w_{k}}}\) were used to represent the amplitude, then the gradient \(\nabla_{\mathbf{\beta}}f(\mathbf{\theta},\mathbf{\beta})\) would be similar to (27) but contain the term \(\sqrt{\beta_{i}^{w_{k}}}\) in the denominator. This would make \(\nabla_{\mathbf{\beta}}f(\mathbf{\theta},\mathbf{\beta})\) ill-conditioned (i.e., extremely large), which in turn can cause numerical issues in the execution of Algorithm 1 in practice._ To conclude the description of Algorithm 1, we now provide the projection onto the sets \(\Theta\) and \(\mathcal{B}\). First, it is straightforward to check that, for a given \(\mathbf{\theta}\in\mathbb{C}^{2N\times 1}\), \(P_{\Theta}(\mathbf{\theta})\) is given by \[P_{\Theta}(\mathbf{\theta})=\mathbf{\theta}/|\mathbf{\theta}|=e^{j\angle\mathbf{\theta}}, \tag{29}\] where the operations in the right-hand side of the above equation are performed entrywise. The projection \(P_{\mathcal{B}}(\mathbf{\beta})\) deserves special attention: it amounts to projecting each pair \((\beta_{n}^{t},\beta_{n}^{r})\) onto the part of the unit circle lying in the nonnegative quadrant, which can also be computed entrywise in closed form. Regarding the per-iteration complexity of Algorithm 1, note that \(\mathbf{\Phi}_{w_{k}}\) is diagonal, so only the diagonal entries of \(\mathbf{A}_{w_{k}}=\mathbf{R}_{\mathrm{RIS}}\mathbf{\Phi}_{w_{k}}\mathbf{R}_{\mathrm{RIS}}\) are needed. As a result, the complexity to compute \(\operatorname{tr}\bigl(\mathbf{A}_{w_{k}}\mathbf{\Phi}_{w_{k}}^{\mathsf{H}}\bigr)\) is \(O(N^{2}+N)\), and thus, the complexity to obtain \(\mathbf{R}_{k}\) is \(O(N^{2}+N+M^{2})\) since \(O(M^{2})\) additional complex multiplications are required to obtain \(\operatorname{tr}\bigl(\mathbf{A}_{w_{k}}\mathbf{\Phi}_{w_{k}}^{\mathsf{H}}\bigr)\mathbf{R}_{\mathrm{BS}}\). 
Recall that \(\mathbf{\Psi}_{k}=\mathbf{R}_{k}\mathbf{Q}_{k}\mathbf{R}_{k}=\mathbf{R}_{k} \bigl{(}\mathbf{R}_{k}+\frac{\sigma^{2}}{\tau P\alpha_{k}^{2}}\mathbf{I}_{M} \bigr{)}^{-1}\mathbf{R}_{k}\) and thus it would take \(O(M^{3})\) to compute it, which is due to the calculation of the involving matrix inversion. Here we present a more efficient way to compute \(\mathbf{\Psi}_{k}\). Let \(\mathbf{R}_{\mathrm{BS}}=\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{\mathsf{H}}\), where \(\mathbf{\Sigma}\) is diagonal and \(\mathbf{U}\) is unitary, be the eigenvalue decomposition (EVD) of \(\mathbf{R}_{\mathrm{BS}}\) and \(\alpha_{k}=\hat{\beta}_{k}\operatorname{tr}\bigl{(}\mathbf{A}_{w_{k}}\mathbf{ \Phi}_{w_{k}}^{\mu}\bigr{)}\). We remark that _the EVD of \(\mathbf{\Sigma}\) is only performed once_ before Algorithm 1 is executed. Then, we can write \[\mathbf{Q}_{k} =\bigl{(}\mathbf{R}_{k}+\frac{\sigma^{2}}{\tau P}\mathbf{I}_{M} \bigr{)}^{-1}\] \[=\bigl{(}\alpha_{k}\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{\mathsf{H }}+\frac{\sigma^{2}}{\tau P}\mathbf{I}_{M}\bigr{)}^{-1}\] \[=\mathbf{U}\bigl{(}\alpha_{k}\mathbf{\Sigma}+\frac{\sigma^{2}}{ \tau P\alpha_{k}^{2}}\mathbf{I}_{M}\bigr{)}^{-1}\mathbf{U}^{\mathsf{H}}, \tag{31}\] where we have used the fact that \(\mathbf{R}_{k}=\alpha_{k}\mathbf{R}_{\mathrm{BS}}=\alpha_{k}\mathbf{U}\mathbf{ \Sigma}\mathbf{U}^{\mathsf{H}}\). Substituting (31) into (10), we immediately have \[\mathbf{\Psi}_{k} =\alpha_{k}^{2}\mathbf{U}\mathbf{\Sigma}\bigl{(}\alpha_{k} \mathbf{\Sigma}+\frac{\sigma^{2}}{\tau P\alpha_{k}^{2}}\mathbf{I}_{M}\bigr{)}^ {-1}\mathbf{\Sigma}\mathbf{U}^{\mathsf{H}}\] \[=\mathbf{U}\mathbf{\Sigma}\bigl{(}\alpha_{k}^{-1}\mathbf{\Sigma} +\frac{\sigma^{2}}{\tau P\alpha_{k}^{2}}\mathbf{I}_{M}\bigr{)}^{-1}\mathbf{ \Sigma}\mathbf{U}^{\mathsf{H}}\] \[=\mathbf{U}\mathbf{\Sigma}\mathbf{\Sigma}_{k}\mathbf{\Sigma} \mathbf{U}^{\mathsf{H}}, \tag{32}\] where \(\mathbf{\Sigma}_{k}=\bigl{(}\alpha_{k}^{-1}\mathbf{\Sigma}+\frac{\sigma^{2}}{ \tau P\alpha_{k}^{2}}\mathbf{I}_{M}\bigr{)}^{-1}\). Note that \(\mathbf{\Sigma}_{k}\) is diagonal and takes \(O(M)\) complex multiplications to compute. Now it is clear that \(\operatorname{tr}(\mathbf{\Psi}_{k})=\operatorname{tr}\bigl{(}\mathbf{\Sigma} \mathbf{\Sigma}_{k}\mathbf{\Sigma}\bigr{)}\) requires \(O(M)\) complex multiplications to compute, which is indeed the complexity to compute \(S_{k}\) in (19). To compute \(I_{k}\) we have \[\mathbf{\Psi} =\sum\nolimits_{i=1}^{K}\mathbf{\Psi}_{i}\] \[=\mathbf{U}\mathbf{\Sigma}\Bigl{(}\sum\nolimits_{i=1}^{K} \mathbf{\Sigma}_{i}\Bigr{)}\mathbf{\Sigma}\mathbf{U}^{\mathsf{H}}\] \[=\mathbf{U}\mathbf{\Sigma}\mathbf{\Sigma}\mathbf{\Sigma}\mathbf{U} ^{\mathsf{H}}, \tag{33}\] where \(\mathbf{\Sigma}=\sum_{i=1}^{K}\mathbf{\Sigma}_{i}\). We note that obtaining \(\mathbf{\Sigma}\) once all \(\mathbf{\Sigma}_{i}\)'s are known requires only \(KM\) complex additions, which is negligible. Thus, it follows that \[\sum\nolimits_{i=1}^{K}\operatorname{tr}(\mathbf{R}_{k}\mathbf{ \Psi}_{i}) =\operatorname{tr}(\mathbf{R}_{k}\mathbf{\Psi})\] \[=\alpha_{k}\operatorname{tr}\bigl{(}\mathbf{\Sigma}^{2}\mathbf{ \Sigma}\mathbf{\Sigma}\mathbf{U}^{\mathsf{H}}\bigr{)} \tag{34}\] and that \[\sum\nolimits_{i=1}^{K}\operatorname{tr}(\mathbf{\Psi}_{i}) =\operatorname{tr}(\mathbf{\Psi})=\operatorname{tr}\bigl{(}\mathbf{\Sigma} \mathbf{\Sigma}\mathbf{\Sigma}\bigr{)}. \tag{35}\] Summarizing the above results, we can conclude that the complexity to compute \(f(\boldsymbol{\theta},\boldsymbol{\beta})\) is \(O(K(N^{2}+M^{2}))\). 
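The diagonal shortcut above can be turned into a direct evaluation of the sum SE in (14) with the approximations (18)-(20); the sketch below assumes, as in the text, that \(\mathbf{R}_{k}=\alpha_{k}\mathbf{R}_{\mathrm{BS}}\), so that only the eigenvalues of \(\mathbf{R}_{\mathrm{BS}}\) and the scalars \(\alpha_{k}\) are needed (all names are ours).

```python
import numpy as np

def sum_se(Sigma, alphas, sigma2, tau, P, rho, tau_c):
    """Sum SE (14) under the SINR approximation (18)-(20), using the
    eigenvalue-based shortcut of (31)-(35).  `Sigma` holds the eigenvalues of
    R_BS and `alphas[k]` the scalar such that R_k = alphas[k] * R_BS."""
    K = len(alphas)
    diag_psi = []                                   # eigenvalues of Psi_k, cf. (32)
    for a in alphas:
        Sigma_k = 1.0 / (Sigma / a + sigma2 / (tau * P * a**2))
        diag_psi.append(Sigma * Sigma_k * Sigma)
    diag_psi = np.array(diag_psi)
    tr_psi = diag_psi.sum(axis=1)                   # tr(Psi_k) for every UE
    diag_psi_sum = diag_psi.sum(axis=0)             # eigenvalues of Psi = sum_i Psi_i, cf. (33)
    se = 0.0
    for k, a in enumerate(alphas):
        S_k = tr_psi[k] ** 2                                        # Eq. (19)
        I_k = (a * Sigma * diag_psi_sum).sum() \
              - (diag_psi[k] ** 2).sum() \
              + (K * sigma2 / rho) * tr_psi.sum()                   # Eq. (20)
        se += np.log2(1.0 + S_k / I_k)
    return (tau_c - tau) / tau_c * se               # pre-log factor of (14)
```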
Next we present the complexity to compute \(\nabla_{\boldsymbol{\theta}}f(\boldsymbol{\theta},\boldsymbol{\beta})\) and \(\nabla_{\boldsymbol{\beta}}f(\boldsymbol{\theta},\boldsymbol{\beta})\). Recall that \(v_{k}\) in (25a) and (25b) is given by \(v_{k}=2\hat{\beta}_{k}\operatorname{tr}\left(\mathbf{\Psi}_{k}\right) \operatorname{tr}((\mathbf{Q}_{k}\mathbf{R}_{k}+\mathbf{R}_{k}\mathbf{Q}_{k}- \mathbf{Q}_{k}\mathbf{R}_{k}\mathbf{Q}_{k})\mathbf{R}_{\mathrm{BS}})\). Following the above analysis, we can write \(\mathbf{Q}_{k}\mathbf{R}_{k}\mathbf{R}_{\mathrm{BS}}\) as \(\mathbf{Q}_{k}\mathbf{R}_{k}\mathbf{R}_{\mathrm{BS}}\) as \(\mathbf{Q}_{k}\mathbf{R}_{k}\mathbf{Q}_{k}=\alpha_{k}\mathbf{U}(\alpha_{k} \mathbf{\Sigma}+\frac{\sigma^{2}}{\tau P\alpha_{k}^{2}}\mathbf{I}_{M})^{-1} \mathbf{\Sigma}^{2}\mathbf{U}^{\mathsf{H}}\) and \(\mathbf{Q}_{k}\mathbf{R}_{k}\mathbf{Q}_{k}\mathbf{R}_{\mathrm{BS}}\) as \(\mathbf{Q}_{k}\mathbf{R}_{k}\mathbf{Q}_{k})\mathbf{R}_{\mathrm{BS}}=\alpha_{k} \mathbf{U}(\alpha_{k}\mathbf{\Sigma}+\frac{\sigma^{2}}{\tau P\alpha_{k}^{2}} \mathbf{I}_{M})^{-2}\mathbf{\Sigma}^{2}\mathbf{U}^{\mathsf{H}}\). Thus, we can rewrite \(v_{k}\) equivalently as \[v_{k} =2\hat{\beta}_{k}\alpha_{k}\operatorname{tr}\left(\mathbf{\Psi}_{ k}\right)\] \[\times\Bigl{(}2\operatorname{tr}\bigl{(}(\alpha_{k}\mathbf{ \Sigma}+\frac{\sigma^{2}}{\tau P\alpha_{k}^{2}}\mathbf{I}_{M}\bigr{)}^{-1} \mathbf{\Sigma}^{2}\bigr{)}\] \[-\operatorname{tr}\bigl{(}(\alpha_{k}\mathbf{\Sigma}+\frac{ \sigma^{2}}{\tau P\alpha_{k}^{2}}\mathbf{I}_{M})^{-2}\mathbf{\Sigma}^{2}\bigr{)} \Bigr{)}, \tag{36}\] which requires \(O(M)\) complex multiplications to obtain since \(\operatorname{tr}\left(\mathbf{\Psi}_{k}\right)\) is already computed and the involving matrices in the second term of the above equation are diagonal. Next, to obtain \(\nabla_{\boldsymbol{\theta}}S_{k}\) in (25a), we further need to calculate the diagonal elements of \(\mathbf{A}_{t}\text{diag}(\boldsymbol{\beta}^{t})\), which can be obtained by multiplying each diagonal element of \(\mathbf{A}_{t}\) with the corresponding entry of \(\boldsymbol{\beta}^{t}\), i.e., \(\text{diag}\bigl{(}\mathbf{A}_{t}\text{diag}(\boldsymbol{\beta}^{t})\bigr{)}= \text{diag}\bigl{(}\mathbf{A}_{t}\bigr{)}\odot\boldsymbol{\beta}^{t}\), where \(\odot\) represents the entry-wise multiplication. We remark that the term \(\text{diag}\bigl{(}\mathbf{A}_{t}\bigr{)}\) is already computed when calculating \(\mathbf{R}_{k}\), and thus, the complexity to compute \(\nabla_{\boldsymbol{\theta}^{s}}S_{k}\) is \(O(N)\). Apparently, the same complexity is required to obtain \(\nabla_{\boldsymbol{\theta}^{s}}S_{k}\) in (25b). The complexity of calculating \(\nabla_{\boldsymbol{\theta}^{s}}I_{k}\), \(u\in\{t,r\}\) follows similar lines. More specifically, both \(\bar{\nu}_{k}\) and \(\sum_{i\in\mathcal{K}_{n}}^{K}\bar{\nu}_{ki}\) require \(O(M^{2})\) to compute. In summary, the complexity of computing the gradients for each iteration is \(O(K(N^{2}+M^{2}))\). #### Convergence Analysis of Algorithm 1 The convergence of Algorithm 1 is guaranteed by following standard arguments for projected gradient methods. First, the gradients \(\nabla_{\boldsymbol{\theta}}f(\boldsymbol{\theta},\boldsymbol{\beta})\) and \(\nabla_{\boldsymbol{\beta}}f(\boldsymbol{\theta},\boldsymbol{\beta})\) are Lipschitz continuous8 over the feasible to the nonconvexity of (\(\mathcal{P}1\)). We also note that \(L_{\mathbf{\theta}}\) and \(L_{\mathbf{\beta}}\) are not required to run Algorithm 1. 
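For illustration, a compact sketch of the projected gradient ascent loop with the Armijo-Goldstein backtracking rule (23) is given below; the objective and gradient callbacks are placeholders for \(f(\mathbf{\theta},\mathbf{\beta})\) and the expressions of Lemma 2, and the projection onto \(\mathcal{B}\) is realized by clipping and renormalizing each \((\beta_{n}^{t},\beta_{n}^{r})\) pair (a simplification of ours for the degenerate case). All names are illustrative.

```python
import numpy as np

def pgam(f, grad_theta, grad_beta, theta0, beta0, mu0=1.0, kappa=0.5,
         max_iter=200, tol=1e-5):
    """Projected gradient ascent with backtracking, in the spirit of Algorithm 1.
    `theta0` stacks theta^t and theta^r; `beta0` stacks beta^t and beta^r."""
    theta, beta = theta0.copy(), beta0.copy()
    mu, f_old = mu0, f(theta, beta)
    for _ in range(max_iter):
        gt, gb = grad_theta(theta, beta), grad_beta(theta, beta)
        while True:                                   # line search on (23)
            theta_new = np.exp(1j * np.angle(theta + mu * gt))     # P_Theta, Eq. (29)
            b = np.maximum(beta + mu * gb, 0.0)                     # P_B: clip ...
            bt, br = b[:b.size // 2], b[b.size // 2:]
            nrm = np.maximum(np.sqrt(bt**2 + br**2), 1e-12)         # ... and renormalize
            beta_new = np.concatenate([bt / nrm, br / nrm])
            f_new = f(theta_new, beta_new)
            quad = (f_old
                    + 2 * np.real(np.vdot(gt, theta_new - theta))   # <x,y> = 2Re{x^H y}
                    + gb @ (beta_new - beta)
                    - (np.linalg.norm(theta_new - theta)**2
                       + np.linalg.norm(beta_new - beta)**2) / mu)
            if f_new >= quad or mu < 1e-12:
                break
            mu *= kappa                               # shrink the step size
        theta, beta, f_prev, f_old = theta_new, beta_new, f_old, f_new
        if abs(f_old - f_prev) < tol:                 # stop on small objective increase
            break
    return theta, beta, f_old
```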
**Remark 4**: _A question naturally arising is why we have optimized the amplitude, \(|\mathbf{\beta}_{u}|_{i}\), and phase shift, \(|\mathbf{\theta}_{u}|_{i}\), \(u\in\{t,r\}\) separately, rather than optimizing them as a single complex, e.g, \([v_{u}]_{i}=[\mathbf{\beta}_{u}]_{i}e^{j|\mathbf{\theta}_{u}|_{i}}\). The latter would certainly make the presentation of the proposed method more elegant. However, interestingly enough, we find by extensive numerical experiments that both ways give the same performance in many cases. We note that this does not mean the amplitudes corresponding to the reflection or transmission mode of most of the STAR-IRS elements are close to 1. There is indeed a significant gap between the ES and MS mode as shown in the next section. However, in some cases, using two separate variables yields a better performance. This numerical observation has led to the current presentation of the proposed method where amplitudes and phase shifts are optimized separately._ ### _Optimization of Amplitudes and Phase Shifts for MS protocol_ In the case of the MS scheme, the values of amplitude are forced to be binary, i.e., \(\mathbf{\beta}_{n}^{t}\in\{0,1\}\) and \(\mathbf{\beta}_{n}^{r}\in\{0,1\}\). Thus, the optimization problem for the MS protocol is stated as \[\begin{split}\max_{\mathbf{\theta},\mathbf{\beta}}& f(\mathbf{ \theta},\mathbf{\beta})\\ \mathrm{s.t}&\beta_{n}^{t}+\beta_{n}^{r}=1,\forall n \in\mathcal{N}\\ &\beta_{n}^{t}\in\{0,1\},\beta_{n}^{r}\in\{0,1\},\ \forall n\in\mathcal{N}\\ &|\theta_{n}^{t}|=|\theta_{n}^{r}|=1,\ \forall n\in\mathcal{N}.\end{split}\] ( \[\mathcal{P}2\] ) The binary constraints on \(\mathbf{\beta}_{n}^{t}\) and \(\mathbf{\beta}_{n}^{r}\) in (\(\mathcal{P}2\)) make it far more difficult to solve. In fact, (\(\mathcal{P}2\)) belongs to the class of binary nonconvex programming, which is generally NP-hard. For this type of problems, a pragmatic approach is to find a high-performing solution. To this end, we find that the simple solution obtained by rounding off the solution obtained by solving (\(\mathcal{P}1\)) to the nearest binary value can produce a reasonably good performance. This shall be numerically demonstrated in the next section. More advanced methods for solving (\(\mathcal{P}2\)) are thus left for future work. ## VI Numerical Results In this section, we present numerical results for the sum SE in STAR-RIS-aided systems, using both analytical techniques and Monte Carlo simulations. Specifically, our analytical results for the sum SE are derived from equations (18)-(20), while for Monte Carlo simulations, we perform 1000 independent channel realizations to evaluate the expressions in equations (15)-(17). This is to verify the tightness of the approximation stated in Proposition 1 and the derivations in Appendix B. The results shown in Figs. 3 and 5 clearly demonstrate a close match between the analytical results and MC simulations, and thus, confirming that Proposition 1 indeed presents a very tight approximation of the SINR. The simulation setup includes a STAR-RIS with a UPA of \(N=64\) elements assisting the communication between a uniform linear array (ULA) of \(M=64\) antennas at the BS that serves \(K=4\) UEs. The \(xy-\)coordinates of the BS and RIS are given as \((x_{B},\ y_{B})=(0,\ 0)\) and \((x_{R},\ y_{R})=(50,\ 10)\), respectively, all in meter units. 
In addition, users in the \(r\) region are located on a straight line between \((x_{R}-\frac{1}{2}d_{0},\ y_{R}-\frac{1}{2}d_{0})\) and \((x_{R}+\frac{1}{2}d_{0},\ y_{R}-\frac{1}{2}d_{0})\) with equal distances between each two adjacent users, and \(d_{0}=20\) m in our simulations. Similarly, users in the \(t\) region are located between \((x_{R}-\frac{1}{2}d_{0},\ y_{R}+\frac{1}{2}d_{0})\) and \((x_{R}+\frac{1}{2}d_{0},\ y_{R}+\frac{1}{2}d_{0})\). The size of each RIS element is \(d_{\mathrm{H}}=d_{\mathrm{V}}=\lambda/4\). Distance-based path-loss is considered in our work, such that the channel gain of a given link \(j\) is \(\tilde{\beta}_{j}=Ad_{j}^{-\alpha_{j}}\), where \(A\) is the area of each reflecting element at the RIS, and \(\alpha_{j}\) is the path-loss exponent. Regarding \(\bar{\beta}_{g}\), we assume the same values as for \(\tilde{\beta}_{j}\). Similar values are assumed for \(\tilde{\beta}_{k}\), but we also consider an additional penetration loss equal to \(15\) dB. The correlation matrices \(\mathbf{R}_{\mathrm{BS}}\) and \(\mathbf{R}_{\mathrm{RIS}}\) are computed according to [24] and [4], respectively. Also, \(\sigma^{2}=-174+10\log_{10}B_{\mathrm{c}}\) in dBm, where \(B_{\mathrm{c}}=200\ \mathrm{kHz}\) is the bandwidth. As a baseline scheme, we consider a conventional RIS, which consists of transmitting-only and reflecting-only elements, with \(N_{t}\) and \(N_{r}\) elements, respectively, such that \(N_{t}+N_{r}=N\). Notably, this scheme resembles the MS protocol, where the first \(N_{t}\) elements operate in transmission mode and the remaining \(N_{r}\) elements operate in reflection mode. Also, we have applied an ON/OFF scheme for channel estimation by following the idea in [17]: the direct links are estimated with all sub-surfaces turned off, and the cascaded links are estimated with one element turned on in transmission/reflection mode sequentially. In the first numerical experiment, we demonstrate the convergence of the proposed projected gradient algorithm. Specifically, we plot the achievable sum SE against the iteration count of Algorithm 1, started from 5 different randomly generated initial points, as shown in Fig. 2. More specifically, the initial points for Algorithm 1 are generated as follows. First, we set the amplitudes to \([\mathbf{\beta}_{r}^{(0)}]_{n}=[\mathbf{\beta}_{t}^{(0)}]_{n}=\sqrt{0.5}\), for all \(n\in\mathcal{N}\), i.e., equal power splitting between transmission and reflection mode for all elements of the STAR-RIS. The initial values for the phase shifts are taken as \([\mathbf{\theta}_{r}^{(0)}]_{n}=e^{j\phi_{n}^{r}}\) and \([\mathbf{\theta}_{t}^{(0)}]_{n}=e^{j\phi_{n}^{t}}\), where \(\phi_{n}^{r}\) and \(\phi_{n}^{t}\) are independently drawn from the uniform distribution over \([0,2\pi]\).

Fig. 2: Convergence of Algorithm 1 for a STAR-RIS assisted MIMO system with imperfect CSI (\(M=64\), \(N=64\), \(K=4\)) for five different initial points.

We terminate Algorithm 1 when the increase of the objective between the two last iterations is less than \(10^{-5}\) or the number of iterations is larger than \(200\). Note that the considered problem in (\(\mathcal{P}1\)) is nonconvex, and thus, the proposed projected gradient algorithm can only guarantee a stationary solution that is not necessarily optimal. As a result, Algorithm 1 may converge to different points starting from different initial points, which is clearly seen in Fig. 2. Moreover, we can see that different initial points may lead to different convergence rates. 
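The multi-start strategy used for Fig. 2 can be sketched as follows (the driver and callback names are ours); it initializes the amplitudes with equal power splitting and the phases uniformly at random, and keeps the best of the convergent solutions.

```python
import numpy as np

def noise_power_dbm(bandwidth_hz=200e3):
    """Thermal noise power, sigma^2 = -174 + 10*log10(Bc) in dBm."""
    return -174.0 + 10.0 * np.log10(bandwidth_hz)

def best_of_restarts(run_pgam, N, n_starts=5, seed=0):
    """Run the projected gradient algorithm from several random initial points
    and keep the best objective; `run_pgam(theta0, beta0)` is a user callback
    wrapping Algorithm 1 and returning (theta, beta, objective)."""
    rng = np.random.default_rng(seed)
    best = (None, None, -np.inf)
    for _ in range(n_starts):
        beta0 = np.full(2 * N, np.sqrt(0.5))                       # equal power splitting
        theta0 = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, 2 * N)) # random phases
        theta, beta, val = run_pgam(theta0, beta0)
        if val > best[2]:
            best = (theta, beta, val)
    return best
```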
To mitigate this sensitivity of Algorithm 1 to the initial points, we run it from different initial points and keep the best convergent solution. Through extensive simulations, we found that running Algorithm 1 from 5 randomly generated initial points achieves a good trade-off between complexity and obtained sum SE. Fig. 3 shows the achievable sum SE versus the number of STAR-RIS elements \(N\) while varying the effect of spatial correlation in terms of the size of each RIS element. First, as can be seen, the downlink sum SE increases with \(N\) as expected. Next, by focusing on the impact of spatial correlation at the STAR-RIS, we show that the performance decreases as the correlation increases. In particular, the sum SE decreases with increased correlation as the inter-element distance of the STAR-RIS decreases. Moreover, the MS protocol achieves a lower performance because it is a special case of the ES protocol. Especially, for a low number of RIS elements the curves coincide, while as \(N\) increases, an increasing gap appears. Furthermore, for the sake of comparison, we provide the performance of a conventional RIS with reflection-only operation, but this also exhibits lower performance since fewer degrees of freedom can be exploited with reflection alone. We have also depicted the performance in the case of a blocked direct signal. Obviously, the STAR-RIS contributes to the performance since the line corresponding to the case with no direct signal is lower, i.e., the performance is worse.

Fig. 3: Downlink achievable sum SE versus the number of RIS elements \(N\) of a STAR-RIS assisted MIMO system with imperfect CSI (\(M=64\), \(K=4\)) for varying conditions (Analytical results and MC simulations).

Fig. 4: Downlink achievable sum SE versus the number of BS antennas \(M\) of a STAR-RIS assisted MIMO system with imperfect CSI for: (a) \(N=64\), \(K=4\), (b) \(N=144\), \(K=4\) under varying conditions (Analytical results).

Figs. 4(a) and 4(b) illustrate the achievable sum SE versus the number of BS antennas \(M\) while shedding light on various effects. Obviously, the sum SE increases with \(M\). In particular, regarding the RIS correlation, in Fig. 4(a), we observe that increasing the correlation by reducing the distance among the RIS elements degrades the performance due to reduced diversity gains among the RIS elements. In the case of no RIS correlation, represented by the dashed cyan line, the performance is quite low due to the absence of capability for phase shift optimization, as mentioned in Remark 1. Moreover, the ES protocol achieves better performance but with higher complexity compared to the MS protocol. The performance increases with more BS antennas. Also, in the case of random phase shifts, the sum SE is lower. Notably, for \(N=64\) elements, the two lines corresponding to the ES and MS protocols coincide, but, according to Fig. 4(b), which assumes \(N=144\) elements, a gap between the lines appears. The gap increases with increasing \(M\). Similar to the previous figure, we have included the baseline scenario with reflecting-only capabilities having half the elements (\(N=20\)), and we witness the superiority of STAR-RIS. Moreover, a comparison of the two figures reveals that an increase in the RIS element spacing has a greater impact for a lower number of RIS elements, i.e., \(N=64\). Furthermore, Fig. 4(b) shows that the scenario of no RIS correlation performs worse than the cases with random phases when \(M\) is large. 
Also, in this figure, it is shown that the achievable rate is higher than in Fig. 4(a). In addition, in Fig. 4(b), we have added a line corresponding to channel estimation based on the ON/OFF scheme in [17]. We observe that the achievable rate is higher in this case because, with statistical CSI, we have a loss of information. In other words, we observe a trade-off between the lower overhead of the proposed approach and the higher rate obtained when estimating the individual channels. Fig. 5 depicts the achievable sum SE versus the SNR under similar conditions, i.e., in the cases of \(N=100\) (solid lines) and \(N=64\) (dotted lines). As expected, when \(N=100\), the performance is better since a higher SE is achieved. In each case, for low SNR, the ES and MS protocols coincide, while for high SNR, an increasing gap is observed. In the case \(N=100\), the conventional RIS and random MS protocols exhibit the same performance at low SNR, but a gap appears as the SNR increases. For \(N=64\), the behavior at low SNR is similar; however, the corresponding gaps are smaller. The reasons for these observations can be explained as follows. At low SNR, it is more beneficial to focus on users in the reflection region as they are closer to the BS. This is confirmed by the fact that, after running the proposed algorithm, \(\beta_{n}^{r}\approx 1,\forall n\in\mathcal{N}\). As a result, the performances of the ES and MS protocols, as well as the conventional RIS, are nearly the same. However, as SNR increases, the increase in the sum SE becomes minimal if we continue to focus on users in the reflection region. Thus, at high SNR, directing some power to users in the transmission region can improve the total SE. This leads to performance differences between the ES and MS protocols, as well as the conventional RIS.

## VII Conclusion

This paper presented a study of the achievable rate of STAR-RIS assisted mMIMO systems while accounting for imperfect CSI and correlated Rayleigh fading. Notably, we considered several UEs, each of which can lie on either side of the RIS, and we derived the achievable rate in closed form. Also, we provided a low-complexity iterative optimization approach to maximize the achievable rate, in which the amplitudes and the phase shifts of the RIS are updated simultaneously at each iteration. Furthermore, we provided useful insights into the impact of RIS correlation and showed that the STAR-RIS is more beneficial than a traditional reflecting-only RIS.

## Appendix A Proof of Lemma 1

The LMMSE estimator of \(\mathbf{h}_{k}\) is obtained by minimizing \(\operatorname{tr}\big(\mathbb{E}\big\{(\hat{\mathbf{h}}_{k}-\mathbf{h}_{k})(\hat{\mathbf{h}}_{k}-\mathbf{h}_{k})^{\mathsf{H}}\big\}\big)\).
where in (45), we have used (9) in the main text. In (46), to simplify the second term, we resort to the well-known channel hardening property in massive MIMO, which intuitively states that the channels behave as deterministic. The same property is also applied to the estimated channels, which means \(\hat{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}\approx\mathbb{E}\{\hat{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}\}\) with high accuracy [27]. Using this property, we have \[\mathbb{E}\{\hat{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}_{k}^{\mathsf{H}}\tilde{\mathbf{h}}_{k}\}\approx\mathbb{E}\{\hat{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}\}\mathbb{E}\{\hat{\mathbf{h}}_{k}^{\mathsf{H}}\tilde{\mathbf{h}}_{k}\}=0 \tag{47}\] and thus \[\mathbb{E}\big\{\big|\mathbf{h}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}-\mathbb{E}\big\{\mathbf{h}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}\big\}\big|^{2}\big\}\approx\mathbb{E}\{|\tilde{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}|^{2}\} \tag{48}\] \[=\mathrm{tr}(\mathbf{R}_{k}\mathbf{\Psi}_{k})-\mathrm{tr}\left(\mathbf{\Psi}_{k}^{2}\right). \tag{49}\] We note that (49) holds because \(\mathbb{E}\{|\tilde{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}|^{2}\}=\mathrm{tr}\big(\mathbb{E}\{\tilde{\mathbf{h}}_{k}\tilde{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}_{k}^{\mathsf{H}}\}\big)\approx\mathrm{tr}\big(\mathbb{E}\{\tilde{\mathbf{h}}_{k}\tilde{\mathbf{h}}_{k}^{\mathsf{H}}\}\mathbb{E}\{\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}_{k}^{\mathsf{H}}\}\big)=\mathrm{tr}((\mathbf{R}_{k}-\mathbf{\Psi}_{k})\mathbf{\Psi}_{k})\), where we have applied the approximations \(\hat{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}\approx\mathbb{E}\{\hat{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}\}\) and \(\tilde{\mathbf{h}}_{k}^{\mathsf{H}}\tilde{\mathbf{h}}_{k}\approx\mathbb{E}\{\tilde{\mathbf{h}}_{k}^{\mathsf{H}}\tilde{\mathbf{h}}_{k}\}\), which is due to the channel hardening property in massive MIMO as explained above. For the second term of \(I_{k}\) in (17), it is easy to check that \[\mathbb{E}\big\{\big|\mathbf{h}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{i}\big|^{2}\big\}=\mathbb{E}\big\{\big|(\hat{\mathbf{h}}_{k}+\tilde{\mathbf{h}}_{k})^{\mathsf{H}}\hat{\mathbf{h}}_{i}\big|^{2}\big\} \tag{50}\] \[=\mathbb{E}\big\{\big|\hat{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{i}\big|^{2}\big\}+\mathbb{E}\big\{\big|\tilde{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{i}\big|^{2}\big\}+2\,\mathrm{Re}\{\mathbb{E}\{\hat{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{i}\hat{\mathbf{h}}_{i}^{\mathsf{H}}\tilde{\mathbf{h}}_{k}\}\} \tag{51}\] \[=\mathrm{tr}(\mathbf{R}_{k}\mathbf{\Psi}_{i})\,. \tag{52}\] We note that (52) is true because the third term in (51) is zero, as can be shown below.
Specifically, this term can be written as \[2\,\mathrm{Re}\{\mathbb{E}\{\hat{\mathbf{h}}_{k}^{n}\hat{\mathbf{ h}}_{i}\hat{\mathbf{h}}_{i}^{n}\tilde{\mathbf{h}}_{k}\}\}\!=\!2\,\mathrm{Re}\left\{ \mathbb{E}\{\mathrm{tr}\Big{(}\!\Big{(}\!\Big{(}\!\hat{\mathbf{h}}_{i}\hat{ \mathbf{h}}_{i}^{n}\!\Big{)}\!\Big{(}\!\tilde{\mathbf{h}}_{k}\hat{\mathbf{h}}_ {k}^{n}\Big{)}\!\Big{)}\!\Big{)}\!\right\} \tag{53}\] \[=\!2\,\mathrm{Re}\left\{\mathrm{tr}\Big{(}\mathbb{E}\{\hat{ \mathbf{h}}_{i}\hat{\mathbf{h}}_{i}^{n}\}\mathbb{E}\{\tilde{\mathbf{h}}_{k} \hat{\mathbf{h}}_{k}^{n}\}\Big{)}\!\Big{\}}\right\}\] (54) \[=0 \tag{55}\] where, in (54), we have accounted for the independence between \(\hat{\mathbf{h}}_{k}\) and \(\hat{\mathbf{h}}_{i}\), and, in (55), we have considered that \(\hat{\mathbf{h}}_{k}\) and \(\hat{\mathbf{h}}_{k}\) are uncorrelated. The normalization parameter is written as \[\lambda=\frac{1}{\sum_{i=1}^{K}\mathbb{E}\{\mathbf{\hat{\mathbf{h}}}_{i}^{n} \mathbf{\hat{\mathbf{h}}}_{i}\}}=\frac{1}{\sum_{i=1}^{K}\mathbb{E}\{\hat{ \mathbf{h}}_{i}^{n}\hat{\mathbf{h}}_{i}\}}=\frac{1}{\sum_{i=1}^{K}\mathrm{tr}( \mathbf{\Psi}_{i})}. \tag{56}\] Combining (49), (52), and (56), we can approximate \(I_{k}\) as \(\tilde{I}_{k}\) in (20), and thus complete the proof. ## Appendix C Proof of Lemma 2 Let us first derive \(\nabla_{\boldsymbol{\theta}^{t}}f(\boldsymbol{\theta},\boldsymbol{\beta})\) the complex gradient of the achievable sum SE with respect to \(\boldsymbol{\theta}^{ts}\). From (14), it is easy to see that \[\nabla_{\boldsymbol{\theta}^{t}}f(\boldsymbol{\theta},\boldsymbol{\beta})=c\sum _{k=1}^{K}\frac{\tilde{I}_{k}\nabla_{\boldsymbol{\theta}^{t}}S_{k}-S_{k}\nabla_ {\boldsymbol{\theta}^{t}}\tilde{I}_{k}}{(1+\gamma_{k})\tilde{I}_{k}^{2}}, \tag{57}\] where \(c=\frac{\tau_{k}-\tau}{\tau_{c}\log(c)}\). To compute \(\nabla_{\boldsymbol{\theta}^{t}}S_{k}\) for a given user \(k\), we immediately note that \(\nabla_{\boldsymbol{\theta}^{t}}S_{k}=0\) if \(w_{k}=r\), i.e., if UE \(k\) is in the reflection region. This is obvious from (19), (10) and (5). Thus, we only need to find \(\nabla_{\boldsymbol{\theta}^{t}}S_{k}\) when \(w_{k}=t\). In such a case, we can explicitly write \(\mathbf{R}_{k}\) \[\mathbf{R}_{k} =\tilde{\beta}_{k}\mathbf{R}_{\mathrm{BS}}+\hat{\beta}_{k}\, \mathrm{tr}(\mathbf{R}_{\mathrm{RIS}}\mathbf{\Phi}_{t}^{n}\mathbf{R}_{\mathrm{ RIS}}\mathbf{\Phi}_{t}^{n})\mathbf{R}_{\mathrm{BS}}\] \[=\tilde{\beta}_{k}\mathbf{R}_{\mathrm{BS}}+\hat{\beta}_{k}\, \mathrm{tr}(\mathbf{A}_{t}\mathbf{\Phi}_{t}^{n})\mathbf{R}_{\mathrm{BS}}. \tag{58}\] where \(\hat{\beta}_{k}=\tilde{\beta}_{g}\tilde{\beta}_{k}\) and \(\mathbf{A}_{t}=\mathbf{R}_{\mathrm{RIS}}\mathbf{\Phi}_{t}\mathbf{R}_{\mathrm{ RIS}}\). When \(w_{k}=r\), we define \(\mathbf{A}_{r}=\mathbf{R}_{\mathrm{RIS}}\mathbf{\Phi}_{r}\mathbf{R}_{\mathrm{ RIS}}\). To calculate \(\nabla_{\boldsymbol{\theta}^{t}}S_{k}\), we follow steps detailed in [31, Chap. 3]. First, let us denote \(d(\cdot)\) the complex differential of the function in the argument. Then, it holds that \[d(S_{k}) =d\big{(}\mathrm{tr}(\mathbf{\Psi}_{k})^{2}\big{)}\] \[=2\,\mathrm{tr}(\mathbf{\Psi}_{k})d\,\mathrm{tr}(\mathbf{\Psi}_{k})\] \[=2\,\mathrm{tr}(\mathbf{\Psi}_{k})\,\mathrm{tr}(d\mathbf{\Psi}_{k}). \tag{59}\] Next, we apply [31, Eq. (3.35)], which gives \[d(\mathbf{\Psi}_{k})=d(\mathbf{R}_{k}\mathbf{Q}_{k}\mathbf{R}_{k})\] \[=d(\mathbf{R}_{k})\mathbf{Q}_{k}\mathbf{R}_{k}+\mathbf{R}_{k}d( \mathbf{Q}_{k})\mathbf{R}_{k}+\mathbf{R}_{k}\mathbf{Q}_{k}d(\mathbf{R}_{k}). 
\tag{60}\] The differentials \(d(\mathbf{R}_{k})\) and \(d(\mathbf{Q}_{k})\) are derived as follows. First, from (58) it is easy to check that \[d(\mathbf{R}_{k})=\hat{\beta}_{k}\mathbf{R}_{\mathrm{BS}}\, Now we turn our attention to \(\nabla_{\mathbf{\theta}^{t}}\tilde{I}_{k}\). To this end, from (20), it is straightforward to check that \[d(\tilde{I}_{k})=\sum\nolimits_{i=1}^{K}\operatorname{tr}(d(\mathbf{R}_{k}) \mathbf{\Psi}_{i})+\sum\nolimits_{i=1}^{K}\operatorname{tr}(\mathbf{R}_{k}d(\mathbf{ \Psi}_{i}))\\ -2\operatorname{tr}\bigl{(}\mathbf{\Psi}_{k}d(\mathbf{\Psi}_{k})\bigr{)}+ \frac{K\sigma^{2}}{\rho}\sum\nolimits_{i=1}^{K}\operatorname{tr}(d(\mathbf{\Psi}_ {i})). \tag{68}\] Note that \(d(\mathbf{\Psi}_{i})=0\) if \(w_{i}\neq t\) since \(\mathbf{\Psi}_{i}\) is independent of \(\mathbf{\theta}^{t}\) in this case. Thus, the above equation is reduced to \[d(\tilde{I}_{k})=\operatorname{tr}(\mathbf{\Psi}d(\mathbf{R}_{k}))-2\operatorname{ tr}\bigl{(}\mathbf{\Psi}_{k}d(\mathbf{\Psi}_{k})\bigr{)}+\sum\nolimits_{i\in\mathcal{K}_{ t}}\operatorname{tr}\bigl{(}\tilde{\mathbf{R}}_{k}d(\mathbf{\Psi}_{i})\bigr{)}, \tag{69}\] where \(\mathbf{\Psi}=\sum_{i=1}^{K}\mathbf{\Psi}_{i}\) and \(\tilde{\mathbf{R}}_{k}=\mathbf{R}_{k}+\frac{K\sigma^{2}}{\rho}\mathbf{I}_{M}\). Using (64) into (69) gives \[d(\tilde{I}_{k})=\operatorname{tr}(\mathbf{\Psi}d\mathbf{R}_{k})\] \[+\sum\nolimits_{i\in\mathcal{K}_{t}}\bigl{(}\tilde{\mathbf{R}}_ {k}\bigl{(}d(\mathbf{R}_{i})\mathbf{Q}_{i}\mathbf{R}_{i}-\mathbf{R}_{i} \mathbf{Q}_{i}d(\mathbf{R}_{i})\mathbf{Q}_{i}\mathbf{R}_{i}+\mathbf{R}_{i} \mathbf{Q}_{i}d(\mathbf{R}_{i})\bigr{)}\bigr{)}\] \[-2\operatorname{tr}\bigl{(}\mathbf{\Psi}_{k}\bigl{(}d(\mathbf{R}_{k}) \mathbf{Q}_{k}\mathbf{R}_{k}-\mathbf{R}_{k}\mathbf{Q}_{k}d(\mathbf{R}_{k}) \mathbf{Q}_{k}\mathbf{R}_{k}+\mathbf{R}_{k}\mathbf{Q}_{k}d(\mathbf{R}_{k}) \bigr{)}\bigr{)}\] \[=\operatorname{tr}\bigl{(}\tilde{\mathbf{\Psi}}_{k}d(\mathbf{R}_{k}) \bigr{)}+\sum\nolimits_{i\in\mathcal{K}_{t}}\operatorname{tr}\bigl{(}\tilde{ \mathbf{R}}_{ki}d(\mathbf{R}_{i})\bigr{)}, \tag{70}\] where \[\tilde{\mathbf{R}}_{k}=\mathbf{\Psi}-2\bigl{(}\mathbf{Q}_{k}\mathbf{R}_{k}\mathbf{ \Psi}_{k}+\mathbf{\Psi}_{k}\mathbf{R}_{k}\mathbf{Q}_{k}-\mathbf{Q}_{k}\mathbf{R}_ {k}\mathbf{\Psi}_{k}\mathbf{R}_{k}\mathbf{Q}_{k}\bigr{)} \tag{71}\] and \[\tilde{\mathbf{R}}_{ki}=\mathbf{Q}_{i}\mathbf{R}_{i}\tilde{\mathbf{R}}_{k}- \mathbf{Q}_{i}\mathbf{R}_{i}\tilde{\mathbf{R}}_{k}\mathbf{R}_{k}\mathbf{Q}_{i }+\tilde{\mathbf{R}}_{k}\mathbf{R}_{i}\mathbf{Q}_{i},i\in\mathcal{K}_{t}. \tag{72}\] Again, we note that \(d(\mathbf{R}_{k})=0\) if \(w_{k}\neq t\). Thus, by using (62), we can write \(\nabla_{\mathbf{\theta}^{t}}\tilde{I}_{k}\) as \[\nabla_{\mathbf{\theta}^{t}}\tilde{I}_{k}=\frac{\partial}{\partial\mathbf{\theta}^{t *}}\tilde{I}_{k}=\text{diag}\bigl{(}\tilde{\mathbf{A}}_{kt}\text{diag}(\mathbf{ \beta}^{t})\bigr{)}, \tag{73}\] where \(\bar{\nu}_{k}=\hat{\beta}_{k}\operatorname{tr}\bigl{(}\tilde{\mathbf{\Psi}}_{k} \mathbf{R}_{\text{BS}}\bigr{)}\), \(\tilde{\nu}_{ki}=\hat{\beta}_{k}\operatorname{tr}\bigl{(}\tilde{\mathbf{R}}_{ ki}\mathbf{R}_{\text{BS}}\bigr{)}\), and \[\tilde{\mathbf{A}}_{kt}=\begin{cases}\bar{\nu}_{k}\mathbf{A}_{t}+\sum_{i\in \mathcal{K}_{t}}^{K}\tilde{\nu}_{ki}\mathbf{A}_{t}&w_{k}=t\\ \sum_{i\in\mathcal{K}_{t}}\tilde{\nu}_{ki}\mathbf{A}_{t}&w_{k}\neq t,\end{cases} \tag{74}\] which is in fact the special case of (26) when \(u=t\), meaning that (25c) has been proved. 
Following the same steps we can prove (25d), but again, we skip the details for the sake of brevity. The expression for \(\nabla_{\mathbf{\beta}^{t}}S_{k}\) is derived as follows. First we only need to consider \(\nabla_{\mathbf{\beta}^{t}}S_{k}\) when \(w_{k}=t\). For this case, from (61), we can write \[d(\mathbf{R}_{k}) =\hat{\beta}_{k}\mathbf{R}_{\text{BS}}\operatorname{tr}\bigl{(} \mathbf{A}_{t}^{\mu}d(\mathbf{\Phi}_{t})+\mathbf{A}_{t}d\bigl{(}\mathbf{\Phi}_{t}^{u} \bigr{)}\bigr{)} \tag{75a}\] \[=\hat{\beta}_{k}\mathbf{R}_{\text{BS}}\bigl{(}\text{diag}\bigl{(} \mathbf{A}_{t}^{\mu}\text{diag}(\mathbf{\theta}^{t})\bigr{)}^{\top}d(\mathbf{\beta}^{t} )\] \[+\text{diag}\bigl{(}\mathbf{A}_{t}\text{diag}(\mathbf{\theta}^{t*}) \bigr{)}^{\top}d(\mathbf{\beta}^{t})\bigr{)}\] (75b) \[=2\hat{\beta}_{k}\mathbf{R}_{\text{BS}}\operatorname{Re}\bigl{\{} \text{diag}\bigl{(}\mathbf{A}_{t}^{\mu}\text{diag}(\mathbf{\theta}^{t})\bigr{)}^{ \top}d(\mathbf{\beta}^{t}). \tag{75c}\] Now, using (75) in (65) yields \[\nabla_{\mathbf{\beta}^{t}}S_{k}=2\nu_{k}\operatorname{Re}\bigl{\{} \text{diag}\bigl{(}\mathbf{A}_{t}^{\mu}\text{diag}(\mathbf{\theta}^{t})\bigr{)} \bigr{\}}. \tag{76}\] Similarly, we can write \(\nabla_{\mathbf{\beta}^{r}}S_{k}\) as \[\nabla_{\mathbf{\beta}^{r}}S_{k}=2\nu_{k}\operatorname{Re}\bigl{\{} \text{diag}\bigl{(}\mathbf{A}_{\tau}^{u}\text{diag}(\mathbf{\theta}^{r})\bigr{)} \bigr{\}}. \tag{77}\] For \(\nabla_{\mathbf{\beta}^{t}}\tilde{I}_{k}\) and \(\nabla_{\mathbf{\beta}^{r}}\tilde{I}_{k}\), we can follow the same steps above, which gives \[\nabla_{\mathbf{\beta}^{t}}\tilde{I}_{k} =2\operatorname{Re}\bigl{\{}\text{diag}\bigl{(}\tilde{\mathbf{A}}_{ kt}^{u}\text{diag}(\mathbf{\theta}^{t})\bigr{)}\bigr{\}}. \tag{78}\] \[\nabla_{\mathbf{\beta}^{r}}\tilde{I}_{k} =2\operatorname{Re}\bigl{\{}\text{diag}\bigl{(}\tilde{\mathbf{A}}_{ k\prime}^{u}\text{diag}(\mathbf{\theta}^{r})\bigr{)}\bigr{\}}. \tag{79}\]
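As a numerical aside to the appendix derivations above: the factorization used around (48) and (49), namely that \(\mathbb{E}\{|\tilde{\mathbf{h}}_{k}^{\mathsf{H}}\hat{\mathbf{h}}_{k}|^{2}\}\) reduces to \(\mathrm{tr}((\mathbf{R}_{k}-\mathbf{\Psi}_{k})\mathbf{\Psi}_{k})\), can be checked by Monte Carlo. The Python sketch below draws independent zero-mean complex Gaussian vectors with covariances \(\mathbf{\Psi}_{k}\) and \(\mathbf{R}_{k}-\mathbf{\Psi}_{k}\) and compares the empirical average with the closed form; the toy exponential-correlation covariance, the dimensions and the variable names are illustrative assumptions and not the simulation setup of this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 16                                    # number of BS antennas (toy value)

# Toy exponential-correlation matrix standing in for R_k; Psi_k is a placeholder
# estimate covariance chosen so that R_k - Psi_k remains positive semidefinite.
R = 0.7 ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
Psi = 0.5 * R

def sample_cn(cov, n):
    """Draw n zero-mean circularly-symmetric complex Gaussian vectors with covariance cov."""
    L = np.linalg.cholesky(cov)
    z = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)
    return z @ L.T                        # each row then has covariance L L^H = cov

n = 100_000
h_hat = sample_cn(Psi, n)                 # estimated channel, covariance Psi_k
h_til = sample_cn(R - Psi, n)             # independent estimation error, covariance R_k - Psi_k

mc = np.mean(np.abs(np.einsum('ij,ij->i', h_til.conj(), h_hat)) ** 2)
closed_form = np.trace((R - Psi) @ Psi).real
print(mc, closed_form)                    # the two values should agree closely
```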
2309.09048
Generative AI-Driven Storytelling: A New Era for Marketing
This paper delves into the transformative power of Generative AI-driven storytelling in the realm of marketing. Generative AI, distinct from traditional machine learning, offers the capability to craft narratives that resonate with consumers on a deeply personal level. Through real-world examples from industry leaders like Google, Netflix and Stitch Fix, we elucidate how this technology shapes marketing strategies, personalizes consumer experiences, and navigates the challenges it presents. The paper also explores future directions and recommendations for generative AI-driven storytelling, including prospective applications such as real-time personalized storytelling, immersive storytelling experiences, and social media storytelling. By shedding light on the potential and impact of generative AI-driven storytelling in marketing, this paper contributes to the understanding of this cutting-edge approach and its transformative power in the field of marketing.
Marko Vidrih, Shiva Mayahi
2023-09-16T17:13:34Z
http://arxiv.org/abs/2309.09048v1
# Generative AI-Driven Storytelling: A New Era for Marketing

###### Abstract

This paper delves into the transformative power of Generative AI-driven storytelling in the realm of marketing. Generative AI, distinct from traditional machine learning, offers the capability to craft narratives that resonate with consumers on a deeply personal level. Through real-world examples from industry leaders like Google, Netflix and Stitch Fix, we elucidate how this technology shapes marketing strategies, personalizes consumer experiences, and navigates the challenges it presents. The paper also explores future directions and recommendations for generative AI-driven storytelling, including prospective applications such as real-time personalized storytelling, immersive storytelling experiences, and social media storytelling. By shedding light on the potential and impact of generative AI-driven storytelling in marketing, this paper contributes to the understanding of this cutting-edge approach and its transformative power in the field of marketing.

CCS CONCEPTS: Computing methodologies · Artificial intelligence · Applied computing · Operations research · Marketing · Decision analysis · Information systems · Information systems applications · Collaborative and social computing systems and tools · Multimedia information systems · Multimedia content creation

**Keywords and Phrases:** Generative AI, AI-Driven Storytelling, Marketing Analytics, Ethical Considerations in AI, Machine Learning, AI in Business, Data Management, AI and Creativity, Digital Marketing Trends, AI Infrastructure, Deep Learning, AI-Powered Content Creation, Customer Engagement, Conversational AI, Chatbots in Marketing, Natural Language Processing, Future of AI in Marketing.

## 1 Introduction

### 1.1 Background of marketing analytics

In the ever-evolving landscape of marketing, the confluence of data analytics and storytelling has emerged as a potent force. Historically, marketing analytics relied heavily on raw data, numbers, and statistics. However, the modern marketer recognizes the unparalleled power of a well-crafted narrative. As venture capital investments in the domain of Generative AI have exceeded $1.7 billion over the past three years, it's evident that the industry stands on the brink of a transformative era. Generative AI, distinct from traditional machine learning, has the capability to craft narratives tailored to individual consumers, based on their behaviors, preferences, and histories. Early generative models like ChatGPT focused on augmenting creative work. Yet, by 2025, it's anticipated that over 30% of generative AI models will be discovered. This projection aligns with insights from Agarwal et al. (1), who posit that by 2025, AI will generate 30% of outbound marketing messages, a significant leap from less than 2% in 2022. In this new era of marketing analytics, it's not enough to rely solely on raw data. This data must be transformed into detailed, actionable information to drive effective marketing strategies. Storytelling, a strategy that conveys insights and emotionally connects with the target audience, serves as an invaluable tool in this transformation. By presenting data in a narrative form, marketers can ensure their messages are impactful, memorable, and resonate deeply with their audience. In essence, with the advent of Generative AI, storytelling in marketing has evolved to be both an art and a science.
### 1.2 Definition of generative AI-driven storytelling

Generative AI-driven storytelling refers to the application of generative models, a subset of artificial intelligence, to craft narratives. Distinct from traditional machine learning, which operates based on pattern recognition and prediction, generative AI has the capability to generate new content. This generation is not random but is based on vast amounts of data it has been trained on, making its output relevant and contextually apt.

### 1.3 Purpose and scope of the paper

The purpose of this paper is to delve into the transformative potential of generative AI-driven storytelling within the realm of marketing. We examine the evolution and applications of generative AI technology in modern marketing strategies. This exploration deepens our comprehension of how this technology enhances conversion rates, bolsters customer engagement, and provides profound customer insights. Through an extensive study of this avant-garde approach, we aim to highlight the revolutionary era ushered in by generative AI-driven storytelling. This paper underscores the significance of generative AI in marketing, emphasizing the necessity for transparency, fairness, and accountability in its ethical deployment.

## 2 Generative AI in Marketing

### Progression and enhancement of generative AI over time

Generative AI has progressed from early rule-based systems, or traditional AI, to more sophisticated deep-learning models. Traditional AI excelled at analyzing large volumes of data, distilling it into actionable insight and identifying patterns beyond human comprehension [4, 2023, p. 23]. Generative AI builds on this technology to generate meaningful content: it trains on existing information and creates new images, text and computer code. With the transition, AI algorithms have become increasingly capable of generating and understanding human-like content [14, 2023, p. 19]. For instance, companies like Coca-Cola implemented generative AI-driven storytelling to create personalized marketing campaigns. This company used AI algorithms to collect, analyze and interpret consumer data [3, 2022, p. 34]. Through these algorithms, the company tailored its messages to match each customer segment. Consequently, the company achieved higher customer engagement and brand loyalty. Other companies that have successfully embraced generative AI-driven storytelling include The New York Times, which has been at the forefront of exploring generative AI-driven storytelling. The company has developed an AI system called "Journalism AI" that helps journalists generate insights, analyze data sets and automate reporting tasks [18, 2022, p. 34]. OpenAI is a pioneer in generative AI and has developed models like GPT-3 that have been utilized in various creative storytelling applications. These models have been used in creative storytelling applications to generate virtual characters and interactive narratives, and to assist in creative writing [15, 2022, p. 35]. Besides, the Associated Press (AP) has experimented with AI-driven journalism to automate the creation of financial reports and sports articles. AI-driven storytelling has helped these companies speed up content creation and news coverage. The adoption of AI-driven storytelling in the workplace has grown across different generations. As of 2023, in the United States, the highest generative AI adoption rate is among Gen Z, at 29% [14, 2023, p. 21].
By a small margin, the adoption rate in Gen X is 28%, while it is 27% among millennials.

Figure 1: Adoption of generative AI technology [Barbosa B, Saura JR, Zekan SB, Ribeiro-Soriano D., 2023]

However, there are challenges facing the adoption of generative AI technology; for instance, Facebook faced challenges with AI-driven content moderation when it utilized AI algorithms to detect and remove inappropriate or harmful content from its platform [19, 2022, 10]. These algorithms faced criticism for mistakenly removing and flagging legitimate content, including historical images and news articles. This highlights the challenge of training AI models to accurately differentiate between acceptable and problematic content. Google also faced challenges with its AI-powered image recognition system [13, 2021], whose algorithm labeled images of African Americans as "gorillas". This incident highlighted the challenge of ensuring AI systems are trained on diverse and representative datasets to avoid perpetuating biases [20, 2023]. Overcoming these challenges requires careful consideration of ethical implications, robust training data, and ongoing monitoring and refinement of the AI systems. Further, the progression has been fueled by improvements in computing power, large data sets, and advances in algorithms such as recurrent neural networks (RNNs). Generative AI employs techniques such as recurrent neural networks (RNNs) and generative adversarial networks (GANs) to analyze existing data and create new and compelling narratives [21, 2022, 2017]. This sequential nature enables RNNs to capture the flow and coherence of narratives, ensuring that the generated stories make sense and follow a logical progression [4, 2017, 2025]. This breakthrough opens up exciting possibilities for marketers to craft engaging and personalized stories that resonate with their target audiences. These progressions and enhancements have led to product design and innovation through data augmentation and simulation. Dwivedi et al. (6) explore how companies have employed generative AI in generating synthetic data, which enhances machine-learning accuracy and training datasets. For instance, companies like Amazon and Netflix leverage AI algorithms to personalize recommendations based on user behavior and preferences, enhancing the overall customer experience (23, 2013, p. 287). These companies employ AI-driven storytelling techniques in their product recommendation systems.

### Importance of storytelling for the marketing professional

In the realm of marketing, storytelling transcends mere information dissemination. It has the power to engage, resonate, and evoke emotions. Generative AI augments this process by ensuring that every narrative is tailored, not just to a segment of consumers, but to individual preferences and histories. Companies like Amazon harness this power to recommend products, while streaming giants like Netflix curate personalized watch lists.

### Applications of AI in marketing

A key application of AI in marketing is customer data analysis. Through AI, marketers can analyze customer data to personalize marketing campaigns, identify patterns and trends, and improve customer segmentation (8, 2023, p. 64). AI-powered chatbots and virtual assistants help personalize interactions with customers, leading to better customer experiences (5, 2023, p. 29).
With generative AI-driven storytelling, marketers can take these applications further by creating dynamic and engaging narratives that capture the essence of their brand and products. Digital marketers can leverage the capabilities of generative AI to deliver meaningful customer experiences (5, 2023, p. 28). For instance, Adobe Firefly complements human imagination and creativity in digital marketing; this AI-powered tool enhances creativity and productivity by assisting marketers in generating compelling and personalized content (24, 2020, p. 483). RNNs and transformers help the tool analyze large volumes of data, including customer preferences, demographics, and historical engagement patterns. Generative AI could have an impact on most business functions; however, a few stand out when measured by the technology's impact as a share of functional cost (Fig. 2). The analysis identified marketing as one of the four sectors that could account for approximately 75 percent of the total annual value from generative AI use cases.

Figure 2: The value potential of generative AI across business functions. [McKinsey, 2023]

## 3 Generative AI-Driven Storytelling for Marketing

### Overview of generative AI techniques for storytelling

Generative AI techniques for storytelling include training AI models on large datasets of existing narratives, such as articles, books, or movies, to learn structures and patterns. These models generate compelling content based on the learned patterns, incorporating specific characters, themes, or brand messaging (25, 2023, p. 29). Through techniques like recurrent neural networks (RNNs) and transformers, marketers can generate coherent and contextually relevant narratives. These techniques enable the AI algorithms to understand and learn from patterns in the data, allowing for the creation of compelling stories. RNNs are particularly effective in generating sequential data, making them well-suited for storytelling (8, 2023, p. 64). They have the ability to process and generate text in a sequential manner, capturing the flow and coherence of narratives. By utilizing these advanced AI techniques, marketers can leverage the power of generative AI-driven storytelling to create personalized and engaging narratives that captivate their audiences and drive marketing success (27, 2022, p. 55). Google has extensively used recurrent neural networks (RNNs) and transformers in services like Google Translate, and its search engine algorithms leverage them to improve the accuracy and relevance of search results.
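The sequential-generation idea described above can be illustrated with a deliberately tiny example. The Python sketch below is not an RNN or a transformer; it is a toy Markov-chain next-word sampler that "learns" word-to-word transitions from a miniature corpus and then generates new text one token at a time. The corpus and names are invented for illustration, but the sketch conveys how a model trained on existing narratives produces new sequential content.

```python
import random
from collections import defaultdict

# Toy corpus standing in for a large dataset of existing narratives.
corpus = (
    "our brand tells a story of craft and care "
    "our customers tell a story of trust "
    "every product tells a story of quality and care"
).split()

# "Training": count word-to-word transitions (a stand-in for the sequence
# patterns an RNN or transformer would learn at scale).
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(seed, length=12):
    """Generate new text one token at a time from the learned transitions."""
    words = [seed]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:          # dead end: stop early
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("our"))
```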
A significant part of marketing is providing the right content, at the right tone, time and through the right channel. Digital marketers can leverage on generative AI-driven storytelling to deliver meaningful customer experience to the target market. [29, 2020, p. 91] These technologies introduce a new level of creativity to marketing operations and content creation by generating unique and fresh content. For instance, generative AI applications, like Adobe Firefly, adds a layer of creative intelligence by improving human creativity and imagination in marketing analytics [30, 2023, p. 29]. Generative AI can amplify and support digital marketers' creativity by providing valuable suggestions for future content, summarizing complex topics and offering thought-provoking ideas. ### Examples of successful generative AI-driven storytelling in marketing The prowess of generative AI-driven storytelling is best understood through real-world applications. Stitch Fix, an online personal styling service, employs generative AI to curate fashion choices tailored to individual users, enhancing user engagement and increasing sales. Netflix's recommendation system is another exemplar, with its curated watchlists being generated based on user preferences and viewing history. Companies like Coca-Cola leverage this technology for targeted ad campaigns, ensuring each narrative resonates with its intended audience. ## 4 Impact of Generative AI-Driven Storytelling on Marketing ### Improved customer engagement Generative AI-driven storytelling has a profound impact on customer engagement because it creates personalized and emotionally compelling narratives. Marketers can use AI-driven storytelling to capture the attention of target customers and foster deeper connections (33, 2021, p. 22). By generating content resonating with individuals on a personal level, marketers can lead to increased interaction, engagement, and sharing, leading to brand advocacy and loyalty. ### Increased conversion rates Research by Dwivedi et al. (6) has found that effective storytelling has a direct impact on the conversion rates because it enables marketers to curate messages that address the customers desires, product benefits and sense of urgency. Pataranutaporn et al. (33) have identified the importance of aligning storytelling approach with the customer journey. Through this alignment it is easy to guide the customers through the sales funnel, increasing the likelihood of purchases and conversions. ### Enhanced customer insights Research by Dwivedi et al. (6) has found that effective storytelling has a direct impact on the conversion rates because it enables marketers to curate messages that address the customers desires, product benefits and sense of urgency. Pataranutaporn et al. (33) have identified the importance of aligning storytelling approach with the customer journey. Through this alignment it is easy to guide the customers through the sales funnel, increasing the likelihood of purchases and conversions. ## 5 Challenges and limitations of generative AI-driven storytelling in marketing analytics ### Ethical concerns and considerations Generative AI-driven storytelling raises ethical concerns about the creation and dissemination of manipulative misleading narratives. The ethical concerns and considerations for generative AI have been echoed by the World Economic Forum which has highlighted the importance of addressing the collection and dissemination of misleading or false information through AI (35, 2023, p. 43). 
These concerns have pushed developers such as OpenAI, the maker of ChatGPT, to reduce the potential for harmful or false outputs. These companies have introduced bias detection and evaluation systems which regularly evaluate their AI systems for biases using techniques such as disparate impact analysis, fairness metrics and statistical tests. The need for bias mitigation has been highlighted by studies like the Gender Shades project, which evaluated facial recognition systems from leading companies and identified higher error rates for women and people with darker skin tones (36, 2011, p. 127). Besides, human input remains a critical step for ensuring the integrity and reliability of AI-generated content. Marketers rarely ensure the transparency and ethical use of AI-generated materials; for this reason, they rarely disclose their source and nature. Generative AI presents threats in the form of copyright issues, deep-fakes, and other malicious uses which target specific individuals, organizations and governments. This challenge calls for addressing the biases inherent in training datasets to avoid discrimination and stereotypes (37, 2022, p. 158). To avoid these challenges, marketers can strike a balance between ethical considerations and storytelling before adopting generative AI in marketing. Organizations should prioritize fairness, transparency, accountability and privacy when integrating generative AI systems. For instance, Google has established ethical frameworks and review processes to ensure continuous monitoring and accountability for AI systems (38, 2019, p. 84). Besides, other companies have ventured into algorithmic transparency and interpretability by employing algorithms that are interpretable and transparent. Through transparency, these companies identify and mitigate biases in their decision-making. LIME (Local Interpretable Model-agnostic Explanations) is an algorithm that provides explanations for individual predictions (39, 2017).

### Dependence on data quality and accuracy

The ethical use and effectiveness of generative AI depend on the quality and accuracy of the underlying data. Incomplete or biased datasets lead to the generation of narratives that exclude or misrepresent a certain portion of the target audience (40, 2022, p. 28). In 2016, Microsoft released a chatbot named Tay on Twitter, designed to interact with and learn from users' conversations. Unfortunately, Tay started posting inflammatory and offensive tweets because of the poor quality of its training data (41, 2022, p. 4). Users deliberately fed the AI biased and inappropriate information, leading to a public relations disaster for Microsoft. While generating information using AI-driven storytelling, marketers need to ensure data integrity and validate the accuracy of training datasets (10, 2023). By addressing potential bias, marketers can ensure that the generated narratives align with the intended goals and objectives, while at the same time resonating with diverse consumer groups. Generative AI systems create artifacts like audio, pictures, and writing samples which can be discriminatory (42, 2022, p. 123). These systems can misidentify things such as people, words or pictures, leading to discrimination. Even though these systems are based on neural network models, they can modify the data presented through their own internal structure and change the output based on the feedback they receive.
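As a concrete instance of the "disparate impact analysis" and "fairness metrics" mentioned above, the short Python sketch below computes the disparate impact ratio (the familiar four-fifths rule) for the positive-outcome rates of two groups. The toy data, group labels and 0.8 threshold are illustrative assumptions; dedicated fairness toolkits implement this and many other metrics.

```python
def disparate_impact(outcomes, groups, protected, reference):
    """Disparate impact ratio: P(positive | protected) / P(positive | reference).

    outcomes: list of 0/1 decisions (e.g., whether a personalized offer was shown)
    groups:   list of group labels, one per decision
    """
    def positive_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)

    return positive_rate(protected) / positive_rate(reference)

# Illustrative toy data: which users were shown a premium campaign.
outcomes = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

ratio = disparate_impact(outcomes, groups, protected="b", reference="a")
print(f"disparate impact ratio: {ratio:.2f}")
# A common rule of thumb flags ratios below 0.8 (the "four-fifths rule").
print("flagged for review" if ratio < 0.8 else "within the four-fifths threshold")
```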
Such biases can have concrete consequences: an investigation by ProPublica in 2016 found that Facebook's ad platform allowed advertisers to exclude certain ethnic and racial groups from seeing their housing-related ads (43, 2023, p. 34). The biases in the training data used by the model led to discriminatory practices that violated fair housing regulations. These AI models perpetuate the biases they are fed, leading to consequences ranging from discriminatory advertising to the reinforcement of harmful stereotypes.

### The demand for proficient professionals

The implementation of generative AI-driven storytelling in marketing calls for a workforce skilled in data analysis, AI technologies and storytelling. The future calls for professionals who understand the intricacies of AI technologies and can leverage generative AI for effective storytelling (44, 2023, p. 27). Organizations need to invest in training and development programs which equip marketing teams with the necessary skills to leverage the power of generative AI in storytelling. Wiredu (44) recommends that companies invest in acquiring talent with expertise in generative AI, including machine-learning engineers, data scientists and AI researchers. The author also calls for these individuals to be given continuous learning opportunities and skill development. Organizations can offer workshops and training programs, and collaborate with academic institutions to nurture talent.

## 6 Future Directions and Recommendations

### Prospective applications of generative AI-driven storytelling in marketing

The horizon of generative AI-driven storytelling is vast and ever-expanding. As technology evolves, we foresee its application in real-time storytelling, where narratives adapt on-the-fly based on real-time user interactions. The integration of generative AI with augmented and virtual reality promises immersive brand experiences that were hitherto deemed science fiction. Social media platforms stand to benefit immensely, with AI-driven content curation tailoring user feeds to individuals.

### Potential avenues for further research and exploration

As AI-driven storytelling evolves, avenues for future research that address current knowledge gaps include fairness metrics, bias identification and human-AI collaboration. There should be studies on metrics and evaluation frameworks that can be used to assess the effectiveness and quality of AI-generated stories. This research should focus on factors such as developing more robust and context-specific fairness metrics that account for temporal biases, gender bias and intersectional biases. Additionally, research is needed to determine the trade-offs between different fairness criteria and how to prioritize them in various applications. Causal inference techniques can be used to design interventions and mitigate biases effectively (13, 2023). Exploring causal interventions to mitigate biases and their unintended consequences is an important area of investigation. Further, research should explore ways of enhancing collaboration between AI algorithms and human storytellers. Future studies should investigate how humans and AI systems can collaboratively make decisions that address biases in AI systems. Such research can strike the right balance between AI assistance and human creativity, unlocking new storytelling capabilities.
Further research should also be used to investigate the challenges and opportunities of using AI-generative storytelling in different cultural contexts. This understanding will help marketers to navigate the cultural nuance leading to a culturally relevant and sensitive marketing initiative. ### Recommendations for organizations considering the adoption of generative AI-driven storytelling As exciting as the prospects of generative AI are, it is crucial to tread with caution. Fairness metrics need to be established to ensure that the AI-generated narratives are inclusive and non-discriminatory. Continuous research is vital to identify and rectify biases in generative models. Balancing technology with creativity is essential. While AI can generate content, the human touch, the intuition, and the emotional understanding remain irreplaceable. For organizations considering the adoption of generative AI-driven storytelling: * Investing in data diversity and quality proves to be effective in generating AI models which are more diverse, comprehensive, and representative of the target audience. This strategy can be enhanced by adopting right infrastructure which facilitates seamless integration and computational requirements into the existing systems. Some of the key considerations to be made include; high-performance computing infrastructure like cloud services and GPUs that ensure efficient inference processes and training. Emerging technologies and systems have introduced cloud computing programs that help in data management. * The algorithms are good enough to detect threats based on self-learned skills in machine learning. For instance, IBM Research has been actively working on bias mitigation in AI systems and has developed the AI Fairness 360 toolkit [4, 2017, p. 29]. These tools provide users with metrics and algorithms which detect and mitigate biases in machine learning models. OpenAI, the organization behind ChatGPT, has emphasized the importance of cooperation and sharing knowledge for addressing AI biases. This company works with AI research community to address ethical concerns. * Delegation-based architectures can be used to delegate intensive tasks to powerful devices and servers like the Server-based Certificate Validation Protocol (SCVP). This approach enables security protocols like end-to-end IP security protocols [4, 2017, p. 29]. * Security concerns can be improved using hardware-based methods that involve additional hardware security modules like the Trusted Platform Module (TPM). This approach leads to a security paradigm that leads to solid authentication and encryption because they provide security-related functions. Further, the adoption of generative AI requires infrastructure that is scalable to accommodate the growing demand of generative AI models. Besides, organizations should prioritize ethical considerations by establishing ethical guidelines and frameworks that are responsible for using generative AI-driven storytelling. Data management and security systems should be able to handle the large volumes of training data and ensure data privacy and security. For instance, Microsoft has established an AI Ethics and Effects in Engineering and Research (AETHER) Committee, which conducts audits of AI systems to identify and address biases. Through these measures, the companies adopting these technologies will adhere to regulatory requirements [(4, 2017, p. 29)]. 
Information on the source and nature of content should also be availed to the target consumers to address the potential biases and maintain consumer trust. By implementing these pragmatic recommendations and drawing inspiration from companies like IBM, Microsoft, Google, the Partnership on AI, and OpenAI, organizations can take actionable steps to mitigate biases in AI systems. ## 7 Conclusion This study takes advantage of conceptual research design, which involves methodologies used in observing and analyzing information presented on a given topic. Through the conceptual research framework, the study has combined the previous research with the ongoing discovery to point out underlying issues with the subject matter. The confluence of generative AI and storytelling marks a transformative epoch in the domain of marketing. By combining the power of AI with the art of storytelling, marketers can create an engaging, personalized and impactful narrative that resonates with the target consumer on a deeper level. As we stand on the cusp of this revolution, it is imperative for marketing professionals to not only harness the power of this technology but also to understand its nuances, its challenges, and its ethical implications. While navigating through generative AI-driven storytelling, it is crucial to strike a balance between technology, creativity and ethical considerations. Through this balance, the marketers unlock full potential of generative AI in marketing, build stronger connection and compelling narratives. The implications of generative AI-driven storytelling within the realm of marketing analytics shape the way brands engage and communicate with their target customers. The overall significance of the research in the field builds understanding on the power of generative AI in the evolving digital landscape. This understanding guides corporations in pushing boundaries and staying competitive in the ever-changing business world. The challenge of AI data management can be addressed using standards, policies, and governance that shape the evolution of IoT. Future studies should narrow down this topic in order to identify facility-specific security techniques. For marketers, Generative AI-driven storytelling offers opportunities for enhanced content creation, personalization, and customer engagement. Marketers should explore the potential of AI-generated content in creating targeted and personalized storytelling experiences by leveraging AI for interactive advertisements, dynamic content generation, and personalized recommendations. For researchers, generative AI-driven storytelling is a dynamic research area with immense potential for innovation and creative expression. They should explore AI models and algorithms to improve the quality and coherence of generative storytelling by combining images, texts and audio leading to more immersive storytelling experiences. Policy frameworks and guidelines are needed to address ethical concerns, privacy issues, and accountability in generative AI-driven storytelling. #### Author Contributions We confirm that the manuscript has been read and approved by all named authors and that there are no other persons who satisfied the criteria for authorship but are not listed. We further confirm that the order of authors listed in the manuscript has been approved by all of us. #### Funding This research did not receive any external funding. 
The authors were responsible for all aspects of the project, including design, data collection, analysis, and interpretation. ## Declarations ### Conflict of interest We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
2309.11607
Superconducting triangular islands as a platform for manipulating Majorana zero modes
Current proposals for topological quantum computation (TQC) based on Majorana zero modes (MZM) have mostly been focused on coupled-wire architecture which can be challenging to implement experimentally. To explore alternative building blocks of TQC, in this work we study the possibility of obtaining robust MZM at the corners of triangular superconducting islands, which often appear spontaneously in epitaxial growth. We first show that a minimal three-site triangle model of spinless $p$-wave superconductor allows MZM to appear at different pairs of vertices controlled by a staggered vector potential, which may be realized using coupled quantum dots and can already demonstrate braiding. For systems with less fine-tuned parameters, we suggest an alternative structure of a "hollow" triangle subject to uniform supercurrents or vector potentials, in which MZM generally appear when two of the edges are in a different topological phase from the third. We also discuss the feasibility of constructing the triangles using existing candidate MZM systems and of braiding more MZM in networks of such triangles.
Aidan Winblad, Hua Chen
2023-09-20T19:40:56Z
http://arxiv.org/abs/2309.11607v2
# Superconducting triangular islands as a platform for manipulating Majorana zero modes ###### Abstract Current proposals for topological quantum computation (TQC) based on Majorana zero modes (MZM) have mostly been focused on coupled-wire architecture which can be challenging to implement experimentally. To explore alternative building blocks of TQC, in this work we study the possibility of obtaining robust MZM at the corners of triangular superconducting islands, which often appear spontaneously in epitaxial growth. We first show that a minimal three-site triangle model of spinless \(p\)-wave superconductor allows MZM to appear at different pairs of vertices controlled by a staggered vector potential, which may be realized using coupled quantum dots and can already demonstrate braiding. For systems with less fine-tuned parameters, we suggest an alternative structure of a "hollow" triangle subject to uniform supercurrents or vector potentials, in which MZM generally appear when two of the edges are in a different topological phase from the third. We also discuss the feasibility of constructing the triangles using existing candidate MZM systems and of braiding more MZM in networks of such triangles. _Introduction.--_For more than twenty years, Majorana zero modes (MZM) in condensed matter systems have been highly sought after due to their potential for serving as building blocks of topological quantum computation, thanks to their inherent robustness against decoherence and non-Abelian exchange statistics [1; 2; 3; 4; 5]. MZM were originally proposed to be found in half-quantum vortices of two-dimensional (2D) topological \(p\)-wave superconductors and at the ends of 1D spinless \(p\)-wave superconductors [6; 7]. Whether a pristine \(p\)-wave superconductor [8] has been found is still under debate. However, innovative heterostructures proximate to ordinary \(s\)-wave superconductors have been proposed to behave as effective topological superconductors in both 1D and 2D. These include, for example, semiconductor nanowires subject to magnetic fields [9; 10; 11], ferromagnetic atomic spin chains [12; 13; 14; 15; 16; 17], 3D topological insulators [18; 19; 20; 21], quantum anomalous Hall insulators [22; 23; 24], quasi-2D spin-orbit-coupled superconductors with a perpendicular Zeeman field [25; 26; 27; 28; 29; 30], and planar Josephson junctions [31; 32; 33; 34; 35; 36; 37], etc. It has been a challenging task to decisively confirm the existence of MZM in the various experimental systems due to other competing mechanisms that can potentially result in similar features as MZM do in different probes [38; 39; 40; 41; 42; 43; 34]. Other proposals for constructing Kitaev chains through a bottom-up approach, based on, e.g. magnetic tunnel junctions proximate to spin-orbit-coupled superconductors [44], and quantum dots coupled through superconducting links [45; 46; 47] are therefore promising. In particular, the recent experiment [47] of a designer minimal Kitaev chain based on two quantum dots coupled through tunable crossed Andreev reflections (CAR) offers a compelling route towards MZM platforms based on exactly solvable building blocks. In parallel with the above efforts of realizing MZM in different materials systems, scalable architectures for quantum logic circuits based on MZM have also been intensely studied over the past decades. 
A major proposal among these studies is to build networks of T-junctions, which are minimal units for swapping a pair of MZM hosted at different ends of a junction, that allow braiding-based TQC [5]. Alternatively, networks based on coupled wires forming the so-called tetrons and hexons, aiming at measurement-based logic gate operations [48], have also been extensively investigated. To counter the technical challenges of engineering networks with physical wires or atomic chains, various ideas based on effective Kitaev chains, such as quasi-1D systems in thin films [49], cross Josephson junctions [37], scissor cuts on a quantum anomalous Hall insulator [24], and rings of magnetic atoms [50], etc. have been proposed. However, due to the same difficulty of obtaining or identifying genuine MZM in quasi-1D systems mentioned above, it remains unclear how practical these strategies are in the near future. In this Letter, we propose an alternative structural unit for manipulating MZM, triangular superconducting islands, motivated by the above challenges associated with wire geometries and by the fact that triangular islands routinely appear spontaneously in epitaxial growth [51] on close-packed atomic surfaces. We first show that a minimal "Kitaev triangle" consisting of three sites hosts MZM at different pairs of vertices controlled by Peierls phases on the three edges [Fig. 1 (a)], which can be readily realized using quantum dots. To generalize the minimal model to triangular structures involving more degrees of freedom, we study the topological phase transitions of quasi-1D ribbons driven by Peierls phases, which can be created by magnetic fields or supercurrents [52; 53], and use the resulting phase diagram as a guide to construct finite-size triangles with a hollow interior that host MZM [Fig. 1 (b)]. In the end we discuss possible experimental systems that can realize our proposals and scaled-up networks of triangles for implementing braiding operations of MZM. _Kitaev triangle.--_In this section we present an exactly solvable minimal model with three sites forming a "Kitaev triangle" that can host MZM at different pairs of vertices controlled by Peierls phases on the edges. The Bogoliubov-de Gennes (BdG) Hamiltonian includes complex hopping and \(p\)-wave pairing between three spinless fermions forming an equilateral triangle [Fig. 1 (a)]: \[\mathcal{H}=\sum_{\langle jl\rangle}(-te^{i\phi_{jl}}c_{j}^{\dagger}c_{l}+ \Delta e^{i\theta_{jl}}c_{j}c_{l}+\text{h.c.})-\sum_{j}\mu c_{j}^{\dagger}c_{j}, \tag{1}\] where \(t\) is the hopping amplitude, \(\Delta\) is the amplitude of the (2D) \(p\)-wave pairing, \(\mu\) is the chemical potential, \(\theta_{jl}\) is the polar angle of \(\mathbf{r}_{jl}=\mathbf{r}_{l}-\mathbf{r}_{j}\) (the \(x\) axis is chosen to be along \(\mathbf{r}_{12}\)), consistent with \(\{c_{l}^{\dagger},c_{j}^{\dagger}\}=0\). \(\phi_{jl}\) is the Peierls phase due to a bond-dependent vector potential \(\mathbf{A}\) to be specified below (the nearest neighbor distance \(a\) is chosen to be the length unit hereinbelow): \[\phi_{jl}=\frac{e}{\hbar}\int_{\mathbf{r}_{j}}^{\mathbf{r}_{l}}\mathbf{A} \cdot d\mathbf{l}=-\phi_{lj} \tag{2}\] where \(e>0\) is the absolute value of the electron charge. Below we use the natural units \(e=\hbar=1\). 
To get the conditions for having MZM in this model we rewrite \(\mathcal{H}\) in the Majorana fermion basis \(a_{j}=c_{j}+c_{j}^{\dagger}\), \(b_{j}=\frac{1}{i}(c_{j}-c_{j}^{\dagger})\): \[\mathcal{H}=-\frac{i}{2}\sum_{\langle jl\rangle}\Big{[} \left(t\sin\phi_{jl}-\Delta\sin\theta_{jl}\right)a_{j}a_{l} \tag{3}\] \[+\left(t\sin\phi_{jl}+\Delta\sin\theta_{jl}\right)b_{j}b_{l}\] \[+\left(t\cos\phi_{jl}-\Delta\cos\theta_{jl}\right)a_{j}b_{l}\] \[-\left(t\cos\phi_{jl}+\Delta\cos\theta_{jl}\right)b_{j}a_{l} \Big{]}-\frac{i\mu}{2}\sum_{j}a_{j}b_{j}\] For concreteness we consider the Kitaev limit \(t=\Delta\), \(\mu=0\), and choose \(\phi_{12}=0\) so that sites \(1\) and \(2\) alone form a minimal Kitaev chain with \(\mathcal{H}_{12}=itb_{1}a_{2}\) and hosting MZM \(a_{1}\) and \(b_{2}\). In order for the MZM to persist in the presence of site \(3\), one can choose \(\phi_{23}\) and \(\phi_{31}\) so that all terms involving these Majorana operators cancel out. For example, consider the \(2-3\) bond, for which \(\theta_{23}=2\pi/3\), we require \[\sin\phi_{23}+\sin\frac{2\pi}{3}=\cos\phi_{23}+\cos\frac{2\pi}{3}=0 \tag{4}\] which means \(\phi_{23}=-\pi/3\). Similarly one can find \(\phi_{31}=-\phi_{13}=-\pi/3\). The three Peierls phases can be realized by the following staggered vector potential \[\mathbf{A}=\left[1-2\Theta(x)\right]\frac{2\pi}{3\sqrt{3}}\hat{\mathbf{y}} \tag{5}\] where \(\Theta(x)\) is the Heaviside step function. In fact, using a uniform \(\mathbf{A}=\frac{2\pi}{3\sqrt{3}}\hat{\mathbf{y}}\), which corresponds to \(\phi_{23}=-\pi/3=-\phi_{31}\) also works, since the existence of \(a_{1}\) is unaffected by \(\phi_{23}\). However, in this case the counterpart of \(b_{2}\) is not localized on a single site. For the same reason, the above condition for MZM localized at triangle corners can be generalized to Kitaev chains forming a triangular loop, as well as to finite-size triangles of 2D spinless \(p\)-wave superconductors in the Kitaev limit, as the existence of \(a_{1}\) and \(b_{2}\) are only dictated by the vector potential near the corresponding corners. It should be noted that in the latter case, 1D Majorana edge states will arise when the triangle becomes larger, and effectively diminish the gap that protects the corner MZM. On the other hand, for the longer Kitaev chain, due to the potential practical difficulty of controlling further-neighbor hopping and pairing amplitudes, it is better to resort to the approach of controlling the individual topological phases of the three edges which will be detailed in the next section. We next show that the minimal Kitaev triangle suffices to demonstrate braiding of MZM. To this end we consider a closed parameter path linearly interpolating between the following sets of values of \(\phi_{jl}\): \[(\phi_{12},\phi_{23},\phi_{31}) = \left(0,-\frac{\pi}{3},-\frac{\pi}{3}\right)\equiv\mathbf{\phi}_{1}\] \[\rightarrow \left(-\frac{\pi}{3},-\frac{\pi}{3},0\right)\equiv\mathbf{\phi}_{2}\] \[\rightarrow \left(-\frac{\pi}{3},0,-\frac{\pi}{3}\right)\equiv\mathbf{\phi}_{3}\] \[\rightarrow \mathbf{\phi}_{1}\] It is straightforward to show that at \(\mathbf{\phi}_{2}\) and \(\mathbf{\phi}_{3}\) there are MZM located at sites \(3,1\) and \(2,3\), respectively. Therefore the two original MZM at sites \(1,2\) should switch their positions at the end of the adiabatic evolution. Indeed, Fig. 2 shows that the MZM stays at zero energy throughout the parameter path that interchanges their positions. 
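The conditions derived above can be verified directly by diagonalizing the \(6\times 6\) BdG matrix of Eq. (1). The Python sketch below builds that matrix for the three-site triangle in the Kitaev limit (\(t=\Delta\), \(\mu=0\)) and prints the spectrum at \(\mathbf{\phi}_{1}\) and \(\mathbf{\phi}_{2}\) of the braiding path; according to the discussion above, two eigenvalues should vanish in each case. The site ordering, the particle-hole block convention, and the use of NumPy are illustrative implementation choices, not part of the paper.

```python
import numpy as np

# Three sites at (0,0), (1,0), (1/2, sqrt(3)/2); oriented bonds 1->2, 2->3, 3->1.
t, Delta, mu = 1.0, 1.0, 0.0                     # Kitaev limit t = Delta, mu = 0
bonds = [(0, 1), (1, 2), (2, 0)]
theta = {(0, 1): 0.0, (1, 2): 2 * np.pi / 3, (2, 0): 4 * np.pi / 3}  # polar angle of r_l - r_j

def bdg_spectrum(phi):
    """Eigenvalues of the BdG matrix of Eq. (1) for phases phi = (phi_12, phi_23, phi_31)."""
    h = -mu * np.eye(3, dtype=complex)           # normal (hopping) block
    D = np.zeros((3, 3), dtype=complex)          # antisymmetric pairing block
    for (j, l), p in zip(bonds, phi):
        h[j, l] += -t * np.exp(1j * p)
        h[l, j] += -t * np.exp(-1j * p)
        D[j, l] += Delta * np.exp(1j * theta[(j, l)])
        D[l, j] -= Delta * np.exp(1j * theta[(j, l)])
    # H = (1/2) Psi^dag H_bdg Psi with Psi = (c_1, c_2, c_3, c_1^dag, c_2^dag, c_3^dag)^T
    H_bdg = np.block([[h, D.conj().T], [D, -h.T]])
    return np.sort(np.linalg.eigvalsh(H_bdg))

print(bdg_spectrum([0.0, -np.pi / 3, -np.pi / 3]))   # phi_1: two zero modes (MZM at sites 1 and 2)
print(bdg_spectrum([-np.pi / 3, -np.pi / 3, 0.0]))   # phi_2: two zero modes (MZM at sites 3 and 1)
```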
To show that such an operation indeed realizes braiding, we explicitly calculated the many-body Berry phase of the evolution [4; 50; 54] and found the two degenerate many-body ground states acquire a \(\frac{\pi}{2}\) difference in their Berry phases as expected [4]. Compared to the minimum T-junction model with four sites [4], our Kitaev triangle model only requires three sites to achieve Figure 1: Schematics of two triangle structures proposed in this work. (a) Three-site Kitaev triangle with bond-dependent Peierls phases. The bottom-left, bottom-right, and top sites are labeled by \(1\), \(2\), and \(3\), respectively. (b) Hollow triangular island with a uniform vector potential. braiding between two MZM, and is potentially also easier to engineer experimentally. In the next section we will show that a more mesoscopic hollow-triangle structure can achieve similar results and may be preferred in other materials platforms. _Hollow triangles.--_For systems with less fine-tuned Hamiltonians than the minimal model in the previous section, it is more instructive to search for MZM based on topological arguments. In this section we show that MZM generally appear at the corners of a hollow triangle, which can be approximated by joining three finite-width chains or ribbons whose bulk topology is individually tuned by the same uniform vector potential. To this end, we first show that topological phase transitions can be induced by a vector potential in a spinless \(p\)-wave superconductor ribbon. In comparison with similar previous proposals that mostly focused on vector potentials or supercurrents flowing along the chain [52; 53], we consider in particular the tunability by varying the direction of the vector potential relative to the length direction of the ribbon, which will become instrumental in a triangular structure. Consider Eq. (1) on a triangular lattice defined by unit-length lattice vectors \((\mathbf{a}_{1},\mathbf{a}_{2})=(\hat{\mathbf{x}},\frac{1}{2}\hat{\mathbf{x} }+\frac{\sqrt{3}}{2}\hat{\mathbf{y}})\) with \(W\) unit cells along \(\mathbf{a}_{2}\) but infinite unit cells along \(\mathbf{a}_{1}\), and assume the Peierls phases are due to a uniform vector potential \(\mathbf{A}\) so that \(\phi_{jl}=\mathbf{A}\cdot\mathbf{r}_{jl}\). We also introduce \(\mathbf{a}_{3}\equiv-\mathbf{a}_{1}+\mathbf{a}_{2}\) for later convenience. The Hamiltonian is periodic along \(x\) and can be Fourier transformed through \(c^{\dagger}_{m,n}=\frac{1}{\sqrt{N}}\sum_{k}c^{\dagger}_{k,n}e^{-ikm}\), where \(m,n\) label the lattice sites as \(\mathbf{r}_{m,n}=m\mathbf{a}_{1}+n\mathbf{a}_{2}\). The resulting momentum space Hamiltonian can be written as the following block form up to a constant \[\mathcal{H} = \frac{1}{2}\sum_{k}\Psi^{\dagger}_{k}\begin{pmatrix}h_{t}(k)&h_{ \Delta}(k)\\ h^{\dagger}_{\Delta}(k)&-h^{*}_{t}(-k)\end{pmatrix}\Psi_{k}\] \[\equiv \frac{1}{2}\sum_{k}\Psi^{\dagger}_{k}H(k)\Psi_{k}\] where \(\Psi_{k}\equiv(c_{k,1},\ldots,c_{k,W},c^{\dagger}_{-k,1},\ldots c^{\dagger}_{- k,W})^{T}\). \(h_{t}(k)\) is a \(W\times W\) Hermitian tridiagonal matrix with \((h_{t})_{n,n}=-2t\cos(k+\mathbf{A}\cdot\mathbf{a}_{1})-\mu\) and \((h_{t})_{n,n+1}=-t\left(e^{i(-k+\mathbf{A}\cdot\mathbf{a}_{3})}+e^{i\mathbf{A} \cdot\mathbf{a}_{2}}\right)\). \(h_{\Delta}(k)\) is a \(W\times W\) tridiagonal matrix with \((h_{\Delta})_{n,n}=-2i\Delta\sin k\) and \((h_{\Delta})_{n,n\pm 1}=\mp\Delta\left[e^{-i(\pm k+\frac{W}{2})}+e^{-i\frac{W}{2}}\right]\). By transforming Eq. 
(7) to the Majorana basis using the unitary transformation: \[U\equiv\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ -i&i\end{pmatrix}\otimes\mathbb{I} \tag{8}\] where \(\mathbb{I}\) is a \(W\times W\) identity matrix, and defining \(A_{k}\equiv-iUH(k)U^{\dagger}\), not to be confused with the vector potential, one can calculate the Majorana number [7]\(\mathcal{M}\) of the 1D ribbon as [55] \[\mathcal{M}=\mathrm{sgn}\left[\mathrm{Pf}(A_{k=0})\mathrm{Pf}(A_{k=\pi})\right] \tag{9}\] where \(\mathrm{Pf}\) stands for the Pfaffian of a skew-symmetric matrix [7]. When \(\mathcal{M}=-1\), the 1D system is in a nontrivial topological phase with MZM appearing at open ends of semi-infinite ribbons, and otherwise for \(\mathcal{M}=1\). In Fig. 3 (a) we show the topological phase diagrams for a 1D ribbon with width \(W=1\), \(\mathbf{A}=A\hat{\mathbf{y}}\) and \(\mathbf{A}=A(\frac{\sqrt{3}}{2}\hat{\mathbf{x}}+\frac{1}{2}\hat{\mathbf{y}})\) superimposed (see below). We found that the vector potential component normal to the ribbon length direction has no effect on the Majorana number, nor does the sign of its component along the ribbon length direction. However, topological phase transitions can be induced by varying the size of the vector potential component along the ribbon, consistent with previous results [52; 53]. These properties motivate us to consider the structure of a hollow triangle formed by three finite-width ribbons subject to a uniform vector potential \(\mathbf{A}=A\hat{\mathbf{y}}\) as illustrated in Fig. 1 (b). The light blue color on the phase diagram Fig. 3 (a) therefore means that the bottom edge and the two upper edges of the hollow triangle have different \(\mathcal{M}\), which should give rise to MZM localized at the two bottom corners if the triangle is large enough so that bulk-edge correspondence holds, and gap closing does not occur at other places along its edges. To show that corner MZM indeed appear when the conditions given by the phase diagram Fig. 3 (a) are met, we directly diagonalize the BdG Hamiltonian of a finite hollow triangle with edge length \(L=50\) and width \(W=1\). Fig. 3 (b) shows the spectral flow (BdG eigen-energies evolving with increasing vector potential \(A\)) close to zero energy at chemical potential \(\mu=1.6\). Indeed, zero-energy modes appear in the regions of \(\mu\) and \(A\) consistent with the phase diagram (except when the bulk band gap is too small; see [54] for some examples.). Hollow triangles with larger larger \(W\) also have qualitatively similar behavior, although the phase diagrams are more complex [54]. The eigenfunctions for the zero-energy modes at \(A=2.75\) and \(\mu=1.6\) in Fig. 4 (b) also confirm their spatial localization at the bottom corners of the triangle. We finally show that rotating the uniform vector potential in-plane can manipulate the positions of the MZM without hybridizing them with bulk states for certain ranges of \(\mu\) and \(A\). Fig. 4 (a) plots the spectral flow versus the in-plane azimuthal angle of \(\mathbf{A}\), which clearly shows that the zero-energy modes persist throughout the rotation and the bulk gap never closes. Figs. 4 (b-d) plot the BdG wavefunctions of the MZM at special values of \(\varphi\). One can see that the two MZM appear to cycle through the three vertices by following the rotation of \(\mathbf{A}\). The robustness of the MZM therefore requires the condition of two edges being in a different topological phase from the third one to be satisfied throughout the rotation. 
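Before moving on, here is a small illustrative cross-check of the Majorana number in Eq. (9) (ours, not the authors'), written out for the narrowest ribbon, \(W=1\), for which Eq. (7) reduces to a single chain along \(\mathbf{a}_{1}\) with scalar blocks \(h_{t}(k)=-2t\cos(k+\mathbf{A}\cdot\mathbf{a}_{1})-\mu\) and \(h_{\Delta}(k)=-2i\Delta\sin k\). The Pfaffian is evaluated by a brute-force recursion that is only adequate for the small matrices used here. For \(W=1\) the resulting criterion is \(\mathcal{M}=-1\) when \(|\mu|<2t|\cos(\mathbf{A}\cdot\mathbf{a}_{1})|\), i.e. the usual Kitaev-chain condition with the hopping renormalized by the field component along the ribbon, consistent with the dependence described above.

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of a (small) real antisymmetric matrix by recursive expansion."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    total = 0.0
    for j in range(1, n):
        idx = [i for i in range(n) if i not in (0, j)]
        total += (-1.0) ** (j + 1) * A[0, j] * pfaffian(A[np.ix_(idx, idx)])
    return total

def majorana_number(Hk, W):
    """Eq. (9): M = sgn[Pf(A_0) Pf(A_pi)] with A_k = -i U H(k) U^+ (Eq. (8))."""
    U = np.kron(np.array([[1, 1], [-1j, 1j]]) / np.sqrt(2), np.eye(W))
    pf = [pfaffian(np.real(-1j * U @ Hk(k) @ U.conj().T)) for k in (0.0, np.pi)]
    return int(np.sign(pf[0] * pf[1]))

def ribbon_W1(mu, A_par, t=1.0, delta=1.0):
    """W = 1 limit of Eq. (7): a single chain with Peierls phase A_par = A . a1."""
    def Hk(k):
        ht_p = -2 * t * np.cos(k + A_par) - mu    # h_t(k)
        ht_m = -2 * t * np.cos(-k + A_par) - mu   # h_t(-k)
        hD = -2j * delta * np.sin(k)              # h_Delta(k)
        return np.array([[ht_p, hD], [np.conj(hD), -np.conj(ht_m)]])
    return Hk

for mu, A_par in [(1.0, 0.0), (1.0, 1.2), (2.5, 0.0)]:
    print(mu, A_par, majorana_number(ribbon_W1(mu, A_par), W=1),
          "analytic:", -1 if abs(mu) < 2 * abs(np.cos(A_par)) else 1)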
Such a criterion combined with the individual phase diagrams of the edges can help isolate the desired parameter regions of \(\mu\) and \(A\). We also note that the positions of the MZM do not interchange after \(\varphi\) increases from \(0\) to \(\pi\), different from the situation of the minimal Kitaev triangle in Fig. 2. The reason is that the MZM in the latter case are not due to bulk-boundary correspondence [the values of \(A=\frac{2\pi}{3\sqrt{3}}\) and \(\mu=0\) are a critical point in the phase diagram Fig. 3 (a)]. While the positions of the MZM at special points along the parameter path in the hollow triangle case have to be additionally constrained by the bulk topological phases of the three edges, that for the Kitaev triangle have more flexibility and are also protected by the finite size of the system. _Discussion.--_The hollow interior of the triangles considered in this work is needed for two reasons: (1) \(W\ll L\) is required for bulk-edge correspondence based on 1D topology to hold; (2) A finite \(W\) is needed to gap out the chiral edge states of a 2D spinless \(p\)-wave superconductor based on which Eq. (7) is written. The latter is not essential if one does not start with a spinless \(p\)-wave superconductor but a more realistic model such as the Rashba+Zeeman+\(s\)-wave pairing model. On the other hand, the former constraint may also be removed if one uses the Kitaev triangle. Nonetheless, an effective 3-site Kitaev triangle may emerge as the effective theory of triangular structures if a three-orbital low-energy Wannier basis can be isolated, similar to the continuum theory of moire structures. We also note in passing that the corner MZM in our triangles appear due to different reasons from that in higher-order topological superconductors [41; 56]. For possible physical realizations of our triangles, immediate choices are quantum dots forming a Kitaev triangle [47], planar Josephson junctions or cuts on quantum anomalous Hall insulator/superconductor heterostructures [24] that form a hollow triangle, and triangular atomic chains assembled by an STM tip [17] on a close-packed surface. The quantum-dot platform may be advantageous in the convenience of implementing parity readout by turning the third vertex temporarily into a normal quantum dot [57; 58; 59]. Looking into the future, it is more intriguing to utilize the spontaneously formed triangular islands in epitaxial growth [51] with the center region removed either physically by lithography/ablation, or electrically by gating. To create a staggered vector potential or supercurrent profile for the Kitaev triangle, one can use a uniform magnetic field, corresponding to a constant vector potential gradient, plus a uniform supercurrent that controls the position of the zero. It is also possible to use two parallel superconducting wires with counter-propagating supercurrents proximate to the triangle. A tentative design for braiding more than two MZM, illustrated in Fig. 5, consists of four triangles sharing corners with their neighbors. The critical step of trans porting \(\gamma_{2}\) to the left vertex of the rightmost triangle, corresponding to Figs. 5 (b,c), can be achieved by rotating the vector potential of the bottom-middle triangle counterclockwise from \(\varphi=\frac{\pi}{6}\) to \(\frac{\pi}{3}\), which swaps the topological phases of the two side edges as shown in Fig. 4. In [54] we show this operation does not involve gap closing at least for certain parameter regions. 
Our work provides a versatile platform for manipulating MZM based on currently available candidate MZM systems and for potentially demonstrating the non-Abelian nature of MZM in near-term devices. **ACKNOWLEDGMENTS** This work was supported by the start-up funding of CSU and partially by NSF CAREER grant DMR-1945023.
2309.05672
Circles: Inter-Model Comparison of Multi-Classification Problems with High Number of Classes
The recent advancements in machine learning have motivated researchers to generate classification models dealing with hundreds of classes such as in the case of image datasets. However, visualization of classification models with a high number of classes and inter-model comparison in such classification problems are two areas that have not received much attention in the literature, despite the ever-increasing use of classification models to address problems with very large class categories. In this paper, we present our interactive visual analytics tool, called Circles, that allows a visual inter-model comparison of numerous classification models with 1K classes in one view. To mitigate the tricky issue of visual clutter, we chose a concentric radial line layout for our inter-model comparison task. Our prototype shows the results of 9 models with 1K classes in a single view; however, up to 20 models' results can be displayed in this way.
Nina Mir, Ragaad AlTarawneh, Shah Rukh Humayoun
2023-09-08T19:39:46Z
http://arxiv.org/abs/2309.05672v1
# Circles: Inter-Model Comparison of Multi-Classification Problems with High Number of Classes ###### Abstract The recent advancements in machine learning have motivated researchers to generate classification models dealing with hundreds of classes such as in the case of image datasets. However, visualization of classification models with a high number of classes and inter-model comparison in such classification problems are two areas that have not received much attention in the literature, despite the ever-increasing use of classification models to address problems with very large class categories. In this paper, we present our interactive visual analytics tool, called **Circles**, that allows a visual inter-model comparison of numerous classification models with 1K classes in one view. To mitigate the tricky issue of visual clutter, we chose a concentric radial line layout for our inter-model comparison task. Our prototype shows the results of 9 models with 1K classes in a single view; however, up to 20 models' results can be displayed in this way. **Index Terms:** Human-centered computing--Visualization--Visualization application domains--Visual analytics ## 1 Introduction With the rise of machine learning (ML) as a service model, selecting a classifier, training it and running test data without having expert-level knowledge about the inner workings of these models have become a reality. In the last few years, we have seen the development of various public and commercial platforms using this approach, such as Google Cloud AutoML [2], Microsoft Azure Machine Learning, Amazon Machine Learning Services, etc. These recent advancements in ML allow data analysts to use numerous models to tackle their problems. However, they face the issue of selecting the best model suitable to their domain and problem at hand [3]. Many visual analytics (VA) tools have been proposed to compare multi-class classification models; however, they mostly deal with only a few classes, e.g., [11] demonstrated a scenario with 10 classes while [9] handles no more than 40 classes (film genres). In the case of datasets with hundreds of classes, such as ImageNet ILSVRC [12] dealing with 1K classes of images, it becomes a challenging issue to compare different models, even for seasoned ML analysts. Although some solutions have been proposed to visualize such models, they focus on only showing one model's results, e.g.: Alsallakh et al. [1] showed the results of one classifier model via a matrix visualization, Uwasketi et al. [14] used a confusion matrix layout to display results of a single image classifier model, and Ono et al. [9] used a concentric radial view to visualize the results of one multi-task classifier. Visualizing models with hundreds of classes requires intuitive solutions to enable the users to explore and compare models so as to select the best model for their problem at hand.

Figure 1: **(a)** Circles inter-model comparison view, where all classification models' outputs are displayed in a concentric radial line view. **(b)** Showing the models' outputs using a radial bar chart. **(c)** _Metric Selection Panel_ provides the option to select an ML metric to be used for the model comparison view in (a). **(d)** Hovering over a particular class highlights the same class in all models and a tool-tip appears to show the value of the used ML metric using a horizontal bar chart. **(e)** Range side bar provides the option to highlight only the classes within the selected range.
Targeting this concern, we present an interactive visual analytics tool, called **Circles**, that allows users to compare the results of multiple multi-class classification models, targeting the 1K classes in the ILSVRC dataset, using an interactive concentric radial view. The Circles tool allows users to explore and compare different models using the most commonly used ML metrics in classification problems. ## 2 The Dataset The ImageNet [4] dataset consists of a collection of over 15 million high-resolution labelled images, belonging to an estimated 22k different categories. We use the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset [12], a subset of ImageNet with images belonging to 1k different categories. Overall, ILSVRC contains an estimated 1.2 million training images, 50k validation images and 150k testing images. Each model's output is a 2-dimensional array in which the prediction distribution is spread across the 1K classes, and the output is saved in JSON format for portability. ## 3 The Circles Tool Our developed tool, called **Circles**, visualizes the results of multiple multi-class classification models targeting the 1K classes in the ILSVRC [12] dataset. The web-based client side was developed using HTML, CSS, JavaScript, the D3.js library, and the Vue framework. The server side was developed using the Node.js run-time environment to manage and process the imported data. One of the main challenges in our target multi-class classification problem is displaying the results of 1K classes. In the case of few classes or a single model, a vertical layout (e.g., [11]) or a matrix layout (e.g., [5, 6, 10]) could work. However, we chose a radial (circular) layout approach (see Fig. 1) for inter-model comparison, as radial layouts produce compact visualizations and use space efficiently [7]. This is because they support a larger data domain on a square area compared to rectangular or square layouts [7, 8]. Also, they encourage the eye to move along the curved lines rather than in a zig-zag fashion as in a square or rectangular figure, which helps users to better understand and explore the underlying data [8]. In Circles, users import the target classification models' JSON files to compare them visually. Circles computes and visualizes these models using the seven most commonly used ML metrics [13] of interest in classification problems, i.e., _accuracy_, _precision_, _recall_, _F1-score_, _specificity_, _false positive rate_, and _false negative rate_. These metrics are computed at the class level, using the formulas provided in [13]. Once the user selects a particular metric, Circles renders the underlying models in a concentric radial view using radial (circular) lines. In Fig. 1(a), each radial line represents one of the input models, where each model's radial line is plotted using the prediction distribution of the validation images in the ILSVRC dataset under the selected ML metric against the 1K classes. The lower and upper bounds for each of the ML metrics are, respectively, 0 and 1. This range is projected into a circular buffer area of 10 pixels (0.1 = 1 px). In other words, each model's radial curve is plotted within this circular buffer area. However, we have added more than 10 pixels of distance between the concentric curves to increase visibility and to avoid visual clutter. Further spacing can be achieved by using the Plot Spacing slider (Fig. 1(c)). We use the radial line path generator in the D3.js library, which works with radial coordinates, to create these path elements.
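As an illustration of the class-level metric computation that feeds these radial lines, the sketch below derives the seven metrics from a multi-class confusion matrix using the standard one-vs-rest definitions. This is our own minimal reconstruction rather than the tool's actual code (the paper computes the metrics with the formulas of [13] from per-image prediction distributions stored in JSON), and the array and variable names are placeholders.

```python
import numpy as np

def per_class_metrics(conf):
    """conf[i, j] = number of validation images of true class i predicted as class j.
    Returns a dict of per-class arrays (length = number of classes)."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp          # predicted as class c but true class differs
    fn = conf.sum(axis=1) - tp          # true class c but predicted as something else
    tn = total - tp - fp - fn
    eps = 1e-12                         # guard against division by zero for empty classes
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return {
        "accuracy": (tp + tn) / total,          # one-vs-rest accuracy per class
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall + eps),
        "specificity": tn / (tn + fp + eps),
        "fpr": fp / (fp + tn + eps),
        "fnr": fn / (fn + tp + eps),
    }

# Each metric lies in [0, 1]; in the radial view a value v of class c would be drawn
# at angle 2*pi*c/1000 with radius r0 + 10*v pixels (r0, the model's base radius,
# is a placeholder here; 0.1 = 1 px gives the 10-pixel buffer described above).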
In order to accommodate 1K classes, one revolution (360 degrees) is divided into 1000 equal parts, resulting in 1000 non-overlapping points around a circle. All that is left is deciding the separation distance (in pixels) between the resulting concentric circular paths to avoid inter-model overlapping. Further, we use the D3.js library's interpolator to produce a cubic Catmull-Rom spline, which not only generates smooth curves but also ensures that the resulting path goes through all the control points. Circles also provides the option to show the view as radial bars on demand (Fig. 1(b)). Circles provides several interaction and filtering options for better inter-model comparison, e.g., users can change the space between radial lines using a slider (Fig. 1(c)). Further, a range slider is provided to select a portion of the classes, which results in highlighting only the classes in that range (Fig. 1(e)). Hovering over a particular class in any model highlights it in red and also highlights the same class in all other models in blue (Fig. 1(d)). Furthermore, a tooltip appears to show the value of the selected ML metric for this class in all models through a horizontal bar chart (Fig. 1(d)). ## 4 Future Work The presented Circles tool enables the visual comparison of numerous classification models with a high number of classes (e.g., 9 models with 1K classes are presented in this paper). In the future, we intend to execute a detailed user study targeting the inter-model comparison view. We also plan to conduct detailed user studies to gain more insight about the needs of ML analysts who work with classification models. Finally, we would like to provide additional facilities in Circles' inter-model comparison view, e.g., additional class filtration features, color-coding to add an additional dimension to our visuals, etc.
2309.05114
Multi UAV-enabled Distributed Sensing: Cooperation Orchestration and Detection Protocol
This paper proposes an unmanned aerial vehicle (UAV)-based distributed sensing framework that uses orthogonal frequency-division multiplexing (OFDM) waveforms to detect the position of a ground target, and UAVs operate in half-duplex mode. A spatial grid approach is proposed, where an specific area in the ground is divided into cells of equal size, then the radar cross-section (RCS) of each cell is jointly estimated by a network of dual-function UAVs. For this purpose, three estimation algorithms are proposed employing the maximum likelihood criterion, and digital beamforming is used for the local signal acquisition at the receive UAVs. It is also considered that the coordination, fusion of sensing data, and central estimation is performed at a certain UAV acting as a fusion center (FC). Monte Carlo simulations are performed to obtain the absolute estimation error of the proposed framework. The results show an improved accuracy and resolution by the proposed framework, if compared to a single monostatic UAV benchmark, due to the distributed approach among the UAVs. It is also evidenced that a reduced overhead is obtained when compared to a general compressive sensing (CS) approach.
Xavier Alejandro Flores Cabezas, Diana Pamela Moya Osorio, Markku Juntti
2023-09-10T19:05:30Z
http://arxiv.org/abs/2309.05114v1
# Multi UAV-enabled Distributed Sensing: ###### Abstract This paper proposes an unmanned aerial vehicle (UAV)-based distributed sensing framework that uses orthogonal frequency-division multiplexing (OFDM) waveforms to detect the position of a ground target, and UAVs operate in half-duplex mode. A spatial grid approach is proposed, where an specific area in the ground is divided into cells of equal size, then the radar cross-section (RCS) of each cell is jointly estimated by a network of dual-function UAVs. For this purpose, three estimation algorithms are proposed employing the maximum likelihood criterion, and digital beamforming is used for the local signal acquisition at the receive UAVs. It is also considered that the coordination, fusion of sensing data, and central estimation is performed at a certain UAV acting as a fusion center (FC). Monte Carlo simulations are performed to obtain the absolute estimation error of the proposed framework. The results show an improved accuracy and resolution by the proposed framework, if compared to a single monostatic UAV benchmark, due to the distributed approach among the UAVs. It is also evidenced that a reduced overhead is obtained when compared to a general compressive sensing (CS) approach. distributed sensing, integrated sensing and communications, unmanned aerial vehicle network. ## I Introduction Sensing services, enabled by the emerging concept of integrated sensing and communications (ISAC) in future perceptive mobile networks (PMN), will not only allow for a more efficient use of network resources, but a further exploration of the utility of cellular systems by supporting several use cases [1, 2]. Sensing tasks include detection, localization and tracking, imaging, and recognition [1]. Localization can be interpreted as a parameter estimation problem, with the the mean squared error (MSE) [3] or the Cramer-Rao bound (CRB) [4] as its performance metrics. For improving localization accuracy, the coherent joint transmission and reception and centralized signal processing in distributed sensing settings can be exploited, which also alleviates the need of full-duplex operation at sensing nodes [5]. Distributed sensing over cellular networks has been conceptualized to give shape to the concept of PMNs, which has been given increasing attention [6, 7, 8, 9, 10]. Particularly, in [7], a comprehensive summary on the opportunities and challenges raised by PMNs is provided. In [8], a framework for distributed sensing over a PMN is addressed by employing orthogonal frequency division multiple access (OFDMA) and multi-user MIMO. This framework includes uplink, downlink active, and downlink passive sensing as types of sensing and compressive sensing (CS) as the method for parameter estimation. On the other hand, synchronization has proved to be a great challenge in the context of PMNs, which has been considered in [9]. Therein, a technique to remove the effect of the timing and phase offsets at the receiver side is proposed when the sensing is performed in the uplink. This is attained by correlating the outputs of receive antennas followed by a high-pass filtering. It is also evident that leveraging more flexible nodes, such as unmanned aerial vehicles (UAVs), can expand the capabilities of distributed sensing systems by providing more degrees of freedom, as already investigated in [11][12][13][14]. 
Particularly, in [11], the beamforming, user association, position and sensing schedule of a UAV, deployed to provide communication with users while sensing targets, are optimized to maximize the sum rate of communications while maintaining sensing requirements. Moreover, in [12], the deployment of multiple UAVs is considered for carrying out the detection of multiple targets. Therein, sensing and communication tasks are performed simultaneously by considering different beams, and the target distance estimates are aggregated at a ground base station designated as the fusion center. In [13], a full-duplex UAV-based ISAC system is proposed, where multiple UAVs perform local sensing while considering reflections from other UAVs as clutter. In this work, the area-based metric named upper-bound average cooperative sensing area (UB-ACSA) is proposed, as the area of sensing coverage in which a given probability of detection and probability of false alarm for sensing are guaranteed, while also guaranteeing a given outage probability for communications. A common approach for performing localization of targets, over a certain region, is to discretize a certain domain of interest into a grid made up of several cells. Thus, the localization of targets is made based on the closest cell that corresponds to the estimated target parameters, which is referred to as on-grid methods [15][16][17][18][19][20]. For instance, in [15], an on-grid approach is employed for target tracking, where sparse recovery methods are utilized over a discretized range and angular grids. Moreover, in [16], an on-grid approach is employed for the estimation of delays of targets from echoes of orthogonal frequency-division multiplexing (OFDM) waveforms, where the delay range is discretized, and a one-dimensional (1D) multiple measurement vector (MMV) CS technique is utilized for estimation. However, on-grid methods usually entail off-grid errors, since targets are rarely located exactly at the grid points. To deal with such errors, off-grid methods can be used to refine the estimate of the position of the target by considering the deviations of the target parameters from the grid points [21] [22][23]. For instance, a weighted average off-grid approach is proposed in [21], where an on-grid sparse recovery algorithm is applied to jointly obtain the estimates of each cell point and the weighted average coefficients corresponding to each cell point, to perform an off-grid approximation through a weighted average of the positions of the cells. Similarly, an off-grid method is used in [22] to estimate the direction of arrival from multiple radiating sources, where the estimation problem is formulated as a block-sparse CS framework capable of distinguishing closely located sources with high resolution. In [23] an off-grid post-processing technique for multiple-target localization based on received signal strength (RSS) measurements is presented, where a weighted average is performed with the cells that have a value above a certain threshold to obtain an off-grid estimate of the locations of the targets. ### _Contributions_ Considering that a 1D delay-based grid introduces ambiguity along ellipses that exhibit the same total delay of the reflections from the scatterers, grids based on delay and angle-of-arrival (AoA) are used to handle the ambiguity in the delay domain by processing on the AoA domain. For the two-dimensional (2D) scenarios, a simple way to perform estimation is to employ 2D grids [15][24, 25]. 
Moreover, in three-dimensional (3D) scenarios, the grids need to be augmented, for example, to a delay-elevation-azimuth 3D search. This search would directly increase the complexity of the estimation as investigated in [26, 27]. To deal with this dimensional increase, this work performs sensing over a 2D spatial grid in a 3D environment, which gives a simple and straightforward baseline for distributed sensing over different aerial nodes. Given the benefits of using UAVs for sensing purposes, the protocol described in this work utilizes a network of UAVs performing distributed sensing to locate a point-target in the ground. The UAVs operate in half-duplex mode, and they are coordinated by another UAV, a fusion center (FC), which acts as a coordinator for the protocol, as well as the central estimator that gathers local statistics of the received signals from the UAVs to estimate the position of the target. In contrast to CS techniques that allow for high resolution estimation of target parameters, herein estimation techniques based on the maximum likelihood estimation (MLE) criterion are considered. The reasoning behind this is that CS techniques present a complexity that can be prohibitive in scenarios of continuous monitoring, and the overhead introduced for a distributed sensing approach will tend to be large. However, the proposed techniques require a small overhead for transmission of the local statistics to the FC compared to a general CS approach. Moreover, considering the promising results provided by off-grid methods, this work explores further resolution enhancements by applying on-grid and off-grid refinement techniques, as well as different fusion mechanisms. To summarize, the main contributions of this paper are the following. * A novel distributed sensing framework is proposed for target detection using multiple UAVs operating in half-duplex mode, and the detection is based on a spatial grid. * Three distributed target estimation methods are proposed for the central estimation of the position of the target at the FC, based on the MLE criterion over OFDM frames, which reduce the amount of overhead compared to a general CS approach. * To enhance the detection accuracy of our framework, an augmented spatial mixed grid approach and a threshold-based weighted average post-processing approach are proposed. * Exhaustive Monte Carlo simulations are performed to demonstrate the gain on accuracy introduced by the consideration of multiple UAVs over a single UAV benchmark, for different system parameters. ### _Paper outline and notations_ The rest of this paper is organized as follows. The system model and signal model are introduced in Section II. The proposed distributed sensing protocol is detailed in Section III. The on-grid and off-grid position estimation of the target is explained in Section IV. Results are shown for the distributed sensing protocol in Section V. Finally, conclusions are drawn in Section VI. _Notations._ In this paper scalar variables are denoted by lowercase, italic letters (e.g. \(z\)), column vectors are denoted by lowercase bold letters (e.g. \(\mathbf{w}\)), matrices are denoted by uppercase bold letters (e.g. \(\mathbf{H}\)) and sets are denoted by uppercase calligraphic letters (e.g. \(\mathcal{P}\)). 
Also, \(|z|\) represents the modulus of complex scalar \(z\), \(|\mathcal{P}|\) represents the cardinality of set \(\mathcal{P}\), \(||\cdot||_{2}\) represents the \(L^{2}\) norm of a vector, \(||\cdot||_{\infty}\) represents the \(L^{\infty}\) norm of a vector, \(\odot\) represents the Hadamard product, \(\otimes\) represents the Kronecker product, \(z^{*}\) represents the conjugate of complex scalar \(z\), \(\mathbf{v}^{H}\) represents the conjugate transpose of complex vector \(\mathbf{v}\), and \(\mathcal{R}\{\cdot\}\) represents the real part of the complex argument (scalar, vector or matrix). ## II System Model Consider the system depicted in Fig. 1, where a single point-like target of radar cross-section (RCS) \(\sigma_{\mathrm{T}}\) is positioned on a square area \(S\) of \(\ell\) meters of side length. In this system, \(U\) UAVs are deployed at a common altitude \(h\) and are coordinated to perform distributed sensing in order to locate a ground target over \(S\). Each UAV \(u\in\mathcal{U}\), is positioned at coordinates \(\mathbf{r}_{u}=[x_{u},y_{u},h]^{T}\), with \(\mathbf{r}_{u}\in\mathbb{R}^{3\times 1}\), \(|\mathcal{U}|=U\) and \(\mathcal{U}\) as the set of all UAVs. It is assumed that the total ground area of interest has an RCS \(\sigma_{\mathrm{G}}\), and that it is uniformly spread across it. Similar to [13], it is assumed that each UAV has two arrays of antennas, a square uniform planar array (UPA) for sensing (mounted facing downward) and a uniform linear array (ULA) for communications (mounted horizontally). It is assumed that radar and communication links use their own dedicated frequencies, so they do not interfere with each-other. The square UPA consists of \(N\) isotropic antenna elements spaced \(\lambda/2\) from each-other, where \(\lambda=f_{0}/c_{0}\) is the wavelength of the signal, \(f_{0}\) is the frequency of the signal, and \(c_{0}\) is the speed of light. There is also a fusion center UAV (FC) that performs information fusion and coordination tasks. For the sensing process, it is considered that the total area \(S\) is sectioned into two grids, base grid and overlay grid, and then combined into a mixed grid as shown in Fig. 2. The base grid is composed of \(L\times L\) square cells with dimensions \(d\times d\) such that \(d=\ell/L\), while the overlay grid is a shifted version of the base grid, with cells of the same size \(d\) shifted horizontally and vertically by \(d/2\). Defining the set of all cells as \(\mathcal{P}\) with \(|\mathcal{P}|=P\) total cells, every cell is characterized by its middle point \(p\in\mathcal{P}\) at coordinates \(\mathbf{r}_{p}=[x_{p},y_{p},0]^{T}\), and the point \(p^{*}\) represents the target at coordinates \(\mathbf{r}_{p^{*}}=[x_{p^{*}},y_{p^{*}},0]^{T}\). OFDM frames consisting of \(M_{s}\) OFDM symbols, each consisting of \(N_{c}\) orthogonal subcarriers are used for the illumination of the cells. 
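To fix ideas, here is a small sketch (our illustration, not from the paper) of how the mixed grid of Fig. 2 can be generated: the base grid has \(L\times L\) cells of side \(d=\ell/L\), and the overlay grid repeats the same cells shifted by \(d/2\) in both directions. The paper does not spell out how overlay cells at the boundary are handled; here only the \((L-1)\times(L-1)\) overlay cells that fall entirely inside \(S\) are kept, which is an assumption.

```python
import numpy as np

def mixed_grid_centers(ell, L):
    """Cell-center coordinates (x_p, y_p, 0) of the base and overlay grids."""
    d = ell / L
    base_1d = (np.arange(L) + 0.5) * d          # base-grid center coordinates
    over_1d = (np.arange(L - 1) + 1.0) * d      # shifted by d/2, interior cells only
    base = np.array([(x, y, 0.0) for x in base_1d for y in base_1d])
    overlay = np.array([(x, y, 0.0) for x in over_1d for y in over_1d])
    return d, base, overlay

d, base, overlay = mixed_grid_centers(ell=12.0, L=12)
print(d, len(base), len(overlay))   # 1.0, 144 base cells, 121 overlay cells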
Assuming UAV \(u\in\mathcal{U}\) illuminates cell \(p\) employing a transmit beamformer \(\mathrm{w}\mathbb{Y}\mathbb{X}(p)\in\mathbb{C}^{N\times 1}\), and its reflections are received at UAV \(u^{\prime}\in\mathcal{U}\backslash\{u\}\), which employs the receive beamformer \(\mathbf{w}_{\mathrm{RX}}(p)\in\mathbb{C}^{N\times 1}\), the received symbol \(\hat{c}_{k,l}\) corresponding to the OFDM symbol \(l\in\{0,...,M_{s}-1\}\) in subcarrier \(k\in\{0,...,N_{c}-1\}\) considering ground reflections from a discrete number \(P\) of reflectors is given as \[\hat{c}_{k,l}=\sum_{p^{\prime}=1}^{P}\mathbf{w}_{\mathrm{RX}}^{H}(p)\mathbf{ H}_{p^{\prime}}\mathbf{w}_{\mathrm{TX}}(p)c_{k,l}+z_{k,l}, \tag{1}\] where \(c_{k,l}\) is the corresponding transmitted symbol, and \(\mathbf{H}_{p^{\prime}}\in\mathbb{C}^{N\times N}\) is the baseband channel response corresponding to the reflection from \(p^{\prime}\), given as \[\mathbf{H}_{p^{\prime}}=\mathbf{G}_{u,p^{\prime},u^{\prime}}\sqrt{\frac{ \sigma_{p^{\prime}}\lambda^{2}}{(4\pi)^{3}d_{u,p^{\prime}}^{3}d_{p^{\prime},u^ {\prime}}^{3}}}e^{j2\pi f_{D,p^{\prime}}T_{l}}e^{-j2\pi\tau_{p^{\prime}}\Delta f} \tag{2}\] with \(\mathbf{G}_{u,p^{\prime},u^{\prime}}\in\mathbb{C}^{N\times N}\) defined as \(\mathbf{G}_{u,p^{\prime},u^{\prime}}=\mathbf{g}_{\mathrm{RX}}(\varphi_{p^{ \prime},u^{\prime}})\mathbf{g}_{\mathrm{TX}}^{H}(\varphi_{u,p^{\prime}})\), where \(\mathbf{g}_{\mathrm{RX}}(\varphi_{p^{\prime},u^{\prime}})\in\mathbb{C}^{N \times 1}\) is the receive beam-steering vector of UAV \(u^{\prime}\) in the direction of cell \(p^{\prime}\) and \(\mathbf{g}_{\mathrm{TX}}(\varphi_{u,p^{\prime}})\in\mathbb{C}^{N\times 1}\) is the transmit beam-steering vector of UAV \(u\) in the direction of cell \(p^{\prime}\). Here, \(z_{k,l}\) is the AWGN noise affecting the symbol, \(\alpha\) is the pathloss exponent, \(d_{u,p^{\prime}}\) is the distance from \(u\) to \(p^{\prime}\) and \(d_{p^{\prime},u^{\prime}}\) is the distance from \(p^{\prime}\) to \(u^{\prime}\). A line-of-sight (LoS) channel is considered, since the performance of sensing is generally dependent on the LoS link between the UAVs and the target, while non-LoS (NLoS) links (if any) are treated as interference for the target sensing [28]. In this case, it is considered that UAVs present a strong LoS component, which is favorable for sensing [28], thus a free-space pathloss model is assumed with \(\alpha=2\). Let \(\eta_{p}\) be defined such that \(|\eta_{p}|=\sqrt{\sigma_{p}}\). Assuming zero Doppler from the ground, the combined channel response from the whole area can be generalized to \[\mathbf{H}=\sum_{p^{\prime}=1}^{P}\mathbf{G}_{u,p^{\prime},u^{\prime}}\frac{ \theta_{p^{\prime}}\lambda^{-j2\pi\tau_{p^{\prime}}\Delta fk}}{(4\pi)^{3/2}d_ {u,p^{\prime}}^{\alpha/2}d_{p^{\prime},u^{\prime}}^{\alpha/2}}. \tag{3}\] As \(\sigma_{\mathrm{G}}\) is uniformly spread over \(S\), it follows that \(\sqrt{\sigma_{\mathrm{G}}}\) is also uniformly spread across \(S\). Then, for a continuous area, (3) can be extended as \[\mathbf{H}=\frac{\lambda\sqrt{\sigma_{\mathrm{G}}}}{(4\pi)^{3/2}\ell^{2}} \int\limits_{0}^{t}\int\limits_{0}^{t}\mathbf{G}_{u,p^{\prime},u^{\prime}} \frac{e^{-j2\pi\tau_{p^{\prime}}\Delta fk}}{d_{u,p^{\prime}}^{\alpha/2}d_{p^{ \prime},u^{\prime}}^{\alpha/2}}dx^{\prime}dy^{\prime}, \tag{4}\] where \(dx^{\prime}\) and \(dy^{\prime}\) are infinitesimals on the \(x\) and \(y\) coordinates of point \(p^{\prime}\). 
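The following is a deliberately simplified, single-reflector sketch of the point-scatterer model in Eqs. (1)-(2) (our illustration, not the authors' simulator); the continuous-area response in Eq. (4) integrates the same kernel over \(S\). The antenna arrays are collapsed to single isotropic elements (so the beam-steering factors drop out), the reflector is static (zero Doppler), unit transmit power is assumed, \(\alpha=2\), and the carrier frequency, subcarrier spacing, noise level, and symbol constellation are placeholders.

```python
import numpy as np

c0 = 3e8

def received_symbols(r_u, r_up, r_p, sigma_p, c_tx, f0=3.5e9, df=120e3, N0=1e-12):
    """c_tx[k, l]: transmitted OFDM symbols (N_c subcarriers x M_s symbols).
    Returns the received symbols for a single point reflector at r_p,
    keeping only the bistatic delay phase and free-space amplitude."""
    lam = c0 / f0
    d_up = np.linalg.norm(r_p - r_u)              # transmit UAV -> cell
    d_pu = np.linalg.norm(r_up - r_p)             # cell -> receive UAV
    tau = (d_up + d_pu) / c0                      # bistatic delay
    amp = np.sqrt(sigma_p * lam**2 / ((4 * np.pi) ** 3 * d_up**2 * d_pu**2))
    N_c, M_s = c_tx.shape
    k = np.arange(N_c)[:, None]                   # subcarrier index
    phase = np.exp(-2j * np.pi * tau * df * k)    # delay phase per subcarrier
    noise = np.sqrt(N0 / 2) * (np.random.randn(N_c, M_s)
                               + 1j * np.random.randn(N_c, M_s))
    return amp * phase * c_tx + noise

# Placeholder QPSK payload: 64 subcarriers x 16 OFDM symbols
c_tx = np.exp(1j * np.pi / 2 * np.random.randint(4, size=(64, 16)))
y = received_symbols(np.array([0.0, 0.0, 100.0]), np.array([60.0, 0.0, 100.0]),
                     np.array([30.0, 20.0, 0.0]), sigma_p=1.0, c_tx=c_tx)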
Similarly, the channel matrix corresponding to the reflections from within the cell defined by \(p\) is given as \[\mathbf{H}_{\{p\}}=\frac{\lambda\sqrt{\sigma_{\mathrm{G}}}}{(4\pi)^{3/2}\ell^{2 }}\int\limits_{y_{p^{\prime}}-\frac{\ell}{2}}^{y_{p^{\prime}}+\frac{d}{2}} \int\limits_{x_{p^{\prime}}-\frac{\ell}{2}}^{z_{p^{\prime}}+\frac{d}{2}}\] \[\mathbf{H}_{\{p\}}=\frac{\lambda\sqrt{\sigma_{\mathrm{G}}}}{(4\pi)^{3/2}\ell^{2 }}\int\limits_{y_{p^{\prime}}-\frac{\ell}{2}}^{y_{p^{\prime}}+\frac{d}{2}} \int\limits_{x_{p^{\prime}}-\frac{\ell}{2}}^{y_{p^{\prime}}+\frac{d}{2}}\] \[\mathbf{H}_{\{p\}}=\frac{\lambda\sqrt{\sigma_{\mathrm{G}}}}{(4\pi)^{3/2}\ell^{ 2}}\int\limits_{y_{p^{\prime}}-\frac{\ell}{2}}^{y_{p^{\prime}}+\frac{d}{2}} \int\limits_{x_{p^{\prime}}-\frac{\ell}{2}}^{z_{p^{\prime}}+\frac{d}{2}}\] \[\mathbf{H}_{\{p^{\prime}\}}=\frac{\lambda\sqrt{\sigma_{\mathrm{G}}}}{(4\pi)^{3/2} \ell^{2}}\int\limits_{y_{p^{\prime}}-\frac{\ell}{2}}^{y_{p^{\prime}}+\frac{d}{2}} \int\limits_{x_{p^{\prime}}-\frac{\ell}{2}}^{z_{p^{\prime}}+\frac{d}{2}}\] corresponding reflections. All cells are illuminated at least once, i.e. \(\forall p\in\mathcal{P},\ \exists u\in\mathcal{U},\ s.t.\ p\in\mathcal{P}_{u} \ \ \land\ \ \mathcal{P}=\bigcup\limits_{u\in\mathcal{U}}\mathcal{P}_{u}\) The UAVs are coordinated to follow a schedule in time. This coordination is realized by the FC, indicating the time slots corresponding to the sensing of each cell in the ground, and which UAVs illuminate or receive reflections from it. The schedule also indicates the time slots in which each UAV will send their local statistics to the FC in order to avoid interference between UAVs. An illustration of this scheduling is shown in Fig. 3. **Step 2 (Digital beamforming):** The transmit and receive beamformers to sense cell \(p\) illuminated by UAV \(u\) and processed by UAV \(u^{\prime}\) are designed to minimize the response of the reflections from the interfering cells while maintaining the main lobe of the transmit beampattern pointing in the direction of the intended cell center. This problem can be expressed as \[\textbf{P}:\ \ \min_{\textbf{w}_{\mathrm{TX}},\textbf{w}_{\mathrm{RX}}} \ \ \ \textbf{w}_{\mathrm{RX}}^{H}(p)\textbf{H}_{\{\bar{p}\}}\textbf{w}_{ \mathrm{TX}}(p)\textbf{w}_{\mathrm{TX}}(p)^{H}\textbf{H}_{\{\bar{p}\}}^{H} \textbf{w}_{\mathrm{RX}}(p)\] (7a) subject to \[\ \ \ \ \textbf{w}_{\mathrm{RX}}^{H}(p)\textbf{g}_{\mathrm{RX}}( \varphi_{p,u^{\prime}})=1\] (7b) \[\ \ given by \(\hat{\mathbf{c}}=[(\hat{\mathbf{c}}^{(1)})^{T},...,(\hat{\mathbf{c}}^{(U)})^{T}]^{T}\) and \(\mathbf{c}=\mathbf{c}^{(u)}\otimes\mathbf{1}_{U-1}\), with \(\mathbf{c},\hat{\mathbf{c}}\in\mathbb{C}^{(U-1)}\lambda_{U}^{T}\mathbf{\hat{ \mathbf{c}}_{u}\times 1}\). Similarly, the total vector of phase shifts normalized with respect to the corresponding pathloss is \(\mathbf{v}=[(d_{p,1}^{-\alpha/2}\mathbf{v}^{(1)})^{T},...,(d_{p,U}^{-\alpha/2} \mathbf{v}^{(U)})^{T}]^{T}\), with \(\mathbf{v}\in\mathbb{C}^{(U-1)\lambda_{r}N_{c}\times 1}\). Considering constraints (7b) and (7c), the central received symbol vector is given as \[\hat{\mathbf{c}}=\underbrace{\frac{\sqrt{NP_{T}}\lambda}{(4\pi)^{ \frac{1}{2}}d_{q,p}^{2}}\eta_{\mathbf{v}}(\mathbf{v}\odot\mathbf{c})}_{\mathbf{ s}}+\mathbf{z}. \tag{12}\] Here, \(\mathbf{z}\in\mathbb{C}^{(U-1)M_{s}N_{c}\times 1}\) is the vector of noise samples, and is a circularly symmetric gaussian vector of mean zero and covariance matrix \(\mathbf{C}=N_{0}\mathbf{I}_{N_{c}M_{s}(U-1)}\). 
Then, the likelihood function is given as \[\mathcal{L}(\hat{\mathbf{c}};\eta_{p})=\frac{1}{(\pi N_{0})^{M_{s }N_{c}(U-1)}}e^{-\frac{1}{N_{0}}||\hat{\mathbf{c}}-\mathbf{z}||_{2}^{2}}. \tag{13}\] Discarding terms that do not depend on \(\eta_{p}\), the log-likelihood function is obtained for ease of optimization, as a monotonically increasing function that preserves the concavity of the likelihood function, and is given as \[l(\hat{\mathbf{c}};\eta_{p})=-\frac{1}{N_{0}}\left(||\mathbf{s}||_{2}^{2}-2 \mathcal{R}\{\mathbf{s}^{H}\hat{\mathbf{c}}\}\right) \tag{14}\] Taking the Wirtinger derivatives over \(\eta_{p}\) and equaling the expression to zero, the MLE for the RCS is obtained as in (15), at the bottom of next page. If data dependency is removed from the received symbols prior to processing, as \(\hat{c}_{k,l}^{(u^{\prime})}=\hat{c}_{k,l}^{(u^{\prime})}/c_{k,l}^{(u^{\prime})}\), the MLE for the RCS is simplified to (16), at the bottom of the next page. These expressions are used to obtain the statistics to be computed by each receive UAV and later sent to the FC for fusion, depending of the estimation method used. These methods are introduced next. _MIMORE:_ Under this approach, the RCS of every cell \(p\in\mathcal{P}\) is centrally computed as in (16). For this, each UAV \(u^{\prime}\in\mathcal{U}\) sends their statistics \(\delta_{p}^{(u^{\prime})}\) defined as \[\delta_{p}^{(u^{\prime})}=\sum_{l=0}^{N_{c}-1}\sum_{k=0}^{M_{s}-1}\hat{c}_{k,l} ^{(u^{\prime})}e^{-j2\pi f_{D,p}^{(u^{\prime})}T_{d}}e^{j2\pi r_{p}^{(u^{ \prime})}\Delta fk}, \tag{17}\] for every cell \(p\in\mathcal{P}\setminus\mathcal{P}_{u}\) to the FC for central estimation. From (16), these are sufficient statistics for the central RCS estimation of \(p\). _MuRE:_ Under this approach, the RCS of every cell \(p\in\mathcal{P}\setminus\mathcal{P}_{u}\) is locally computed by each UAV \(u^{\prime}\in\mathcal{U}\), forming a local RCS map of the grid \(\hat{\mathbf{\Gamma}}_{u}\). The UAVs then send their local RCS maps to the FC for central estimation. Considering the MLE RCS estimation of a single receive UAV \(u^{\prime}\), equation (16) becomes \[\hat{\sigma}_{p}=\left(\frac{1}{M_{s}N_{c}}\right)^{2}\frac{(4 \pi)^{3}d_{u,p}^{\alpha}d_{p,u^{\prime}}^{\alpha}}{NP_{T}\lambda^{2}}\times\] \[\left|\sum_{l=0}^{N_{c}-1}\sum_{k=0}^{M_{s}-1}\hat{c}_{k,l}^{(u^{ \prime})}e^{-j2\pi f_{D,p}^{(u^{\prime})}T_{d}}e^{j2\pi r_{p}^{(u^{\prime})} \Delta fk}\right|^{2}. \tag{18}\] Then, each receive UAV estimates the RCS of each of the cells \(p\in\mathcal{P}\setminus\mathcal{P}_{u}\) locally with (18), obtaining \(\hat{\mathbf{\Gamma}}_{u}\). Note that \(\hat{\mathbf{\Gamma}}_{u}\) is a matrix of RCS estimates of all cells \(p\in\mathcal{P}\), where the estimates for the cells \(p\in\mathcal{P}_{u}\) are set to zero. _MuPE:_ Each UAV \(u\in\mathcal{U}\) obtains local estimates of the RCS of every cell \(p\in\mathcal{P}\) employing (18), obtaining an RCS map of the grid \(\hat{\mathbf{\Gamma}}_{u}\). Afterwards, each UAV locally estimates the position of the target \(\hat{\mathbf{r}}_{p^{*}}^{(u)}\) as described in Section IV. Under this approach, the \(\hat{\mathbf{r}}_{p^{*}}^{(u)}\) estimates are sent to the FC for fusion. _Step 5 (Centralized estimation):_ In this stage the statistics gathered from every UAV \(u^{\prime}\in\mathcal{U}\) are processed at the FC to obtain the central estimate of the location of the target. 
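Before describing the three fusion methods in detail, here is a compact sketch (ours, for illustration only) of the per-cell processing each receive UAV performs in Step 4: the matched-filter statistic of Eq. (17) over all \(M_{s}\) symbols and \(N_{c}\) subcarriers of the data-removed symbols, and the single-receiver RCS estimate of Eq. (18) built from it. The symbol-index dependence of the Doppler phase is written out explicitly, and the variable names and parameter values are placeholders.

```python
import numpy as np

def cell_statistic(y_over_c, tau_p, f_D_p, df, T):
    """Eq. (17): matched-filter statistic for cell p at one receive UAV.
    y_over_c[k, l] = received symbol divided by transmitted symbol (data removed)."""
    N_c, M_s = y_over_c.shape
    k = np.arange(N_c)[:, None]        # subcarrier index
    l = np.arange(M_s)[None, :]        # OFDM symbol index
    return np.sum(y_over_c
                  * np.exp(-2j * np.pi * f_D_p * T * l)
                  * np.exp(2j * np.pi * tau_p * df * k))

def rcs_mle_single(delta_p, d_up, d_pu, N, P_T, lam, M_s, N_c, alpha=2.0):
    """Eq. (18): single-receiver maximum-likelihood RCS estimate of cell p."""
    scale = ((4 * np.pi) ** 3 * d_up**alpha * d_pu**alpha) / (N * P_T * lam**2)
    return (1.0 / (M_s * N_c)) ** 2 * scale * np.abs(delta_p) ** 2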
According to the different approaches introduced in the previous step, there are three different methods for processing at the FC, depending on the statistics gathered at the UAVs. These are described below. _MIMORE:_ The FC gathers the statistics \(\delta_{p}^{(u^{\prime})}\) for each UAV \(u^{\prime}\in\mathcal{U}\) and every cell \(p\in\mathcal{P}\setminus\mathcal{P}_{u}\), Then, the RCS of every cell \(p\in\mathcal{P}\) is estimated as in (16). With these estimates, the FC obtains the central RCS map of the grid \(\hat{\mathbf{\Gamma}}\). Finally, the FC estimates the position of the target \(\hat{\mathbf{r}}_{p^{*}}^{(u)}\) as described in Section IV. _MuRE:_ The FC gathers the local RCS maps \(\hat{\mathbf{\Gamma}}_{u}\) of all UAVs \(u\in\mathcal{U}\) and performs information-level fusion of the local estimates to obtain a global estimate \(\hat{\mathbf{\Gamma}}\). To this end, the FC averages the values of each cell over the local maps from all UAVs in \(\mathcal{U}\) such that \(\hat{\mathbf{\Gamma}}=\frac{1}{U}\sum_{u\in\mathcal{U}}\hat{\mathbf{\Gamma}}_{u}\). Then, the FC estimates the position of the target based on the fused RCS map, as described in Section IV. _MuPE:_ The FC gathers the local positions estimates \(\hat{\mathbf{r}}_{p^{*}}^{(u)}\) of all UAVs \(u\in\mathcal{U}\) and performs information-level fusion of the local estimates of the position of the target by averaging them as \(\hat{\mathbf{r}}_{p^{*}}=\frac{1}{U}\sum_{u\in\mathcal{U}}\hat{\mathbf{r}}_{p^{*}}^ {(u)}\). ### _Overhead Analysis_ Assume that \(\hat{\sigma_{p}}\in\hat{\mathbf{\Gamma}}_{u}\) is represented by a 32-bit floating point number, \(\hat{\mathbf{r}}_{p^{*}}^{(u)}\) is represented by two 32-bit floating point numbers (one for each cartesian component), while \(\delta_{p}^{(u^{\prime})}\) is represented by two 32-bit floating point numbers (real and imaginary components). The overhead sent by each UAV to the FC when reporting their local RCS maps \(\hat{\mathbf{\Gamma}}_{u}\), is of \(32P\) bits. When reporting their local statistics \(\delta_{p}^{(u^{\prime})}\), the overhead sent by each UAV to the FC is of \(64P\) bits. Finally, only \(64\) bits are sent by each UAV when reporting their local position estimates \(\hat{\mathbf{r}}_{p^{*}}^{(u)}\). For comparison, a CS technique [8] for the central estimation of the target location would require a total of \(M_{s}N_{c}N\) samples per cell to be sent to the FC by each UAV. Each of those samples would be represented by two 32-bit floating point numbers (real and imaginary components), so that the payload sent by each UAV to the FC when reporting their samples for CS would be of \(64M_{s}N_{c}NP\) bits. As an illustrative example, assuming a base grid of \(12\times 12\) cells and \(16\) UAVs deployed uniformly across the grid as in Fig. 2, with OFDM frames consisting of 64 subcarriers and 16 OFDM symbols, and UPAs of 4 antenna elements, the overhead for the transmission of a single UAV, and the total reception overhead of the FC are shown in Table I, by considering a mixed grid. It is worthwhile to notice that there is a remarkable overhead reduction attained by the proposed algorithms if compared to the CS approach. ## IV Position Estimation The estimation of the position of the target is done locally by each UAV or centrally by the FC, depending on the considered method. 
To that purpose, an on-grid approach is employed, where the target position estimate \(p^{*}\) is restricted to be the center point of the cell with highest estimated RCS on the grid, disregarding the information of adjacent cells. To improve the accuracy of this method, an off-grid refinement technique is also employed. ### _On-Grid Estimation_ The on-grid estimation considers that the target is located at the cell that returns the maximum RCS estimate \(\hat{\sigma}_{p^{\prime}}=\max_{p\in\mathcal{P}}\widehat{\mathbf{\Gamma}}\). Therefore, the on-grid estimation of the position of the target is \(\hat{\mathbf{r}}_{p^{*}}=\mathbf{r}_{p^{\prime}}\). Note that if the cell containing the target is correctly estimated, the \(L^{\infty}\) distance estimation error is bounded by half of the size of the cell \(d\) as \(0\leq||\hat{\mathbf{r}}_{p^{*}}-\mathbf{r}_{p^{*}}||_{\infty}\leq d/2\). To reduce the off-grid estimation error, off-grid post-processing of the estimate is carried out as explained below. ### _Off-Grid Post-Processing_ The off-grid post-processing over the on-grid estimates is performed by considering a weighted average technique as in [23]. For this purpose, all elements of \(\widehat{\mathbf{\Gamma}}\) are normalized between 0 and 1, thus, the set of all cells in \(\widehat{\mathbf{\Gamma}}\) that have a value above a certain threshold \(\lambda\) is defined as \(\mathcal{T}=\{p|p\in\widehat{\mathbf{\Gamma}};\hat{\sigma}_{p}\geq\lambda\}\). Then, the off-grid estimate \(\hat{\mathbf{r}}_{p^{*}}\) is given by the weighted average of the positions of all elements in \(\mathcal{T}\) as \(\hat{\mathbf{r}}_{p^{*}}=\sum_{p\in\mathcal{T}}\alpha_{p}\mathbf{r}_{p}\), with \(\alpha_{p}=\frac{\hat{\sigma}_{p}}{\sum_{p^{\prime}\in\mathcal{T}}\sigma_{p^{ \prime}}}\). ## V Numerical Results The performance of the proposed target localization framework is evaluated in terms of the absolute error between the estimated and the real target position, given as \(||\hat{\mathbf{r}}_{p^{*}}-\mathbf{r}_{p^{*}}||_{2}\), which is the Euclidean norm of the estimation error. For this purpose, Monte Carlo simulations are performed, where the target is randomly and uniformly located in the area of interest at each iteration. The performance of the proposed methods is evaluated for the mixed grid approach. These results are compared to a full base-grid approach. Then, when the value of \(\lambda=1\) is considered, just the cell of highest value is selected, while for a value of \(\lambda=0.9\) the highest valued cells are considered for estimation. In addition, the results are compared to the case where a single UAV is performing sensing by operating in full-duplex mode from the geometric center of the area of interest. The simulation parameters utilized for all figures are presented in Table II, unless stated otherwise. Fig.4 presents the absolute estimation error plotted versus number of UAVs in the system, \(U\). Note that, for all the proposed techniques, increasing the number of UAVs decreases the estimation error for the position of the target. It is worth observing that the employment of multiple UAVs in the proposed system provides several advantages, such as having more samples to estimate the RCS of a cell, any delay ambiguity due to the MLE estimation is reduced, and the pathloss is reduced as closer UAVs are available. It can be observed that the MuPE case present higher errors for smaller number of UAVs even when compared to the benchmark. 
This is explained by the fact that, in the MuPE case, each UAV estimates its own target position. The UAV that directly Fig. 4: Absolute error of the estimation of the position of the target for different number of UAVs \(U\) for different types of estimation. illuminates the target does not hear reflections due to the half-duplex operation, thus its target position estimate is more biased. Therefore, smaller number of UAVs in the system are more impacted by this biased estimate in average, thus increasing the error in the overall position estimate of the target. It is also observed that, when no post-processing is considered (\(\lambda=1\)), the MIMORE method has better performance than the other methods, which is to be expected, as it employs the maximum likelihood estimator in a distributed setting, ensuring better results than estimates based on local estimation. With post-processing, i.e. \(\lambda=0.9\), the performance of every case improves by further lowering the estimation error, and the MuRE case presents the best performance. Moreover, the mixed grid approach presents better performance than the base grid approach in every case, because of the increased sampling around the edges between \(\mathcal{P}_{u}\) sets of cells. In Fig. 5, the absolute estimation error is plotted versus the number of antennas \(N\) in the UPAs of the UAVs. Here, the MIMORE and MuRE methods maintain an almost constant error, while the MuPE method presents a decreasing in estimation error as more antennas are used. This effect occurs because, for the MuPE algorithm, there is an ambiguity introduced by MLE estimation, coming from the points in the ground with the same delay as the target, which is reduced only by the beamforming. For this reason, narrower beams allow for smaller ambiguity regions in the ground in the estimation of the RCS of the cells, thus reducing the estimation error. In MIMORE and MuRE, this ambiguity is reduced by the multiplicity of UAVs having areas of ambiguity that overlap only at the areas around where the target is located. Then, the non-overlapping areas are decreased when the central estimation is performed. Thus, leading to a more accurate position estimation of the target. The behavior of the other curves is the same as previously seen. To visualize the ambiguity due to the delay and the reduction of such ambiguity, in Fig. 6 and Fig. 7, the RCS maps are shown for the single monostatic UAV benchmark, for all of the distributed UAVs, for the central MuRE algorithm, and for the central MIMORE algorithm. Fig. 6 shows the maps for \(N=1\) single isotropic antenna for the UAVs and Fig. 7 shows the maps for \(N=64\). Note that the monostatic and the local distributed RCS maps present an ambiguity in the angular domain around the target, as those are points with the same delay with respect to the transmit and receive UAVs. This ambiguity is reduced as the number of antennas increase, as the main lobe of the beams get narrower. However, it is observed that a more significant reduction on the ambiguity occurs when performing centralized estimation in the FC. For algorithm MuRE the ambiguity is highly reduced, and it is even further reduced with the MIMORE algorithm. This behavior is more accentuated when considering a single antenna element in the UAVs, i.e. no beamforming was performed. This effect occurs because, the reflections obtained by each individual UAV have different zones of ambiguity, but all of them share the point where the target is located. 
So, by performing fusion, the cell corresponding to the position of the target is emphasized, while the ambiguity zones are decreased. Looking at the local UAV maps in (b), it can be seen that for \(N=1\), the regions of ambiguity are non-negligible and are more likely to impact the performance of the MuRE algorithm (which was also observed in Fig. 5). In Fig.8 the absolute estimation error is plotted versus the altitude of the UAVs \(h\). It is observed a point of minimum error, for which smaller altitudes negatively impact the performance of the estimators. This behavior occurs due to the beams pointing to the cells are closer to \(90^{\circ}\) in the elevation domain at smaller altitudes, then the beams are ill-behaved at this points. In particular, the MuPE method is severely impacted by this effect, as well as the delay ambiguity discussed previously, as the altitude of the UAVs further increase approaching the benchmark. On the other hand, the errors for MIMORE and MuRE remain low even at high altitudes. As in the previous figures, MIMORE presents better performance without post-processing, and post-processing with \(\lambda=0.9\) provides a notable reduction of the estimation error, with the best results achieved for the MuRE until 140m, while the best results are achieved by MIMORE above that value. In Fig.9 the absolute estimation error is plotted versus the size of the cells \(d\) in meters. It is observed that the absolute estimation error monotonically increases with the size of the cells. This behavior is expected as the average minimum error is bounded by the dimensions of the cells. For small cells of 20cm, the absolute error is smallest for MIMORE method, presenting an error of 26cm without post-processing and 12cm with \(\lambda=0.9\), which is an improvement of less than half of the error with post-processing. The behavior of the other curves is the same as previously seen. ## VI Conclusions A half-duplex distributed sensing framework for UAV-assisted networks was proposed, based on the RCS estimation of a spatial on-grid approach. A mixed grid was proposed for on-grid refinement while a partial weighted average post-processing was proposed for off-grid refinement. For RCS estimation three algorithms were proposed and contrasted to benchmarks in terms of the introduced transmission overhead and the absolute error for the estimation of the position of a ground point-like target. Fig. 5: Absolute error of the estimation of the position of the target for varying number of antennas \(N\) in the UPAs of the UAVs. Our results showed that the proposed distributed framework using the MLE algorithms presents advantages with respect to the CS estimation techniques in terms of transmission overhead. This framework also presents a significant improvement with respect to the estimation performance attained by a single monostatic UAV benchmark. For the MuPE method, making the beams narrower increases the performance of the estimation, but for the MIMORE and MuRE methods, this effect is negligible as the ambiguity introduced by the ML estimation is mitigated by the multiplicity of UAVs. For MuRE and MIMORE, having as few as 4 UAVs to cover the area of interest working in half-duplex mode greatly outperforms the benchmark of a single UAV working in full duplex monostatic mode. The MuRE and MIMORE methods are robust against higher UAV altitudes, in which the benchmark presents much higher errors. 
For small cells of 20cm and considering post-processing, the MIMORE method can reach estimation errors as low as 12cm. In general, the off-grid post-processing reduces the estimation error in all cases, even with a simple choice of \(\lambda=0.9\). The mixed grid results have lower error in every case due to the increased sampling by a subset of the UAVs. Fig. 8: Absolute error of the estimation of the position of the target for varying altitude of the UAVs \(h\) for different types of estimation. Fig. 6: RCS maps with \(N=1\) single isotropic antenna element in the UAVs for (a) Single monostatic UAV benchmark, (b) Local RCS maps for distributed UAVs, (c) MuRE central RCS map, (d) MIMORE central RCS map. Fig. 7: RCS maps with \(N=64\) antenna elements in the UAV UPAs for (a) Single monostatic UAV benchmark, (b) Local RCS maps for distributed UAVs, (c) MuRE central RCS map, (d) MIMORE central RCS map. Fig. 9: Absolute error of the estimation of the position of the target for varying size of the cells \(d\) in meters.
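The estimators and the partial weighted average post-processing are specified earlier in the paper and are not reproduced in this excerpt. Purely as a toy illustration of how a position estimate and its absolute error are read off an on-grid RCS map, the sketch below (Python; all names are ours, and the synthetic Gaussian map stands in for the MLE output) picks the peak cell and optionally refines it with a simple RCS-weighted average over neighbouring cells — an illustrative assumption, not the paper's exact \(\lambda\)-based rule.

```python
import numpy as np

def estimate_position(rcs_map, cell_centers, refine=False, neighborhood=1):
    """Read a target-position estimate off an on-grid RCS map.

    rcs_map      : (K, K) array of estimated RCS values, one per cell.
    cell_centers : (K, K, 2) array with the (x, y) centre of each cell.
    refine       : if True, replace the peak-cell readout by an RCS-weighted
                   average over the surrounding cells (a simple stand-in for
                   off-grid refinement, not the paper's exact lambda rule).
    """
    i0, j0 = np.unravel_index(np.argmax(rcs_map), rcs_map.shape)
    if not refine:
        return cell_centers[i0, j0]
    K = rcs_map.shape[0]
    sl = np.s_[max(i0 - neighborhood, 0):min(i0 + neighborhood + 1, K),
               max(j0 - neighborhood, 0):min(j0 + neighborhood + 1, K)]
    w = rcs_map[sl]
    return (w[..., None] * cell_centers[sl]).sum(axis=(0, 1)) / w.sum()

# Toy example: 1 m cells, true target lying off-grid between cell centres.
d, K = 1.0, 20                              # cell size [m], grid side
xs = (np.arange(K) + 0.5) * d
cx, cy = np.meshgrid(xs, xs, indexing="ij")
centers = np.stack([cx, cy], axis=-1)
target = np.array([10.3, 7.8])              # true (x, y) position [m]

# Synthetic RCS map peaking at the target (stand-in for the MLE output).
rcs = np.exp(-np.linalg.norm(centers - target, axis=-1) ** 2 / (2 * d ** 2))

for refine in (False, True):
    est = estimate_position(rcs, centers, refine=refine)
    err = np.linalg.norm(est - target)
    print(f"refine={refine}: estimate={est.round(3)}, absolute error={err:.3f} m")
```

With such a helper, error curves like those in Fig. 4 and Fig. 9 correspond to sweeping the number of UAVs or the cell size \(d\) and averaging the printed error over noise realizations.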
2309.03565
Perturbations of massless external fields in Horndeski hairy black hole
In this paper, we study the propagation of external fields in Horndeski theory, including the scalar field, electromagnetic field and Dirac field. We extensively explore the quasinormal frequencies, time evolution, greybody factors and emission rates of these massless perturbing fields by solving the corresponding master equations in the Horndeski hairy black hole background. Using both numerical and analytical methods, we disclose the competing or reinforcing influences of the Horndeski hair, the spin and the angular quantum number of the external fields on these phenomena. Our results show that the Horndeski hairy black hole is stable under these perturbations. Moreover, a larger Horndeski hair could enhance the intensity of the energy emission rate for Hawking radiation of various particles, indicating that, compared to the Schwarzschild black hole, the Horndeski hairy black hole could have a longer or shorter lifetime depending on the sign of the Horndeski hair.
Zhen-Hao Yang, Yun-He Lei, Xiao-Mei Kuang, Jian-Pin Wu
2023-09-07T08:52:44Z
http://arxiv.org/abs/2309.03565v1
# Perturbations of massless external fields in Horndeski hairy black hole ###### Abstract In this paper, we study the propagations of external fields in Horndeski theory, including the scalar field, electromagnetic field and Dirac field. We extensively explore the quasinormal frequencies, time evolution, greybody factors and emission rates of those massless perturbing fields by solving the corresponding master equations in the Horndeski hairy black hole. With the use of both numerical and analytical methods, we disclose the competitive/promotional influences of the Horndeski hair, spin and quantum momentum number of the external fields on those phenomenal physics. Our results show that the Horndeski hairy black hole is stable under those perturbations. Moreover, a larger Horndeski hair could enhance the intensity of energy emission rate for Hawking radiation of various particles, indicating that comparing to the Schwarzschild black hole, the Horndeski hairy black hole could have longer or shorter lifetime depending on the sign of the Horndeski hair. ###### Contents * I Introduction * II Master equations and effective potentials of the perturbing external fields * II.1 Scalar field perturbation * II.2 Electromagnetic perturbation * II.3 Dirac field perturbation * III Quasi-normal mode frequencies of various perturbations * III.1 \(Q\)- dependence * III.2 \(\ell\)- dependence * III.3 QNFs in eikonal limit * IV Greybody factor and Hawking radiation * V Conclusion and discussion * A WKB method * B Matrix method * C Time domain integration Introduction Recent observational progress on gravitational waves [1; 2; 3] and black hole shadow [4; 5] further demonstrates the great success of Einstein's general relativity (GR). Yet it is unabated that GR should be generalized, and in the generalized theories extra fields or higher curvature terms are always involved in the action [6; 7; 8]. Numerous modified gravitational theories were proposed, which indeed provide a richer framework and significantly help us further understand GR as well as our Universe. Among them, the scalar-tensor theories, which contain a scalar field \(\phi\) as well as a metric tensor \(g_{\mu\nu}\), are known as the simplest nontrivial extensions of GR [9]. One of the most famous four-dimensional scalar-tensor theory is the Horndeski gravity proposed in 1974 [10], which contains higher derivatives of \(\phi\) and \(g_{\mu\nu}\) and is free of Ostrogradski instabilities because it possesses at most second-order differential field equations. Various observational constraints or bounds on Horndeski theories have been explored in [11; 12; 13; 14; 15; 16]. Horndeski gravity attracts lots of attention in the cosmological and astrophysical communities because it has significant consequences in describing the accelerated expansion and other interesting features, please see [17] for review. Moreover, Horndeski theory is important to test the no-hair theorem, because it has diffeomorphism invariance and second-order field equations, which are similar to GR. In fact, hairy black holes in Horndeski gravity have been widely constructed and analyzed, including the radially dependent scalar field [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28] and the time-dependent scalar field [29; 30; 31; 32; 33; 34]. However, the hairy solution with scalar hair in linear time dependence was found to be unstable, and so this type of hairy solution was ruled out in Horndeski gravity [35]. 
Later in [36], the no-hair theorem was demonstrated not be hold when a Galileon field is coupled to gravity, but the static spherical black hole only admits trivial Galileon profiles. Then, inspired by [36], the authors of [37] further examined the no-hair theorem in Horndeski theories and beyond. They demonstrated that shift-symmetric Horndeski theory and beyond allow for static and asymptotically flat black holes with a nontrivial static scalar field, and the action they considered is dubbed quartic Horndeski gravity \[S=\int d^{4}x\sqrt{-g}\big{[}Q_{2}+Q_{3}\Box\phi+Q_{4}R+Q_{4, \chi}\left((\Box\phi)^{2}-(\nabla^{\mu}\nabla^{\nu}\phi)\big{(}\nabla_{\mu} \nabla_{\nu}\phi\big{)}\right)+Q_{5}G_{\mu\nu}\nabla^{\mu}\nabla^{\nu}\phi\] \[-\frac{1}{6}Q_{5,\chi}\left((\Box\phi)^{3}-3(\Box\phi)(\nabla^{ \mu}\nabla^{\nu}\phi)(\nabla_{\mu}\nabla_{\nu}\phi)+2(\nabla_{\mu}\nabla_{\nu }\phi)(\nabla^{\nu}\nabla^{\gamma}\phi)(\nabla_{\gamma}\nabla^{\mu}\phi) \right)\big{]}, \tag{1}\] where \(\chi=-\partial^{\mu}\phi\partial_{\mu}\phi/2\) is the canonical kinetic term, \(Q_{i}\) (\(i=2,3,4,5\)) are arbitrary functions of \(\chi\) and \(Q_{i,\chi}\equiv\partial Q_{i}/\partial\chi\), \(R\) is the Ricci scalar and \(G_{\mu\nu}\) is the Einstein tensor. In particular, very recently a static hairy black hole in a specific quartic Horndeski theory, saying that \(Q_{5}\) in the above action vanishes, has been constructed in [38] \[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}(d\theta^{2}+\sin^{2}\theta d\varphi ^{2})\quad\text{with}\quad f(r)=1-\frac{2M}{r}+\frac{Q}{r}\ln\left(\frac{r}{2 M}\right). \tag{2}\] Here, \(M\) and \(Q\) are the parameters related to the black hole mass and Horndeski hair. The metric reduces to Schwarzschild case as \(Q\to 0\), and it is asymptotically flat. From the metric (2), it is not difficult to induce that for arbitrary \(Q\), \(r=0\) is an intrinsic singularity as the curvature scalar is singular, and \(f(r)=0\) always admits a solution \(r_{+}=2M\) which indicates a horizon at \(r_{+}=2M\). In addition, when \(Q>0\), \(r_{+}=2M\) is the unique root of \(f(r)=0\), so it has a unique horizon, i.e., the event horizon for the hairy black hole as for the Schwarzschild black hole. While for \(-2M<Q<0\), \(f(r)=0\) has two roots: one is \(r_{+}=2M\) indicating the event horizon, and the other \(r_{-}\) denotes the Cauchy horizon which is smaller than the event horizon. \(r_{-}\) increases as \(Q\) decreases, and finally approaches \(r_{+}\) as \(Q\rightarrow-2M\), meaning the extremal case. This hairy black hole and its rotating counterpart attract plenty of attentions. Some theoretical and observational investigations have been carried out, for examples, the strong gravitational lensing [39], thermodynamic and weak gravitational lensing [40; 41], shadow constraint from EHT observation [42], superradiant energy extraction [43] and photon rings in the black hole image [44]. However, the propagation of external field in the Horndeski hairy black hole is still missing. To fill this gap, here we shall explore the quasinormal frequencies (QNFs), time evolution, greybody factors and emission rates by analyzing the massless external fields perturbations (including the scalar field, electromagnetic field and Dirac field) around the Horndeski hairy black hole. 
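As a quick numerical companion to the horizon discussion above, the following sketch (Python; helper names and the value M = 1 are ours) evaluates the metric function of Eq. (2), confirms that \(r_{+}=2M\) is a root for any \(Q\), and brackets the Cauchy horizon when \(-2M<Q<0\).

```python
import numpy as np
from scipy.optimize import brentq

def f(r, M, Q):
    """Metric function of Eq. (2): f(r) = 1 - 2M/r + (Q/r) ln(r/2M)."""
    return 1.0 - 2.0 * M / r + (Q / r) * np.log(r / (2.0 * M))

M = 1.0
for Q in (0.5, -0.5, -1.0):
    r_plus = 2.0 * M                      # f(2M) = 0 for every Q
    line = f"Q = {Q:+.1f}:  f(r+ = 2M) = {f(r_plus, M, Q):+.1e}"
    if -2.0 * M < Q < 0.0:
        # f changes sign exactly once on (0, 2M), giving the Cauchy horizon r_-.
        r_minus = brentq(f, 1e-8, r_plus * (1.0 - 1e-6), args=(M, Q))
        line += f",  Cauchy horizon r- = {r_minus:.4f}"
    else:
        line += ",  r+ = 2M is the only horizon"
    print(line)
```

As \(Q\) decreases towards \(-2M\), the inner root approaches \(r_{+}\), which is the extremal case mentioned above.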
QNFs of a field perturbation on the black hole are infinite discrete spectrum of complex frequencies, of which the real part determines the oscillation timescale of the quasinormal modes (QNMs), while the complex part determines their exponential decaying timescale. They dominate the signal of the gravitational waves at the ringdown stage and are one of the most important characteristics of black hole geometry. The interest of QNMs in more fundamental physics can be referred to the reviews [45; 46; 47]. Mathematically, QNFs depend solely on the basic three parameters of the black holes, i.e., the mass, charge, and angular momentum. However, if there are any additional parameters that describe the black hole, such as the hairy parameter in this framework, those parameters will also have prints on the QNMs spectrum. Even though the propagations of external field in a black hole background seems less related to the gravitational wave signals, they still might provide us with important insights about the properties of the Horndeski hairy black holes, such as their stability and the possible probe of the characterized parameters of black holes. This is our motivation to investigate the external fields QNMs of the hairy black hole solution (2). The goal is to study the influences of the hairy parameter \(Q\) on the QNFs signature for the massless scalar field, electromagnetic field and Dirac field perturbations, respectively. To this end, we will use both the WKB method and the matrix method to numerically obtain the QNFs, and also exhibit the time evolution of the perturbations in the time domain. The other goal of this work is to study the impact of Horndeski hair on the energy emission rate of the particles with spin \(=0,1\) and \(1/2\), respectively, and the greybody factors of their Hawking radiation from the Horndeski hairy black hole. The grey-body factor measures the modification of the pure black body spectrum and it is equal to the transmission probability of an outgoing wave radiated from the black hole event horizon to the asymptotic region [48]. It significantly describes information about the near-horizon structure of black holes [49]. So, one can evaluate the energy emission rate of Hawking radiation with the use of the greybody factor [50]. It is known that the Hawking radiation spectrum and its greybody factor are very sensitive to the modifications of GR. So they at least provide important sources of physical consequences for the modifications in the formulation of the black hole. Thus, we could expect that the Horndeski hair will leave prints on the Hawking radiation spectrums as well as the greybody factors. It is noted that the study of the energy emission rates and greybody factors of the particles with spin\(=0,1\) and \(1/2\) requires one to solve the master equations of the scalar, electromagnetic and Dirac perturbing fields on the hairy black hole background. This process is similar to that we do when calculating the quasinormal modes, only with different boundary conditions: the latter requires a purely outgoing wave at infinity and a purely ingoing wave at the event horizon, while the former allows ingoing waves at infinity. This paper is organized as follows. In section II, we show the master equations for the test massless scalar, electromagnetic and Dirac fields in the Horndeski hairy black hole, and analyze the properties of their effective potentials. 
In section III, we calculate the QNM frequencies of the perturbing fields with both the WKB method and matrix method, and then match the behaviors of the perturbations in the time domain. In section IV, by solving the corresponding master equations, we evaluate the greybody factor and the energy emission of Hawking radiation for various particles. The last section contributes to our conclusions and discussion. Throughout the paper, we will set \(c=h=G=1\). Moreover, in a convenient way, we will fix \(M=1/2\) and denote \(Q/M\to Q\) in all the computations. ## II Master equations and effective potentials of the perturbing external fields In this section, we will show the master equations of various massless external fields, including the scalar field, electromagnetic field and Dirac field around the Horndeski hairy black hole. The influences of Horndeski hair \(Q\) and angular quantum number \(\ell\) on the corresponding effective potentials of the perturbations will be analysed. ### Scalar field perturbation The propagation of massless scalar field \(\Phi\) in the Horndeski hairy black hole satisfies the Klein-Gordon equation \[\Box\Phi=\frac{1}{\sqrt{-g}}\partial_{\mu}(g^{\mu\nu}\sqrt{-g}\partial_{\nu}\Phi )=0, \tag{3}\] where \(g\) is the determinant of the black hole metric (2). By taking the ansatz \[\Phi(t,r,\theta,\varphi)=e^{-i\omega t+im\varphi}\frac{R(r)}{r}S(\theta), \tag{4}\] where \(\omega\) is the frequency of scalar field perturbation and \(m\) is the azimuthal number, we can separate (3) and obtain the radial master equation \[f^{2}(r)\frac{d^{2}R}{dr^{2}}+f(r)f^{\prime}(r)\frac{dR}{dr}+[\omega^{2}-V_{sc }(r)]R=0. \tag{5}\] Here, the prime denotes a derivative w.r.t. \(r\), and the effective potential is \[V_{sc}(r)=\left[1-\frac{2}{r}\left(M-\frac{Q}{2}\ln\frac{r}{2M}\right)\right] \left[\frac{\ell(\ell+1)}{r^{2}}+\frac{1}{r^{3}}\left(Q+2M-Q\ln\frac{r}{2M} \right)\right] \tag{6}\] where \(\ell=0,1,2,\cdots\) is the angular quantum number. Under the tortoise coordinate \[r_{*}=\int\frac{dr}{f(r)}, \tag{7}\] the equation (5) can be written into Schrodinger-like form \[\frac{d^{2}R}{dr_{*}^{2}}+[\omega^{2}-V_{sc}(r)]R=0. \tag{8}\] The behavior of effective potential \(V_{sc}(r)\) with various samples of \(Q\) and \(\ell\) are shown in FIG.1. It can be obviously seen that, for each case with different \(Q\) and \(\ell\), the potential functions are always positive outside the event horizon. The absence of a negative potential well may give a hint that the black hole could remain stable under the massless scalar field perturbation. These plots also show that the effective potential always has a barrier near the horizon, which is enhanced by increasing the values of \(Q\) and \(\ell\). ### Electromagnetic perturbation The propagation of electromagnetic field in the Horndeski hairy black hole background satisfies the Maxwell equation \[\nabla_{\nu}F^{\mu\nu}=\frac{1}{\sqrt{-g}}\partial_{\nu}\big{(}\sqrt{-g}F^{ \mu\nu}\big{)}=0, \tag{9}\] Figure 1: The effective potential \(V_{sc}(r)\) for the massless scalar field perturbation. In the left plot we fix \(\ell=0\) and tune \(Q\), while we fix \(Q=-0.5\) and tune \(\ell\) in the right plot, respectively. where \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is the field strength tensor, and \(A_{\mu}\) is the vector potential. 
In order to separate the Maxwell equation, we take \(A_{\mu}\) as the Regge-Wheeler-Zerilli decomposition [51; 52] \[A_{\mu}=e^{-i\omega t}\sum_{\ell,m}\left[\begin{pmatrix}0\\ 0\\ a^{\ell m}(r)\,\frac{1}{\sin\theta}\,\partial_{\varphi}Y_{\ell m}\\ -a^{\ell m}(r)\,\sin\theta\,\partial_{\theta}Y_{\ell m}\end{pmatrix}+\begin{pmatrix} j^{\ell m}(r)\,Y_{\ell m}\\ h^{\ell m}(r)\,Y_{\ell m}\\ k^{\ell m}(r)\,\partial_{\theta}Y_{\ell m}\\ k^{\ell m}(r)\,\partial_{\varphi}Y_{\ell m}\end{pmatrix}\right], \tag{10}\] where \(Y_{\ell m}=Y_{\ell m}(\theta,\varphi)\) are the scalar spherical harmonics with the angular quantum number, \(\ell\), and azimuthal number, \(m\), respectively. By substituting Eq.(10) into Eq.(9), we will obtain two decoupled radial equations which can be uniformed into the master equation \[f^{2}(r)\frac{d^{2}\psi}{dr^{2}}+f(r)f^{\prime}(r)\frac{d\psi}{dr}+[\omega^{2 }-V_{EM}(r)]\psi=0, \tag{11}\] with \[\psi(r)=\begin{cases}\begin{array}{c}a^{\ell m}(r)&\text{for axial modes with odd parity }(-1)^{\ell+1},\\ \frac{r^{2}}{\ell(\ell+1)}\left[i\omega h^{\ell m}(r)+\frac{d^{j}\ell m}{dr} \right]&\text{for polar modes with even parity }(-1)^{\ell}\.\end{array}\end{cases} \tag{12}\] Again under the tortoise coordinate (7), the Schrodinger-like form of Eq.(11) reads as \[\frac{d^{2}\psi}{dr_{*}^{2}}+[\omega^{2}-V_{EM}(r)]\psi=0. \tag{13}\] where the effective potential is \[V_{EM}(r)=\left[1-\frac{2}{r}\left(M-\frac{Q}{2}\ln\frac{r}{2M}\right)\right] \frac{\ell(\ell+1)}{r^{2}}. \tag{14}\] The potential function for the electromagnetic field perturbation is depicted in FIG.2 which shows that both larger \(Q\) and \(\ell\) give higher potential barrier, similar to the case in scalar field perturbation. ### Dirac field perturbation The propagation of massless fermionic field in the Horndeski hairy black hole background is governed by \[\Gamma^{a}[\partial_{a}+\frac{1}{4}(\omega_{\mu\nu})_{a}\Gamma^{\mu\nu}]\Psi= 0\quad\text{ with }\ \Gamma^{\mu\nu}=\frac{1}{2}[\Gamma^{\mu},\Gamma^{\nu}]\ \text{ and }\ (\omega_{\mu\nu})_{a}=(e_{\mu})_{b}\nabla_{a}(e_{\nu})^{b}. \tag{15}\] In this equation, \((\omega_{\mu\nu})_{a}\) is the 1-form spin connections; \((e_{\mu})^{a}\) is a rigid tetrad defined by \((e^{\mu})_{a}=\sqrt{g_{\mu\nu}}(dx^{\mu})_{a}\) and its dual form is \((e_{\mu})^{a}=(e^{\nu})_{b}g^{ab}\eta_{\mu\nu}\) with \(\eta_{\mu\nu}\) the Minkowski metric; \(\Gamma^{a}\) is the curved spacetime gamma matrices, which Figure 2: The effective potential of electromagnetic perturbation \(V_{EM}(r)\). We fix \(\ell=1\) and tune \(Q\) in the left plot, while we fix \(Q=-0.5\) and tune \(\ell\) in the right plot, respectively. connects the flat spacetime gamma via \(\Gamma^{a}=(e_{\mu})^{a}\Gamma^{\mu}\). After working out the spin connections of the metric (2), we expand the Dirac equations as \[\frac{\Gamma^{0}}{\sqrt{f}}\frac{\partial\Psi}{\partial t}+\Gamma^{1}\sqrt{f}( \frac{\partial}{\partial r}+\frac{1}{r}+\frac{f^{\prime}}{4f})\Psi+\frac{ \Gamma^{2}}{r}\big{(}\frac{\partial}{\partial\theta}+\frac{1}{2}cot\,\theta \big{)}\Psi+\frac{\Gamma^{3}}{r\,sin\,\theta}\frac{\partial\Psi}{\partial \varphi}=0. \tag{16}\] To proceed, we choose the representation of the flat spacetime gamma matrices as [53] \[\Gamma^{0}=\begin{pmatrix}-i&0\\ 0&i\end{pmatrix},\quad\Gamma^{i}=\begin{pmatrix}0&-i\sigma^{i}\\ i\sigma^{i}&0\end{pmatrix},\quad i=1,2,3, \tag{17}\] where \(\sigma^{i}\) are the Pauli matrices. 
Then considering the Dirac field decomposition [53] \[\Psi^{(\pm)}(t,r,\theta,\varphi)=\frac{e^{-i\omega t}}{rf(r)}\begin{pmatrix} iG^{(\pm)}(r)\\ F^{(\pm)}(r)\end{pmatrix}\otimes\begin{pmatrix}\phi^{(\pm)}_{jm}(\theta, \varphi)\\ \phi^{(\mp)}_{jm}(\theta,\varphi)\end{pmatrix}, \tag{18}\] where the spinor angular harmonics are \[\begin{split}\phi^{(+)}_{jm}=\begin{pmatrix}\sqrt{\frac{2+m}{2j} }Y_{1}^{m-1/2}\\ \sqrt{\frac{j-m}{2j}}Y_{l}^{m+1/2}\end{pmatrix}\qquad\text{for}\quad j=l+\frac {1}{2},\\ \phi^{(-)}_{jm}=\begin{pmatrix}\sqrt{\frac{j+1+m}{2(j+1)}}Y_{l}^{m-1/2}\\ -\sqrt{\frac{j+1-m}{2(j+1)}}Y_{l}^{m+1/2}\end{pmatrix}\qquad\text{for}\quad j=l- \frac{1}{2},\end{split} \tag{19}\] we can obtain two radial master equations \[r^{2}f\,\partial_{r}(f\,\partial_{r}G^{(\pm)})+(r^{2}\omega^{2}- \kappa_{\pm}^{2}f-\kappa_{\pm}f^{3/2}+\frac{1}{2}\kappa\sqrt{f}f^{\prime})G^{ (\pm)}=0, \tag{20}\] \[r^{2}f\,\partial_{r}(f\,\partial_{r}F^{(\pm)})+(r^{2}\omega^{2}- \kappa_{\pm}^{2}f+\kappa_{\pm}f^{3/2}-\frac{1}{2}\kappa\sqrt{f}f^{\prime})F^{ (\pm)}=0. \tag{21}\] where \(\kappa_{\pm}=\mp(j+\frac{1}{2})\) for \(j=l\pm 1/2\). The Schrodinger-like equations under the tortoise coordinate take the forms \[\frac{d^{2}F^{(\pm)}}{dr_{\pm}^{2}}+[\omega^{2}-V^{I}_{Dirac}]F^{ (\pm)}=0, \tag{22}\] \[\frac{d^{2}G^{(\pm)}}{dr_{\pm}^{2}}+[\omega^{2}-V^{II}_{Dirac}]G^{ (\pm)}=0, \tag{23}\] where the effective potentials are \[V^{I}_{Dirac}=\frac{\sqrt{f}|\kappa_{+}|}{r^{2}}\big{(}|\kappa_{ +}|\sqrt{f}+\frac{rf^{\prime}}{2}-f\big{)}, \tag{24}\] \[V^{II}_{Dirac}=\frac{\sqrt{f}|\kappa_{-}|}{r^{2}}\big{(}|\kappa_ {-}|\sqrt{f}-\frac{rf^{\prime}}{2}+f\big{)}. \tag{25}\] It was addressed in [54] that the behaviors of \(V^{I}_{Dirac}\) and \(V^{II}_{Dirac}\) usually are qualitatively similar because they are super-symmetric partners derived from the same super potential, so one can choose one of them to proceed without any loss of generality. Thus, in the following study, we will concentrate on the master equation (22) with \(V^{I}_{Dirac}\), which is plotted in FIG.3 for some references of \(Q\) and \(\ell\). Comparing these three effective potentials in FIG.1-FIG.3 for various perturbing external fields, we can extract the following properties. (i) The Horndeski hair has the same effect on the maximum value of various potentials, i.e., a larger \(Q\) corresponds to a larger and narrower barrier, from which we expect the same influence on the QNFs of those perturbations. (ii) The effect of \(\ell\) on the behavior of various potentials in Horndeski hairy black hole are similar to that in the Schwarzschild black hole [55]. (iii) The barrier of the effective potential for perturbing field with higher spin seems larger and wider. We could expect that the features of effective potentials could be reflected in the QNFs. ## III Quasi-normal mode frequencies of various perturbations In this section, we shall compute the quasi-normal frequencies of the Horndeski hairy black hole under the massless scalar, EM and Dirac fields perturbations by solving the master equations (8), (13) and (22) with the boundary conditions: ingoing wave (\(\sim e^{-i\omega\tau_{*}}\)) at the horizon and the outgoing wave (\(\sim e^{i\omega\tau_{*}}\)) at infinity. We will employ both the WKB method and matrix method to guarantee that we focus on the fundamental mode with the node \(n=0\), and also to pledge the credibility of our results. 
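Before moving on to the frequency computations, a compact numerical sketch of the three effective potentials just compared (Eqs. (6), (14) and (24)) may be useful. It uses M = 1/2 as in the rest of the paper, our own function names, and \(|\kappa_{+}|=j+1/2=\ell+1\) for the Dirac case, and simply prints the barrier heights so that the trends (i)-(iii) above can be checked directly.

```python
import numpy as np

M = 0.5   # the paper fixes M = 1/2; Q below is the hair parameter of Eq. (2)

def f(r, Q):
    return 1.0 - 2.0 * M / r + (Q / r) * np.log(r / (2.0 * M))

def fp(r, Q):
    """Analytic derivative f'(r) = (2M + Q - Q ln(r/2M)) / r^2."""
    return (2.0 * M + Q - Q * np.log(r / (2.0 * M))) / r**2

def V_scalar(r, Q, l):      # Eq. (6):  f(r) [ l(l+1)/r^2 + f'(r)/r ]
    return f(r, Q) * (l * (l + 1) / r**2 + fp(r, Q) / r)

def V_em(r, Q, l):          # Eq. (14): f(r) l(l+1)/r^2
    return f(r, Q) * l * (l + 1) / r**2

def V_dirac(r, Q, l):       # Eq. (24) with |kappa_+| = j + 1/2 = l + 1
    kappa, fr = l + 1, f(r, Q)
    return np.sqrt(fr) * kappa / r**2 * (kappa * np.sqrt(fr)
                                         + 0.5 * r * fp(r, Q) - fr)

r = np.linspace(2.0 * M * (1 + 1e-6), 30.0 * M, 20000)   # region outside r+ = 2M
for Q in (-0.5, 0.0, 0.5):
    print(f"Q = {Q:+.1f}:  max V_sc(l=0) = {V_scalar(r, Q, 0).max():.4f},  "
          f"max V_Dirac(l=0) = {V_dirac(r, Q, 0).max():.4f},  "
          f"max V_EM(l=1) = {V_em(r, Q, 1).max():.4f}")
```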
Moreover, in order to directly analyze the essence of QNFs in the propagation of various perturbations, we will also study the time evolution of the perturbation fields with the use of time domain integration. Instead of the tedious details in the main part, we briefly review the main steps of WKB method, matrix method and domain integration method of our framework in appendixes A-C, because all the methods are widely used in the related studies. Especially, we use the Pade series \(P_{m}^{\tilde{n}}\)[56] in WKB method to improve and reconstruct the WKB correction terms and transform it into a continued-fraction-like form. Here \(\tilde{n}\) and \(\tilde{m}\) are the order numbers of the numerator and denominator series (see [57]). ### \(Q\)- dependence Firstly, we analyze the influence of the Horndeski hair parameter \(Q\) on the QNM frequencies of the lowest \(\ell\) modes in various external field perturbations. The results are listed in TABLE 1 (also depicted in FIG. 4) in which we also calculate the relative error between two methods defined by \[\Delta_{k}=\frac{\text{Matrix}(\omega_{k})-\text{WKBP}(\omega_{k})}{\text{ WKBP}(\omega_{k})}100\%,\qquad k=\text{Re},\text{Im}. \tag{26}\] The QNFs obtained from the matrix and WKB-Pade methods agree well with each other. For various perturbations, the imaginary part of QNFs, \(\textit{Im}(\omega)\), keeps increasing as \(Q\) decreases, and it is always negative even in the extremal case with \(Q=-1\). It means that the Horndeski hairy black hole is dynamically stable under those external fields perturbation with the lowest-lying \(\ell\). Moreover, by comparing the QNFs for various perturbations, we find that for the field with larger spin, the \(\textit{Im}(\omega)\) is larger (see also the left plot of FIG. 4 ). Thus, the perturbation field with a higher spin could live longer than the one with a lower spin because the damping time \(\tau_{d}\) for a wave field is related with the QNF by \(\tau_{d}\sim 1/|-\textit{Im}(\omega)|\). However, the real part of QNFs, \(\textit{Re}(\omega)\), for all the perturbations decreases as \(Q\) decreases, meaning that smaller \(Q\) suppresses the oscillation of the perturbations. Similar to the imaginary part, \(Re(\omega)\) is larger for a perturbing field with higher spin. The effects of Horndeski hair and the spin of fields on the QNFs can be explained by their influences on the corresponding effective potentials as we described in the previous section. For the sake of intuitively understanding how the hairy charge \(Q\) influences the evolution and waveform of various perturbations, we use the finite difference method to numerically integrate the wave-like equations (8), (13), (22) in Figure 3: The effective potential of Dirac perturbation \(V_{Dirac}^{I}(r)\). We fix \(\ell=0\) and tune \(Q\) in the left plot, while we fix \(Q=-0.5\) and tune \(\ell\) in the right plot. the time domain. For more details, the readers can refer to the appendix C. The results for the lowest-lying modes are shown in FIG.5. For all perturbations in each plot, we observe that smaller \(Q\) makes the ringing stage of perturbations waveform more lasting and sparser, which corresponds to the larger \(Im(\omega)\) and the smaller \(Re(\omega)\). In addition, by a careful comparison, we also find that the perturbation wave with higher spin will perform a shorter and more intensive ringing stage, indicating smaller \(Im(\omega)\) and larger \(Re(\omega)\). 
These observations in time domains agree well with the results we obtained in the frequency domain, also they explicitly show the time evolution process of various perturbing fields. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{scalar field (\(s=0\)), \(\ell=0\)} & \multicolumn{2}{c|}{relative error/\%} \\ \hline \(Q\) & Matrix Method & WKB-Padé & Re(\(\omega\)) & Im(\(\omega\)) \\ \hline [MISSING_PAGE_POST] 0.0002 & 0.0009 \\ \hline -0.9 & 0.297249 - 0.058808 i & 0.297253 - 0.058796 i & -0.0013 & 0.0204 \\ \hline 1.0 & 0.269519 - 0.052500 i & 0.269520 - 0.052497 i & -0.0004 & 0.0057 \\ \hline \end{tabular} \end{table} Table 1: The fundamental (\(n=0\)) QNFs of lowest \(\ell\)-mode for various massless field perturbations obtained by WKB method and matrix method, and their relative errors. Figure 4: Quasi-normal frequencies as a function of the hairy charge at the low-lying angular quantum number, i.e., \(\ell=0\) for scalar & Dirac fields and \(\ell=1\) for EM field. ### \(\ell-\) dependence Now we move on to study the effect of the angular quantum number for various perturbations. To this end, we focus on \(Q=-0.5\). The results are shown in TABLE 2 and FIG. 6. Following are our observations: (i) For small \(\ell\), it has a relatively strong influence on the \(Re(\omega)\), while the influence on the \(Im(\omega)\) is weak. We observe that as \(\ell\) increases, \(Im(\omega)\) for the EM field tends to slightly decrease, while the \(Im(\omega)\) for scalar and Dirac fields slightly increase. This means that the effect of growing \(\ell\) will shorten the lifetime of the EM perturbation, but it can instead extend the lifetime of the scalar and Dirac perturbations. Similar phenomena can also be observed in the Schwarzschild black hole [58], but here we find that the introducing of the Horndeski \(Q\) has print on the effect of \(\ell\). In detail, positive \(Q\) will enhance the effect while negative \(Q\) suppresses this effect. These findings in QNFs can be verified from FIG.7 and its comparison to FIG.5. (ii) As \(\ell\) increases, the results from WKB methods and matrix match better and better, and the relative error becomes smaller than \(10^{-6}\). This is because the WKB-Pade method is essentially a semi-analytic approximation which works better for an analytical form with higher \(\ell\)[59]. (iii) When we further increase \(\ell\), the gap among the QNFs for all the massless perturbing fields tends to be smaller. This is because for large \(\ell\), the dominant terms in all the effective potentials (6), (14) and (24) are the terms \(\propto\ell^{2}\), which have the same formula in all cases. And finally the gap will vanish in the eikonal limit \(\ell\gg 1\) which we will study in the next subsection. Figure 5: Time evolution of the lowest-lying mode for the perturbing scalar field with spin \(s=0\) (left), Dirac field with \(s=1/2\) (middle) and EM field with \(s=1\) (right), respectively. Figure 6: Quasi-normal frequencies as a function of the angular momentums for various perturbing fields. Figure 7: Time evolution for the perturbing scalar field with spin \(s=0\) (left), Dirac field with \(s=1/2\) (middle) and EM field with \(s=1\) (right), respectively. Here we focus on the second lowest-lying angular momentum. 
### QNFs in eikonal limit Cardoso et al proposed that in the eikonal limit (\(\ell\gg 1\)), the real part of QNM frequency for a static spherical black hole is connected with the angular velocity of the circular null geodesics while the imaginary part is connected with the Lyapunov exponent [60]. Then the real part of the QNMs in the eikonal limit was further related to the shadow radius of a static black hole as \(\omega_{Re}=\lim\limits_{\ell\gg 1}\frac{\ell}{R_{sh}}\)[61], and more recently this connection was extended into the rotating black holes [62]. This correspondence may originate from the fact that the perturbing waves could be treated as massless particles propagating along the last timelike unstable orbit out to infinity, but deeper research deserves to be done for further understanding. In this subsection, we compare the quasinormal spectrum obtained from the master equations with the spectrum computed directly from the geometric-optics approximation formula, which is given by [60] \[\omega_{QNM} = \Omega_{c}\ell-i(n+\frac{1}{2})|\lambda_{LE}|, \tag{27}\] \[\text{with}\quad\Omega_{c} = \sqrt{\frac{f(r_{c})}{r_{c}^{2}}},\text{and}\quad\lambda_{LE}= \sqrt{-\frac{r_{c}^{2}}{f(r_{c})}\left(\frac{d^{2}}{dr_{*}^{2}}\frac{f(r)}{r^ {2}}\right)_{r\neq r_{c}}} \tag{28}\] for the Horndeski hairy black hole (2). \(\Omega_{c}\) is the angular velocity of a massless particle geodesically moving on a circular null orbit with radius \(r=r_{c}\), and \(\lambda_{LE}\) is the Lyapunov exponent, where the radius \(r_{c}\) is given by the positive root of the equation \(2f_{c}=r_{c}f_{c}^{\prime}\). In TABLE 3, we list the QNFs obtained from wave analysis and the geometric-optics approximation for fixed \(\ell=40\). QNFs for various perturbing fields converge to be the value obtained from geometric-optics correspondence, because in the limit \(\ell\gg 1\), all the perturbed wave equations reduce to the analytical equation describing the geodesic motion of the massless particle in the Horndeski hairy spacetime. Then in order to check the effect of \(Q\), we plot \(\Omega_{c}\) and \(\lambda_{LE}\) as functions of \(Q\) in FIG. 8. We see that both of them grow monotonically as \(Q\) increases, indicating a smaller imaginary part but a larger real part of QNFs for \(\ell\gg 1\). The effect of \(Q\) on the QNFs is already reflected for small \(\ell\sim 1\) as we disclosed in the previous subsections. 
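A short numerical sketch of the geometric-optics formulas (27)-(28) is given below (Python; M = 1/2 as in the tables, variable names are ours). It solves \(2f(r_{c})=r_{c}f^{\prime}(r_{c})\) for the photon-sphere radius, builds the second derivative along the tortoise coordinate by finite differences, and prints the eikonal estimate, which can be compared with the geometric-optics column of TABLE 3 for \(\ell=40\).

```python
import numpy as np
from scipy.optimize import brentq

M = 0.5                                   # normalization used in the tables

def f(r, Q):  return 1.0 - 2*M/r + (Q/r)*np.log(r/(2*M))
def fp(r, Q): return (2*M + Q - Q*np.log(r/(2*M)))/r**2

def eikonal_qnf(Q, ell, n=0, eps=1e-5):
    # Photon-sphere radius r_c from 2 f(r_c) = r_c f'(r_c)
    rc = brentq(lambda r: 2*f(r, Q) - r*fp(r, Q), 2*M + 1e-9, 20*M)
    fc = f(rc, Q)
    Omega_c = np.sqrt(fc)/rc                              # angular velocity, Eq. (28)
    # d^2/dr_*^2 (f/r^2) at r_c, with d/dr_* = f d/dr, by central differences
    g  = lambda r: f(r, Q)/r**2
    dg = lambda r: f(r, Q)*(g(r + eps) - g(r - eps))/(2*eps)
    d2g = f(rc, Q)*(dg(rc + eps) - dg(rc - eps))/(2*eps)
    lam = np.sqrt(-rc**2/fc*d2g)                          # Lyapunov exponent, Eq. (28)
    return Omega_c*ell - 1j*(n + 0.5)*abs(lam)            # Eq. (27)

for Q in (-0.5, 0.0):
    w = eikonal_qnf(Q, ell=40)
    print(f"Q = {Q:+.1f}, l = 40:  omega ~ {w.real:.6f} {w.imag:+.6f} i")
```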
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{scalar field (\(s=0\))} & relative error/\% \\ \hline \(\ell\) & Matrix Method & WKB-Padé & Re(\(\omega\)) & Im(\(\omega\)) \\ \hline 0 & 0.158948 - 0.113060 i & 0.158921 - 0.113149 i & 0.0170 & -0.0787 \\ \hline 1 & 0.450680 - 0.109494 i & 0.450691 - 0.109490 i & -0.0024 & 0.0037 \\ \hline 2 & 0.748012 - 0.109136 i & 0.748013 - 0.109136 i & -0.0001 & \(O\) \\ \hline 3 & 1.046033 - 0.109036 i & 1.046033 - 0.109036 i & \(O\) & \(O\) \\ \hline 4 & 1.344278 - 0.108994 i & 1.344278 - 0.108994 i & \(O\) & \(O\) \\ \hline 5 & 1.642622 - 0.108973 i & 1.642622 - 0.108973 i & \(O\) & \(O\) \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{Dirac field (\(s=1/2\))} & relative error/\% \\ \hline \(\ell\) & Matrix Method & WKB-Padé & Re(\(\omega\)) & Im(\(\omega\)) \\ \hline 0 & 0.291568 - 0.109287 i & 0.291570 - 0.109214 i & -0.0007 & 0.0668 \\ \hline 1 & 0.593509 - 0.109003 i & 0.593512 - 0.109025 i & -0.0005 & -0.0202 \\ \hline 2 & 0.893211 - 0.109010 i & 0.89322 - 0.108974 i & 0.0010 & 0.0330 \\ \hline 3 & 1.192322 - 0.108940 i & 1.192307 - 0.108955 i & 0.0013 & -0.0138 \\ \hline 4 & 1.491178 - 0.108941 i & 1.491177 - 0.108947 i & 0.0001 & -0.0055 \\ \hline 5 & 1.789936 - 0.108926 i & 1.789929 - 0.108942 i & 0.0004 & -0.0147 \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{electromagnetic field (\(s=1\))} & relative error/\% \\ \hline \(\ell\) & Matrix Method & WKB-Padé & Re(\(\omega\)) & Im(\(\omega\)) \\ \hline 1 & 0.403289 - 0.106855 i & 0.403290 - 0.106853 i & -0.0002 & 0.0019 \\ \hline 2 & 0.720260 - 0.108223 i & 0.720260 - 0.108223 i & \(O\) & \(O\) \\ \hline 3 & 1.026335 - 0.108575 i & 1.026335 - 0.108574 i & \(O\) & 0.0009 \\ \hline 4 & 1.328996 - 0.108716 i & 1.328996 - 0.108716 i & \(O\) & \(O\) \\ \hline 5 & 1.630135 - 0.108787 i & 1.630135 - 0.108788 i & \(O\) & -0.0009 \\ \hline 6 & 1.930461 - 0.108828 i & 1.930461 - 0.108828 i & \(O\) & \(O\) \\ \hline \end{tabular} \end{table} Table 2: The fundamental (\(n=0\)) QNMs of various massless perturbation modes with different angular momentums obtained by WKB method and matrix method and their relative errors. Here we fix \(Q=-0.5\). On the other hand, the results in previous subsection show that the increasing \(Q\) can enhance the contribution of the spin so that QNFs of perturbing fields with different spins emerge a larger bifurcation (see FIG.4 ). However, in the eikonal limit the perturbing fields with different spins tend to possess a same QNFs, which indicates that the contribution of field spin is diluting as \(\ell\) increases. In order to analytically understand the balance effect of \(Q\) and \(\ell\) on the spin contribution on the QNFs, we apply the Newman-Penrose formalism to construct general spherical symmetric Teukolsky equations for an arbitrary field spin \(s\) in static spherical-symmetric metric with \(ds^{2}=-f(r)dt^{2}+f^{-1}(r)dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\varphi^{2})\), which separate into angular and radial parts as \[\left[\frac{1}{sin\theta}\frac{d}{d\theta}\left(sin\theta\frac{d }{d\theta}\right)-\frac{m^{2}+2m\,s\,cos\theta+s^{2}cos^{2}\theta}{sin^{2} \theta}+s+A_{s\ell}\right]S_{s}(\theta) =0, \tag{29}\] \[\left[\Delta^{-s}\frac{d}{dr}\left(\Delta^{(1+s)}\frac{d}{dr} \right)+4i\,s\,r\,\omega+\frac{r^{2}\omega(r^{2}\omega-i\,s\,\Delta^{\prime})} {\Delta}+\epsilon_{s}(\Delta^{\prime\prime}-2)-A_{s\ell}\right]R_{s}(r) =0. 
\tag{30}\] with \(\Delta\equiv r^{2}f(r)\), \(m\) the azimuthal number, \(\epsilon_{s}\) and \(A_{s\ell}\) listed in TABLE 4. For more details on the formula derivation, readers can refer to [63; 64; 65]. It is straightforward to check that the above equations can reduce to the Teukolsky equation in static limit derived in [63]. To proceed, we reform \(R_{s}=\Delta^{-s/2}\Psi_{s}/r\) and work in tortoise coordinate \(dr_{\star}=(r^{2}/\Delta)dr\), thus the radial equation (30) is transformed into the wave-like equation \[\frac{d^{2}\Psi_{s}}{dr_{\star}^{2}} + [\omega^{2}-V_{s\ell}(r)]\Psi_{s}=0,\quad\text{with} \tag{31}\] \[V_{s\ell}(r)=i\,s\,\omega\,r^{2}\frac{d}{dr}\left(\frac{\Delta} {r^{4}}\right) + \frac{\Delta}{r^{3}}\frac{d}{dr}\left(\frac{\Delta}{r^{2}} \right)+\frac{\Delta}{4r^{4}}\left(4A_{s\ell}+s^{2}\frac{\Delta^{\prime 2}}{ \Delta}+2s\Delta^{\prime\prime}-4\epsilon_{s}(\Delta^{\prime\prime}-2)\right). \tag{32}\] It is noted that the Teukolsky radial equations are different from the corresponding master equations (8), (13) and (22) when setting \(s=0,1,1/2\), because they are not in the form of canonical wave equations. However, it was \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \(s\) & -2 & -1 & -1/2 & 0 & 1/2 & 1 & 2 \\ \hline \(\epsilon_{s}\) & 1/2 & 0 & 0 & 0 & 1/2 & 1 & 5/2 \\ \hline \(A_{s\ell}\) & \((\ell-1)(\ell+2)\) & \(\ell(\ell+1)\) & \(\ell^{2}\) & \(\ell(\ell+1)\) & \((\ell+1)^{2}-1\) & \(\ell(\ell+1)-2\) & \((\ell-1)(\ell+2)-4\) \\ \hline \end{tabular} \end{table} Table 4: Variables \(A_{S\ell}\,\delta\,\epsilon_{s}\) dependent on the field spin in spherical symmetric Teukolsky equations. Figure 8: The angular velocity and the Lyapunov exponent as functions of hairy charge \(Q\) with fixing \(\ell=40\). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & & Matrix Method & WKB method & geometric-optics approximation \\ \hline \multirow{3}{*}{scalar} & Q=-0.5 & 12.090059 - 0.108933 & 12.090062 - 0.108931 i & 11.940699 - 0.108931 i \\ \cline{2-6} & Q=0 & 15.588767 - 0.192457 i & 15.588765 - 0.192454 i & 15.396007 - 0.192450 i \\ \hline \multirow{2}{*}{EM} & Q=-0.5 & 12.088368 - 0.108929 i & 12.088371 - 0.108928 i & 11.940699 - 0.108931 i \\ \cline{2-6} & Q=0 & 15.585599 - 0.19244 i & 15.585597 - 0.192441 i & 15.396007 - 0.192450 i \\ \hline \multirow{2}{*}{Dirac} & Q=-0.5 & 12.239089 - 0.108914 i & 12.239044 - 0.108931 i & 11.940699 - 0.108931 i \\ \cline{2-6} & Q=0 & 15.780429 - 0.192451 i & 15.587973 - 0.192451 i & 15.396007 - 0.192450 i \\ \hline \end{tabular} \end{table} Table 3: QNFs with \(\ell=40\) for various perturbing fields obtained via different methods. addressed in [66] that the Teukolsky equations can be brought into the corresponding master equations under certain transformation. The QNFs are significantly determined by the effective potential and dominated by the last term in \(V_{s\ell}\). In the Horndeski hairy black hole (2), the influence from the spin can be amplified through the coupling \(s^{2}Q^{2}\) originating from the term \(s^{2}\Delta^{\prime 2}/4r^{4}\), which eventually causes the bifurcation or, saying the wider "fine structure" in the quasinormal spectrum. While in the eikonal limit \(\ell>>1\), the term \(\Delta A_{s\ell}/r^{4}\rightarrow\Delta\ell^{2}/r^{2}\) shall become dominant in the potential, so that the effect of spin will be suppressed. This subsequently causes the degeneracy for both the null circular orbits and the quasinormal spectrum of massless fields with different spins. 
## IV Greybody factor and Hawking radiation It is known from Hawking's paper [50] that a particle with negative energy measured by an observer at infinity can physically exist inside the black hole since there the Killing vector is spacelike. Thus, during the pair production at the vicinity of the horizon, the particle with positive energy can escape to the observer and leave the negative one to fall into the singularity. This effect causes the so-called Hawking radiation, whose power spectrum is shown to be a literally black-body spectrum. However, due to the existence of the potential barrier outside the black hole, the Hawking radiation is not totally transparent for the observer at infinity, so what the observer detects in fact is a grey-body spectrum because the particles could be scattered by the potential barrier. In order to quantize the scattering process for various particles, one should first calculate the transmission coefficient, which is also defined as the greybody factor. Here we will solve the master equations (8), (13) and (22) by the scattering boundary conditions which permit the ingoing wave at infinity differing from the case of calculating QNFs. The boundary conditions that can equivalently describe the scattering process for a particle emitted from the horizon read as \[\begin{split}\Psi_{\ell}&=T_{\ell}\,e^{-i\omega r _{*}},\hskip 28.452756ptr_{*}\rightarrow-\infty,\\ \Psi_{\ell}&=e^{-i\omega r_{*}}+R_{\ell}\,e^{i\omega r _{*}},\hskip 28.452756ptr_{*}\rightarrow+\infty,\end{split} \tag{33}\] where \(T_{\ell}\) and \(R_{\ell}\) are denoted as the transmission and reflection coefficients for the angular momentum \(\ell\) mode, satisfying \(|T_{\ell}|^{2}+|R_{\ell}|^{2}=1\). Since from section II we know that each effective potential exhibits a potential barrier decreasing monotonically towards both boundaries, thereby, we can again employ the WKB method described in appendix A to calculate the coefficients. Subsequently, we have the greyfactor \(|A_{\ell}|^{2}\) defined as [67; 59] \[|A_{\ell}|^{2}=1-|R_{\ell}|^{2}=|T_{\ell}|^{2}\quad\text{and}\quad R_{\ell}=( 1+e^{-2i\pi\mathcal{K}})^{-\frac{1}{2}} \tag{34}\] where \(\mathcal{K}\) can be obtained from the WKB formula \[\mathcal{K}-i\,\frac{\omega^{2}-V(r_{0})}{\sqrt{-2V^{\prime\prime}(r_{0})}}- \sum_{i=2}^{i=6}\Lambda_{i}(\mathcal{K})=0. \tag{35}\] Here, \(V(r_{0})\) denotes the maximal potential locating at \(r_{0}\) for various spins, the second derivative in \(V\) is with respect to the tortoise coordinate, and \(\Lambda_{i}\) are the higher order WKB correction terms. It is noted that the WKB formula for determining the grey-body factors is well known to provide reasonable accuracy for further estimating the energy rate of Hawking radiation. However, this approach may not be suitable for the cases with very small \(\omega\), which imply almost complete wave reflection with negligible contributions to the total energy emission rate, so we set a cutoff for \(\omega\) in the numeric to make our results reliable. With the above preparation of methodology, we depict the grey-body factor as a function of \(\omega\) with different Horndeski hair \(Q\) for various fields in FIG. 9. Two remarkable properties we can extract from the figure. (i) Similar as in Schwarzschild black hole (\(Q=0\)), the increasing of spin \(s\) and orbital angular momentum \(\ell\) could make the curves of the grey-body factor shift towards a larger \(\omega\) in Horndeski hairy black hole. 
This means that for the emission particle with larger \(\ell\) and \(s\), the lowest frequency with which the particle penetrates the potential barrier would raise up such that the barrier would tend to shield more low-frequency particles for the observer at infinity. (ii) Regarding to the effect of the hairy charge \(Q\) on \(|A_{\ell}|^{2}\), we observe that the increasing of \(|Q|\) would not make the curve of \(|A_{\ell}|^{2}\) shift distinctly, instead, it makes the curve branch out between two fixed frequencies. It means that the Horndeski hair would not change the lowest frequency for a particle transmitting the potential barrier. Moreover, by comparing the dotted, solid and dashed curves with the same color for all the fields, it is obvious that for larger \(Q\), the grey-body factors is smaller for fixed \(\omega\), which means that more number of particles is reflected by the corresponding effective potential. This observation is reasonable because for larger \(Q\), all the effective potentials barrier is higher (see FIG.1-FIG.3), so the particles are more difficult to penetrate. With the greybody factor in hands, we can further estimate the energy emission rate of the Hawking radiation of the Horndeski hairy black hole via [50] \[\frac{dE}{dt}=\sum_{\ell}\frac{N_{\ell}}{2\pi}\frac{|A_{\ell}|^{2}}{e^{\omega/ T_{H}}\pm 1}\omega d\omega, \tag{36}\] where \(\pm\) in the denominator denote the fermions and the bosons created in the vicinity of the event horizon, the Hawking temperature \(T_{H}\) is \[T_{H}=\frac{f^{\prime}(r)}{4\pi}\big{|}_{r=\tau_{+}=2M}=\frac{1+Q}{8\pi M}, \tag{37}\] and the multiples \(N_{\ell}\) is known as \[N_{\ell}=\begin{cases}2\ell+1&\text{(scalar field)},\\ 8(\ell+1)&\text{(Dirac field)},\\ 2(2\ell+1)&\text{(Maxwell field)}.\end{cases} \tag{38}\] Note that the formula (36) only works when the system can be described by the canonical ensemble, which is fulfilled under the assumption that the temperature of black hole does not change between two particles emitted in succession [50]. The energy emission rate (EER) for scalar, Dirac and Maxwell fields as a function of frequency with samples of \(\ell\) and \(Q\) are shown in Fig.10. We can extract the following features. (i) The general behavior of EER is that as \(\omega\) grows, the EER first increases till it reaches a maximum at \(\omega=\omega_{p}\) where \(\omega_{p}\) depends on the parameters, and then decays to be zero. This is because in the right side of (36), there exists a competition between the greybody factor in the numerator and the exponential term in the denominator. Though FIG.9 shows that \(|A_{\ell}|^{2}\) grows as \(\omega\) increases, the growth of \(|A_{\ell}|^{2}\) is only more influential for \(\omega<\omega_{p}\), while for \(\omega>\omega_{p}\) the growing of \(e^{\omega/T_{H}}\) plays the dominant role and makes the EER decrease. And further increasing \(\omega\), \(|A_{\ell}|^{2}\) tends to the unit, but the denominator increases exponentially, which causes the EER to decay exponentially. (ii) In each plot, we see that the EER for various fields in the black hole with a larger \(Q\) is stronger. The result is reasonable because for fixed \(\omega\), as \(Q\) increases, the greybody factor becomes smaller and the exponential term in the denominator also decreases due to the growth of Hawking temperature (37). The joint contributions result in the stronger intensity of EER for larger \(Q\). 
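Before the last observation, it may help to see how Eqs. (34)-(37) combine in practice. The sketch below (Python; names are ours) keeps only the lowest-order term in Eq. (35) — the paper includes the higher-order corrections \(\Lambda_{i}\) — and evaluates the resulting transmission and the bosonic emission integrand of Eq. (36) for the \(\ell=1\) electromagnetic mode; it is an order-of-magnitude illustration rather than a reproduction of FIG. 9 and FIG. 10.

```python
import numpy as np
from scipy.special import expit        # expit(x) = 1/(1 + exp(-x)), avoids overflow

M = 0.5

def f(r, Q):  return 1.0 - 2*M/r + (Q/r)*np.log(r/(2*M))
def fp(r, Q): return (2*M + Q - Q*np.log(r/(2*M)))/r**2
def V_em(r, Q, l): return f(r, Q)*l*(l + 1)/r**2          # Eq. (14)

def barrier(Q, l, eps=1e-5):
    """Peak value V0 and d^2V/dr_*^2 at the peak (d/dr_* = f d/dr, numerically)."""
    r = np.linspace(2*M*(1 + 1e-6), 30*M, 200000)
    r0 = r[np.argmax(V_em(r, Q, l))]
    dV = lambda x: f(x, Q)*(V_em(x + eps, Q, l) - V_em(x - eps, Q, l))/(2*eps)
    V0pp = f(r0, Q)*(dV(r0 + eps) - dV(r0 - eps))/(2*eps)
    return V_em(r0, Q, l), V0pp

def greybody(omega, V0, V0pp):
    """Lowest-order WKB transmission; the paper keeps the corrections of Eq. (35)."""
    return expit(2*np.pi*(omega**2 - V0)/np.sqrt(-2*V0pp))

Q, l, N_l = -0.5, 1, 2*(2*1 + 1)        # EM field; N_l = 2(2l+1) from Eq. (38)
V0, V0pp = barrier(Q, l)
T_H = fp(2*M, Q)/(4*np.pi)              # Eq. (37); equals (1+Q)/(8 pi M) for M = 1/2

omega = np.linspace(1e-3, 2.0, 4000)
A2 = greybody(omega, V0, V0pp)
integrand = N_l/(2*np.pi)*A2/np.expm1(omega/T_H)*omega    # Eq. (36), bosons
dEdt = np.sum(0.5*(integrand[1:] + integrand[:-1])*np.diff(omega))
print(f"T_H = {T_H:.5f},  |A_1|^2(omega=0.4) = {greybody(0.4, V0, V0pp):.3f},  "
      f"l=1 EM contribution to dE/dt ~ {dEdt:.3e}")
```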
(iii) By comparing the two plots for each field, we observe that EER for particles with higher orbital angular momentum would be suppressed Figure 9: Greybody factors for emission particles with spin 0 (left), 1/2 (middle) and 1 (right). In each plot, we distinguish the curves with different \(\ell\) by colors, while curves with same color indicate the graybody factor with \(Q=-0.5\) (dotted), \(Q=0\) (solid),\(Q=0.5\) (dashed). in both Schwarzschild and Horndeski hairy black holes. In addition, this suppression effect is more significant for the particles with a higher spin. Finally, we numerically integrate the EER over \(\omega\) and obtain the total EER, \(dE/dt\), for various fields as the function of \(Q\) in FIG.11. For the static hairy black hole in the current Horndeski gravity, the positive Horndeski hair \(Q\) will enhance the total EER around the black hole, which may lead to a higher speed of evaporation and a shorter lifetime for this kind of black hole according to the discussion in [68]. Therefore, in terms of observation, this intuitively would mean that, such hairy small or even medium black hole with a large positive \(Q\) in the early universe may have disappeared due to the high evaporation rate. While for negative \(Q\), one would expect that the Horndeski hairy black hole has a lower evaporation rate. Especially, in the extremal case with \(Q=-1\), the total EER approaches zero as expected, meaning that such extremal black holes almost do not evaporate and thus they are often considered to live forever if they are in complete isolation, similar to the case in extremal RN black hole [69]. Figure 11: Total emission rate \(dE/dt\) as a function of Horndeski hair for various fields with \(s=0\) (left), \(s=1/2\) (middle) and \(s=1\) (right). Figure 10: Energy emission rate as a function of frequency for various fields with spin \(s=0\) (top), \(s=1/2\) (middle), \(s=1\) (bottom) around Horndeski hairy black hole. The left column describes the fields with the corresponding lowest \(\ell\) while the right column is for their second lowest \(\ell\). Conclusion and Discussion In this paper, we investigated the quasinormal frequencies and Hawking radiations of the Horndeski hairy black hole by analyzing the perturbations of massless fields with spins \(0\) (scalar field), \(1/2\) (Dirac field) and \(1\) (electromagnetic field), respectively. The starting points of both aspects are the master equations of the perturbing fields. For the quasinormal mode spectral analysis, we employed three methods: WKB method, matrix method and time domain integration; while in the Hawking radiation part, we used the \(6-\)th order WKB method to determine the greybody factor. Our results show that under the massless perturbations of the scalar field, Dirac field and electromagnetic field, the Horndeski hairy black holes are dynamically stable in terms of the frequency and time domain of QNMs. Similar as in Schwarzschild black hole [55], the massless field with higher spin has a larger imaginary part of quasinormal frequency, so the related perturbation can live longer in the hairy black hole background. This effect can be enhanced by having a stronger negative Horndeski hair. Moreover, the real part of QNFs for all the perturbations increases as \(Q\) increases, meaning that the larger Horndeski hair enhances the oscillation of the perturbations. 
In addition, in the Horndeski hairy black hole, the mode with a larger \(\ell\) has a shorter lifetime for the electromagnetic field perturbation, but it can survive longer for the scalar and Dirac perturbations, similar to what happens in the Schwarzschild black hole. However, this effect of \(\ell\) on the QNFs is enhanced by a positive \(Q\) and suppressed by a negative \(Q\). Nevertheless, for large enough \(\ell\), up to the eikonal limit (\(\ell\gg 1\)), the QNFs for the various perturbations tend to almost the same value, whose imaginary (real) part decreases (increases) as \(Q\) increases. Focusing on the general Teukolsky equations for arbitrary spin in the current hairy black hole, we explained the balancing effects of the Horndeski hair, the spin and the angular quantum number on the QNFs in an analytical way. Our studies on Hawking radiation showed that the intensity of the energy emission rate for various fields is stronger for a larger Horndeski hair, which indicates that the Horndeski hair could remarkably influence the evaporation rate of the black hole. Thus, we argue that such a black hole with a large positive Horndeski hairy charge (if it exists) in the early universe may have disappeared due to the strong radiation, while, compared to a black hole in GR, a Horndeski hairy black hole with negative hair could have a lower evaporation rate. It would be more practical to extend our study to the gravitational perturbations, i.e., the massless spin-2 field, and we believe that this deserves an individual consideration due to the difficulty in reducing the master wave-like equation in the current theory. However, our findings from the test external fields could shed some light on the related physics of gravitational perturbations. For example, the QNFs of gravitational perturbations in the eikonal limit could be the same as our findings, because the behavior of the QNM spectrum for external and gravitational fields is usually known to be qualitatively the same, independent of the spin of the field, in this limit. In addition, our findings about the influence of the field's spin on the QNFs and the energy emission rate could also provide a good reference in this scenario. Moreover, considering that the overtone modes may play important roles in the GW signal [70; 71], it would be interesting to further study the QNFs of the overtone modes. Additionally, the extension to external fields with mass could be another interesting direction. At least the propagation of the massive scalar field was found to behave differently from the massless scalar field [72; 73]. We hope to perform these studies in the near future. ###### Acknowledgements. We thank Guo-Yang Fu and Hua-Jie Gong for helpful discussions. This work is partly supported by the Natural Science Foundation of China under Grants No. 12375054 and No. 12375055, the Natural Science Foundation of Jiangsu Province under Grant No. BK20211601, the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant No. KYCX23_3501, and the Top Talent Support Program from Yangzhou University. 
## Appendix A WKB method WKB method, as a well known approximation, is a semianalytic technique for determining the eigenvalue of the Schrodinger wavelike equation, which has the form: \[\frac{d^{2}}{dx^{2}}\psi(x)-V(x)\psi(x)=0, \tag{39}\] where the potential barrier \(V(x)\) rises to a peak at \(x=x_{0}\) and is assumed to be constant at the infinity (\(|x|\rightarrow\infty\)), and the radial function \(\psi(x)\) is required to be purely "outgoing" as \(|x|\rightarrow\infty\). The method is named after Wentzel-Kramers-Brillouin, and originally applied to approximate the bound-state energies and tunneling rates of the Schrodinger equation in quantum physics. Owing to the noted modification of the WKB approach proposed by Iyer and Will [67], the method is carried to the third order beyond the eikonal approximation and is able to calculate the quasi-normal frequency quickly and accurately for a wide range of black hole systems. Then Konoplya extended the method to the 6th order [74], and Matyjasek-Opala brought it to the 13th order[56]. The principal idea is to match simultaneously exterior WKB solutions across the two turning points on the potential barrier, and this finally yields the WKB formula \[\frac{iV(x_{0})}{\sqrt{2V^{\prime\prime}(x_{0})}}-\sum_{i=2}^{N}\Lambda_{i}=n+ \frac{1}{2}, \tag{40}\] where \(n=0,1,2,...\) is the overtone number and \(N\) is the number of WKB order. \(\Lambda_{i}\) is the \(i\)-th correction term that depends only on the derivatives of \(V(x)\) evaluated at \(x_{0}\), and several formulas can be found in [67; 74]. In our framework, we substitute the \(V(x)\) by \(V_{sc}(r),V_{EM}(r)\) and \(V_{Dirac}(r)\) for the scalar, electromagnetic and Dirac fields, respectively, into (40), and then solve the equations to obtain the QNFs. ## Appendix B Matrix method In this appendix, we will show the main steps of matrix method [75; 76; 77] in calculating the QNFs from the three master equations for various perturbing fields. We uniform the three master equations as \[\frac{d^{2}K}{dr_{*}^{2}}+[\omega^{2}-V(r)]K=0, \tag{41}\] where \(K\) is the corresponding field variables, and the effective potential \(V(r)\) can be \(V_{sc}(r),V_{EM}(r)\) or \(V_{Dirac}(r)\) for the scalar, electromagnetic and Dirac field, respectively. To study the QNM spectrum, we impose the ingoing wave \(K\sim e^{-i\omega r_{*}}\) at horizon (\(r\to r_{+}\)), and the outgoing wave \(K\sim e^{i\omega r_{*}}\) at infinity (\(r\rightarrow\infty\)). Recalling (7), the above boundary conditions can be rewritten as \[K(r\to r_{+})\sim\begin{cases}(r-r_{+})^{-\frac{i\omega r_{*}^{2}}{Qr_{*}}}, \quad\text{for non--extreme black hole }(Q>-2M)\\ \\ (r-r_{+})^{-\frac{10}{3}i\omega r_{*}}e^{\frac{2i\omega r_{*}^{2}}{r-r_{*}}}, \quad\text{for extreme black hole }(Q=-2M)\end{cases}, \tag{42}\] and \[K(r\rightarrow\infty)\sim r^{2iM\omega}e^{i\omega r-\frac{1}{2}iQ\omega(\ln \frac{r}{2M})^{2}}. \tag{43}\] Thus, in order to make sure \(K\) satisfy the boundary conditions simultaneously, it is natural to redefine \(K\) in the following way \[K(r)=\begin{cases}(r-r_{+})^{-\frac{i\omega r_{*}^{2}}{Qr_{*}}}r^{\frac{i \omega r_{*}^{2}}{Qr_{*}}}r^{2iM\omega}e^{i\omega r-\frac{1}{2}iQ\omega(\ln \frac{r}{2M})^{2}}Z(r),\quad\text{for non--extreme black hole}\\ \\ (r-r_{+})^{-\frac{10}{3}i\omega r_{*}}e^{\frac{2i\omega r_{*}^{2}}{r-r_{*}}}r^{ \frac{10}{3}i\omega r_{*}}e^{-\frac{2i\omega r_{*}^{2}}{r}}r^{2iM\omega}e^{i \omega r-\frac{1}{2}iQ\omega(\ln\frac{r}{2M})^{2}}Z(r),\quad\text{for extreme black hole}\end{cases}. 
\tag{44}\] To proceed, we consider the coordinate transformation \[x(r)=\begin{cases}1-(r_{+}/r)^{1/3},\quad\text{for scalar and electromagnetic field}\\ \\ (1-(r_{+}/r)^{1/2})^{1/2},\quad\text{for Dirac field}\end{cases}, \tag{45}\] to bring the integration domain from \(r\in[r_{+},\infty]\) to \(x\in(0,1]\). Further by implementing the function transformation \[\chi(x)=x(1-x)Z(x), \tag{46}\] we can reform the equation (41) into \[A_{2}(x)\chi^{\prime\prime}(x)+A_{1}(x,\omega)\chi^{\prime}(x)+A_{0}(x,\omega) \chi(x)=0, \tag{47}\] where the expressions of \(A_{i}(i=0,1,2)\) are straightforward and will not present here. Subsequently, the boundary conditions are simplified as \[\chi(0)=\chi(1)=0. \tag{48}\] With the reformed master equation (47) and the boundary conditions (48) in hands, we can follow the standard steps of matrix method directly to obtain the eigenvalue \(\omega\) in it. Following is the main principles of the matrix method [75; 76; 77]. One first interpolates \(N\) grid points \(x_{1}=0<x_{2}<x_{3}<\cdots<x_{N}=1\) in the interval \(x\in[0,1]\), then by carrying out Taylor expansion of \(\chi\) at anyone of these grid points \(x_{k}(k=1,2,\cdots,N)\), one has \[\chi(x)-\chi(x_{k})=(x-x_{k})\chi^{\prime}(x_{k})+\frac{1}{2}(x-x_{k})^{2} \chi^{\prime\prime}(x_{k})+\frac{1}{3!}(x-x_{k})^{3}\chi^{\prime\prime\prime }(x_{k})+\cdots, \tag{49}\] Finally, by taking \(x=x_{j}(j=1,2,\cdots,k-1,k+1,\cdots,N)\), one can get a matrix equation \[\Delta F=MD, \tag{50}\] where \[M=\begin{pmatrix}x_{1}-x_{k}&\frac{(x_{1}-x_{k})^{2}}{2}&\cdots&\frac{(x_{1}- x_{k})^{k}}{k!}&\cdots&\frac{(x_{1}-x_{k})^{N-1}}{(N-1)!}\\ x_{2}-x_{k}&\frac{(x_{2}-x_{k})^{2}}{2}&\cdots&\frac{(x_{2}-x_{k})^{k}}{k!}& \cdots&\frac{(x_{2}-x_{k})^{N-1}}{(N-1)!}\\...&...&...&...&...\\ x_{k-1}-x_{k}&\frac{(x_{k-1}-x_{k})^{2}}{2}&\cdots&\frac{(x_{k-1}-x_{k})^{k}}{k! }&\cdots&\frac{(x_{k-1}-x_{k})^{N-1}}{(N-1)!}\\ x_{k+1}-x_{k}&\frac{(x_{k+1}-x_{k})^{2}}{2}&\cdots&\frac{(x_{k-1}-x_{k})^{k}}{k! }&\cdots&\frac{(x_{k-1}-x_{k})^{N-1}}{(N-1)!}\\...&...&...&...&...\\ x_{N}-x_{k}&\frac{(x_{N}-x_{k})^{2}}{2}&\cdots&\frac{(x_{N}-x_{k})^{k}}{k!}& \cdots&\frac{(x_{N}-x_{k})^{N-1}}{(N-1)!}\end{pmatrix},\] \[D=\left(\chi^{\prime}(x_{k}),\chi^{\prime\prime}(x_{k}),\cdots,\chi^{(k)}(x_{ k}),\cdots,\chi^{(N-1)}(x_{k})\right)^{T}.\] It is more convenient to express \(\chi^{\prime}(x_{k})\) and \(\chi^{\prime\prime}(x_{k})\) as the linear combination of the function value of \(\chi\) at each grid point using the Cramer rule, \[\chi^{\prime}(x_{k})=\det(M_{1})/\det(M), \tag{51}\] \[\chi^{\prime\prime}(x_{k})=\det(M_{2})/\det(M),\] where the matrix \(M_{i}(i=1,2)\) is constructed with replacing the \(i\)'th column of matrix \(M\) by \(\Delta F\). In this way, one can finally transform the master equation (47) into a matrix equation \[\bar{\mathscr{M}}(\omega)\mathscr{F}=0, \tag{52}\] where \(\mathscr{F}=(\chi(x_{1}),\chi(x_{2}),\ldots,\chi(x_{N}))^{T}\). Considering the boundary conditions (48), the above matrix equation take the forms \[\mathscr{M}(\omega)\mathscr{F}=0, \tag{53}\] where \[\mathscr{M}_{ij}=\begin{cases}\delta_{ij},&i=1,N\\ \bar{\mathscr{M}}_{ij},&i=2,3,\ldots,N-1\end{cases}. \tag{54}\] Consequently, the condition that Eq.(53) has nonvanishing root is the validity of the algebra equation \[det(\mathscr{M}(\omega))=0, \tag{55}\] by solving which, one obtains the eigenvalue \(\omega\) as the quasinormal frequencies. 
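For orientation, the lowest-order version of the WKB formula (40) of Appendix A already gives a reasonable estimate of the fundamental frequencies at moderate \(\ell\): \(\omega^{2}\approx V_{0}-i(n+\tfrac{1}{2})\sqrt{-2V_{0}^{\prime\prime}}\), with the peak value and the second derivative taken along the tortoise coordinate. The sketch below (Python; names are ours, M = 1/2 and Q = -0.5 as in TABLE 2) implements just this leading term for the scalar potential; the values in TABLE 2 are obtained with much higher WKB orders, Padé resummation and the matrix method, so only rough agreement should be expected.

```python
import numpy as np

M, Q = 0.5, -0.5                        # values used in TABLE 2

def f(r):  return 1.0 - 2*M/r + (Q/r)*np.log(r/(2*M))
def fp(r): return (2*M + Q - Q*np.log(r/(2*M)))/r**2
def V_sc(r, l): return f(r)*(l*(l + 1)/r**2 + fp(r)/r)   # scalar potential, Eq. (6)

def wkb1_qnf(l, n=0, eps=1e-5):
    """Leading-order estimate  omega^2 = V0 - i (n + 1/2) sqrt(-2 V0''),
    with the peak and the second derivative taken along the tortoise coordinate."""
    r = np.linspace(2*M*(1 + 1e-6), 30*M, 200000)
    r0 = r[np.argmax(V_sc(r, l))]
    dV = lambda x: f(x)*(V_sc(x + eps, l) - V_sc(x - eps, l))/(2*eps)
    V0pp = f(r0)*(dV(r0 + eps) - dV(r0 - eps))/(2*eps)
    return np.sqrt(V_sc(r0, l) - 1j*(n + 0.5)*np.sqrt(-2*V0pp))

for l in (2, 3, 4):
    w = wkb1_qnf(l)
    print(f"l = {l}:  omega ~ {w.real:.4f} {w.imag:+.4f} i")
```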
## Appendix C Time domain integration In order to illustrate the properties of QNMs from the propagation of the various fields, we shift our analysis to the time domain. To this end, we recast the Schrödinger-like equations (Eqs. (8), (13) and (22)) into time-dependent form by simply replacing \(\omega^{2}\) with \(-d^{2}/dt^{2}\), which gives the unified second-order partial differential equation for the various perturbed fields \[\left(-\frac{d^{2}}{dt^{2}}+\frac{d^{2}}{dr_{*}^{2}}-V(r)\right)\Psi(t,r)=0. \tag{56}\] To solve this equation, one has to deal with a time-dependent evolution problem. A convenient way is to adopt the finite difference method [78]: the wave-like equation is integrated numerically in the time coordinate, with the spatial configuration at the initial time fixed by a Gaussian wave packet. To this end, one first discretizes the radial coordinate using the definition of the tortoise coordinate, \[\frac{dr(r_{*})}{dr_{*}}=f(r(r_{*}))\Rightarrow\frac{r(r_{*j}+\Delta r_{*})-r(r_{*j})}{\Delta r_{*}}=\frac{r_{j+1}-r_{j}}{\Delta r_{*}}=f(r_{j})\Rightarrow r_{j+1}=r_{j}+\Delta r_{*}f(r_{j}). \tag{57}\] A list \(\{r_{j}\}\) is thus generated once one chooses the seed \(r_{0}=r_{horizon}+\epsilon\) and a grid interval \(\Delta r_{*}\). One can then discretize the effective potential as \(V(r(r_{*}))=V(j\Delta r_{*})\equiv V_{j}\) and the field as \(\Psi(t,r)=\Psi(j\Delta r_{*},\,i\Delta t)\equiv\Psi_{j,i}\). The wave-like equation (56) then becomes the discretized equation \[-\frac{\Psi_{j,i+1}-2\Psi_{j,i}+\Psi_{j,i-1}}{\Delta t^{2}}+\frac{\Psi_{j+1,i }-2\Psi_{j,i}+\Psi_{j-1,i}}{\Delta r_{*}^{2}}-V_{j}\Psi_{j,i}+\mathcal{O}( \Delta t^{2})+\mathcal{O}(\Delta r_{*}^{2})=0, \tag{58}\] from which one can isolate \(\Psi_{j,i+1}\) after some algebra: \[\Psi_{j,i+1}=\frac{\Delta t^{2}}{\Delta r_{*}^{2}}\left(\Psi_{j+1,i}+\Psi_{j-1,i}\right)+\left(2-2\frac{\Delta t^{2}}{\Delta r_{*}^{2}}-\Delta t^{2}V_{j}\right)\Psi_{j,i}-\Psi_{j,i-1}. \tag{59}\] This is an iterative equation, which can be solved once a Gaussian wave packet \(\Psi_{j,0}\) is given as the initial perturbation. In our calculations, we set \(\epsilon=10^{-15}\), \(\Delta r_{*}=0.2\), \(\Delta t=0.1\), \(\Psi_{j,0}=\exp[-\frac{(r_{j}-0)^{2}}{8}]\) and \(\Psi_{j,i<0}=0\); similar settings have also been used in [79; 80; 81] and references therein.
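As a minimal illustration of the iteration (59), the sketch below evolves a Gaussian pulse with the grid spacings quoted above (\(\Delta r_{*}=0.2\), \(\Delta t=0.1\)). A Pöschl-Teller potential written directly in the tortoise coordinate is used as a stand-in for \(V_{j}\); for the actual black-hole potentials one would first generate \(r(r_{*})\) from Eq. (57). The observer position and the windows used for the decay-rate estimate are arbitrary choices.

```python
# Time-domain iteration of Eq. (59) with a stand-in Poschl-Teller potential.
import numpy as np

dr, dt = 0.2, 0.1                          # grid spacings quoted in the text
rstar = np.arange(-100.0, 100.0, dr)       # tortoise-coordinate grid
V = 1.0 / np.cosh(rstar) ** 2              # stand-in potential V_j
c = dt**2 / dr**2

psi_prev = np.zeros_like(rstar)            # Psi_{j,i<0} = 0
psi_now = np.exp(-(rstar - 0.0) ** 2 / 8)  # Gaussian initial data Psi_{j,0}

k_obs = np.argmin(np.abs(rstar - 10.0))    # record the field at r_* = 10
signal = []
for i in range(1500):
    psi_next = np.zeros_like(psi_now)
    psi_next[1:-1] = (c * (psi_now[2:] + psi_now[:-2])
                      + (2 - 2 * c - dt**2 * V[1:-1]) * psi_now[1:-1]
                      - psi_prev[1:-1])
    psi_prev, psi_now = psi_now, psi_next
    signal.append(abs(psi_now[k_obs]))

# Crude decay-rate estimate from the ringdown envelope; for this stand-in
# potential the exact fundamental mode has Im(omega) = -0.5.
peak1 = max(signal[200:280])               # window around t ~ 20-28
peak2 = max(signal[500:580])               # window around t ~ 50-58
print("approximate Im(omega):", (np.log(peak2) - np.log(peak1)) / (300 * dt))
```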
2309.14019
On a class of strong valid inequalities for the connected matching polytope
We identify a family of $O(|E(G)|^2)$ nontrivial facets of the connected matching polytope of a graph $G$, that is, the convex hull of incidence vectors of matchings in $G$ whose covered vertices induce a connected subgraph. Accompanying software to further inspect the polytope of an input graph is available.
Phillippe Samer
2023-09-25T10:34:57Z
http://arxiv.org/abs/2309.14019v2
# On a class of strong valid inequalities for the connected matching polytope ###### Abstract We identify a family of \(O(|E(G)|^{2})\) nontrivial facets of the connected matching polytope of a graph \(G\), that is, the convex hull of incidence vectors of matchings in \(G\) whose covered vertices induce a connected subgraph. Accompanying software to further inspect the polytope of an input graph is available. ## 1 Introduction Our goal with this paper is to bring attention to an interesting polytope, and contribute towards improved algorithms for a recent combinatorial optimization problem defined over it. A \(\mathsf{P}\)-matching in a graph \(G\) is a matching \(M\) such that the subgraph induced by vertices covered by \(M\) has property \(\mathsf{P}\), _e.g._ being connected. In particular, while finding a maximum cardinality _connected matching_ is a well-solved problem, the edge-weighted counterpart is NP-hard even in very restricted graph classes (Gomes et al., 2023). We initiate the systematic study of the polytope \(\mathfrak{C}(G)\) of connected matchings in a general graph \(G\), introducing a relevant class of facet-defining inequalities. Early examples of studies on \(\mathsf{P}\)-matching problems include Stockmeyer and Vazirani (1982) on induced matchings, Golumbic et al. (2001) on uniquely restricted matchings, and Goddard et al. (2005) contemplating acyclic, connected and disconnected matchings. While different \(\mathsf{P}\)-matching problems receive increased attention in the recent literature, we highlight Gomes et al. (2022) and Gomes et al. (2023) on weighted connected matchings, who were able to determine several fine grained complexity results. In particular, they show that it is NP-hard to find a maximum weight connected matching even in bounded-diameter bipartite and planar bipartite graphs. The main argument for our contribution is to bring the machinery of polyhedral studies and mixed-integer programming (MIP) to bear on the investigation of weighted connected matchings in general graphs. In light of decades' worth of progress on matching theory and on the effective use of strong valid inequalities in MIP solvers, the combinatorial analysis of the facial structure of polytope \(\mathfrak{C}(G)\) is a natural methodology. On that perspective, the key idea we present next is a powerful ingredient in that direction. We remark that all polyhedra in this work are rational, and that we do not leave the unit hypercube. Nearly all terminology and notation we use are standard in graph theory and polyhedral combinatorics. The following might be worth mentioning. We write \([k]\stackrel{{\text{def}}}{{=}}\{1,\ldots,k\}\). Given a graph \(G\), we denote its _line graph_ by \(L(G)\), and define the _distance between two edges_ in \(G\) as \(d_{L}:E(G)\times E(G)\rightarrow\mathbb{Z}_{+}\) so that \(d_{L}(e_{1},e_{2})\) equals the number of edges in a shortest path between \(e_{1}\) and \(e_{2}\) in \(L(G)\). Given a subset of edges \(S\subseteq E(G)\), we denote by \(\chi^{S}\) its _incidence_ (or _characteristic_) _vector_ in space \(\mathbb{Q}^{|E(G)|}\), with \(\chi^{S}_{e}=1\) for each \(e\in S\), and \(\chi^{S}_{e}=0\) otherwise. First, it is convenient to clear implied equations out of systems of inequalities in studies of the connected matching polytope. 
Using the standard argument that the unit vectors \(\left\{\chi^{\{e\}}:e\in E(G)\right\}\) and \(\mathbf{0}\in\mathbb{Q}^{|E(G)|}\) are affinely independent and induce trivial incidence vectors of connected matchings in \(G\), we have the following result. **Proposition 1**.: _The connected matching polytope \(\mathfrak{C}(G)\) of an arbitrary graph \(G\) is full-dimensional._ Our main result is the following. **Theorem 2**.: _Let \(G\) be a connected graph and \(\{e^{\prime},e^{\prime\prime}\}\subset E(G)\) be a disconnected matching. Denote by \(\Lambda\stackrel{{\text{def}}}{{=}}\Lambda(e^{\prime},e^{\prime \prime})=\{f\in E(G):d_{L}(f,e^{\prime})=d_{L}(f,e^{\prime\prime})=2\}\) the corresponding set of edges at two hops from both \(e^{\prime}\) and \(e^{\prime\prime}\). Suppose, further, that there is no connected matching including both \(e^{\prime}\) and \(e^{\prime\prime}\) in \(G-\Lambda\) (i.e. the subgraph of \(G\) without edges in \(\Lambda\)). Then, the inequality_ \[x_{e^{\prime}}+x_{e^{\prime\prime}}-\sum_{f\in\Lambda}x_{f}\leq 1 \tag{1}\] _is valid for \(\mathfrak{C}(G)\). Moreover, (1) defines a facet when \(\Lambda\) induces a clique in \(L(G)\) and the subgraph induced by \(\{e^{\prime},e^{\prime\prime},f\}\) is 2-connected for each \(f\in\Lambda\)._ The proof is saved for the next section. Let us first illustrate how important it may be to consider this small set of facets (note that we have at most one inequality for each pair of edges). There are many options for modelling induced connectivity. Progress in mathematical programming computation of structures like maximum-weight connected subgraphs and Steiner trees build on vertex choosing binary variables \(y\in\{0,1\}^{|V(G)|}\) and _minimal separator inequalities_ (MSI): \(y_{a}+y_{b}-\sum_{u\in\mathcal{C}}y_{u}\leq 1\) for each pair of non-adjacent vertices \(a\) and \(b\), and each \((a,b)\)-separator \(\mathcal{C}\subseteq V\backslash\{a,b\}\), _i.e._ there are no paths connecting \(a\) to \(b\) if we remove \(\mathcal{C}\) from \(G\). See Wang et al. (2017) for a thorough polyhedral analysis, and Fischetti et al. (2017) for supporting experimental results of an award-winning solver for Steiner tree problems. In an attempt to build on those results, and impose induced connectivity on a system of inequalities formulating connected matchings while using only natural design variables \(x\in\{0,1\}^{|E(G)|}\), as opposed to working with extended formulations, one may use the fact that vertex \(u\) belongs to the subgraph induced by matching \(M\) if and only if there is exactly one edge in \(M\) incident to \(u\). That is, projecting MSI onto the space of \(x\) variables using \(y_{u}\stackrel{{\text{def}}}{{=}}\sum_{e\in\delta(u)}x_{e}\), we derive a MIP formulation to find maximum weight connected matchings using MSI. We are currently pursuing that endeavour and crafting a branch-and-cut algorithm for weighted connected matchings. In the meantime, we inspected the convex hull \(\mathfrak{C}(G)\) for several examples using polymake(Gawrilow and Joswig, 2000; Assarf et al., 2017). Remarkably, we discovered fine examples where a single inequality in (1) dominates several MSI. 
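As a sanity check on the objects appearing in Theorem 2, the brute-force sketch below (exponential in \(|E(G)|\), so only meant for small instances) computes \(\Lambda(e^{\prime},e^{\prime\prime})\) from line-graph distances, enumerates all connected matchings of a graph, and verifies that inequality (1) holds at every incidence vector. It relies on the networkx library and uses a small cycle as a test graph; neither is part of the paper's accompanying software.

```python
# Brute-force sketch: Lambda(e', e'') from line-graph distances, enumeration of
# connected matchings, and a validity check of inequality (1).
from itertools import combinations
import networkx as nx

def line_distances(G, e):
    """Distances d_L(e, .) measured in the line graph L(G)."""
    L = nx.line_graph(G)
    f = e if e in L else (e[1], e[0])        # match networkx's edge orientation
    return {tuple(sorted(g)): d
            for g, d in nx.shortest_path_length(L, source=f).items()}

def Lambda(G, e1, e2):
    d1, d2 = line_distances(G, e1), line_distances(G, e2)
    return {g for g in d1 if d1[g] == 2 and d2.get(g) == 2}

def connected_matchings(G):
    edges = [tuple(sorted(e)) for e in G.edges()]
    for k in range(len(edges) + 1):
        for M in combinations(edges, k):
            covered = [v for e in M for v in e]
            if len(covered) != len(set(covered)):
                continue                     # shares an endpoint: not a matching
            if M and not nx.is_connected(G.subgraph(set(covered))):
                continue                     # matching, but not connected
            yield set(M)

def inequality_one_is_valid(G, e1, e2):
    lam = Lambda(G, e1, e2)
    e1, e2 = tuple(sorted(e1)), tuple(sorted(e2))
    return all((e1 in M) + (e2 in M) - sum(f in M for f in lam) <= 1
               for M in connected_matchings(G))

G = nx.cycle_graph(8)                        # small test graph (not from the paper)
print(Lambda(G, (0, 1), (4, 5)))             # edges two hops from both: (2,3) and (6,7)
print(inequality_one_is_valid(G, (0, 1), (4, 5)))   # True
```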
For instance, taking \(G_{J26}\) to be the skeleton graph of Johnson Solid 26, depicted in Figure 1, and studying the minimal inequality description of \(\mathfrak{C}(G_{J26})\), we detect 14 trivial facets from non-negativity bounds, 8 blossom inequalities on handles \(H_{i}=V(G)\backslash\{v_{i}\}\), some blossom inequalities on triangles, and 5 facets defined by our inequalities in (1). Among the latter, we find \[x_{5}+x_{13}-x_{2}-x_{3}\leq 1, \tag{2}\] which may be interpreted as the lifting of 4 different MSI corresponding to \(\mathcal{C}\stackrel{{\text{def}}}{{=}}\{v_{1},v_{3},v_{5},v_{7}\}\) as a minimal \((v_{a},v_{b})\)-separator for \((a,b)\in\{(2,6),(2,8),(4,6),(4,8)\}\). In other words, a MIP formulation adding inequality (2) _a priori_ gives a tighter approximation of \(\mathfrak{C}(G_{J26})\) than a formulation that depends solely on cutting planes from MSI projected onto \(x\) space, which could generate dynamically the cuts \(\big{\{}y_{v_{a}}+y_{v_{b}}-\sum_{z\in\mathcal{C}}y_{z}\leq 1\big{\}}\). For example, when \((a,b)=(2,6)\), the projected inequality is \[(x_{1}+x_{5}+x_{6}+x_{7})+(x_{11}+x_{12}+x_{13})\] \[-(x_{1}+x_{2}+x_{3}+x_{4})-(x_{2}+x_{8}+x_{9})\] \[-(x_{3}+x_{6}+x_{11})-(x_{7}+x_{10}+x_{12}+x_{14})\leq 1,\] that is, \(x_{5}+x_{13}-2x_{2}-2x_{3}-x_{4}-x_{8}-x_{9}-x_{10}-x_{14}\leq 1\), which is immediately seem to be dominated by (2). The productive exercise of plugging different input graphs \(G\) and inspecting the resulting polytope \(\mathfrak{C}(G)\) is made free and open to anyone interested. The code developed to find a \(\mathcal{V}\)-description of the polytope corresponding to an input graph, and then produce an \(\mathcal{H}\)-description with polymake, is available at [https://github.com/phillippesamer/wcm-branch-and-cut/tree/main/polyhedra](https://github.com/phillippesamer/wcm-branch-and-cut/tree/main/polyhedra), as is the forthcoming branch-and-cut algorithm. Before we resume to the main proof, we note that it may be straightforward in some particular cases to show that (1) induces a facet of the smaller polytope \(\mathfrak{C}(G[S])\), with \(S=\{e^{\prime},e^{\prime\prime},f\}\) and \(f\in\Lambda\). Nonetheless, this would only be interesting if coupled with a lifting result showing how to derive a facet of our target polytope \(\mathfrak{C}(G)\) from the smaller dimensional one. We skip that altogether and give next a direct proof of the general result. It is also important to remark that there are simple sufficient conditions to verify the theorem hypothesis that no connected matching including both \(e^{\prime}\) and \(e^{\prime\prime}\) exists in \(G-\Lambda\). For instance, it suffices that \(G-\Lambda\) does not contain a path joining the two edges. The latter is not, however, a necessary condition, as one can see using again the example underlying inequality (2) above. We leave the problem of finding a strict condition for future work, and conjecture that it may be verified efficiently by working on the subgraph obtained from \(G\) after removing \(\Lambda\), as well as those edges adjacent to either \(e^{\prime}\) or \(e^{\prime\prime}\), and using the fact that finding a maximum cardinality connected matching can be done in polytime. Figure 1: Skeleton graph of Johnson Solid 26. Proof of Theorem 2 _(i)_ The validity of inequality (1) follows from simple combinatorial reasoning. Let \(\chi^{S}\) be the incidence vector of an arbitrary connected matching \(S\). 
If \(e^{\prime}\not\in S\) or \(e^{\prime\prime}\not\in S\), the left-hand side of the expression in (1) is at most 1. On the other hand, if both \(e^{\prime}\in S\) and \(e^{\prime\prime}\in S\), the hypothesis that there is no connected matching including both \(e^{\prime}\) and \(e^{\prime\prime}\) in \(G-\Lambda\) implies that there must be an additional edge \(f\in\Lambda\) in \(S\), and again the left-hand side of the inequality is bound below 1 at \(\chi^{S}\). Since \(\chi^{S}\) is taken as an arbitrary vertex of the polytope, the inequality is valid at all points in \(\mathfrak{C}(G)\) by convexity. _(ii)_ For the facet proof, let \(F\stackrel{{\rm def}}{{=}}\left\{x\in\mathfrak{C}(G):x_{e^{ \prime}}+x_{e^{\prime\prime}}-\sum_{f\in\Lambda}x_{f}=1\right\}=\left\{x\in \mathfrak{C}(G):\pi x=\pi_{0}\right\}\) denote the face corresponding to inequality (1); vector \((\pi,\pi_{0})\) is just shorthand notation here. By the full-dimensionality observation in Proposition 1, there is no equation satisfied by all points in the polytope. It thus suffices to show that if \(F\subseteq\overline{F}\stackrel{{\rm def}}{{=}}\left\{x\in \mathfrak{C}(G):\lambda x=\lambda_{0}\right\}\), the defining inequalities \(\pi x\leq\pi_{0}\) and \(\lambda x\leq\lambda_{0}\) are actually equivalent (_i.e._ there exists \(\rho>0\) such that \(\lambda=\rho\pi\) and \(\lambda_{0}=\rho\pi_{0}\)), and hence the strict inclusion cannot hold (Nemhauser and Wolsey, 1999, Theorem I.4.3.6). Set \(\rho\stackrel{{\rm def}}{{=}}\lambda_{0}\). To prove the equivalence of the nonzero coefficients, let us denote by \(x^{1},x^{2},x^{3}\) the incidence vectors of the connected matchings consisting of \(\{e^{\prime}\}\), \(\{e^{\prime\prime}\}\), and \(\{e^{\prime},e^{\prime\prime},f_{1}\}\), respectively, with \(f_{1}\in\Lambda\) arbitrary. Note that each of these points belong to \(F\), and give a single nonzero coefficient when evaluating \(\lambda x=\lambda_{0}\): 1. From \(x^{1}\stackrel{{\rm def}}{{=}}\{x_{e^{\prime}}^{1}=1,x_{*}^{1}=0 \text{ for }*\in E\backslash\left\{e^{\prime}\right\}\}\in F\) we get \(\lambda x^{1}=\lambda_{e^{\prime}}\). Together with \(\lambda x^{1}=\lambda_{0}\), we obtain \(\lambda_{e^{\prime}}=\lambda_{0}\stackrel{{\rm def}}{{=}}\rho= \rho\cdot 1=\rho\cdot\pi_{e^{\prime}}\). 2. From \(x^{2}\stackrel{{\rm def}}{{=}}\{x_{e^{\prime\prime}}^{2}=1,x_{*}^ {2}=0\text{ for }*\in E\backslash\left\{e^{\prime\prime}\right\}\}\in F\) we get \(\lambda x^{2}=\lambda_{e^{\prime\prime}}\). Together with \(\lambda x^{2}=\lambda_{0}\), we obtain \(\lambda_{e^{\prime\prime}}=\lambda_{0}\stackrel{{\rm def}}{{=}} \rho=\rho\cdot 1=\rho\cdot\pi_{e^{\prime\prime}}\). 3. From \(x^{3}\stackrel{{\rm def}}{{=}}\{x_{e^{\prime}}^{3}=x_{e^{\prime \prime}}^{3}=x_{f_{1}}^{3}=1,x_{*}^{3}=0\text{ for }*\in E\backslash\left\{e^{\prime},e^{ \prime\prime},f_{1}\right\}\}\in F\) we get \(\lambda x^{3}=\lambda_{e^{\prime}}+\lambda_{e^{\prime\prime}}+\lambda_{f_{1}}\). Together with \(\lambda x^{3}=\lambda_{0}\), we have \(\lambda_{f_{1}}=\lambda_{0}-\lambda_{e^{\prime}}-\lambda_{e^{\prime\prime}}\). Now, using the coefficients determined in the previous two items, we find that \(\lambda_{f_{1}}=\lambda_{0}-\lambda_{0}-\lambda_{0}=\lambda_{0}\cdot(-1) \stackrel{{\rm def}}{{=}}\rho\cdot(-1)=\rho\cdot\pi_{f_{1}}\). 
Since the choice of \(f_{1}\in\Lambda\) is without loss of generality, we repeat the argument in the last item above using the incidence vectors of each of the connected matchings in \(\left\{\{e^{\prime},e^{\prime\prime},f\}:f\in\Lambda\right\}\), to actually determine that \(\lambda_{f}=\rho\cdot\pi_{f}\) for each \(f\in\Lambda\). That establishes the equivalence of nonzero coefficients. It remains to show that all other coefficients of \(\lambda\) are null. All we need for that are the following two remarks. **Fact 1**: Let \(\chi^{M}\in F\) be the incidence vector of matching \(M\). By the hypothesis that \(\Lambda\) induces a clique in \(L(G)\), we have that \(M\) contains at most one edge in \(\Lambda\). That enables us to judiciously order the edges in \(M=\{e_{1},\ldots,e_{m}\}\) so that (i) the edges in \(\overline{\Lambda}\stackrel{{\rm def}}{{=}}\{e^{\prime},e^{ \prime\prime}\}\cup\Lambda\) (either one or exactly three) appear first, and (ii) each edge added after we are done with \(\overline{\Lambda}\) yields an additional point in our face \(F\). Ultimately, we may apply the following simple observation a number of times: \[\chi^{\{e_{1},\ldots,e_{k}\}}\in F\subseteq\overline{F} \implies\lambda\cdot\chi^{\{e_{1},\ldots,e_{k}\}}=\overbrace{\sum_{i=1}^{k- 1}\lambda_{e_{i}}\cdot 1}^{=\lambda_{0}}+\lambda_{e_{k}}\cdot 1\ +\overbrace{\sum_{ \begin{subarray}{c}e\in E(G):\\ e\notin\{e_{1},\ldots,e_{k}\}\end{subarray}}^{=0}\lambda_{e}\cdot 0}^{=0}= \lambda_{0}\] \[\implies\lambda_{e_{k}}=0 \tag{3}\] **Fact 2**: For all \(e\in E(G)\), there exists a connected matching \(M\) including \(e\) such that \(\chi^{M}\in F\). Let \(S\) be the set of endpoints of edges in \(\overline{\Lambda}=\{e^{\prime},e^{\prime\prime}\}\cup\Lambda\), and consider the following two cases. * Suppose that \(e\in E(G[S])\). Taking \(M\) in \(\{\{e^{\prime},e^{\prime\prime},f\}:f\in\Lambda\}\), the claim follows immediately for \(e\in\overline{\Lambda}\), by definition of \(F\). Otherwise, if \(e=\{u,v\}\) is an induced edge in \(E(G[S])\backslash\overline{\Lambda}\), it joins a vertex from either \(e^{\prime}\) or \(e^{\prime\prime}\) to a vertex of some \(f\in\Lambda\). Suppose without loss of generality that \(u\in e^{\prime}\) and \(v\in f\). Since the subgraph induced by \(\{e^{\prime},e^{\prime\prime},f\}\) is 2-connected by the hypothesis in the theorem, the matching \(M\stackrel{{\rm def}}{{=}}\{e,e^{\prime\prime}\}\) is connected, and \(\chi^{M}\in F\). * If \(\hat{e}\not\in E(G[S])\), we use the assumption that \(G\) is connected and take a simple path \(P=(e_{1},\ldots,e_{n})\) on \(n\) edges beginning at \(e_{1}=\hat{e}\), and minimal with respect to including an edge in \(E(G[S])\), that is, \(e_{n}\) is the only edge in \(P\) which is also in the subgraph induced by vertices covered by \(\left\{e^{\prime},e^{\prime\prime},\hat{f}\right\}\), for some \(\hat{f}\in\Lambda\). Now, we let \(\hat{M}\) be the matching alternating edges along \(P\), _requiring_\(\hat{e}\in\hat{M}\), and reason on the parity of the path length. Suppose first that \(n\) is odd, so that \(e_{n}\) is also in \(\hat{M}\). If \(e_{n}\) is either \(e^{\prime}\) or \(e^{\prime\prime}\), we are done. If \(e_{n}=\hat{f}\), we set \(M\stackrel{{\rm def}}{{=}}\hat{M}\uplus\{e^{\prime},e^{\prime \prime}\}\), and we are done. Otherwise, \(e_{n}=\{u,v\}\) as in case (i): without loss of generality, let \(e_{n}\) join \(u\in e^{\prime}\) and \(v\in\hat{f}\). 
Using again the hypothesis that the subgraph induced by \(\left\{e^{\prime},e^{\prime\prime},\hat{f}\right\}\) is 2-connected, choose \(M\stackrel{{\rm def}}{{=}}\hat{M}\uplus\{e^{\prime\prime}\}\), and we are done. Suppose now that \(n\) is even, and thus \(e_{n}\not\in\hat{M}\). By the 2-connectivity hypothesis, we may augment \(\hat{M}\) to get a connected matching with exactly one of \(e^{\prime}\) or \(e^{\prime\prime}\), whose characteristic vector is thus in the face \(F\). If \(e_{n-1}\) is incident to a vertex of \(e^{\prime}\), we set \(M\stackrel{{\mathrm{def}}}{{=}}\hat{M}\uplus\left\{e^{\prime\prime},e ^{\dagger}\right\}\), where \(e^{\dagger}\) joins the endpoint of \(e^{\prime}\) not covered by \(\hat{M}\) to a vertex of \(\hat{f}\). The analogous argument holds when \(e_{n-1}\) is incident to a vertex of \(e^{\prime\prime}\). Finally, if \(e_{n-1}\) is incident to a vertex of \(\hat{f}\), we simply choose \(M\stackrel{{\mathrm{def}}}{{=}}\hat{M}\uplus\left\{e^{\prime}\right\}\). To complete the proof, we consider each edge \(e_{k}\not\in\overline{\Lambda}\). By Fact 2, we obtain a connected matching \(M\) including \(e_{k}\) such that \(\chi^{M}\in F\). Using (3) in Fact 1, we conclude that \(\lambda_{e_{k}}=0\). Hence \((\lambda,\lambda_{0})=(\rho\pi,\rho\pi_{0})\), and \(F\) determines a facet of \(\mathfrak{C}(G)\). AcknowledgementThe author is grateful to the support by the Research Council of Norway through the research project 249994 CLASSIS. This work is dedicated to the sweet memory of our department administration member Ingrid Kyllingmark.
2309.07703
Causal Entropy and Information Gain for Measuring Causal Control
Artificial intelligence models and methods commonly lack causal interpretability. Despite the advancements in interpretable machine learning (IML) methods, they frequently assign importance to features which lack causal influence on the outcome variable. Selecting causally relevant features among those identified as relevant by these methods, or even before model training, would offer a solution. Feature selection methods utilizing information theoretical quantities have been successful in identifying statistically relevant features. However, the information theoretical quantities they are based on do not incorporate causality, rendering them unsuitable for such scenarios. To address this challenge, this article proposes information theoretical quantities that incorporate the causal structure of the system, which can be used to evaluate causal importance of features for some given outcome variable. Specifically, we introduce causal versions of entropy and mutual information, termed causal entropy and causal information gain, which are designed to assess how much control a feature provides over the outcome variable. These newly defined quantities capture changes in the entropy of a variable resulting from interventions on other variables. Fundamental results connecting these quantities to the existence of causal effects are derived. The use of causal information gain in feature selection is demonstrated, highlighting its superiority over standard mutual information in revealing which features provide control over a chosen outcome variable. Our investigation paves the way for the development of methods with improved interpretability in domains involving causation.
Francisco Nunes Ferreira Quialheiro Simoes, Mehdi Dastani, Thijs van Ommen
2023-09-14T13:25:42Z
http://arxiv.org/abs/2309.07703v2
# Causal Entropy and Information Gain for Measuring Causal Control ###### Abstract Artificial intelligence models and methods commonly lack causal interpretability. Despite the advancements in interpretable machine learning (IML) methods, they frequently assign importance to features which lack causal influence on the outcome variable. Selecting causally relevant features among those identified as relevant by these methods, or even before model training, would offer a solution. Feature selection methods utilizing information theoretical quantities have been successful in identifying statistically relevant features. However, the information theoretical quantities they are based on do not incorporate causality, rendering them unsuitable for such scenarios. To address this challenge, this article proposes information theoretical quantities that incorporate the causal structure of the system, which can be used to evaluate causal importance of features for some given outcome variable. Specifically, we introduce causal versions of entropy and mutual information, termed causal entropy and causal information gain, which are designed to assess how much control a feature provides over the outcome variable. These newly defined quantities capture changes in the entropy of a variable resulting from interventions on other variables. Fundamental results connecting these quantities to the existence of causal effects are derived. The use of causal information gain in feature selection is demonstrated, highlighting its superiority over standard mutual information in revealing which features provide control over a chosen outcome variable. Our investigation paves the way for the development of methods with improved interpretability in domains involving causation. Keywords:Causal Inference Information Theory Interpretable Machine Learning Explainable Artificial Intelligence ## 1 Introduction Causality plays an important role in enhancing not only the prediction power of a model [19] but also its interpretability [4]. Causal explanations are more appropriate for human understanding than purely statistical explanations [12]. Accordingly, comprehending the causal connections between the variables of a system can enhance the interpretability of interpretable machine learning (IML) methods themselves. Interpretable models such as linear regression or decision trees do not, despite their name, always lend themselves to _causal_ interpretations. To illustrate this point, consider running multilinear regression on the predictors \(X_{1},X_{2}\) and outcome \(Y\) within a system whose variables are causally related as depicted in the graph of Figure 1. The regression coefficients \(\beta_{1}\) and \(\beta_{2}\) of \(X_{1}\) and \(X_{2}\) might yield large values, which may be (and are often in practice) interpreted as suggesting a causal relationship. However, a causal interpretation of \(\beta_{1}\) would not be appropriate. Although \(X_{1}\) might provide predictive power over \(Y\), this does not imply a causal relationship, since this predictive power is due to the confounder \(W\). Consequently, intervening on \(X_{1}\) would not impact the outcome \(Y\). In current model-agnostic methods, a causal interpretation is often desirable but rarely possible. In partial dependence plots (PDPs) [6], the partial dependence of a model outcome \(\hat{Y}\) on a variable \(X_{i}\) coincides with the backdoor criterion formula [15] when the conditioning set encompasses all the other covariates \(X_{j\neq i}\)[24]. 
Consequently, there is a risk of disregarding statistical dependence or, conversely, finding spurious dependence, by conditioning on causal descendants of \(X_{i}\)[24]. Therefore, PDPs (along with the closely related individual conditional expectation (ICE) lines [7]) generally lack a causal interpretation. Similarly, when utilizing (Local Interpretable Model-Agnostic Explanations) LIME [18] to evaluate the importance of a feature for an individual, a causal interpretation cannot be guaranteed. LIME fits a local model around the point of interest and assesses which features, when perturbed, would cause the point to cross the decision boundary of the model. However, intervening on a feature in such a way as to cross the model's decision boundary does not guarantee an actual change in the outcome in reality. This is because the model was trained on observational data, and that feature may merely be correlated with the outcome through a confounding factor, for example, rather than having a causal effect on the outcome. In both cases just described, it is the presence of confounders, selection bias, or an incorrect direction of causality seemingly implied by the model that can lead to misleading predictions and interpretations. We need a way to select which features are causally relevant -- _i.e._ give us control over the chosen outcome variable. Information theoretical quantities such as mutual information are often used to assess the relevance of a feature with respect to a given outcome variable [20, 2, 25], but this relevance is still purely statistical. This is a common issue when using standard information theoretical quantities in situations that require consideration of the underlying causal relationships. A version of mutual information which takes into account the causal structure of the system would solve this problem. This is what we set out to develop in this work. In our research, we extend traditional conditional entropy and mutual information to the realm of _interventions_, as opposed to simple conditioning. This extension drew inspiration from the conceptual and philosophical work presented in1[8]. We dub these constructs "causal entropy" and "causal information gain". They are designed to capture changes in the entropy of a given variable in re sponse to manipulations affecting other variables. We derive fundamental results connecting these quantities to the presence of causal effect. We end by illustrating the use of causal information gain in selecting a variable which allows us to control an outcome variable, and contrast it with standard mutual information. The novelty of our work consists of providing rigorous definitions for causal entropy and causal information gain, as well as deriving some of their key properties for the first time. These contributions set the foundations for the development of methods which correctly identify features which provide causal control over an outcome variable. This paper is organized as follows. In Section 2, we introduce the definitions of quantities from the fields of causal inference and information theory that will be used throughout the rest of the paper. Section 3 includes a simple example of a structural causal model where standard entropy and mutual information are inadequate for obtaining the desired causal insights. In Section 4, we define causal entropy and explore its relation to total effect. Section 5 discusses the definition of causal information gain and investigates its connection with causal effect. 
Furthermore, it revisits the example from Section 3, showing that causal entropy and causal information gain allow us to arrive at the correct conclusions about causal control. In Section 6, we compare the definitions and results presented in this paper with those of previous work. Finally, in Section 7, we discuss the obtained results and propose future research directions. ## 2 Formal Setting In this section we present the definitions from causal inference and information theory which are necessary for the rest of this paper. All random variables are henceforth assumed to be discrete and have finite range. ### Structural Causal Models One can model the causal structure of a system by means of a "structural causal model", which can be seen as a Bayesian network [10] whose graph \(G\) has a causal interpretation and each conditional probability distribution (CPD) \(P(X_{i}\mid\mathrm{PA}_{X_{i}})\) of the Bayesian network stems from a deterministic function \(f_{X_{i}}\) (called "structural assignment") of the parents of \(X_{i}\). In this context, it is common to separate the parent-less random variables (which are called "exogenous" or "noise" variables) from the rest (called "endogenous" variables). Only the endogenous variables are represented in the structural causal model graph. As is commonly done [16], we assume that the noise variables are jointly independent and that exactly one noise variable \(N_{X_{i}}\) appears as an argument in the structural assignment \(f_{X_{i}}\) of \(X_{i}\). In full rigor2[16]: Footnote 2: We slightly rephrase the definition provided in [16] to enhance its clarity. Definition 1 (Structural Causal Model): Let \(X\) be a random variable with range \(R_{X}\) and \(\mathbf{W}\) a random vector with range \(R_{\mathbf{W}}\). A structural assignment for \(X\) from \(\mathbf{W}\) is a function \(f_{X}\colon R_{\mathbf{W}}\to R_{X}\). A structural causal model (SCM) \(\mathcal{C}=(\mathbf{X},\mathbf{N},S,p_{\mathbf{N}})\) consists of:_ 1. _A random vector_ \(\mathbf{X}=(X_{1},\ldots,X_{n})\) _whose variables we call_ endogenous_._ 2. _A random vector_ \(\mathbf{N}=(N_{X_{1}},\ldots,N_{X_{n}})\) _whose variables we call_ exogenous _or_ noise_._ 3. _A set_ \(S\) _of_ \(n\) _structural assignments_ \(f_{X_{i}}\) _for_ \(X_{i}\) _from (_PA\({}_{X_{i}},N_{X_{i}}\)_), where_ PA\({}_{X_{i}}\subseteq\mathbf{X}\) _are called_ parents _of_ \(X_{i}\)_. The_ causal graph__\(G^{\mathcal{C}}:=(\mathbf{X},E)\) _of_ \(\mathcal{C}\) _has as its edge set_ \(E=\{(P,X_{i}):X_{i}\in\mathbf{X},\ P\in\text{PA}_{X_{i}}\}\)_. The_ PA\({}_{X_{i}}\) _must be such that the_ \(G^{\mathcal{C}}\) _is a directed acyclic graph (DAG)._ 4. _A jointly independent probability distribution_ \(p_{\mathbf{N}}\) _over the noise variables. We call it simply the_ noise distribution_._ We denote by \(\mathcal{C}(\mathbf{X})\) the set of SCMs with vector of endogenous variables \(\mathbf{X}\). Furthermore, we write \(X:=f_{X}(X,N_{X})\) to mean that \(f_{X}(X,N_{X})\) is a structural assignment for \(X\). Notice that for a given SCM the noise variables have a known distribution \(p_{\mathbf{N}}\) and the endogenous variables can be written as functions of the noise variables. Therefore the distributions of the endogenous variables are themselves determined if one fixes the SCM. This brings us to the notion of the entailed distribution [2][16]: Definition 2 (Entailed distribution): Let \(\mathcal{C}=(\mathbf{X},\mathbf{N},S,p_{\mathbf{N}})\) be an SCM. 
Its _entailed distribution_\(p_{\mathbf{X}}^{\mathcal{C}}\) is the unique joint distribution over \(\mathbf{X}\) such that \(\forall X_{i}\in\mathbf{X},\ X_{i}=f_{X_{i}}(\text{PA}_{X_{i}},N_{X_{i}})\). It is often simply denoted by \(p^{\mathcal{C}}\). Let \(\mathbf{x}_{-i}:=(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n})\). For a given \(X_{i}\in\mathbf{X}\), the marginalized distribution \(p_{X_{i}}^{\mathcal{C}}\) given by \(p_{X_{i}}^{\mathcal{C}}(x_{i})=\sum_{\mathbf{x}_{-i}}p_{\mathbf{X}}^{\mathcal{ C}}(\mathbf{x})\) is also referred to as _entailed distribution (of \(X_{i}\))._ An SCM allows us to model interventions on the system. The idea is that an SCM represents how the values of the random variables are generated, and by intervening on a variable we are effectively changing its generating process. Thus intervening on a variable can be modeled by modifying the structural assignment of said variable, resulting in a new SCM differing from the original only in the structural assignment of the intervened variable, and possibly introducing a new noise variable for it, in place of the old one. Naturally, the new SCM will have an entailed distribution which is in general different from the distribution entailed by the original SCM. The most common type of interventions are the so-called "atomic interventions", where one sets a variable to a chosen value, effectively replacing the distribution of the intervened variable with a point mass distribution. In particular, this means that the intervened variable has no parents after the intervention. This is the only type of intervention that we will need to consider in this work. Formally [2][16]: Definition 3 (Atomic intervention): Let \(\mathcal{C}=(\mathbf{X},\mathbf{N},S,p_{\mathbf{N}})\) be an SCM, \(X_{i}\in\mathbf{X}\) and \(x\in R_{X_{i}}\). The _atomic intervention_\(do(X_{i}=x)\) is the function \(\mathcal{C}(\mathbf{X})\rightarrow\mathcal{C}(\mathbf{X})\) given by \(\mathcal{C}\mapsto\mathcal{C}^{do(X_{i}=x)}\), where \(\mathcal{C}^{do(X_{i}=x)}\) is the SCM that differs from \(\mathcal{C}\) only in that the structural assignment \(f_{X_{i}}(\mathit{PA}_{X_{i}},N_{X_{i}})\) is replaced by the structural assignment \(\tilde{f}_{X_{i}}(\tilde{N}_{X_{i}})=\tilde{N}_{X_{i}}\), where \(\tilde{N}_{X_{i}}\) is a random variable with range \(R_{X_{i}}\) and3\(p_{\tilde{N}_{X_{i}}}(x_{i})=\mathbf{1}_{x}(x_{i})\) for all \(x_{i}\in R_{X_{i}}\). Such SCM is called the _post-atomic-intervention SCM_. One says that the variable \(X_{i}\) was _(atomically) intervened on_. The distribution \(p^{do(X_{i}=x)}:=p^{\mathcal{C}^{do(X_{i}=x)}}\) entailed by \(\mathcal{C}^{do(X_{i}=x)}\) is called the _post-intervention distribution (w.r.t. the atomic intervention \(\mathit{do}(X_{i}=x)\) on \(\mathcal{C}\)). Footnote 3: We denote by \(\mathbf{1}_{x}\) the indicator function of \(x\), so that \(\mathbf{1}_{x}(x_{i})=\begin{cases}1,&x_{i}=x\\ 0,&\text{otherwise}\end{cases}\). We can also define what we mean by "\(X\) having a total causal effect on \(Y\)". Following [16, 14], there is such a total causal effect if there is an atomic intervention on \(X\) which modifies the initial distribution of \(Y\)[16]: Definition 4 (Total Causal Effect): Let \(X\), \(Y\) be random variables of an SCM \(\mathcal{C}\). \(X\) has a _total causal effect on \(Y\)_, denoted by \(X\neg Y\), if there is \(x\in R_{X}\) such that \(p_{Y}^{do(X=x)}\neq p_{Y}\). 
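As a minimal illustration of Definitions 1-4, the sketch below encodes a small confounded SCM by its structural assignments, samples the entailed marginal of \(Y\), applies an atomic intervention by overriding one assignment, and checks for a total causal effect by comparing marginals. The graph \(W\to X_{1}\), \(W\to Y\), \(X_{2}\to Y\) mirrors the confounded structure used later in the running example, but the particular assignments and noise distributions are illustrative stand-ins and are not claimed to be exactly those of Figure 1.

```python
# Minimal sketch of Definitions 1-4 on a small confounded SCM
# (W -> X1, W -> Y, X2 -> Y).  The assignments are illustrative stand-ins.
import random
from collections import Counter

def sample_Y(do=None, n=100_000, seed=0):
    """Marginal of Y under the entailed distribution, optionally under do(var=value)."""
    rng, counts = random.Random(seed), Counter()
    for _ in range(n):
        W  = int(rng.random() < 0.5)                 # N_W ~ Bern(1/2): warm or cold
        X1 = 2 * W if rng.random() < 0.9 else 1      # people in shorts, driven by W
        X2 = int(rng.random() < 0.25)                # advertising, N_X2 ~ Bern(1/4)
        v = {"W": W, "X1": X1, "X2": X2}
        if do:
            v.update(do)                             # atomic intervention do(var = value)
        v["Y"] = v["W"] + v["X2"]                    # sales volume in {0, 1, 2}
        counts[v["Y"]] += 1
    return {y: round(c / n, 3) for y, c in sorted(counts.items())}

print("p_Y           :", sample_Y())                 # roughly (3/8, 1/2, 1/8)
print("p_Y^do(X1 = 2):", sample_Y(do={"X1": 2}))     # unchanged: no total effect of X1
print("p_Y^do(X2 = 1):", sample_Y(do={"X2": 1}))     # changes: X2 has a total effect on Y
```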
In this work, all variables of the form \(X_{i}\), \(Y_{i}\) or \(Z_{i}\) are taken to be endogenous variables of some SCM \(\mathcal{C}\). ### Entropy and Mutual Information Since the quantities defined and studied in this article build upon the standard entropy and mutual information, it is important for the reader to be familiar with these. In this subsection we state the definitions of entropy, conditional entropy and mutual information. In the interest of space, we will not try to motivate these definitions; for a pedagogical introduction the reader is referred to [5, 11]. We will also clarify what we precisely mean by causal control. Definition 5 (Entropy and Conditional Entropy [5]): Let \(X\) be a discrete random variable with range \(R_{X}\) and \(p\) be a probability distribution for \(X\). The _entropy of \(X\) w.r.t. the distribution \(p\)_ is (in this article, \(\log\) denotes the logarithm to the base \(2\)) \[H_{X\sim p}(X):=-\sum_{x\in R_{X}}p(x)\log p(x). \tag{1}\] Entropy is measured in _bit_. If the context suggests a canonical probability distribution for \(X\), one can write \(H(X)\) and refer to it simply as the _entropy of \(X\)_. The _conditional entropy_ \(H(Y\mid X)\) of \(Y\) conditioned on \(X\) is the expected value w.r.t. \(p_{X}\) of the entropy \(H(Y\mid X=x):=H_{Y\sim p_{Y\mid X=x}}(Y)\): \[H(Y\mid X):=\mathbb{E}_{x\sim p_{X}}\left[H(Y\mid X=x)\right]. \tag{2}\] This means that the conditional entropy \(H(Y\mid X)\) is the part of the entropy \(H(Y)\) that remains, on average, after one conditions on \(X\). An essential concept closely associated with entropy is that of "uncertainty." This qualitative concept is often present when interpreting information-theoretical quantities. The entropy of a variable \(X\) purports to measure the uncertainty regarding \(X\). In this paper, we use another qualitative concept called "causal control" (or simply "control"). The (causal) control that variable \(X\) has over variable \(Y\) is the level of uncertainty remaining about \(Y\) after intervening on \(X\). It indicates how close we are to fully specifying \(Y\) by intervening on \(X\). This understanding of the term "control" has been implicitly utilized in the philosophy of science literature [17, 3]. Remark 1: Notice that \(H(Y\mid X=x)\) is seen as a function of \(x\) and the expected value in Equation (2) is taken over the random variable \(x\) with distribution \(p_{X}\). This disrespects the convention that random variables are represented by capital letters, but preserves the convention that the specific value conditioned upon (even if that value can be randomly realized -- _i.e._ is a random variable) is represented by a lower case letter. Since we cannot respect both, we will follow the common practice and opt to use lower case letters for random variables in these cases. There are two common equivalent ways to define mutual information (often called information gain). Definition 6 (Mutual Information [5]): Let \(X\) and \(Y\) be discrete random variables with ranges \(R_{X}\) and \(R_{Y}\) and distributions \(p_{X}\) and \(p_{Y}\), respectively. The _mutual information_ between \(X\) and \(Y\) is the KL divergence between the joint distribution \(p_{X,Y}\) and the product distribution \(p_{X}p_{Y}\), i.e.: \[I(X;Y):=\sum_{x\in R_{X}}\sum_{y\in R_{Y}}p_{X,Y}(x,y)\log\frac{p_{X,Y}(x,y)}{p_{X}(x)\,p_{Y}(y)}. \tag{3}\] Equivalently, the mutual information can be written in terms of entropies, as the average reduction in the uncertainty about one of the variables obtained by observing the other: \[I(X;Y)=H(Y)-H(Y\mid X)=H(X)-H(X\mid Y). \tag{4}\]
## 3 Motivating Example
Example 1: Let us consider an ice-cream shop where the sales volume \(Y\) on a given day can be categorized as low (\(Y=0\)), medium (\(Y=1\)), or high (\(Y=2\)). We would like to find a way to control \(Y\). Assume that the sales volume is influenced by two factors: the temperature \(W\), characterized as warm (\(W=1\)) or cold (\(W=0\)), and whether the ice-cream shop is being advertised, represented by the binary variable \(X_{2}\). Additionally, we introduce a discrete variable \(X_{1}\) to represent the number of individuals wearing shorts, which can be categorized as few (\(X_{1}=0\)), some (\(X_{1}=1\)), or many (\(X_{1}=2\)). Naturally, higher temperatures have a positive influence on the variable \(X_{1}\). We do not consider any other variables. One can crudely model this situation using an SCM with endogenous variables \(X_{1},X_{2},W\) and \(Y\), as specified in Figure 1. The chosen structural assignments and noise distributions reflect the specific scenario where: the temperature \(W\) is warm about half of the time; the number \(X_{1}\) of people wearing shorts is highly determined by the weather conditions; and the ice-cream shop is advertised occasionally. \(W\), \(X_{2}\) and all noise variables of the SCM are binary variables, while \(X_{1},Y\in\{0,1,2\}\). Assume we cannot intervene on \(W\). We would like to decide which of the variables \(X_{1}\) or \(X_{2}\) provides us with the most control over \(Y\). It is clear that being able to intervene on \(X_{1}\) gives us no control whatsoever over \(Y\). Any observed statistical dependence between \(X_{1}\) and \(Y\) comes purely from the confounder \(W\). Consequently, interpreting a non-zero correlation or mutual information between \(X_{1}\) and \(Y\) as indicative of a causal connection between these variables would be a mistake, and an instance of conflation between correlation and causation. Figure 1: An SCM. It models the real-world scenario described in Example 1, where \(Y\) is the sales volume of an ice-cream shop, \(W\) is the temperature, \(X_{1}\) is the number of people wearing shorts, and \(X_{2}\) stands for the advertisement efforts of the ice-cream shop. The notation \(N_{Z}\sim\text{Bern}(q)\) signifies that the random variable \(Z\) follows the Bernoulli probability distribution with parameter \(q\). Grayed out variables cannot be intervened on. If we naively use the mutual information to assess whether one should intervene on \(X_{1}\) or \(X_{2}\) for controlling \(Y\), one wrongly concludes that one should use \(X_{1}\). Intuitively, this happens because knowing \(X_{2}\) provides us with less information about \(Y\) than \(W\), and \(X_{1}\) is very close to \(W\). The (approximate) values can be consulted in Table 1; the details of the computations can be found in Appendix A. Notice that \(I(Y;W)>I(Y;X_{1})\), as it should be: \(W\) has more information about \(Y\) than \(X_{1}\) has. We also see that \(I(Y;X_{2})<I(Y;X_{1})\).
If mutual information were a suitable criterion for selecting the variable to intervene on, the contrary would be expected. In the context of our real-world scenario, intervening on the number \(X_{1}\) of people wearing shorts would not be a logical approach for controlling ice cream sales. Instead, allocating more resources to advertising efforts (represented by \(X_{2}\)) would be more appropriate. The issue is that the mutual information \(I(Y;X_{1})\) includes the information that one has about \(Y\) by _observing_\(X_{1}\) which flows through the confounder \(W\). But what we want is a metric quantifying how much control we can have over \(Y\) by _intervening_ on \(X_{1}\). We will see that the generalization of mutual information studied in this paper ("causal information gain") satisfies these requirements. ## 4 Causal Entropy The causal entropy of \(Y\) for \(X\) will be the entropy of \(Y\) that is left, on average, after one atomically intervenes on \(X\). In this section we give a rigorous definition of causal entropy and study its connection to causal effect. We define causal entropy in a manner analogous to conditional entropy (see Definition 5). It will be the average uncertainty one has about \(Y\) if one sets \(X\) to \(x\) with probability \(p_{X^{\prime}}(x)\), where \(X^{\prime}\) is a new auxiliary variable with the same range as \(X\) but independent of all other variables, including \(X\). In contrast with the non-causal case, here one needs to make a choice of distribution over \(X^{\prime}\) corresponding to the distribution over the atomic interventions that one is intending to perform. Definition 7 (Causal entropy, \(H_{c}\)): Let \(Y\), \(X\) and \(X^{\prime}\) be random variables such that \(X\) and \(X^{\prime}\) have the same range and \(X^{\prime}\) is independent of all variables in \(\mathcal{C}\). We say that \(X^{\prime}\) is an _intervention protocol_ for \(X\). \begin{table} \begin{tabular}{l c c} \hline \(H(Y)\approx 1.41\) & \(H(Y\mid X_{1})\approx 0.85\) & \(H(Y\mid X_{2})=1\) \\ \hline \hline \(I(Y;W)\approx 0.60\) & \(I(Y;X_{1})\approx 0.56\) & \(I(Y;X_{2})\approx 0.41\) \\ \hline \end{tabular} \end{table} Table 1: Information theoretical values for Figure 1. _The causal entropy \(H_{c}(Y\mid\text{do}(X\sim X^{\prime}))\) of \(Y\) given the intervention protocol \(X^{\prime}\) for \(X\) is the expected value w.r.t. \(p_{X^{\prime}}\) of the entropy \(H(Y\mid\text{do}(X=x)):=H_{Y\sim p_{Y}^{\text{do}(X=x)}}(Y)\) of the interventional distribution \(p_{Y}^{\text{do}(X=x)}\). That is:_ \[H_{c}(Y\mid\text{do}(X\sim X^{\prime})):=\mathbb{E}_{x\sim p_{X^{\prime}}} \left[H(Y\mid\text{do}(X=x))\right] \tag{5}\] We will now see that, unsurprisingly, if there is no total effect of \(X\) on \(Y\), then the causal entropy is just the initial entropy \(H(Y)\). Perhaps more unexpectedly, the converse is not true: it is possible to have \(H_{c}(Y\mid X\sim X^{\prime})=H(Y)\) while \(X\neg Y\). One way this can happen is due to the non-injectivity of entropy when seen as a mapping from the set of distributions over \(Y\), _i.e._ it may happen that \(p_{Y}^{\text{do}(X=x)}\neq p_{Y}\) but \(H_{Y\sim p_{Y}^{\text{do}(X=x)}}(Y)=H_{Y\sim p_{Y}}(Y)\). Proposition 1: _If there is no total effect of \(X\) on \(Y\), then \(H_{c}(Y\mid\text{do}(X\sim X^{\prime}))=H(Y)\) for any intervention protocol \(X^{\prime}\) for \(X\). The converse does not hold._ Proof: The proof can be found in Appendix 0.B. 
If there is a total causal effect of \(X\) on \(Y\), there cannot be a total causal effect of \(Y\) on \(X\) (if \(X\) is a cause of \(Y\), \(Y\) cannot be a cause of \(X\)) [16]. This immediately yields the following corollary. Corollary 1: _If \(H_{c}(Y\mid\text{do}(X\sim X^{\prime}))\neq H(Y)\) for some intervention protocol \(X^{\prime}\) for \(X\), then \(H_{c}(X\mid\text{do}(Y\sim Y^{\prime}))=H(X)\) for any intervention protocol \(Y^{\prime}\) for \(Y\)._ Proof: Suppose that \(H_{c}(Y\mid X\sim X^{\prime})\neq H(Y)\). By the contrapositive of Proposition 1, this means that there is a total effect of \(X\) on \(Y\). Hence there is no total effect of \(Y\) on \(X\), which again by Proposition 1 yields the desired result. ## 5 Causal Information Gain Causal information gain extends mutual information to the causal context. The causal information gain of \(Y\) for \(X\) will be the average decrease in the entropy of \(Y\) after one atomically intervenes on \(X\). We start this section by giving a rigorous definition of causal information gain, and proceed to study its connection with causal effect. We end this section by revisiting Example 1 armed with this new information theoretical quantity. We will confirm in this example that causal information is the correct tool for assessing which variable has the most causal control over the outcome, as opposed to standard mutual information. Recall the entropy-based definition of mutual information in Equation (4). The mutual information between two variables \(X\) and \(Y\) is the average reduction in uncertainty about \(Y\) if one observes the value of \(X\) (and vice-versa, by symmetry of the mutual information). This view of mutual information allows for a straightforward analogous definition in the causal case, so that one can take causal information gain \(I_{c}(Y\mid\text{do}(X\sim X^{\prime}))\) to signify the average reduction in uncertainty about \(Y\) if one sets \(X\) to \(x\) with probability \(p_{X^{\prime}}(x)\). Definition 8 (Causal Information Gain, \(I_{c}\)): Let \(Y\), \(X\) and \(X^{\prime}\) be random variables such that \(X^{\prime}\) is an intervention protocol for \(X\). The _causal information gain_\(I_{c}(Y\mid do(X\sim X^{\prime}))\) of \(Y\) for \(X\) given the intervention protocol \(X^{\prime}\) is the difference between the entropy of \(Y\) w.r.t. its prior and the causal entropy of \(Y\) given the intervention protocol \(X^{\prime}\). That is: \[I_{c}(Y\mid do(X\sim X^{\prime})):=H(Y)-H_{c}(Y\mid do(X\sim X^{\prime})). \tag{6}\] A few properties of causal information gain can be immediately gleaned from its definition. First, in contrast with mutual information, causal information gain is _not_ symmetric. Also, similarly to causal entropy, one needs to specify an intervention protocol with a distribution to be followed by interventions on \(X\). We can make use of the relation between causal entropy and causal effect to straightforwardly deduce the relation between causal information gain and causal effect. Proposition 2: _If \(I_{c}(Y\mid do(X\sim X^{\prime}))\neq 0\) for some protocol \(X^{\prime}\) for \(X\), then \(X{\rightarrow}Y\). The converse does not hold._ Proof: The implication in this proposition follows directly from Definition 8 and the contrapositive of the implication in Proposition 1. The converse does not hold simply because it is equivalent to the converse of the contrapositive of the implication in Proposition 1, which also does not hold. 
Corollary 2: _Let \(X^{\prime}\) and \(Y^{\prime}\) be intervention protocols for \(X\) and \(Y\), respectively. At least one of \(I_{c}(Y\mid do(X\sim X^{\prime}))\) or \(I_{c}(X\mid do(Y\sim Y^{\prime}))\) is zero._ Proof: Suppose both \(I_{c}(Y\mid do(X\sim X^{\prime}))\) and \(I_{c}(X\mid do(Y\sim Y^{\prime}))\) are non-zero. Then by Proposition 2 we have both \(X{\rightarrow}Y\) and \(Y{\rightarrow}X\), which is not possible in the context of an SCM. It is worth noting that the last part of Proposition 2 contradicts [17]. In that work, it is stated without proof that "causation is equivalent to non-zero specificity", wherein the term "specificity" coincides with what we refer to as causal information gain given a uniformly distributed intervention protocol. ### Comparison of Causal Information Gain and Mutual Information in Running Example Consider again Example 1. Compare the causal entropy and causal information gain values7 in Table 2 with the conditional entropy and mutual information values from Table 1. Footnote 7: In this particular case it does not matter what intervention protocol \(X^{\prime}\) we choose, since \(H_{c}(Y\mid do(X_{1}=x_{1}))=H(Y)\approx 1.41\) for all \(x_{1}\) and \(H_{c}(Y\mid do(X_{2}=x_{2}))=1\) for all \(x_{2}\). We see that using causal information gain allows us to correctly conclude that using \(X_{1}\) to control \(Y\) would be fruitless: intervening on \(X_{1}\) does not change the entropy of \(Y\). This is reflected by the fact that the causal information gain of \(Y\) for \(X_{1}\) is zero. Since \(X_{1}\) has no causal effect on \(Y\), this result was to be expected by the contrapositive of Proposition 2. On the other hand, \(X_{2}\) does provide us with some control over \(Y\): intervening on \(X_{2}\) decreases the entropy of \(Y\) by \(0.4\) bit on average. In the real-world scenario described in Example 1, utilizing causal information gain to determine which variable to intervene on for controlling the sales volume \(Y\) would lead us to make the correct decision of intensifying advertising efforts (\(X_{2}\)). Furthermore, it would enable us to conclude that manipulating the number of people wearing shorts (\(X_{1}\)) provides no control whatsoever over \(Y\). Thus, causal information gain could be used in this case to assess whether statistical dependence between \(Y\) and another variable in this causal system can be interpreted to have causal significance. ## 6 Related Work Previous work has aimed to provide causal explanations of machine learning models through "counterfactual explanations" [21, 13]. These explanations reveal what the model would have predicted under different feature values. However, they do not offer insights into the causal significance of a feature in influencing the outcome variable. Instead, they merely inform us about the behavior of the model itself. In other words, counterfactual explanations inform us about the changes required for the model to produce a different prediction, but not the changes necessary for the outcome to differ in reality. While counterfactual explanations can be useful, for instance, in advising loan applicants on improving their chances of approval [13], they fall short in providing causal interpretations for tasks such as scientific exploration [23], where it is crucial to understand the actual causal relationships between features and the chosen outcome. As discussed in Section 1, the quantities investigated in this paper can precisely address this need. 
Information theoretical quantities aimed at capturing aspects of causality have been previously proposed. An important example is the work in [9]. In that paper, the authors suggest a list of postulates that a measure of causal strength should satisfy, and subsequently demonstrate that commonly used measures fall short of meeting them. They then propose their own measure (called "causal influence"), which does satisfy the postulates. Causal influence is the KL divergence of the original joint distribution and the joint distribution resulting from removing the arrows whose strength we would like to measure, and feeding noise to the orphaned nodes. Thus although it utilizes information theory, it does not purport to generalize entropy or mutual information to the causal context. One \begin{table} \begin{tabular}{l c} \hline \(H_{c}(Y\mid do(X_{1}\sim X_{1}^{\prime}))\approx 1.41\) & \(H_{c}(Y\mid do(X_{2}\sim X_{2}^{\prime}))=1\) \\ \hline \hline \(I_{c}(Y\mid do(X_{1}\sim X_{1}^{\prime}))=0\) & \(I_{c}(Y\mid do(X_{2}\sim X_{2}^{\prime}))\approx 0.41\) \\ \hline \end{tabular} \end{table} Table 2: Causal information theoretical values for Figure 1. information-theoretical measure mentioned in [9] is closer to ours. It is called "information flow" [1]. Similarly to causal information gain, this quantity is a causal generalization of mutual information. Their goal was to come up with a generalization of mutual information that would be a measure of "causal independence" in much the same way as standard mutual information is a measure of statistical independence. They take the route of starting from the definition of mutual information as the KL divergence between the joint distribution and the product of the marginal distributions (Equation (3)), and proceed to "make it causal" by effectively replacing conditioning with intervening everywhere. In contrast, we treat entropy as the main quantity of interest, and start from the definition of mutual entropy as the change in entropy due to conditioning (Equation (4)), and proceed to define its causal counterpart as the change in entropy due to intervening. This then results in a quantity that is the appropriate tool for evaluating the control that a variable has over another. The basic idea of extending the concept of mutual information to the causal context as the average reduction of entropy after intervening was introduced in the philosophy of science literature, as part of an attempt to capture a property of causal relations which they refer to as "specificity" [8]. This property can be thought of as a measure of the degree to which interventions on the cause variable result in a deterministic one-to-one mapping [22]. This means that maximal specificity of a causal relationship is attained when: (a) performing an atomic intervention on the cause variable results in complete certainty about the effect variable's value; and (b) no two distinct atomic interventions on the cause variable result in the same value for the effect variable [8]. Notice that (a) means precisely that the cause variable provides maximal causal control over the effect variable. The causal extension of mutual information proposed in [8] was named "causal mutual information". They call "causal entropy" the average entropy of the effect variable after performing an atomic intervention on the cause variable. Their "causal mutual information" is then the difference between the initial entropy of the effect variable and the causal entropy. 
Although they do not say so explicitly, their definition of causal entropy assumes that one only cares about the entropy that results from interventions that are equally likely: the average of post-intervention entropies is taken w.r.t. a uniform distribution -- hence their "causal entropy" is the same as the causal entropy defined in this paper, but restricted to uniform intervention protocols. This was also noted in [17], where the authors propose that other choices of distribution over the interventions would result in quantities capturing causal aspects that are distinct from the standard specificity. In this paper we both generalized and formalized the information theoretical notions introduced in [8]. We provided rigorous definitions of causal entropy and causal information gain which allow for the use of non-uniform distributions over the interventions. Our causal entropy can thus be seen as a generalized version of their causal entropy, while our causal information gain can be seen as a generalization of their causal mutual information. Armed with concrete, mathematical definitions, we are able to study key mathematical aspects of these quantities. ## 7 Discussion and Conclusion The motivation behind extending traditional entropy and mutual information to interventional settings in the context of interpretable machine learning (IML) arises from the necessity to determine whether the high importance assigned to specific features by machine learning models and IML methods can be causally interpreted or is purely of a statistical nature. Information theoretical quantities are commonly used to assess statistical feature importance. We extended these quantities to handle interventions, allowing them to capture the control one has over a variable by manipulating another. The proposed measures, namely causal entropy and causal information gain, hold promise for the development of new algorithms in domains where knowledge of causal relationships is available or obtainable. It is worth noting that the utility of these measures extends well beyond the field of IML, as both information-theoretical quantities and the need for causal control are pervasive in machine learning. Moving forward, a crucial theoretical endeavor involves establishing a fundamental set of properties for the proposed causal information-theoretical measures. This can include investigating a data processing inequality and a chain rule for causal information gain, drawing inspiration from analogous properties associated with mutual information. Other important research directions involve the extension of these definitions to continuous variables, as well as investigating the implications of employing different intervention protocols. Furthermore, the design and study of appropriate estimators for these measures constitute important avenues for future research, as well as their practical implementation. Ideally, these estimators should be efficient to compute even when dealing with high-dimensional data and complex, real-world datasets. Additionally, they ought to be applicable to observational data. In cases where the structural causal model is known, this could be accomplished by utilizing a framework such as _do_-calculus [14] when devising the estimators. This could allow for their application in extracting causal insights from observational data. 
## Appendix 0.A Computations for the running example We have \[H(Y) =p_{Y}(0)\log(\frac{1}{p_{Y}(0)})+p_{Y}(1)\log(\frac{1}{p_{Y}(1)} )+p_{Y}(2)\log(\frac{1}{p_{Y}(2)})\] \[=\frac{3}{8}\log(\frac{8}{3})+\frac{1}{2}\log(2)+\frac{1}{8}\log( 8)=2-\frac{3}{8}\log(3)\approx 1.41\,(\text{bit}),\] and \[H(Y\mid W)=H(Y\mid W=0)=\frac{3}{4}\log(\frac{4}{3})+\frac{1}{4}\log(4)\approx 0.81\,(\text{bit}),\] where we used that \(H(Y\mid W=0)=H(Y\mid W=1)\), so that taking the average is unnecessary. Notice that \(X_{1}=0\) implies \(W=0\), in which case \(Y=X_{2}\). Hence \(H(Y\mid X_{1}=0)=H(Y\mid W=0)\approx 0.81\,(\text{bit})\). By a similar argument, \(H(Y\mid X_{1}=2)=H(Y\mid W=1)\approx 0.81\,(\text{bit})\). Now, denote \(q=\frac{1}{64}\). It is easy to check that \(p_{Y\mid X_{1}=1}(0)=\frac{3q}{4}\), \(p_{Y\mid X_{1}=1}(1)=\frac{3}{4}-\frac{q}{2}\) and \(p_{Y\mid X_{1}=1}(2)=\frac{1}{4}(1-q)\). Then \[H(Y\mid X_{1}=1)=-\frac{3q}{4}\log(\frac{3q}{4})-(\frac{3}{4}-\frac{q}{2})\log (\frac{3}{4}-\frac{q}{2})-\frac{1}{4}(1-q)\log(\frac{1}{4}(1-q))\approx 0.89 \,(\text{bit}).\] We can then compute: \[H(Y\mid X_{1}) =p_{X_{1}}(0)\overbrace{H(Y\mid X_{1}=0)}^{0.81}+p_{X_{1}}(1)\overbrace{H(Y\mid X_{1}=1)}^{0.89}+p_{X_{1}}(2)\overbrace{H(Y\mid X_{1}=2)}^{0.81}\] \[=\frac{1}{2}\times(1-q)\times 0.81+\frac{1}{2}\times 0.89+\frac{q}{2}\times 0.81\approx 0.85\,(\text{bit}).\] We also have: \[H(Y\mid X_{2})=p_{X_{2}}(0)\overbrace{H(Y\mid X_{2}=0)}^{1}+p_{X_{2}}(1) \overbrace{H(Y\mid X_{2}=1)}^{1}=1\,(\text{bit}).\] It immediately follows that \(I(Y;W)\approx 0.60\), \(I(Y;X_{1})\approx 0.56\,(\text{bit})\) and \(I(Y;X_{2})\approx 0.41\,(\text{bit})\). Moving on to the causal information theoretical quantities, we have \(H(Y\mid do(X_{1}=x_{1}))=H(Y)\approx 1.41\,(\text{bit})\) for every \(x_{1}\in R_{X_{1}}\) and \(H(Y\mid do(X_{2}=x_{2}))=H(W)=1\,(\text{bit})\) for every \(x_{2}\in R_{X_{2}}\). Hence \(H_{c}(Y\mid do(X_{1}\sim X_{1}^{\prime}))\approx 1.41\,(\text{bit})\) and \(H_{c}(Y\mid do(X_{2}\sim X_{2}^{\prime}))=1\,(\text{bit})\) for any intervention protocols \(X_{1}^{\prime},X_{2}^{\prime}\). It follows that \(I_{c}(Y\mid do(X_{1}\sim X_{1}^{\prime}))=0\,(\text{bit})\) and \(I_{c}(Y\mid do(X_{2}\sim X_{2}^{\prime}))\approx 0.41\,(\text{bit})\). ## Appendix 0.B Proof of Proposition 1 Proof: Suppose \(X\) has no causal effect on \(Y\). Then \(\forall x\in R_{X},\ p_{Y}^{do(X=x)}=p_{Y}\). The expression for the causal entropy then reduces to \(\mathbb{E}_{x\sim X^{\prime}}\,H(Y)=H(Y)\). This shows the implication in the proposition. We will check that the converse does not hold by giving an example where \(X\) has a causal effect on \(Y\) but \(H_{c}(Y\mid do(X\sim X^{\prime}))=H(Y)\). Consider the SCM with three binary endogenous variables \(X,Y\) and \(M\) specified by: \[\begin{cases}f_{M}(N_{M})=N_{M}\\ f_{X}(M,N_{X})=\begin{cases}(N_{X}+1)\mod 2,M=1\\ N_{X},M=0\\ \end{cases}\\ f_{Y}(X,M)=\begin{cases}X,M=1\\ (X+1)\mod 2,M=0\\ \end{cases}\\ N_{X},N_{M}\sim\text{Bern}(q),\text{ for some }q\in(0,1).\end{cases} \tag{7}\] Then \(p_{Y}^{do(X=0)}\sim\text{Bern}(1-q)\) and \(p_{Y}^{do(X=1)}\sim\text{Bern}(q)\). Also, \[p_{Y}(1)=p_{X|M=1}(1)p_{M}(1)+p_{X|M=0}(0)p_{M}(0)=1-q\quad\Rightarrow\quad Y\sim \text{Bern}(1-q) \tag{8}\] Hence \(p_{Y}\neq p_{Y}^{do(X=1)}\), meaning that \(X\!\rightarrow\!Y\). 
And since both post-intervention distributions have the same entropy, \(H_{Y\sim\text{Bern}(q)}(Y)=H_{Y\sim\text{Bern}(1-q)}(Y)\), the causal entropy is also \(H_{c}(Y\mid do(X\sim X^{\prime}))=H_{Y\sim\text{Bern}(1-q)}(Y)=H(Y)\) (for any choice of \(X^{\prime}\)).
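The counterexample above lends itself to a quick numerical check. The following Python sketch is an editorial illustration (not part of the paper): assuming, as usual for SCMs, that the noise terms \(N_{M}\) and \(N_{X}\) are independent, it enumerates the exogenous noise of the SCM in Eq. (7) and confirms that \(p_{Y}\neq p_{Y}^{do(X=1)}\) (so \(X{\rightarrow}Y\)) while every post-intervention distribution of \(Y\) has the same entropy as \(p_{Y}\), so the causal entropy equals \(H(Y)\) for any intervention protocol.

```python
# Editorial sketch: numerically verify the counterexample SCM of Appendix 0.B.
from itertools import product
from math import log2

q = 0.3  # any q in (0, 1) with q != 1/2

def entropy(p):
    return sum(-pi * log2(pi) for pi in p if pi > 0)

def dist_Y(do_x=None):
    """Distribution of Y, optionally under the intervention do(X = do_x)."""
    p = [0.0, 0.0]
    for n_m, n_x in product([0, 1], repeat=2):
        w = (q if n_m else 1 - q) * (q if n_x else 1 - q)  # P(N_M=n_m) P(N_X=n_x)
        m = n_m                                            # f_M
        x = do_x if do_x is not None else ((n_x + 1) % 2 if m == 1 else n_x)  # f_X
        y = x if m == 1 else (x + 1) % 2                   # f_Y
        p[y] += w
    return p

p_obs, p_do0, p_do1 = dist_Y(), dist_Y(0), dist_Y(1)  # Bern(1-q), Bern(1-q), Bern(q)
assert abs(p_obs[1] - p_do1[1]) > 1e-6                # p_Y != p_Y^{do(X=1)}, hence X -> Y
assert abs(entropy(p_do0) - entropy(p_obs)) < 1e-9    # post-intervention entropies
assert abs(entropy(p_do1) - entropy(p_obs)) < 1e-9    # both coincide with H(Y)
```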
2305.19830
A family of Counterexamples on Inequality among Symmetric Functions
Inequalities among symmetric functions are fundamental questions in mathematics and have various applications in science and engineering. In this paper, we tackle a conjecture about inequalities among the complete homogeneous symmetric function $H_{n,\lambda}$, that is, the inequality $H_{n,\lambda}\leq H_{n,\mu}$ implies majorization order $\lambda\preceq\mu$. This conjecture was proposed by Cuttler, Greene and Skandera in 2011. The conjecture is a close analogy with other known results on Muirhead-type inequalities. In 2021, Heaton and Shankar disproved the conjecture by showing a counterexample for degree $d=8$ and number of variables $n=3$. They then asked whether the conjecture is true when~ the number of variables, $n$, is large enough? In this paper, we answer the question by proving that the conjecture does not hold when $d\geq8$ and $n\geq2$. A crucial step of the proof relies on variables reduction. Inspired by this, we propose a new conjecture for $H_{n,\lambda}\leq H_{n,\mu}$.
Jia Xu, Yong Yao
2023-05-31T13:14:06Z
http://arxiv.org/abs/2305.19830v1
# A family of Counterexamples on Inequality among Symmetric Functions ###### Abstract Inequalities among symmetric functions are fundamental questions in mathematics and have various applications in science and engineering. In this paper, we tackle a conjecture about inequalities among the complete homogeneous symmetric function \(H_{n,\lambda}\), that is, the inequality \(H_{n,\lambda}\leq H_{n,\mu}\) implies majorization order \(\lambda\preceq\mu\). This conjecture was proposed by Cuttler, Greene and Skandera in 2011. The conjecture is a close analogy with other known results on Muirhead-type inequalities. In 2021, Heaton and Shankar disproved the conjecture by showing a counterexample for degree \(d=8\) and number of variables \(n=3\). They then asked whether the conjecture is true when the number of variables, \(n\), is large enough? In this paper, we answer the question by proving that the conjecture does not hold when \(d\geq 8\) and \(n\geq 2\). A crucial step of the proof relies on variables reduction. Inspired by this, we propose a new conjecture for \(H_{n,\lambda}\leq H_{n,\mu}\). keywords: complete homogeneous symmetric function, majorization, symmetric inequalities Msc: 05E05, 14P99, 90C22 + Footnote †: journal: Elsevier ## 1 Introduction Symmetric functions are indispensable ingredients in combinatorics [5; 21], and have various applications in diverse fields [19; 20; 22; 26]. An important collection of tools in the study of symmetric functions is given by various inequalities. Thus much research has been carried out in the hope of discovering and proving inequalities among symmetric functions, to list a few [1; 3; 7; 8; 11; 12; 13; 15; 16; 17; 18; 23; 24; 25]. Some of them are well known and widely used, such as the inequality between arithmetic and geometric means, and the Schur, Maclaurin and Muirhead-type inequalities. It turns out that all these are special cases of inequalities among the following fundamental symmetric functions: * Monomial symmetric functions \(m_{n,\lambda}\): arithmetic means and geometric means [15], Hardy, Littlewood, Polya [8],... * Elementary symmetric functions \(e_{n,\lambda}\): Maclaurin [13], Newton [17],... * Power-sum symmetric functions \(p_{n,\lambda}\): R. Gantmacher [6], Ursell [25],... * Schur functions \(s_{n,\lambda}\): Schur [8] * Complete homogeneous symmetric functions \(h_{n,\lambda}\): Grommer [7], Hunter [11],... Naturally there have been extensive studies on inequalities among the above fundamental symmetric functions [2, 8, 15], resulting in much progress and providing very efficient ways to check such inequalities, which in turn makes various applications more efficient. First, we list some notions and notations before concisely illustrating these works. Given a symmetric polynomial \(f(x)\), the term-normalized symmetric polynomial is \[F(x):=\frac{f(x)}{f(1,\cdots,1)}.\] The inequality \(F_{n,\lambda}\leq F_{n,\mu}\) means that \(F_{n,\lambda}(x)\leq F_{n,\mu}(x)\) for every \(x\) in \(\mathbb{R}_{+}^{n}\setminus 0\), where \(\mathbb{R}_{+}\) is the set of nonnegative real numbers and \(n\geq 2\). Thus the term-normalized symmetric functions of \(m_{n,\lambda}\), \(e_{n,\lambda}\), \(p_{n,\lambda}\), \(s_{n,\lambda}\) and \(h_{n,\lambda}\) are written as \(M_{n,\lambda}\), \(E_{n,\lambda}\), \(P_{n,\lambda}\), \(S_{n,\lambda}\) and \(H_{n,\lambda}\). The following theorem is a summary of known results on these term-normalized symmetric functions. The proofs of these results can be found in [4, 8, 16, 17, 27]. **Known results:**[9] Let \(\mu,\lambda\in\mathbb{N}^{m}\) such that \(|\mu|=|\lambda|\). 
Then \[M_{n,\mu}\geq M_{n,\lambda} \iff \mu\succeq\lambda,\] \[E_{n,\mu}\geq E_{n,\lambda} \iff \mu\preceq\lambda,\] \[P_{n,\mu}\geq P_{n,\lambda} \iff \mu\succeq\lambda,\] \[S_{n,\mu}\geq S_{n,\lambda} \iff \mu\succeq\lambda,\] \[H_{n,\mu}\geq H_{n,\lambda} \Longleftarrow \mu\succeq\lambda,\] where "\(\succeq\)" is the majorization order (see [14] or Definition 6 of this paper). Note that, unlike the others, the case of the complete homogeneous symmetric functions is still open: only one direction is known. The techniques successfully used for the other families do not work well in general. Hence recent effort has focused on this case, making incremental progress and producing conjectures suggesting that the equivalence might still hold, at least in large cases. In 2011, Allison Cuttler, Curtis Greene and Mark Skandera [4] conjectured that \(H_{n,\mu}\geq H_{n,\lambda}\Longrightarrow\mu\succeq\lambda.\) Moreover, they indicated that the conjecture is true when the degree \(d=|\lambda|=|\mu|=1,2,\ldots,7\), and left the question open for \(d\geq 8\). In 2021, Alexander Heaton and Isabelle Shankar found counterexamples which disprove the conjecture for \(d=8,9,10\) (see [9]). Specifically, they considered \(H_{3,(4,4)}-H_{3,(5,2,1)}\) (\(d=8\), \(n=3\)) and certified its nonnegativity using the sum-of-squares (SOS) method. The positive semidefinite matrix they found is provided on a web page (see [10]) owing to the size and complexity of the output. They then raised the following question in [9], placing their hope in the regime of a large number of variables. **Question**:"Is the following claim true asymptotically: \(H_{n,\mu}\geq H_{n,\lambda}\) implies \(\mu\succeq\lambda\)?" In this paper, we conclude this line of research by showing that the claim fails even for large numbers of variables. We show that for every \(n\geq 2\) and every degree \(d\geq 8\), there is a counterexample. The precise form of our main result is stated in Theorem 7. So the answer to the above question is as follows. **Answer**: "No." Hence there is no hope of settling the complete homogeneous case using the previous approach, and an alternative is needed. In this paper, we suggest such a potential alternative, in the form of a conjecture. **Conjecture**: Let \(\mu,\lambda\in\mathbb{N}^{m},|\mu|=|\lambda|\), then \[H_{n,\mu}\geq H_{n,\lambda}\Longleftrightarrow\ \underset{u+v=n}{\forall}\ \ \underset{t\in\mathbb{R}_{+}}{\forall}\ \ H_{n,\mu}(\mathbf{t}_{u},\mathbf{1}_{v})\geq H_{n, \lambda}(\mathbf{t}_{u},\mathbf{1}_{v}).\] The above conjecture gives another idea for studying Muirhead-type inequalities for complete homogeneous polynomials. ## 2 Preliminaries In order to precisely state the main theorem, we recall some definitions and notations. **Definition 1** (Partition [21]).: _Let \(d\geq 1\). The set \(Par(d)\) of partitions of \(d\) is defined by_ \[Par(d)=\left\{\left(\lambda_{1},\ldots,\lambda_{d}\right)\in\mathbb{N}^{d}: \lambda_{1}\geq\cdots\geq\lambda_{d}\geq 0\,\text{ and }\lambda_{1}+\cdots+\lambda_{d}=d\right\}.\] **Example 2**.: _Note_ \[Par(3)=\left\{\left(3,0,0\right),\left(2,1,0\right),\left(1,1,1\right)\right\}.\] **Remark 3**.: 1. _We will omit zeros in a partition when there is no risk of confusion. For example,_ \((2,1,0)\) _can be written briefly as_ \((2,1)\)_._ 2. _If there are_ \(m\) _consecutive entries_ \(\lambda_{i}\) _that are the same, then we abbreviate them in boldface as_ \(\boldsymbol{\lambda}_{m}\)_. 
For example,_ \((1,1,1)\) _can be written as_ \((\mathbf{1}_{3})\)_._ **Definition 4** (Complete homogenous symmetric function [21]).: _For a partition \(\lambda\in Par(d)\), a complete homogeneous symmetric function \(h_{n,\lambda}\) is written as_ \[h_{n,\lambda}=\prod_{i=1}^{d}h_{n,\lambda_{i}},\] _where_ \[h_{n,\lambda_{i}}=\sum_{1\leq j_{1}\leq\cdots\leq j_{\lambda_{i}}\leq n}x_{j_ {1}}\cdots x_{j_{\lambda_{i}}},\quad(\text{with }\ h_{n,0}=1).\] **Remark 5**.: _The term-normalized form of the complete homogeneous symmetric function is_ \[H_{n,\lambda}=\frac{1}{\binom{n+\lambda_{1}-1}{\lambda_{1}}\cdots\binom{n+\lambda _{d}-1}{\lambda_{d}}}\ h_{n,\lambda}.\] **Definition 6** (Majorization [14]).: _Let \(\mu,\lambda\in Par(d)\). We say that \(\mu\) majorizes \(\lambda,\) and write \(\mu\succeq\lambda\), if_ \[\underset{1\leq j\leq d-1}{\forall}\ \sum_{i=1}^{j}\mu_{i}\geq\sum_{i=1}^{j} \lambda_{i}.\] ## 3 Main theorem **Theorem 7** (Main Result).: _For every \(n\geq 2\) and \(d\geq 8\), there exist \(\ \mu,\lambda\in Par(d)\) such that \(H_{n,\mu}\geq H_{n,\lambda}\) but \(\mu\) does not majorizes \(\lambda\), that is,_ \[\underset{d\geq 8}{\forall}\ \ \underset{\mu,\lambda\in Par(d)}{\exists}H_{n, \mu}\geq H_{n,\lambda}\ \ \text{but}\ \mu\not\succeq\lambda.\] Before we plunge into technical details, we will first provide the top-level structure of the proof, in the hope of helping the reader to grasp the essence. Top-level structure: Let \(n\geq 2\) and \(d\geq 8\) be arbitrary but fixed. It is sufficient to prove that there exist \(\lambda,\mu\in P_{d}\) such that \(H_{n,\mu}\geq H_{n,\lambda}\ \ \text{but}\ \mu\not\succeq\lambda\). In general there are two different strategies for proving existence: (1) constructive, guess a potential witness and check it. (2) non-constructive, assume non-existence and derive contradiction. In this paper, we follow the constructive approach, since it is more interesting. 1. Guess a witness for \(\mu,\lambda\). Since \(Par(d)\) expands rapidly while \(d\) is growing. For example, \(|Par(17)|=297\) while \(|Par(18)|=385\). It takes a little luck to guess the following witness. \[\text{Case}\ d=2m : \mu=(\underbrace{2,\ldots,2}_{m})=(\mathbf{2}_{m}) \lambda=(3,\underbrace{1,\ldots,1}_{2m-3})=(3,\mathbf{1}_{2m-3})\] \[\text{Case}\ d=2m+1 : \mu=(\underbrace{2,\ldots,2}_{m},1)=(\mathbf{2}_{m},1) \lambda=(3,\underbrace{1,\ldots,1}_{2m-2})=(3,\mathbf{1}_{2m-2})\] 2. Check that it is indeed a witness. 1. \(\mu\not\succeq\lambda\). Trivial. 2. \(H_{n,\mu}\geq H_{n,\lambda}\) This is non-trivial, requiring much technical details. Again before we plunge into the detail, here we provide a high-level strategy. We first tackle the smallest still "open" degree \(d=8\), that is, \[\mu=(\mathbf{2}_{4})\ \text{and}\ \lambda=(3,\mathbf{1}_{5})\] We prove it by transforming the problem into an optimization problem on the simplex. The details are given in Lemma 11 and its proof is given below. Briefly, the proof is divided into two parts, interior and boundary of the simplex. In the interior, we reduce the number of variables into \(2\) by Lagrange's equation ( see Lemma 9 ). At boundary, we deal with it by proving an inequality ( see Lemma 10 ). After this, we extend the result with degree \(8\) to arbitrary degree \(d\) by using relaxation method repeatedly. The details are given in Lemma 12 and its proof is given below. This concludes the top-level structure of the proof. **Remark 8**.: 1. _It will be interesting to find different counter-examples._ 2. 
_In fact, one wonders about the set of all counter-examples. Does it have any discernible structure?_ **Lemma 9**.: _Let \(\mu=(\mathbf{2}_{4})\) and \(\lambda=(3,\mathbf{1}_{5})\). Then we have_ \[\underset{\begin{subarray}{c}u+v=n\\ u,v\geq 0\end{subarray}}{\forall}\underset{t\in\mathbb{R}_{+}}{\forall}H_{n,\mu} (\mathbf{t}_{u},\mathbf{1}_{v})\ \geq\ H_{n,\lambda}(\mathbf{t}_{u},\mathbf{1}_{v}).\] Proof.: Let \(J_{n}=H_{n,\mu}-H_{n,\lambda}\). Note \[\underset{\begin{subarray}{c}u+v=n\\ u,v\geq 0\end{subarray}}{\forall}\underset{t\in\mathbb{R}_{+}}{\forall}J_{n}( \mathbf{t}_{u},\mathbf{1}_{v})\geq 0\] \[\iff \underset{\begin{subarray}{c}u+v=n\\ u,v\geq 1\end{subarray}}{\forall}\underset{t\in\mathbb{R}_{+}}{\forall}J_{n}( \mathbf{t}_{u},\mathbf{1}_{v})\geq 0\quad\text{ (since if $u=0$ or \ $v=0$ then $J_{n}(\mathbf{t}_{u},\mathbf{1}_{v})=0$)}\] \[\iff \underset{\begin{subarray}{c}k+l=n-2\\ k,l\geq 0\end{subarray}}{\forall}\underset{t\in\mathbb{R}_{+}}{\forall}J_{n}( \mathbf{t}_{k+1},\mathbf{1}_{l+1})\geq 0\ \ \text{(obtained by $u=k+1$ and $v=l+1$)}\] Direct computation shows that \[J_{n}(\mathbf{t}_{k+1},\mathbf{1}_{l+1})=\frac{(k+1)(l+1)}{{n+2\choose 3}{n\choose 1 }^{5}{n+1\choose 2}^{4}}\left(t-1\right)^{2}W\left(k,l,t\right)\] for some polynomial \(W\). Thus it suffices to show that \[\underset{\begin{subarray}{c}k+l=n-2\\ k,l\geq 0\end{subarray}}{\forall}\ \underset{t\in\mathbb{R}_{+}}{\forall}\ W\left(k,l,t\right)\geq 0\] Direct calculation shows that all the coefficients of \(W\) are non-negative (see the Appendix; a SymPy sketch reproducing this computation is given after the proof of Lemma 10 below). Hence the claim holds. **Lemma 10**.: _Let \(\mu=(\mathbf{2}_{4})\) and \(\lambda=(3,\mathbf{1}_{5})\). We have the inequality_ \[\underset{x\in\mathbb{R}_{+}^{n}}{\forall}\quad H_{n+1,\mu}\left(x,0\right)-H_{ n+1,\lambda}\left(x,0\right) \geq \frac{n^{6}}{(n+3)(n+1)^{5}}\left(H_{n,\mu}\left(x\right)-H_{n, \lambda}\left(x\right)\right).\] Proof.: Note \(h_{n+1,\mu}\left(x,0\right)=h_{n,\mu}\left(x\right)\) and \(h_{n+1,\lambda}\left(x,0\right)=h_{n,\lambda}\left(x\right)\). Then we have \[\frac{H_{n+1,\mu}\left(x,0\right)}{H_{n,\mu}\left(x\right)}=\frac{h_{n+1,\mu}\left(x,0\right)/\binom{n+2}{2}^{4}}{h_{n,\mu}\left(x\right)/\binom{n+1}{2}^{4}}=\left(\frac{\binom{n+1}{2}}{\binom{n+2}{2}}\right)^{4}=\left(\frac{n}{n+2}\right)^{4},\] \[\frac{H_{n+1,\lambda}\left(x,0\right)}{H_{n,\lambda}\left(x\right)}=\frac{h_{n+1,\lambda}\left(x,0\right)/\left(\binom{n+3}{3}\binom{n+1}{1}^{5}\right)}{h_{n,\lambda}\left(x\right)/\left(\binom{n+2}{3}\binom{n}{1}^{5}\right)}=\frac{\binom{n+2}{3}}{\binom{n+3}{3}}\left(\frac{\binom{n}{1}}{\binom{n+1}{1}}\right)^{5}=\left(\frac{n}{n+3}\right)\left(\frac{n}{n+1}\right)^{5}.\] One can verify that \[\left(\frac{n}{n+3}\right)^{1}\left(\frac{n}{n+1}\right)^{5} < \left(\frac{n}{n+2}\right)^{4}.\] Thus \[H_{n+1,\mu}\left(x,0\right) \geq \left(\frac{n}{n+3}\right)^{1}\left(\frac{n}{n+1}\right)^{5}H_{n, \mu}\left(x\right)\] \[H_{n+1,\lambda}\left(x,0\right) = \left(\frac{n}{n+3}\right)^{1}\left(\frac{n}{n+1}\right)^{5}H_{n, \lambda}\left(x\right)\] Thus \[H_{n+1,\mu}\left(x,0\right)-H_{n+1,\lambda}\left(x,0\right) \geq \frac{n^{6}}{(n+3)(n+1)^{5}}\left(H_{n,\mu}\left(x\right)-H_{n, \lambda}\left(x\right)\right).\] 
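The "direct computation" and "direct calculation" steps in the proof of Lemma 9 above can be reproduced in a computer algebra system. The following SymPy sketch is an editorial illustration (not part of the paper): it builds \(h_{n,r}\) at the point \((\mathbf{t}_{k+1},\mathbf{1}_{l+1})\) with \(n=k+l+2\), clears the (positive) normalization constants, divides out the factor \((t-1)^{2}\), and checks that the remaining polynomial has only nonnegative coefficients in \(t\), \(k\) and \(l\); since that polynomial is a positive multiple of \(W\), this implies \(W(k,l,t)\geq 0\) for all \(k,l,t\geq 0\).

```python
# Editorial SymPy sketch: reproduce the key computation in the proof of Lemma 9.
import sympy as sp

t, k, l = sp.symbols('t k l', nonnegative=True)
n = k + l + 2

def binom(a, r):
    """binomial(a, r) for symbolic a and a small integer r, as a polynomial in a."""
    return sp.Mul(*[(a - i) for i in range(r)]) / sp.factorial(r)

def h(r):
    """h_{n,r} evaluated at k+1 copies of t and l+1 copies of 1."""
    return sum(binom(k + j, j) * binom(l + r - j, r - j) * t**j for j in range(r + 1))

B_mu = binom(n + 1, 2)**4                 # normalization of H_{n,(2,2,2,2)}
B_lam = binom(n + 2, 3) * binom(n, 1)**5  # normalization of H_{n,(3,1,1,1,1,1)}

# J_n(t_{k+1}, 1_{l+1}) times B_mu*B_lam is the polynomial below; remove (t-1)^2.
numer = sp.expand(h(2)**4 * B_lam - h(3) * h(1)**5 * B_mu)
W_scaled = sp.cancel(numer / (t - 1)**2)
coeffs = sp.Poly(sp.expand(W_scaled), t, k, l).coeffs()
print(min(coeffs) >= 0)   # expected output: True
```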
_Base case_: The following calculation verifies that the claim is true when \(n=2\). Direct computation show that \[J_{2}=H_{2,\mu}-H_{2,\lambda}=\frac{h_{2,\mu}}{\binom{2+1}{2}^{4}}-\frac{h_{2, \lambda}}{\binom{2+2}{3}^{1}\binom{2}{1}^{5}}=(x_{1}-x_{2})^{2}P(x_{1},x_{2}),\] where \[P\left(x_{1},x_{2}\right)=\frac{1}{10368}\left(47(x_{1}^{6}+x_{2}^{6})+120(x_{ 1}^{5}x_{2}+x_{1}x_{2}^{5})+177(x_{1}^{4}x_{2}^{2}+x_{1}^{2}x_{2}^{4})+176x_{1} ^{3}x_{2}^{3}\right).\] Thus, \(J_{2}\geq 0\) holds. _Induction step_: Given that \(J_{n-1}(x)\geq 0\) holds for \(n\geq 3\), we will show that \(J_{n}(x)\geq 0\) holds too. Since \(J_{n}(x)\) is homogeneous, it suffices to show that \[\min_{x\in\Delta_{n}}J_{n}(x)\geq 0,\] where \[\Delta_{n}=\{x\in\mathbb{R}_{+}^{n}:\ x_{1}+\cdots+x_{n}=1\}.\] Note that \(\Delta_{n}\) is compact, hence there exists \(p\in\Delta_{n}\) such that \(J_{n}(p)=\min_{x\in\Delta_{n}}J_{n}(x)\). It remains to prove \(J_{n}(p)\geq 0\), and will be done in the following two cases. 1. \(p\in\Delta_{n}^{\circ}\) (the interior of \(\Delta_{n}\)). We claim that \(p=(\mathbf{t}_{u},\mathbf{r}_{v})\) for some \(t,r\) and \(u+v=n\). Since \(p\) is an extreme point, it follows from Lagrange multiplier theorem that there is a real \(\lambda\) such that \(p\) satisfies the following equations. \[\frac{\partial J_{n}}{\partial x_{i}}(p)=\lambda\frac{\partial h_{n,1}}{ \partial x_{i}}(p),\ i=1,2,\ldots,n.\] Since \[\frac{\partial\ h_{n,1}}{\partial x_{i}} =1,\] \[\frac{\partial\ h_{n,2}}{\partial x_{i}} =x_{i}+h_{n,1},\] \[\frac{\partial\ h_{n,3}}{\partial x_{i}} =x_{i}^{2}+h_{n,1}x_{i}+h_{n,2}\] \[\frac{\partial J_{n}}{\partial x_{i}} =\frac{\partial\left(H_{n,\mu}-H_{n,\lambda}\right)}{\partial x_{ i}},\] \[=\frac{\partial\left(\frac{h_{n,2}^{4}}{\binom{n+1}{2}^{4}}- \frac{h_{n,1}^{5}h_{n,3}}{\binom{n+2}{3}^{4}\binom{n}{1}^{5}}\right)}{\partial x _{i}},\] this in turn implies that each of the \(p_{i}\) is a root of the quadratic equation \[ax_{i}^{2}+bx_{i}+c=0,\] where \[a =-\binom{n+2}{3}^{-1}\binom{n}{1}^{-5}\] \[b =4\binom{n+1}{2}^{-4}h_{n,2}^{3}(p)-\binom{n+2}{3}^{-1}\binom{n} {1}^{-5}\] \[c =4\binom{n+1}{2}^{-4}h_{n,2}^{3}(p)-\binom{n+2}{3}^{-1}\binom{n} {1}^{-5}(h_{n,2}(p)+5h_{n,3}(p))-\lambda\] Thus \(p_{1},\cdots,p_{n}\) take at most two different numbers. Without loss of generality, suppose \(\{p_{1},\cdots,p_{n}\}=\{t,r\}\). \(J_{n}\) is symmetric, so \(p\) can be written as follows. \[p=(\underbrace{t,\cdots,t}_{u},\underbrace{r,\cdots,r}_{v})=(\mathbf{t}_{u}, \mathbf{r}_{v}),\ \ u,v\in\mathbb{N},\ u+v=n.\] Noticed that \(J_{n}(\mathbf{t}_{u},\mathbf{1}_{v})\geq 0\Longleftrightarrow J_{n}( \mathbf{t}_{u},\mathbf{r}_{v})\geq 0\) due to homogeneity of \(J_{n}\). Hence by Lemma 9, we have \[J_{n}(\mathbf{t}_{u},\mathbf{1}_{v})\geq 0\Longrightarrow J_{n}(\mathbf{t}_{u}, \mathbf{r}_{v})\geq 0\Longrightarrow J_{n}(p)\geq 0.\] 2. \(p\in\partial\Delta_{n}\) (the boundary of \(\Delta_{n}\)). Let \(p=(p_{1},\cdots,p_{n-1},0)\) by symmetry. Thus, \(J_{n}(p)\geq 0\) is trivial. Since from Lemma 10 and the induction hypothesis, we have \[J_{n}(x_{1},\cdots,x_{n-1},0)\geq\frac{(n-1)^{6}}{(n+2)n^{5}}\ J_{n-1}(x_{1}, \cdots,x_{n-1})\geq 0.\] According to the principle of induction, the proof is done. 
**Lemma 12**.: _We have_ \[H_{n,(\mathbf{2}_{m})}\geq H_{n,(3,\mathbf{1}_{2m-3})},\ H_{n,(\mathbf{2}_{m},1)}\geq H_{n,(3,\mathbf{1}_{2m-2})}\ \ (m\geq 4).\] _where \(\mathbf{2}_{m}=\underbrace{2,\cdots,2}_{m},\ \ \mathbf{1}_{v}= \underbrace{1,\cdots,1}_{v}\)._ Proof.: From Lemma 11, we have \[\frac{H_{n,(\mathbf{2}_{4})}}{H_{n,(3,\mathbf{1}_{5})}}\geq 1, \tag{1}\] where \(n\geq 2\) (this is the case \(m=4\)). Generally, let \(F_{n,m}=\frac{H_{n,(\mathbf{2}_{m})}}{H_{n,(3,\mathbf{1}_{2m-3})}}\). We claim that \[F_{n,m}\geq F_{n,m-1} \tag{2}\] Indeed, \[\frac{F_{n,m}}{F_{n,m-1}} =\frac{\binom{n}{1}^{2}}{\binom{n+1}{2}}\ \frac{h_{n,2}}{(h_{n,1})^{2}}\] \[=\frac{2n}{n+1}\frac{\sum\limits_{1\leq i\leq n}x_{i}^{2}+\sum \limits_{1\leq i<j\leq n}x_{i}x_{j}}{(\sum\limits_{1\leq i\leq n}x_{i})^{2}}\] \[=\frac{n(\sum_{i=1}^{n}x_{i}^{2})+n(\sum_{i=1}^{n}x_{i})^{2}}{(n+1 )(\sum_{i=1}^{n}x_{i})^{2}}\] \[\geq\frac{(\sum_{i=1}^{n}x_{i})^{2}+n(\sum_{i=1}^{n}x_{i})^{2}}{( n+1)(\sum_{i=1}^{n}x_{i})^{2}}\ \ \ \ \mbox{from}\ \ n\left(\sum_{i=1}^{n}x_{i}^{2}\right)\geq\left(\sum_{i=1}^{n}x_{i}\right)^ {2}\] \[=1.\] By using inequality (2) repeatedly and combining it with formula (1), we have \[F_{n,m}\geq F_{n,m-1}\geq\cdots\geq F_{n,4}=\frac{H_{n,(\mathbf{2}_{4})}}{H_{n,(3,\mathbf{1}_{5})}}\geq 1.\] Hence \[H_{n,(\mathbf{2}_{m})}\geq H_{n,(3,\mathbf{1}_{2m-3})}.\] Further, since \(H_{n,\lambda}\) is multiplicative, we have \[\frac{H_{n,(\mathbf{2}_{m},1)}}{H_{n,(3,\mathbf{1}_{2m-2})}}=\frac{H_{n,( \mathbf{2}_{m})}}{H_{n,(3,\mathbf{1}_{2m-3})}}=F_{n,m}\geq 1.\] Hence \[H_{n,(\mathbf{2}_{m},1)}\geq H_{n,(3,\mathbf{1}_{2m-2})}.\] Now let us complete the proof of Theorem 7. Proof.: [Proof of Theorem 7] 1. \(d\geq 8\) and even: Let \(d=2m\) where \(m\geq 4\). Take \(\mu=(\mathbf{2}_{m}),\lambda=(3,\mathbf{1}_{2m-3})\). From Lemma 12, we have \(H_{n,\mu}=H_{n,(\mathbf{2}_{m})}\geq H_{n,(3,\mathbf{1}_{2m-3})}=H_{n,\lambda}\), but \[\mu=(\mathbf{2}_{m})=(\underbrace{2,\ldots,2}_{m})\not\succeq(3,\underbrace{1,\ldots,1}_{2m-3})=(3,\mathbf{1}_{2m-3})=\lambda.\] 2. \(d\geq 9\) and odd: Let \(d=2m+1\) for \(m\geq 4\). Take \(\mu=(\mathbf{2}_{m},1),\lambda=(3,\mathbf{1}_{2m-2})\). From Lemma 12, we have \(H_{n,\mu}\geq H_{n,\lambda}\), but \(\mu\not\succeq\lambda\). We have completed the proof. ## 4 A conjecture In this section, we propose a conjecture for an alternative characterization. The conjecture (see below) is inspired by the following observation. **Proposition 13**.: _Let \(\mu,\lambda\in Par(d)\). We have_ \[\begin{array}{ccccccccc}M_{n,\mu}&\geq&M_{n,\lambda}&\Longleftrightarrow& \underset{u+v=n}{\forall}&\underset{t\in\mathbb{R}_{+}}{\forall}&M_{n,\mu}(\mathbf{t}_{u},\mathbf{1}_{v})&\geq&M_{n,\lambda}( \mathbf{t}_{u},\mathbf{1}_{v})\\ E_{n,\mu}&\geq&E_{n,\lambda}&\Longleftrightarrow&\underset{u+v=n}{\forall}&\underset{t\in\mathbb{R}_{+}}{\forall}&E_{n,\mu}( \mathbf{t}_{u},\mathbf{1}_{v})&\geq&E_{n,\lambda}(\mathbf{t}_{u},\mathbf{1}_{v })\\ P_{n,\mu}&\geq&P_{n,\lambda}&\Longleftrightarrow&\underset{u+v=n}{\forall}&\underset{t\in\mathbb{R}_{+}}{\forall}&P_{n,\mu}( \mathbf{t}_{u},\mathbf{1}_{v})&\geq&P_{n,\lambda}(\mathbf{t}_{u},\mathbf{1}_{ v})\\ S_{n,\mu}&\geq&S_{n,\lambda}&\Longleftrightarrow&\underset{u+v=n}{\forall}&\underset{t\in\mathbb{R}_{+}}{\forall}&S_{n,\mu}( \mathbf{t}_{u},\mathbf{1}_{v})&\geq&S_{n,\lambda}(\mathbf{t}_{u},\mathbf{1}_{ v})\end{array}\] Proof.: \(\Longrightarrow\): It is obvious. \(\Longleftarrow\): * \(M\) The following proof is essentially based on comparing degrees. 
It is straightforward to show \[\deg_{t}M_{n,\alpha}(\mathbf{t}_{u},\mathbf{1}_{v})=\sum_{i=1}^{u}\alpha_{i}\] Now observe \[\begin{array}{l}\underset{u+v=n}{\forall}\ \underset{t\in\mathbb{R}_{+}}{ \forall}M_{n,\mu}(\mathbf{t}_{u},\mathbf{1}_{v})\geq M_{n,\lambda}(\mathbf{t}_ {u},\mathbf{1}_{v})\\ \Longrightarrow\ \underset{u+v=n}{\forall}\ \deg_{t}M_{n,\mu}(\mathbf{t}_{u}, \mathbf{1}_{v})\geq\deg_{t}M_{n,\lambda}(\mathbf{t}_{u},\mathbf{1}_{v})\ \ \ \ \ \text{ by comparing them when }t\to\infty\\ \Longleftrightarrow\ \underset{u+v=n}{\forall}\ \sum_{i=1}^{u}\mu_{i}\geq\sum_{i=1}^{u}\lambda_{i}\\ \Longleftrightarrow\ \mu\succeq\lambda\ \ \text{from the definition of}\ \succeq\\ \Longleftrightarrow\ M_{n,\mu}\geq M_{n,\lambda}\ \ \text{by the known results}\end{array}\] * \(S\) The following proof is the same as the proof for \(M\). It is straightforward to show \[\deg_{t}S_{n,\alpha}(\mathbf{t}_{u},\mathbf{1}_{v})=\sum_{i=1}^{u}\alpha_{i}\] Now observe \[\begin{array}{l}\underset{u+v=n}{\forall}\ \underset{t\in\mathbb{R}_{+}}{ \forall}S_{n,\mu}(\mathbf{t}_{u},\mathbf{1}_{v})\geq S_{n,\lambda}(\mathbf{t}_ {u},\mathbf{1}_{v})\\ \Longrightarrow\ \underset{u+v=n}{\forall}\ \deg_{t}S_{n,\mu}(\mathbf{t}_{u}, \mathbf{1}_{v})\geq\deg_{t}S_{n,\lambda}(\mathbf{t}_{u},\mathbf{1}_{v})\ \ \ \ \ \text{ by comparing them when }t\to\infty\\ \Longleftrightarrow\ \underset{u+v=n}{\forall}\ \sum_{i=1}^{u}\mu_{i}\geq\sum_{i=1}^{u} \lambda_{i}\\ \Longleftrightarrow\ \mu\succeq\lambda\ \ \text{from the definition of}\ \succeq\\ \Longleftrightarrow\ \ S_{n,\mu}\geq S_{n,\lambda}\end{array}\] * \(E\) The following proof is almost the same as the proof for \(M\). However there is a subtle difference. It is straightforward to show \[\deg_{t}E_{n,\alpha}(\mathbf{t}_{u},\mathbf{1}_{v})=\sum_{i=1}^{u}\alpha^{\prime}_ {i}\] where \(\alpha^{\prime}\) denotes the conjugate of the partition \(\alpha\), that is, \(\alpha^{\prime}_{j}=\max\{i\,|\,\alpha_{i}\geq j\}\). Now observe \[\underset{u+v=n}{\forall} \underset{t\in\mathbb{R}_{+}}{\forall}E_{n,\mu}(\mathbf{t}_{u}, \mathbf{1}_{v})\geq E_{n,\lambda}(\mathbf{t}_{u},\mathbf{1}_{v})\] \[\implies \underset{u+v=n}{\forall} \deg_{t}E_{n,\mu}(\mathbf{t}_{u},\mathbf{1}_{v})\geq\deg_{t}E_{n, \lambda}(\mathbf{t}_{u},\mathbf{1}_{v})\quad\text{ by comparing them when }t\to\infty\] \[\iff \underset{u+v=n}{\forall} \sum_{i=1}^{u}\mu^{\prime}_{i}\geq\sum_{i=1}^{u}\lambda^{\prime}_{i}\] \[\iff \mu^{\prime}\succeq\lambda^{\prime}\ \text{ from the definition of }\succeq\] \[\iff \mu\preceq\lambda\ \text{ since conjugation reverses the majorization order}\] \[\iff E_{n,\mu}\geq E_{n,\lambda}\] * \(P\) It was proved on p. 753 of [4], using a proof technique quite different from the proof for \(M\), because the degree comparison does not provide any information since \[\deg_{t}P_{n,\alpha}(\mathbf{t}_{u},\mathbf{1}_{v})=d.\] The above results naturally lead to the following conjecture. **Conjecture 14**.: _We conjecture that_ \[H_{n,\mu}\geq H_{n,\lambda} \iff \underset{u+v=n}{\forall} \underset{t\in\mathbb{R}_{+}}{\forall}H_{n,\mu}(\mathbf{t}_{u},\mathbf{1}_{v}) \geq H_{n,\lambda}(\mathbf{t}_{u},\mathbf{1}_{v}).\] **Remark 15**.: _The proof technique used for \(M\), \(S\) and \(E\) does not work since \(\deg_{t}H_{n,\alpha}(\mathbf{t}_{u},\mathbf{1}_{v})=d\). The proof technique used for \(P\) does not seem to work either._ **Remark 16**.: _We checked the conjecture on all possible \(\mu,\lambda\) with increasing degrees and number of variables. We used the following tools._ 1. 
_LHS: difference substitution method (DS)_ _[_30, 28, 29, 31, 32_]__._ 2. _RHS: Sturm sequence._ We have verified this by explicit computation up through \(d=12\) and \(n=12\), and have not found any counter-example. We invite the reader to help complete the proof or disproof of the conjecture. ## 5 Appendix \[W(k,l,t)= \left(k+2\right)\left(k+1\right)^{3}\left(k^{4}+2\,k^{3}l+k^{2}l^{2 }+12\,k^{3}+17\,k^{2}l+5\,kl^{2}+49\,k^{2}+43\,kl\right.\] \[+5\,l^{2}+82\,k+32\,l+47\right)t^{6}+2\,\left(k+2\right)\left(k+1 \right)^{3}\left(3\,k^{3}l+6\,k^{2}l^{2}+3\,kl^{3}+2\,k^{3}\right.\] \[+32\,k^{2}l+37\,kl^{2}+7\,l^{3}+21\,k^{2}+106\,kl+52\,l^{2}+64\,k +109\,l+60\right)t^{5}\] \[+\left(l+1\right)\left(k+1\right)^{2}\left(15\,k^{4}l+30\,k^{3}l ^{2}+15\,k^{2}l^{3}+11\,k^{4}+173\,k^{3}l+208\,k^{2}l^{2}\right.\] \[+46\,kl^{3}+121\,k^{3}+677\,k^{2}l+426\,kl^{2}+35\,l^{3}+442\,k^ {2}+1074\,kl+272\,l^{2}\] \[+662\,k+599\,l+354)t^{4}+4\,\left(l+1\right)^{2}\left(k+1\right) ^{2}\left(5\,k^{3}l+10\,k^{2}l^{2}+5\,kl^{3}+6\,k^{3}\right.\] \[+53\,k^{2}l+53\,kl^{2}+6\,l^{3}+51\,k^{2}+157\,kl+51\,l^{2}+125\, k+125\,l+88\right)t^{3}\] \[+\left(l+1\right)^{2}\left(k+1\right)\left(15\,k^{3}l^{2}+30\,k^ {2}l^{3}+15\,kl^{4}+46\,k^{3}l+208\,k^{2}l^{2}+173\,kl^{3}\right.\] \[+11\,l^{4}+35\,k^{3}+426\,k^{2}l+677\,kl^{2}+121\,l^{3}+272\,k^{2 }+1074\,kl+442\,l^{2}\] \[+599\,k+662\,l+354)t^{2}+2\,\left(l+2\right)\left(l+1\right)^{3} \left(3\,k^{3}l+6\,k^{2}l^{2}+3\,kl^{3}+7\,k^{3}\right.\] \[+37\,k^{2}l+32\,kl^{2}+2\,l^{3}+52\,k^{2}+106\,kl+21\,l^{2}+109\, k+64\,l+60\right)t\] \[+(l+2)\left(l+1\right)^{3}\left(k^{2}l^{2}+2\,kl^{3}+l^{4}+5\,k ^{2}l+17\,kl^{2}+12\,l^{3}+5\,k^{2}+43\,kl\right.\] \[+49\,l^{2}+32\,k+82\,l+47\right)\] **Acknowledgements**. The authors are grateful to Bi-can Xia for drawing their attention to some relevant references and to Hoon Hong for helpful conversations.This work was supported by the Fundamental Research Funds for the Central Universities, Southwest Minzu University (2020NYB40).
2305.20069
A survey on the complexity of learning quantum states
We survey various recent results that rigorously study the complexity of learning quantum states. These include progress on quantum tomography, learning physical quantum states, alternate learning models to tomography and learning classical functions encoded as quantum states. We highlight how these results are paving the way for a highly successful theory with a range of exciting open questions. To this end, we distill 25 open questions from these results.
Anurag Anshu, Srinivasan Arunachalam
2023-05-31T17:44:07Z
http://arxiv.org/abs/2305.20069v1
# A survey on the complexity of learning quantum states ###### Abstract We survey various recent results that rigorously study the complexity of learning quantum states. These include progress on quantum tomography, learning physical quantum states, alternate learning models to tomography and learning classical functions encoded as quantum states. We highlight how these results are paving the way for a highly successful theory with a range of exciting open questions. To this end, we distill 25 open questions from these results. ## 1 Introduction In the last decade, machine learning has received tremendous attention with the success of deep neural networks (or, more generally, deep learning) in practically relevant tasks such as natural language processing, speech recognition and image processing. Some popular applications of deep learning include AlphaGo and AlphaZero (to play the games of Go and chess), ChatGPT (to mimic a human conversation) and AlphaFold (for solving instances of protein folding) [1, 2, 3, 4]. Although these machine learning techniques work very well in practice, they are not well understood from a theoretical standpoint. In a seminal work in 1984, Valiant [14] introduced the well-known probably approximately correct (PAC) model of learning, which laid the mathematical foundation to understand machine learning from a computational complexity theory perspective. Since then, several mathematical models for machine learning have been proposed, some of which have theoretically justified the successes of practical learning algorithms. The study of machine learning from this complexity theoretic perspective is often referred to as _computational learning theory_. In another line of research, a century-old quest, which includes physicists, mathematicians and - now - computer scientists, is understanding the dividing line between simple and complex quantum states. Some prominent measures of complexity have been formulated in this process - for instance, correlation length and entanglement entropy [1] from the physics point of view; quantum circuit size and description size [1] from the computer science point of view. A recent revolution in quantum information - inspired by practical implementations of quantum devices and the incredible success of machine learning - has brought another measure into the picture: _learnability_. In the last decade, there have been several works to understand what classes of quantum states are learnable efficiently and why some classes of states are hard to learn. Here we argue that learnability as a complexity-theoretic metric is remarkably powerful and has been revealing fundamentally new properties of physically and computationally relevant quantum states. This is akin to the aforementioned PAC learning framework used to understand machine learning from a complexity-theoretic standpoint. A general formalism for learning quantum states is as follows. A learning algorithm (which we often refer to as a learner) receives many independent copies of an unknown quantum state - guaranteed to be within a "class" of states (known to the learner). Using quantum measurements, the learner extracts information about the unknown state, and then outputs a sufficiently accurate description of the quantum state. We stress the three defining notions in this general framework: the class of states, the type of measurement done by the learner and the metric for accuracy. 
Modifying any one of these parameters can change the quantum learning model in an interesting way and we discuss these models in this survey. The complexity metrics associated with these learning models are the quantum _sample complexity_, defined as the number of copies of the unknown state used by the learning algorithm, and the quantum _time complexity_, defined as the total number of gates used by the algorithm.1 Footnote 1: In this survey, we will also discuss classical sample and time complexity and its definition will be clear when we discuss these complexities. ### Organization of this survey Our survey discusses learning models that come with rigorous guarantees on the sample and time complexity, as detailed below. 1. _Learning arbitrary quantum states._ Here the goal is to learn an arbitrary \(n\)-qubit quantum state \(\rho\), given copies of \(\rho\), up to small trace distance. Given the generality of this task, the sample complexity is known to be exponential in \(n\). We discuss this in Section 2. 2. _Learning physical quantum states._ A natural followup question is, can we learn _interesting_ subclasses of quantum states efficiently? In this direction we look at stabilizer states, states from the Clifford hierarchy, Gibbs states at different temperature regimes and matrix product states. We discuss this in Section 3. 3. _Learning states in alternate models._ Suppose the goal is still to learn an unknown quantum state; can we weaken the requirements on the learner and still learn the unknown \(\rho\)? To this end, there are models of learning called PAC learning, online learning and shadow tomography, and several equivalences between them. We discuss this in Section 4. 4. _Learning classical functions encoded as states._ Suppose the unknown state \(\rho\) encodes a classical function; what is the complexity of learning? Here we discuss known results on quantum PAC learning, agnostic learning, statistical query learning and kernel methods which encode classical data into quantum states, and exhibit the strengths and weaknesses of quantum examples for learning classical functions. We discuss this in Section 5. Finally we conclude in Section 6 with some perspective on other works related to the sample and time complexity of learning quantum states. Throughout this survey we have put together several open questions that would improve our understanding of the complexity of quantum states from the perspective of learning theory. ## 2 Tomography Quantum state tomography (QST) is the following task: given many independent copies of an unknown \(n\)-qubit quantum state \(\rho\) living in \(\mathbb{C}^{d}\) where \(d=2^{n}\),2 output a \(\hat{\rho}\) such that \(\|\hat{\rho}-\rho\|_{tr}\leq\delta\) (where \(\|\cdot\|_{tr}\) is the trace norm). Understanding the sample complexity of QST has been a fundamental question in quantum information theory with applications in tasks such as verifying entanglement [13] and understanding correlations in quantum states [14], and is useful for understanding, calibrating and controlling noise in quantum devices. A simple protocol for \(\mathsf{QST}\) uses \(T=O(d^{6})\) copies: simply let \(P_{1},\ldots,P_{d^{2}}\) be all the \(d\)-dimensional Pauli matrices, and use \(O(d^{4}/\delta^{2})\) copies of \(\rho\) to estimate each \(\mathsf{Tr}(P_{i}\rho)\) up to error \(\delta/d^{2}\). Using a technique of linear inversion, this is sufficient to produce \(\hat{\rho}\) that satisfies \(\|\hat{\rho}-\rho\|_{tr}{\leq}\ \delta\). 
The overall sample complexity is \(d^{2}\cdot O(d^{4}/\delta^{2})=O(d^{6}/\delta^{2})\). The dependence on the error \(\delta\) is intuitive, as a more accurate description requires more measurements. Subsequently [15] used techniques from compressive sensing to improve the complexity to \(O(d^{4}/\delta^{2})\), and after that Kueng et al. [16] used more sophisticated techniques to improve the sample complexity to \(O(d^{3}/\delta^{2})\). For a while it remained open what the right sample complexity of tomography was. Two breakthrough works by Haah et al. [17] and O'Donnell and Wright [18] finally obtained optimal bounds for the sample complexity of \(\mathsf{QST}\). **Theorem 1**.: _The sample complexity of \(\mathsf{QST}\) up to trace distance \(\delta\) is \(O(d^{2}/\delta^{2})\). Promised that the state is rank \(r\), the sample complexity of \(\mathsf{QST}\) up to infidelity \(\varepsilon\) is \(\tilde{\Theta}(dr/\varepsilon)\).3_ Footnote 3: Infidelity between quantum states \(\rho,\sigma\) is defined as \(1-\|\sqrt{\rho}\sqrt{\sigma}\|_{tr}\). We now give a proof overview of a special case of this theorem - when the quantum state is pure (rank \(r=1\)). It makes use of the symmetric subspace and achieves a sample complexity of \(\tilde{O}(d/\varepsilon)\). This is tight in \(d\), as shown in [17]. Special case of Theorem 1.: Given \(k\) copies \(|\psi\rangle^{\otimes k}\) of an unknown \(d\)-dimensional pure state, with \(k\) yet undetermined, note that the state lives inside the symmetric subspace \(\Pi_{sym}^{d,k}\). To determine the state, one can perform the so-called _pretty-good measurement_[1], which has (continuous) POVM elements \(\{|\phi\rangle\langle\phi|^{\otimes k}\}_{|\phi\rangle\in\mathbb{C}^{d}}\). Note that this measurement has infinitely many outcomes, which is ill-defined as stated, but we can address this by an appropriate discretization. As a consequence, the measurement to be performed is \[X\rightarrow\binom{d+k-1}{k}\int_{\phi}d\phi\,|\phi\rangle\langle\phi|^{ \otimes k}X|\phi\rangle\langle\phi|^{\otimes k}\otimes|\text{description of }\phi\rangle\langle\text{description of }\phi|,\] which is a valid POVM whenever \(X\) is in the symmetric subspace. The factor \(\binom{d+k-1}{k}\) is the dimension of the symmetric subspace and ensures that the measurement is trace-preserving. Given \(|\psi\rangle\langle\psi|^{\otimes k}\) as input, observe that a state \(|\phi\rangle\langle\phi|\) is output with probability \[\binom{d+k-1}{k}\langle\phi|^{\otimes k}|\psi\rangle\langle\psi|^{\otimes k}| \phi\rangle^{\otimes k}=\binom{d+k-1}{k}|\langle\phi|\psi\rangle|^{2k}.\] Thus, the probability that \(|\langle\phi|\psi\rangle|{\leq}\ 1-\varepsilon\) is at most \[\binom{d+k-1}{k}\int_{\phi:|\langle\phi|\psi\rangle|{\leq}1-\varepsilon}d\phi |\langle\phi|\psi\rangle|^{2k}{\leq}\binom{d+k-1}{k}\cdot(1-\varepsilon)^{2k} \leq\left(e\cdot\frac{k+d-1}{d}\right)^{d}e^{-2k\varepsilon}.\] Choosing \(k=\frac{10d}{\varepsilon}\log\frac{1}{\varepsilon}\), we can guarantee that the RHS is small. In order to go from the special case to the theorem above, [17, 18] consider a generalization of this argument and proceed by looking at subspaces that are invariant under permutations of registers and local unitary action. We refer the interested reader to [17, 18] for a detailed exposition of the general proof. Very recently, the work of Flammia and O'Donnell [19] considered the sample complexity of tomography under various distance metrics. 
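To make the linear-inversion baseline from the beginning of this section concrete, here is a minimal NumPy sketch (an editorial illustration, not from the survey, and not the optimal algorithm behind Theorem 1): for two qubits it estimates every Pauli expectation from a finite number of single-copy measurements and reconstructs \(\hat{\rho}\) by linear inversion; the trace distance to \(\rho\) shrinks as the number of measured copies per Pauli string grows.

```python
# Editorial sketch: naive Pauli linear-inversion tomography for n = 2 qubits (d = 4).
import itertools
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)                     # random full-rank density matrix

shots = 2000                             # copies measured per Pauli string
rho_hat = np.zeros((d, d), dtype=complex)
for P1, P2 in itertools.product(paulis, repeat=2):
    P = np.kron(P1, P2)
    mean = float(np.clip(np.trace(P @ rho).real, -1, 1))   # true expectation Tr(P rho)
    if np.allclose(P, np.eye(d)):
        est = 1.0                        # identity string needs no measurement
    else:                                # simulate +/-1 outcomes on `shots` copies
        est = rng.choice([1.0, -1.0], size=shots,
                         p=[(1 + mean) / 2, (1 - mean) / 2]).mean()
    rho_hat += est * P / d               # linear inversion: rho = (1/d) sum_i Tr(P_i rho) P_i
eigs = np.linalg.eigvalsh(rho_hat - rho)
print("trace distance:", 0.5 * np.abs(eigs).sum())
```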
A drawback of these tomography algorithms is that the time complexity of the procedure scales exponentially in \(d\) (i.e., doubly-exponentially in \(n\)). A natural question that was open from their work was, is there a _time-efficient_ procedure for tomography? In particular, is it possible to solve \(\mathsf{QST}\) using only single-copy measurements? There were a few works in this direction recently [22, 10], and very recently Chen et al. [14] answered this question with a surprisingly short proof. **Theorem 2**.: _The sample complexity of \(\mathsf{QST}\) using single copy measurements is \(\Theta(d^{3}/\delta^{2})\)._ The upper bound comes from the result of Kueng et al. [11], and Chen et al. [14] proved the lower bound of \(\Omega(d^{3}/\delta^{2})\) for \(\mathsf{QST}\) with single-copy (and adaptive) measurements. We now sketch their lower bound. A technical challenge they had to overcome was the following: prior works that established sample lower bounds proved them in the context of _property testing_, by showing the hardness of distinguishing two hard distributions over states whose statistics (under separable measurements) are far apart. However, for tomography there are not too many techniques that we know of to prove lower bounds against separable measurements. In this paper they use the so-called "learning-tree framework" (which was first used in the prior work of Chen et al. [14] and inspired by classical decision trees used to analyze query complexity [1]) to prove their lower bounds. Here there is a tree where each node corresponds to a measurement operator applied onto a copy of the unknown state, and the leaves are given by classical bit strings corresponding to measurement outcomes. Based on the classical output in the leaves, the algorithm outputs a hypothesis state \(\sigma\). The depth of the tree is the sample complexity of the learning algorithm. Chen et al. [14] construct a hard distribution of quantum states based on Gaussian ensemble matrices, and their main technical contribution is to show the following: if a separable tomography protocol is run on this hard instance, then the output quantum state \(\sigma\) at the leaves of the decision tree above is anti-concentrated around the unknown target quantum state if the depth of the tree is \(o(d^{3})\). Proving this anti-concentration is non-trivial; however, the proof is fairly short and we refer to their work for the details. We conclude this section by discussing a simpler problem than \(\mathsf{QST}\): quantum _spectrum estimation_. Here the goal is to learn the spectrum of an unknown quantum state \(\rho\), given copies of \(\rho\). It was shown in [13] that \(O(d^{2}/\varepsilon^{2})\) copies of \(\rho\) suffice to estimate the spectrum of \(\rho\) up to \(\ell_{1}\) distance \(\varepsilon\), and they also showed a lower bound of \(\Omega(d/\varepsilon^{2})\).4 Spectrum learning has been an important subroutine in several property testing algorithms [13, 13, 14, 15]. One question that remains open is the following: Footnote 4: They also showed that a _class of algorithms_ based on Schur sampling require an \(\Omega(d^{2}/\varepsilon^{2})\) sample complexity. **Question 1**.: _What is the tight sample complexity of quantum spectrum estimation?_ ## 3 Learning physical quantum states In the previous section we saw that fully learning arbitrary quantum states could require exponentially many copies of the unknown state. 
A natural question is, are there _physical_ subclasses of quantum states which can be learned using polynomially many copies (and even in polynomial time)? In this section, we discuss a few classes of physical states that can be learned using polynomial sample or time complexity. ### Stabilizer states A natural candidate class that was considered for efficient tomography was the class of states that are known to be classically simulable. To this end, one of the first classes of states known to be learnable in polynomial time was that of stabilizer states. These are \(n\)-qubit states produced by the action of \(n\)-qubit Clifford circuits acting on \(|0^{n}\rangle\). Aaronson and Gottesman [1, 2] considered this question and showed the following theorem. **Theorem 3**.: _The sample complexity of exactly learning \(n\)-qubit stabilizer states is \(O(n)\) and the time complexity is \(O(n^{3})\)._ In their paper, [1] also showed that with single-copy measurements \(O(n^{2})\) copies of a stabilizer state \(|\psi\rangle\) suffice to learn \(|\psi\rangle\). Subsequently, Montanaro [15] gave a fairly simple procedure to learn stabilizer states using \(O(n)\) copies that only uses entangled measurements over 2 copies (prior to his work, Low [14] showed how to learn stabilizer states when one is allowed to make queries to the Clifford circuit preparing the unknown stabilizer state). We now discuss Montanaro's protocol: it is well-known [13, 12] that every \(n\)-qubit stabilizer state can be written as \(|\psi\rangle=\frac{1}{\sqrt{|A|}}\sum_{x\in A}i^{\ell(x)}(-1)^{q(x)}|x\rangle\), where \(A\subseteq\{0,1\}^{n}\) is a subspace and \(\ell\) (resp. \(q\)) is a linear (resp. quadratic) polynomial over \(\mathbb{F}_{2}\) in the variables \(x_{1},\ldots,x_{n}\). Special case of Theorem 3.: We consider the case when \(\ell(x)=1\) for all \(x\). Without loss of generality we can assume that \(A=\{0,1\}^{n}\) as well: a learning algorithm can measure \(\tilde{O}(n)\) copies of \(|\psi\rangle\) in the computational basis, learn a basis for \(A\) and apply an invertible transformation to convert \(|\psi\rangle\) to \(\sum_{x\in\{0,1\}^{k}\times 0^{n-k}}(-1)^{q(x)}|x\rangle\) where \(\text{rank}(A)=k\), and now apply a learning procedure on states of the form \(|\phi\rangle=\frac{1}{\sqrt{2^{k}}}\sum_{x\in\{0,1\}^{k}}(-1)^{q(x)}|x\rangle\). With this assumption, the learning algorithm uses the so-called Bell-sampling procedure: given two copies of \(|\phi_{q}\rangle=\frac{1}{\sqrt{2^{n}}}\sum_{x}(-1)^{q(x)}|x\rangle\) where \(q(x)=x^{\top}Bx\) (where \(B\in\mathbb{F}_{2}^{n\times n}\)), perform \(n\) CNOTs between the first copy and second copy, and measure the second copy. One obtains a uniformly random \(y\in\mathbb{F}_{2}^{n}\) and the state \[\frac{1}{\sqrt{2^{n}}}\sum_{x}(-1)^{q(x)+q(x+y)}|x\rangle=\frac{(-1)^{y^{\top }By}}{\sqrt{2^{n}}}\sum_{x}(-1)^{x^{\top}(B+B^{\top})\cdot y}|x\rangle.\] The learning algorithm then applies the \(n\)-qubit Hadamard transform and measures to obtain the bit string \((B+B^{\top})\cdot y\). Repeating this process \(O(n\log n)\) many times, one can learn \(n\) linearly independent constraints about \(B\). Using Gaussian elimination, one can then learn the off-diagonal elements of \(B\). To learn the diagonal elements of \(B\), a learner applies the operation \(|x\rangle\rightarrow(-1)^{x_{i}x_{j}}|x\rangle\) if \(B_{ij}=1\) for \(i\neq j\). Repeating this for all \(i\neq j\), the resulting quantum state is \(\sum_{x}(-1)^{\sum_{i}x_{i}B_{ii}}|x\rangle\). 
Again applying the \(n\)-qubit Hadamard transform, the learner learns the diagonal elements of \(B\). Given that stabilizer states are learnable using \(O(n)\) copies, a followup question which hasn't received much attention is the following. **Question 2**.: _The stabilizer rank of \(|\psi\rangle\) is the minimum \(k\) for which \(|\psi\rangle=\sum_{i}\alpha_{i}|\phi_{i}\rangle\) where \(|\phi_{i}\rangle\) is an \(n\)-qubit stabilizer state. Can we learn stabilizer rank-\(n\) states in polynomial time?_ Inspired by a result of Raz [11] who proved time-space tradeoffs for parity learning, we also pose the following question. **Question 3**.: _The standard Bell-sampling approach for learning stabilizer states uses \(O(n)\) copies of the stabilizer state and \(O(n^{2})\) classical space. If we have \(o(n^{2})\) classical space, what is the sample complexity of learning stabilizer states? Similarly, can we prove sample-space tradeoffs when the algorithm is given quantum space?5_ Footnote 5: Recently, Liu et al. [14] showed that an algorithm for learning parities needs either \(\Omega(n^{2})\) classical space, \(\Omega(n)\) quantum space or \(\Omega(2^{n})\) labelled examples. ### Learning circuits with non-Clifford gates We saw how to learn the output states of Clifford circuits; a natural question is, if the circuit consists of a few _non-Clifford_\(T\) gates, can we still learn the output state? It is known that Clifford\(+T\) circuits are universal for quantum computation and they have received much attention in fault-tolerance, circuit compilation and circuit simulation [1, 12, 13, 14, 15, 16, 17, 18, 19]. An arbitrary quantum circuit can be decomposed as an alternating sequence of Clifford stages and \(T\) stages (by a Clifford stage, we mean a Clifford circuit, and by a \(T\) stage we mean that either a \(T\) gate or the identity is applied to each qubit). The number of \(T\) stages is the _\(T\)-depth_ of the circuit. The learning task we consider is: suppose \(U\) is an \(n\)-qubit quantum circuit belonging to the class of \(T\)-depth-one circuits; can one learn \(U\)? In particular, if we are allowed to apply \(U\) to specified prepared states and measure under a class of POVMs, how many measurements are required for learning \(U\)? In [10], they proved the following theorem. **Theorem 4**.: _Let \(U\) be an \(n\)-qubit \(T\)-depth one quantum circuit comprising of \(O(\log n)\) many \(T\) gates. There exists a procedure that makes \(\textsf{poly}(n)\) queries to \(U\) and outputs a circuit \(\tilde{U}\) that is equivalent to \(U\) when the input states are restricted to the computational basis._ We omit the proof of this theorem and refer the reader to [10] for more details. Recently, a hardness result was shown for learning the output distributions of Clifford circuits with a _single_\(T\) gate [13];6 given this, it is perhaps surprising that \(T\)-depth \(1\) circuits are learnable in polynomial time. This theorem naturally motivates the following questions. Footnote 6: The hardness result [13] is in a weaker _statistical query_ model which we discuss in Section 5.2, whereas the positive result [10] considers the standard tomography model wherein the learner is given copies of the state. **Question 4**.: _What is the complexity of learning circuits with \(T\)-depth \(t\) (for some \(t\geq 2\))?_ Recently, there have been a few works by Grewal et al. [11, 12, 13] where they showed polynomial-time algorithms for learning states prepared by Clifford circuits with \(O(\log n)\) many \(T\) gates. 
They are also able to learn the output states of such circuits in polynomial time.7 Footnote 7: We remark that the states produced by these circuits have stabilizer rank \(\leq n\), still leaving open the question we asked in the previous section. **Question 5**.: _Can we learn \(n\)-qubit states and circuits that consist of \(\omega(\log n)\) many \(T\) gates? If not, is there a conditional hardness result one could show for learning these states?_ ### Learning phase states In this section, we consider learning classical low-degree Boolean functions encoded in the amplitudes of quantum states, aka _phase states_, which can be viewed as a generalization of stabilizer states. In recent times phase states have found several applications in cryptography, pseudorandomness, measurement-based quantum computing, IQP circuits and learning theory [11, 12, 13, 14, 15, 16, 17]. A degree-\(d\)_binary phase state_ is a state of the form \(|\psi_{f}\rangle=2^{-n/2}\sum_{x\in\{0,1\}^{n}}(-1)^{f(x)}|x\rangle\) where \(f:\{0,1\}^{n}\rightarrow\{0,1\}\) is a degree-\(d\) function. Similarly, a degree-\(d\)_generalized phase state_ is a state of the form \(|\psi_{f}\rangle=2^{-n/2}\sum_{x\in\{0,1\}^{n}}\omega_{q}^{f(x)}|x\rangle\) where \(f:\{0,1\}^{n}\rightarrow\mathbb{Z}_{q}\) is a degree-\(d\) polynomial, \(\omega_{q}=e^{2\pi i/q}\) and \(q\) is a prime. It is known that the output state of a random \(n\)-qubit Clifford circuit is a generalized \(q=4\), degree-2 phase state with constant probability [1], and generalized degree-\(d\) phase states with \(q=2^{d}\) can be prepared from diagonal unitaries in the \(d\)-th level of the Clifford hierarchy [10, 11]. The learning question is: how many copies of \(|\psi_{f}\rangle\) suffice to learn \(f\) exactly? Earlier works [1, 21, 22] showed that \(O(n)\) samples suffice for learning degree-1, and \(O(n^{2})\) suffices for learning degree-2 binary phase states; learning degree-\(d\) for \(d\geq 3\) has remained open (in fact it was plausible that it was a hard learning task since IQP circuits produce degree-3 phase states [21, 1]). The sample complexity of learning _generalized_ phase states had not been studied before. In a recent work [1] they provided bounds for learning phase states with separable and with entangled measurements. Below we sketch the upper and lower bounds for the case of separable measurements. We refer the reader to [1] for the proof of the sample complexity with entangled measurements. **Separable measurements**_upper bound._ The proof makes the following simple observation: given \(|\psi_{f}\rangle=2^{-n/2}\sum_{x}\omega_{q}^{f(x)}|x\rangle\), suppose we measure qubits \(2,3,\ldots,n\) in the computational basis and obtain \(y\in\{0,1\}^{n-1}\). The post-measurement state is then \(|\psi_{f,y}\rangle=(\omega_{q}^{f(0y)}|0\rangle+\omega_{q}^{f(1y)}|1\rangle)/ \sqrt{2}.\) If the base of the exponent were \((-1)\) (i.e., \(q=2\)), then applying a Hadamard to \(|\psi_{f,y}\rangle\) and measuring would produce \(c=f(0y)-f(1y)\). Their main idea is that, for general \(q\), it is still possible to obtain a value \(b\in\mathbb{Z}_{q}\) such that \(b\neq c\) with certainty, where \(c=f(0y)-f(1y)\bmod q\). To this end, consider a POVM whose elements are given by \(\mathcal{M}=\{|\phi_{b}\rangle\langle\phi_{b}|\}_{b\in\mathbb{Z}_{q}}\) (rescaled by \(2/q\) so that the elements sum to the identity), where \(|\phi_{b}\rangle=(|0\rangle-\omega_{q}^{b}|1\rangle)/\sqrt{2}\). 
Applying this POVM \(\mathcal{M}\) to the unknown state \((|0\rangle+\omega_{q}^{c}|1\rangle)/\sqrt{2}\), they observe that \(c\) occurs as an outcome with probability \(0\), and furthermore one can show that _every_ other outcome \(b\neq c\) appears with probability \(\Omega(q^{-3})\). Repeating this process \(m=\tilde{O}(q^{3}n^{d-1})\) many times, one obtains \((y^{(k)},b^{(k)})\) for \(k=1,2,\ldots,m\) such that \(f(1y^{(k)})-f(0y^{(k)})\neq b^{(k)}\) for all \(k\in[m]\). Let \(g(y)=f(1y)-f(0y)\) (i.e., \(g=\nabla_{1}f\)). Clearly \(g\) is a polynomial of degree \(\leq d-1\). A non-trivial analysis in [1] shows the following: the probability that more than one degree-\((d-1)\) polynomial \(g\) satisfies the constraints \(g(y^{(k)})\neq b^{(k)}\) for all \(k\in[m]\) is exponentially small for this choice of \(m=\tilde{O}(q^{3}n^{d-1})\). Hence \(m\) many copies of \(|\psi_{f}\rangle\) allow a learning algorithm to learn the derivative \(\nabla_{1}f\). Repeating this for all \(n\) directions, we can learn \(\nabla_{1}f,\ldots,\nabla_{n}f\), and hence \(f\).

**Separable measurements** _lower bound._ Furthermore, they show that the above protocol is optimal even when the learner is allowed arbitrary single-copy measurements. The main idea is the following: for a uniformly random degree-\(d\) function \(f\), suppose a learning algorithm measures the phase state \(|\psi_{f}\rangle\) in an arbitrary orthonormal basis \(\{U|x\rangle\}_{x}\). One can show that the distribution describing the measurement outcome \(x\) is "fairly" uniform. In particular, \(\mathbb{E}_{f}[H(x|f)]\geq n-O(1)\), where \(H(x|f)\) is the Shannon entropy of the distribution \(P(x|f)=|\langle x|U^{\dagger}|\psi_{f}\rangle|^{2}\). To prove this, they first lower bound the Shannon entropy by the Rényi-2 entropy and prove a technical statement to bound the latter by deriving an explicit formula for \(\mathbb{E}_{f}[|\psi_{f}\rangle\langle\psi_{f}|^{\otimes 2}]\). Thus, for a typical \(f\), measuring one copy of the phase state \(|\psi_{f}\rangle\) provides at most \(O(1)\) bits of information about \(f\). Since a uniformly random degree-\(d\) polynomial \(f\) in \(n\) variables has entropy \(\Omega(n^{d})\), one has to measure \(\Omega(n^{d})\) copies of \(|\psi_{f}\rangle\) in order to learn \(f\).

In [1] they also constructed a procedure to learn circuits (consisting of diagonal gates in the Clifford hierarchy) which produce phase states, leaving open the following:

**Question 6**.: _What is the complexity of learning circuits consisting of non-diagonal gates in the Clifford hierarchy?8_

Footnote 8: When given query access to the circuit, Low [21] gave a procedure to learn the Clifford hierarchy. However, given only copies of \(C|0^{n}\rangle\) where \(C\) consists of _non-diagonal_ gates in the Clifford hierarchy, the question we ask is open.

Liang [14] recently showed a conditional hardness of learning Clifford circuits in the _proper learning_ setting. An open question from [1] which might improve our understanding of phase states is, how many copies suffice to _test_ phase states.

**Question 7**.: _What is the complexity of property testing degree-\(d\) phase states?
In particular, given copies of a state \(|\psi\rangle\) promised to be either a degree-\(d\) phase state or \(\varepsilon\)-far from the set of all degree-\(d\) phase states, how many copies are necessary and sufficient to distinguish these cases?_

### Gibbs states of local Hamiltonians

In this section we discuss the problem of learning a Hamiltonian given copies of its Gibbs state. The setup of this learning problem is as follows: let \(H\) be a local Hamiltonian \(H=\sum_{\alpha=1}^{m}\mu_{\alpha}E_{\alpha}\) on \(n\) qubits, where \(E_{\alpha}\) is some local orthogonal operator basis such as the Pauli matrices. An algorithm receives copies of the Gibbs state \(\rho_{\beta}(H)=\frac{e^{-\beta H}}{\mathsf{Tr}(e^{-\beta H})}\) and the goal is to output a list of numbers \(\mu^{\prime}:=\{\mu_{1}^{\prime},\mu_{2}^{\prime},\ldots,\mu_{m}^{\prime}\}\) that are close to \(\mu:=\{\mu_{1},\mu_{2},\ldots,\mu_{m}\}\) in either the \(\ell_{\infty}\) or the \(\ell_{2}\) distance metric. We make the natural assumption that \(\mu_{1},\ldots,\mu_{m}\in(-1,1)\), which simply says that each local interaction has bounded strength. Learning an unknown Hamiltonian from its Gibbs state has been studied in statistical physics and machine learning [13, 12, 11, 1] for many decades, where it is known as the "inverse Ising problem". In machine learning, one is often interested in Ising interactions (that is, each \(E_{\alpha}\) is a Pauli operator of the form \(Z\otimes Z\)) where the underlying interaction graph9 is sparse and unknown [1, 10, 11]. Learning the Ising model thus also recovers the underlying graph structure, which is itself important. In the quantum regime, we are far from being able to learn the underlying graph solely under the sparsity assumption. Thus, we will assume that the underlying graph is known. For most physics applications, the graph also respects the geometric constraints that arise from living in a low-dimensional space. Before discussing algorithms for Hamiltonian learning, we first discuss motivation for considering this learning question.

Footnote 9: An interaction graph has the qubits in the Hamiltonian as its vertices and each Ising interaction as its edge.

Hamiltonian learning can be a useful experimental tool in a variety of settings.

* **Understanding the lattice structure:** Suppose we wish to know whether interactions in a given quantum material respect a Kagome lattice structure or a square lattice structure, assuming one of them is the case. This knowledge can significantly affect the physical properties, such as the electronic behaviour studied in [13]. If our Hamiltonian learning algorithm guarantees that \(\|\mu^{\prime}-\mu\|_{\infty}\leq\frac{1}{3}\), then we can figure out which edges are present or absent, and in turn the lattice structure.
* **Estimating the spectral gap of a Hamiltonian:** Another key quantity of interest is the spectral gap of a Hamiltonian, which dictates a myriad of ground state properties. For learning the spectral gap up to constant precision (say 0.1), we need to know the Hamiltonian very accurately, and the right regime to consider is \(\|\mu^{\prime}-\mu\|_{1}\leq 0.1\).
* **Effective Hamiltonians:** Hamiltonians are - after all - models of real interactions happening in physics. Effective Hamiltonians regularly arise when we wish to consider interactions between a specific set of particles or quasi-particles. These interactions can be hard to precisely determine theoretically, motivating the use of Hamiltonian learning [12].
* **Entanglement Hamiltonian:** The Li-Haldane conjecture states that the marginals of a 2D gapped ground state are Gibbs states of a local Hamiltonian whose temperature depends on the location of the local term. Learning this Hamiltonian is directly relevant to understanding the entanglement structure of the system [14].

We remark that in the applications above, we did not specify the inverse temperature \(\beta\) of the Gibbs state. In some cases, the temperature can be controlled, and then setting \(\beta\) to be a small constant leads to optimal algorithms - see below. In other cases, such as for effective or entanglement Hamiltonians, the temperature can be very low at the boundary of the region. Thus, efficient algorithms for Hamiltonian learning at all finite temperatures have interesting consequences in quantum computing.

#### 3.4.1 Sufficient statistics

We now turn to designing algorithms for the learning task above. One natural question is: given an instance of the Hamiltonian learning problem, is there any data about the Gibbs state that would suffice to learn the Hamiltonian? In other words, what are the '_sufficient statistics_' for the Hamiltonian? The answer turns out to be very simple: they are the set of expectation values \(f_{\alpha}=\mathsf{Tr}(E_{\alpha}\cdot\rho_{\beta}(H))\). There are two ways to prove that these statistics suffice for learning.

* **Information theoretic argument:** Let us consider two Gibbs quantum states \(\rho_{\beta}(H)\) and \(\rho_{\beta}(G)\), where \(H=\sum_{\alpha}\mu_{\alpha}E_{\alpha}\) and \(G=\sum_{\alpha}\nu_{\alpha}E_{\alpha}\). We will argue that their "distance" is characterized by the expectation values. For this, we evaluate the symmetric relative entropy \[\mathrm{S}\left(\rho_{\beta}(H)\|\rho_{\beta}(G)\right)+\mathrm{S}\left(\rho_{\beta}(G)\|\rho_{\beta}(H)\right)=\beta\mathsf{Tr}((H-G)(\rho_{\beta}(G)-\rho_{\beta}(H)))=\beta\sum_{\alpha}(\mu_{\alpha}-\nu_{\alpha})\mathsf{Tr}(E_{\alpha}(\rho_{\beta}(G)-\rho_{\beta}(H))),\] where the first equality follows by routine calculation. Thus, if \(\mathsf{Tr}(E_{\alpha}\rho_{\beta}(H))=\mathsf{Tr}(E_{\alpha}\rho_{\beta}(G))\) for all \(\alpha\), the right hand side vanishes. The above argument also says that if \(\mathsf{Tr}(E_{\alpha}\rho_{\beta}(H))\approx\mathsf{Tr}(E_{\alpha}\rho_{\beta}(G))\) then the relative entropy between the Gibbs quantum states is small. This is good enough to 'learn' the Gibbs state up to small error in total variation distance. More precisely, \[\mathrm{S}\left(\rho_{\beta}(H)\|\rho_{\beta}(G)\right)+\mathrm{S}\left(\rho_{\beta}(G)\|\rho_{\beta}(H)\right)\leq\beta\max_{\alpha}\lvert\mu_{\alpha}-\nu_{\alpha}\rvert\cdot\left(\sum_{\alpha}\lvert\mathsf{Tr}(E_{\alpha}\rho_{\beta}(H))-\mathsf{Tr}(E_{\alpha}\rho_{\beta}(G))\rvert\right)\leq 2\beta\left(\sum_{\alpha}\lvert\mathsf{Tr}(E_{\alpha}\rho_{\beta}(H))-\mathsf{Tr}(E_{\alpha}\rho_{\beta}(G))\rvert\right).\] However, this estimate is not sufficient to guarantee the closeness of the Hamiltonians.
* **Convexity argument:** A second argument - more directly useful for our problem description - is based on the convexity of the log partition function. The observation here is simply that the function \(\log\mathsf{Tr}(e^{-\beta H})\) is a convex function of the parameters \(\{\mu_{1},\mu_{2},\ldots,\mu_{m}\}\). The vector \((f_{1},f_{2},\ldots,f_{m})\) of trace expectations then determines the gradient of this function (up to a factor of \(-\beta\)). Furthermore, precise knowledge of the gradient can be used to identify the parameters \(\mu_{1},\ldots,\mu_{m}\). See Figure 1 (a).
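As a sanity check of this convexity observation, the following minimal numpy sketch (our own toy example with an arbitrarily chosen 3-qubit Hamiltonian, not code from the references) verifies numerically that the gradient of \(\log\mathsf{Tr}(e^{-\beta H(\mu)})\) with respect to \(\mu_{\alpha}\) equals \(-\beta\,f_{\alpha}\), so the expectation values \(f_{\alpha}\) indeed determine the gradient.

```python
# Numerical check (our own sketch): the gradient of the log partition function equals
# -beta * f_alpha, where f_alpha = Tr(E_alpha rho_beta) are the sufficient statistics.
import numpy as np
from scipy.linalg import expm
from functools import reduce

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])
kron = lambda *ops: reduce(np.kron, ops)

# A toy 3-qubit basis of local terms E_alpha (chosen arbitrarily for illustration).
terms = [kron(Z, Z, I2), kron(I2, Z, Z), kron(X, I2, I2)]
mu = np.array([0.7, -0.3, 0.5])
beta = 1.2

def log_partition(m):
    H = sum(mi * E for mi, E in zip(m, terms))
    return np.log(np.trace(expm(-beta * H)).real)

# Exact expectation values f_alpha = Tr(E_alpha rho_beta).
H = sum(mi * E for mi, E in zip(mu, terms))
rho = expm(-beta * H); rho /= np.trace(rho)
f = np.array([np.trace(E @ rho).real for E in terms])

# Finite-difference gradient of the log partition function at mu.
eps = 1e-6
grad = np.array([(log_partition(mu + eps * e) - log_partition(mu - eps * e)) / (2 * eps)
                 for e in np.eye(len(mu))])

print(np.allclose(grad, -beta * f, atol=1e-5))   # True: knowing the f_alpha's fixes the gradient
```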
The quantities \(f_{\alpha}\) can only be known approximately in experiments, due to statistical errors in estimation. Thus, a robust version of sufficient statistics is needed to develop an algorithm for Hamiltonian learning. In [1], strong convexity of the log partition function was established. This roughly says that the log partition function "curves well" (see Figure 1 (b)). An algorithm - based on gradient descent - was constructed which uses \(O\left(m^{3}\cdot 1/\varepsilon^{2}\cdot\mathsf{poly}(1/\beta)\cdot\exp(\mathsf{poly}(\beta))\right)\) copies of the Gibbs state to learn the Hamiltonian with the guarantee \(\|\mu^{\prime}-\mu\|_{2}\leq\varepsilon\). The time complexity of the algorithm is dominated by computing the gradient of the log partition function; this can be done efficiently at high temperatures, for stoquastic Hamiltonians and for 1D Hamiltonians, but it requires a large run-time at low temperatures for arbitrary Hamiltonians.

Figure 1: (a) Given the gradient of a convex function - such as the log partition function - there is a unique point that matches the gradient. (b) Strong convexity ensures that good knowledge of the gradient leads to good enough closeness to the desired point.

#### 3.4.2 Commuting Hamiltonians

While the above algorithm based on sufficient statistics takes exponential time at low temperatures, classical Hamiltonians can be learned time-efficiently using more refined techniques - as noted earlier [1, 14, 15]. In fact, here we argue that _commuting_ Hamiltonians - which include classical Hamiltonians - can also be efficiently learned at any temperature, as long as the interaction graph is known. The algorithm is fundamentally different from the previous one that was based on estimating the expectation values \(f_{\alpha}=\mathsf{Tr}(E_{\alpha}\rho_{\beta}(H))\). Consider \(H=\sum_{\ell}h_{\ell}\), where the commutators \([h_{\ell},h_{\ell^{\prime}}]=0\). We note that this notation is different from the one we used earlier; in particular, here the \(h_{\ell}\)s need not form an _orthogonal_ basis. The algorithm, sketched in [1], is based on the following theorem (see also Figure 2).

**Theorem 5**.: _[_1_]_ _For any region \(R\) on the lattice, define the effective reduced Hamiltonian \(H_{R}=\frac{-1}{\beta}\log\mathrm{Tr}_{R^{c}}\left(\rho_{\beta}\right)\).10 Let \(\partial R\) be the boundary of \(R\), and \(\partial_{-}R\) be the inner boundary of \(R\) (which is the set of qubits in \(R\) that interact with a qubit outside \(R\)). Then_

Footnote 10: Here the subscript in \(\mathsf{Tr}\) refers to the registers being traced out. Moreover, \(R^{c}\) is the set of qubits not in \(R\).

\[H_{R}=\alpha_{R}I+h_{R}+\Phi,\]

_where \(\Phi\) is only supported on \(\partial_{-}R\) and \([\Phi,h_{R}]=0\). Here, \(\alpha_{R}\) is some real number and \(\|\Phi\|\leq 2|\partial R|\)._

Using this theorem, the learning algorithm is straightforward: perform good enough tomography of the region around an interaction \(h_{\ell}\) to reconstruct the marginal to very high accuracy. Then take the log of the marginal, and compute each \(h_{\ell}\) up to error \(\varepsilon\) (i.e., output an \(h^{\prime}_{\ell}\) such that \(\|h^{\prime}_{\ell}-h_{\ell}\|\leq\varepsilon\)). This is good enough to estimate the unknown Hamiltonian \(H\). The resulting
sample complexity [1] is \(\exp(\mathcal{O}(\beta k^{D}))\cdot\mathcal{O}\left(1/\varepsilon^{2}\cdot\log(m/\delta)\right),\) where \(k\) is the locality of the Hamiltonian, \(D\) is the degree of the underlying interaction graph and \(\delta\) is the probability of failure. The time complexity is \(m\cdot\exp(\mathcal{O}(\beta k^{D}))\cdot\mathcal{O}\left(1/\varepsilon^{2}\cdot\log(m/\delta)\right)\).

Figure 2: Consider the marginal of the Gibbs state in the brown circle. For commuting Hamiltonians, this marginal is the Gibbs state of a Hamiltonian that is the boundary correction to the Hamiltonian strictly within the region.

#### 3.4.3 High temperature Gibbs states

The idea of using the effective reduced Hamiltonian in Theorem 5 can also be applied to non-commuting Hamiltonians, as long as the temperature is high enough (that is, \(\beta\) smaller than a critical inverse temperature \(\beta_{c}\), which is a constant). This follows from a result similar to Theorem 5 shown by [14] using cluster expansion, with \(H_{R}\) approximated by \(h_{\ell}+\Phi\) as \(\|H_{R}-h_{\ell}-\Phi\|_{\infty}\leq\exp(-\Omega(r))\) for a spherical region \(R\) of radius \(r\) around \(\ell\) (see Figure 3 for an example). If we use the same approach as above, to estimate each \(h_{\ell}\) with error \(\varepsilon\) we thus need \(r=O(\log 1/\varepsilon)\). The sample complexity now incurs an additional factor of \(\exp(r^{D})\), where \(D\) is the lattice dimension or the degree of the graph. Thus, the sample complexity is \(O\Big{(}\exp(\beta(\log\frac{1}{\varepsilon})^{D})\cdot 1/\varepsilon^{2}\cdot\log(m/\delta)\Big{)}\) and the time complexity is \(O\Big{(}m\cdot\exp((\log\frac{1}{\varepsilon})^{D})\cdot 1/\varepsilon^{2}\cdot\log(m/\delta)\Big{)}\). For constant \(\varepsilon\), this is very efficient; however for \(\varepsilon=\frac{1}{m}\), in which case the \(\ell_{1}\) error of learning is small enough, the sample complexity is super-polynomial in \(m\). This is somewhat unsatisfactory and seems worse than [1] in the regime where each local term has to be learned very accurately.

Figure 3: In the high temperature regime, it has been shown by [14] that the marginal of a Gibbs state is still the Gibbs state of the original Hamiltonian (within the brown circle of radius \(r\)) up to a boundary correction. However, the boundary term has some support within the circle and has strength \(\approx e^{-r}\) near the center. Thus, to learn \(h_{\ell}\), we need to make sure that \(e^{-r}\) is small enough.

In a subsequent work, [14] provided a unifying and complete answer for \(\beta<\beta_{c}\). Employing the cluster expansion method of [13, 15], the authors directly express the sufficient statistics \(\mathsf{Tr}(E_{\alpha}\rho_{\beta}(H))\) as an infinite series in \(\beta\) with coefficients polynomial in the local Hamiltonian terms. Approximate knowledge of the sufficient statistics is then inverted to estimate the Hamiltonian terms. They achieve tight sample and time complexity of \(\mathcal{O}\left(1/\varepsilon^{2}\cdot\log(m/\delta)\right)\) and \(\mathcal{O}\left(m/\varepsilon^{2}\cdot\log(m/\delta)\right)\) respectively.11

Footnote 11: We note that there were gaps in the above results - originating in [15] - which were fixed in [16]. See [14] for a discussion.

#### 3.4.4 Discussion

The problem of time-efficient Hamiltonian learning - on a fixed geometry and at arbitrary \(\beta\) - remains open.
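To make the commuting-case algorithm of Section 3.4.2 concrete, here is a small numerical sketch (our own toy instance - a classical Ising chain with fields - not code from [1]): we form the Gibbs state, take the marginal of a region \(R\), compute \(H_{R}=-\frac{1}{\beta}\log\mathrm{Tr}_{R^{c}}(\rho_{\beta})\), and read off the couplings in the interior of \(R\), which Theorem 5 guarantees are untouched by the boundary term \(\Phi\).

```python
# Learning a commuting (classical Ising) Hamiltonian from a Gibbs-state marginal (our sketch).
import numpy as np
from scipy.linalg import expm, logm
from functools import reduce

I2 = np.eye(2); Z = np.diag([1.0, -1.0])
kron = lambda *ops: reduce(np.kron, ops)

n, beta = 5, 0.8
J = [0.9, -0.4, 0.6, 0.3]                      # couplings J_i Z_i Z_{i+1} (open chain)
h = [0.2, -0.5, 0.1, 0.4, -0.3]                # local fields h_i Z_i

def op(paulis):                                # embed the given single-site operators into n qubits
    full = [I2] * n
    for site, P in paulis: full[site] = P
    return kron(*full)

H = sum(J[i] * op([(i, Z), (i + 1, Z)]) for i in range(n - 1)) + sum(h[i] * op([(i, Z)]) for i in range(n))
rho = expm(-beta * H); rho /= np.trace(rho)

# Marginal on R = {0,1,2}: trace out qubits 3 and 4, then take the effective Hamiltonian.
rho_R = np.einsum('ijkj->ik', rho.reshape(8, 4, 8, 4))
H_R = -logm(rho_R).real / beta

coeff = lambda P: np.trace(P @ H_R).real / 8   # Pauli-basis coefficient of H_R on 3 qubits

interior = {'J0': kron(Z, Z, I2), 'J1': kron(I2, Z, Z), 'h0': kron(Z, I2, I2), 'h1': kron(I2, Z, I2)}
print({k: round(coeff(P), 6) for k, P in interior.items()})   # matches J[0], J[1], h[0], h[1]
# The qubit-2 term (on the inner boundary of R) is *not* recovered: it absorbs the boundary term Phi.
```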
The fact that this is possible in the commuting case is encouraging, as there is no prior reason to expect that the commuting and non-commuting cases would be fundamentally different. Indeed, very good heuristic methods exist for the task [1, 17]. We end this section with two relevant open questions.

**Question 8**.: _Can we achieve Hamiltonian learning under the assumption that the Gibbs states satisfy an approximate conditional independence?12_

Footnote 12: Given a quantum state \(\rho\) on registers \(A,B,C\), \(\mathrm{I}(A:C|B)_{\rho}=S(\rho_{AB})+S(\rho_{BC})-S(\rho_{B})-S(\rho_{ABC})\) is the quantum conditional mutual information. We say that \(\rho\) satisfies approximate conditional independence if \(\mathrm{I}(A:C|B)_{\rho}\approx 0\). Approximate conditional independence is known to hold in 1D [15] and is conjectured to hold in every dimension.

**Question 9**.: _Pseudorandomness is a bottleneck for learnability. If a family of quantum states is pseudorandom, then polynomially many copies of a state from the family are indistinguishable from Haar-random states by any efficient quantum algorithm. Could low temperature Gibbs states be pseudorandom, which would explain the difficulty in finding a time-efficient algorithm?_

### Matrix product states

Matrix Product States (MPS) are a widely used representation of quantum states on a spin-chain. Mathematically, a state \(|\psi\rangle\in(\mathbb{C}^{d})^{\otimes n}\) is a _matrix product state_ (MPS) if \(|\psi\rangle\) can be written as \[|\psi\rangle=\sum_{i_{1},\ldots,i_{n}\in[d]}\mathsf{Tr}(A_{i_{1}}^{(1)}\cdot A_{i_{2}}^{(2)}\cdots A_{i_{n}}^{(n)})\,|i_{1},\ldots,i_{n}\rangle,\] where for all \(j\in[n],i\in[d]\), \(A_{i}^{(j)}\) is a \(D_{j}\times D_{j+1}\) matrix. We call the set of matrices \(\{A_{i}^{(j)}\}\) an _MPS representation of \(|\psi\rangle\)_. We refer to \(D=\max_{j}D_{j}\), minimized over all MPS representations, as the _bond dimension_ of \(|\psi\rangle\). Many physically relevant \(n\)-qubit quantum states - such as gapped ground states - can be approximated by MPS with bond dimension polynomial in \(n\). A learning algorithm for an MPS state \(\rho\) takes as input copies of \(\rho\), promised to be an MPS of a certain bond dimension \(D\), and outputs an MPS of bond dimension \(D^{\prime}\) that approximates \(\rho\) in fidelity. The goal is to learn these states with polynomial sample and time complexity while keeping \(D^{\prime}\) close to \(D\). Unlike Gibbs states, local observable statistics do not always determine an MPS. For example, consider the cat states \(\frac{1}{\sqrt{2}}|00\cdots 0\rangle\pm\frac{1}{\sqrt{2}}|11\cdots 1\rangle\), which are MPS of bond dimension 2. These states cannot be distinguished on any set of \(n-1\) qubits. Thus, any algorithm for learning MPS must make global measurements. Indeed, [1, 1] gave a polynomial-time algorithm to learn an MPS using global-but-efficient measurements. Their algorithm learns a sequential circuit that prepares the MPS. The resulting output has bond dimension \(D^{\prime}=\operatorname{poly}(D)\). However, local measurements are more desirable in experimental settings. Under the assumption of injectivity (see Figure 5), learning an MPS with just local measurements may be possible. Let us first observe that a 'local' sufficient statistic exists for injective MPS. This is because the marginals on \(O(\log D)\) qudits determine the parent Hamiltonian, which has the MPS as its unique ground state.
Barring the statistical errors - which can be addressed using injectivity - the knowledge of the parent Hamiltonian allows one to reconstruct an approximation to the MPS state using the rigorous algorithm in [12, 1]. A drawback of this approach is that the algorithm in [12, 1] outputs an MPS with bond dimension \(D^{\prime}=\operatorname{poly}(n)\), which may be much larger than a constant \(D\). Thus, we see a large blowup in the bond dimension of the output MPS. Cramer et al. [13] propose an efficient heuristic algorithm based on singular value thresholding in which the output bond dimension does not suffer such a blow-up; however, there is no guarantee that the output MPS is close to the input MPS. This leads us to the following question:

**Question 10**.: _Can an injective MPS be learned efficiently using local measurements, with the output bond dimension \(D^{\prime}=\operatorname{poly}(D)\)?_

A possible direction is to improve [12, 1] under suitable guarantees.

**Question 11**.: _Promised that the ground state is an MPS of bond dimension \(D\), can the algorithm in [12, 1] be improved to produce an output MPS that also has \(\operatorname{poly}(D)\) bond dimension?_

Projected Entangled Pair States (PEPS) are higher dimensional generalizations of Matrix Product States. Learnability of PEPS is far from clear, even in terms of sample complexity. Observe that the parent Hamiltonians of PEPS are frustration-free and locally-gapped. A natural question is, can the learning task become simpler assuming injectivity?

**Question 12**.: _Given an injective PEPS, an approximation to the parent Hamiltonian can be learned with polynomial sample and time complexity. Can this be used to write down a description of another PEPS that represents a similar state?_

The key bottleneck above is that there is no two-dimensional analogue of [14], despite an area law for locally-gapped frustration-free spin systems [1].

Figure 4: A matrix product state is specified by its bond dimension \(D\) and a sequence of \(D\times D\) matrices. The virtual bonds (black lines) indicate the amount of entanglement and the physical blue lines represent qudits.

Figure 5: Under coarse graining (blue rectangles) the physical dimension exceeds the bond dimension. For example, if \(D=4\) and each physical blue line is a qutrit, the physical dimension of the blue regions is \(3^{3}=27\), which is larger than the total bond dimension at the boundary, \(4^{2}=16\). For typical tensors \(A_{1},\dots,A_{n}\), this makes the map from the virtual bonds to the physical qudits invertible. Such an MPS is injective.

## 4 Alternate models of learning quantum states

In order to perform full state tomography on \(n\) qubits we saw that it is necessary and sufficient to obtain \(\Theta(2^{2n})\) many copies of the unknown state. The exponential scaling of the complexity is prohibitive for experimental demonstrations. Of course, a natural question is, is it necessary to approximate the unknown state up to small trace distance? In particular, do there exist weaker but still practically useful learning goals, with smaller sample complexity? These questions have led some to consider learning only the 'useful' properties of an unknown quantum state. There have been several models of learning quantum states (inspired by computational learning theory) where exponential savings in sample complexity are possible, and we discuss these models in this section.
### PAC learning and online learning

**PAC learning.** In a seminal work, Valiant [13] introduced the _Probably Approximately Correct_ (PAC) model of learning which lays the foundation for computational learning theory. In this model, there is a concept class consisting of Boolean functions \(\mathcal{C}\subseteq\{c:\{0,1\}^{n}\to\{0,1\}\}\) and an underlying distribution \(D:\{0,1\}^{n}\to[0,1]\). The learning algorithm is provided with labelled examples of the form \((x,c(x))\) where \(x\) is sampled from the distribution \(D\).13 We say a learning algorithm \((\varepsilon,\delta)\)-learns a concept class \(\mathcal{C}\) if it satisfies the following:

Footnote 13: We assume that the concept class is Boolean here; one could also consider real-valued classes where \(c(x)\) is then specified up to a certain number of bits of precision.

For every \(c\in\mathcal{C}\) and distribution \(D:\{0,1\}^{n}\to[0,1]\), given labelled examples \((x,c(x))\) where \(x\) is sampled from \(D\): with probability at least \(1-\delta\), the algorithm outputs \(h:\{0,1\}^{n}\to\{0,1\}\) such that \(\Pr_{x\sim D}[h(x)=c(x)]\geq 1-\varepsilon\).

The sample complexity of a learning algorithm \(\mathcal{A}\) is the maximum number of labelled examples it uses, over all concepts \(c\in\mathcal{C}\) and distributions \(D\). The \((\varepsilon,\delta)\)-sample complexity of a concept class \(\mathcal{C}\) is the minimum sample complexity over all \((\varepsilon,\delta)\)-PAC learners \(\mathcal{A}\) for \(\mathcal{C}\). Similarly, one can define the sample complexity (resp. time complexity) of \((\varepsilon,\delta)\)-learning \(\mathcal{C}\) as the number of samples used (resp. time taken) by the best \((\varepsilon,\delta)\)-learning algorithm. There have been many works in the classical literature that have looked at _distribution-dependent_ PAC models, wherein the distribution \(D\) is known to the learner and the algorithm needs to perform well under \(D\).

Aaronson [1] considered a natural analogue of the PAC model for learning quantum states. In this model of learning, the concept class \(\mathcal{C}\) is a collection of quantum states; each unknown state \(\rho\in\mathcal{C}\) defines a functional \(E\mapsto\mathsf{Tr}(\rho E)\) on the class of measurement operators \(\mathcal{E}\), and \(D:\mathcal{E}\to[0,1]\) is an unknown distribution over all possible \(2\)-outcome measurements. A quantum learning algorithm obtains several examples of the form \((E_{i},\mathsf{Tr}(\rho E_{i}))\) where \(E_{i}\) is drawn from the distribution \(D\), and the goal is to approximate \(\rho\). We say a learning algorithm \((\varepsilon,\delta,\gamma)\)-learns \(\mathcal{C}\) if it satisfies the following: For every \(\rho\in\mathcal{C}\), given examples \((E_{i},\mathsf{Tr}(\rho E_{i}))\) where \(E_{i}\sim D\), with probability at least \(1-\delta\), output \(\sigma\) that satisfies \(\Pr_{E\sim D}[|\mathsf{Tr}(\rho E)-\mathsf{Tr}(\sigma E)|\leq\varepsilon]\geq 1-\gamma\). In contrast to tomography, where the output state \(\sigma\) is close to the unknown \(\rho\) in trace distance, i.e., \(\sigma\) should satisfy \(\max_{E}\lvert\mathsf{Tr}(E\rho)-\mathsf{Tr}(E\sigma)\rvert\leq\varepsilon\), in PAC learning the goal is for \(\mathsf{Tr}(E\rho)\) to be close to \(\mathsf{Tr}(E\sigma)\) for _most_ \(E\)s. In a surprising result, Aaronson showed that the class of all \(n\)-qubit quantum states can be \(\mathsf{PAC}\)-learned using just \(O(n)\) samples.
**Theorem 6**.: _The sample complexity of \(\mathsf{PAC}\) learning \(n\)-qubit quantum states is \(O(n\cdot\mathsf{poly}(1/\varepsilon,1/\delta,1/\gamma))\)._

Similarly, Cheng et al. [1] considered the "dual problem" of learning a quantum measurement in the \(\mathsf{PAC}\) learning framework. We do not prove these theorems here; we refer the reader to the survey [1, Theorem 4.16]. A natural question left open by Aaronson was, which classes of states are _time-efficiently_ \(\mathsf{PAC}\) learnable? To this end, Rocchetto [14] observed that the class of stabilizer states is \(\mathsf{PAC}\) learnable in polynomial time. The learning algorithm of Rocchetto assumed that the distribution \(D\) was over Pauli observables, and he crucially used that \(\mathsf{Tr}(P\rho)\in\{-1,1,0\}\) when \(\rho\) is a stabilizer state. This allowed Rocchetto to learn the stabilizers of the unknown \(\rho\) and, with some extra work, the entire stabilizer state \(\rho\). A natural question that remains open is the following.

**Question 13**.: _What is the time complexity of \(\mathsf{PAC}\) learning states prepared by Clifford circuits with \(t\) many T gates? What is the \(\mathsf{PAC}\) sample complexity of learning stabilizer-rank \(k\) states?_

Gollakota and Liang [11] considered the natural question of learning stabilizer states in the presence of noise. They looked at a restrictive version of \(\mathsf{PAC}\) learning, called _statistical query_ learning (we discuss this model in further detail in Section 5.2). Here the learning algorithm is allowed to make single-copy "queries" to learn the unknown _noisy_ stabilizer state (the noise model they consider is depolarizing noise). In this model, [11] showed that learning stabilizer states with noise is as hard as learning parities with noise (LPN) using classical samples (a problem which is believed to be computationally hard [1]).

**Online learning.** Subsequently, Aaronson et al. [1] and Chen et al. [1] looked at the setting of online learning of quantum states (inspired by the classical model of online learning of functions). The online model can be viewed as a variant of tomography and \(\mathsf{PAC}\) learning. Consider the setting of tomography: suppose it is infeasible to possess \(T\)-fold tensor copies of a quantum state \(\rho\), but instead we can obtain only sequential copies of \(\rho\). The quantum online learning model consists of repeating the following rounds of interaction: the learner obtains a copy of \(\rho\) and a description of a measurement operator \(E_{i}\) (possibly chosen adversarially) and uses it to predict the value of \(\mathsf{Tr}(\rho E_{i})\). In the \(i\)th round, if the learner's prediction was \(\alpha_{i}\) and \(\alpha_{i}\) satisfies \(|\mathsf{Tr}(\rho E_{i})-\alpha_{i}|\leq\varepsilon\) then it is correct, otherwise it has made a mistake. The goal of the learner is the following: minimize \(m\) so that after making \(m\) mistakes (not necessarily consecutively), it makes a correct prediction on _all_ future rounds. Aaronson [1] showed that it suffices to let \(m\) be the _sequential fat-shattering dimension_ of \(\mathcal{C}\), denoted \(\mathsf{sfat}(\mathcal{C})\) (a combinatorial parameter introduced in [1] to understand classical online learning), which in turn can be upper bounded by \(O(n/\varepsilon^{2})\) for the class of \(n\)-qubit quantum states.
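As a toy illustration of the quantum PAC model (our own sketch, not Aaronson's procedure - in particular it uses all \(4^{n}\) Pauli coefficients rather than \(O(n)\) samples), the following fits a Hermitian hypothesis \(\sigma\) to examples \((E_{i},\mathsf{Tr}(\rho E_{i}))\) by least squares in the Pauli basis and checks that it predicts \(\mathsf{Tr}(\rho E)\) well for fresh measurements drawn from the same distribution \(D\). Note that the hypothesis need not be a valid density matrix, since the PAC guarantee only concerns predictions under \(D\).

```python
# PAC-style learning of a 2-qubit state from examples (E_i, Tr(rho E_i)) -- a toy sketch.
import numpy as np
from itertools import product
from functools import reduce

rng = np.random.default_rng(0)
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
paulis = [reduce(np.kron, pair) for pair in product([I2, X, Y, Z], repeat=2)]

def random_projector(d=4):                     # D: Haar-random rank-1 two-outcome measurements
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

psi = rng.normal(size=4) + 1j * rng.normal(size=4); psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())                # the unknown state (a random pure state here)

features = lambda E: np.array([np.trace(P @ E).real for P in paulis])   # real since E, P Hermitian

train = [random_projector() for _ in range(50)]
A = np.array([features(E) for E in train])
y = np.array([np.trace(rho @ E).real for E in train])
c, *_ = np.linalg.lstsq(A, y, rcond=None)      # hypothesis sigma = sum_P c_P P (possibly not PSD)

test = [random_projector() for _ in range(500)]
errs = np.array([abs(features(E) @ c - np.trace(rho @ E).real) for E in test])
print((errs <= 0.05).mean())                   # fraction of fresh E ~ D predicted within 0.05
```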
### Shadow tomography

A caveat of the quantum \(\mathsf{PAC}\) learning model is that the learning algorithm only has to perform well under a distribution, and it is a priori unclear whether the \(\mathsf{PAC}\) model is a natural model of learning. Aaronson [1] introduced another learning model called _shadow tomography_. Here, the goal of a learning algorithm is as follows: let \(E_{1},\ldots,E_{m}\) be positive semi-definite operators satisfying \(\|E_{i}\|\leq 1\); how many copies of an \(n\)-qubit state \(\rho\) are necessary and sufficient in order to estimate \(\mathsf{Tr}(\rho E_{1}),\ldots,\mathsf{Tr}(\rho E_{m})\) up to additive error \(\varepsilon\)? There are two naive protocols for this task: \((i)\) either do quantum state tomography, which takes \(\exp(n)\) many copies and allows one to estimate \(\mathsf{Tr}(\rho E_{i})\) for all \(i\), or \((ii)\) take \(O(m/\varepsilon^{2})\) many copies of \(\rho\) and estimate each of the \(\mathsf{Tr}(\rho E_{i})\)s up to error \(\varepsilon\). Surprisingly, Aaronson showed that one can perform the task of shadow tomography with sample complexity exponentially better in both \(m\) and \(n\), albeit still running in time exponential in \(n\).

**Theorem 7**.: _There is a protocol for shadow tomography that succeeds with probability \(\geq 2/3\) using \(\widetilde{O}((n\log^{4}m)/\varepsilon^{4})\) many copies of \(\rho\)._

We now sketch a proof of this theorem. For simplicity, we let \(\varepsilon\) be a constant, say \(1/3\). Aaronson's proof is based on the technique of post-selected learning [1] which was introduced in the context of communication complexity. In this communication task, there are two players Alice and Bob: Alice has a \(d\)-dimensional quantum state \(\rho\) (unknown to Bob) and together they know a set of \(m\) many operators \(\{E_{1},\ldots,E_{m}\}\). The goal is for Alice to send a classical message to Bob, who should output \(\mathsf{Tr}(\rho E_{1}),\ldots,\mathsf{Tr}(\rho E_{m})\) up to error \(1/3\). The same two trivial protocols we mentioned earlier would work here, giving a communication upper bound of \(O(m+d^{2})\). Surprisingly, Aaronson [1] showed that there exists a communication protocol with cost \(\mathsf{poly}(\log d,\log m)\) which solves the communication task, whose proof we sketch first. Bob starts by guessing the state Alice possesses. To this end, he lets \(\rho_{0}=\mathbb{I}/d\), the maximally mixed state, and updates his guess in every round. At the \(t\)th round, suppose Bob's guess is \(\rho_{t}\) (whose classical description is known to Alice); Alice communicates to Bob a \(j\in[m]\) for which \(|\mathsf{Tr}(\rho E_{j})-\mathsf{Tr}(\rho_{t}E_{j})|\) is the largest and sends him \(b=\mathsf{Tr}(E_{j}\rho)\). With this, Bob updates \(\rho_{t}\to\rho_{t+1}\) as follows: let \(q=O(\log\log d)\) and let \(F_{t}\) be a two-outcome measurement on \(\rho_{t}^{\otimes q}\) that applies the POVM \(\{E_{j},\mathbb{I}-E_{j}\}\) to each of the \(q\) copies of \(\rho_{t}\) and accepts if and only if the number of \(1\)-outcomes was at least \((b-1/3)q\). Suppose \(\sigma_{t+1}\) is the state obtained by post-selecting on \(F_{t}\) accepting \(\rho_{t}^{\otimes q}\); then \(\rho_{t+1}\) is the state obtained by tracing out the last \(q-1\) registers of \(\sigma_{t+1}\). Aaronson showed that after \(T=O(\log d)\) rounds, Bob will have \(\rho^{\prime}\) which satisfies \(|\mathsf{Tr}(E_{i}\rho)-\mathsf{Tr}(E_{i}\rho^{\prime})|\leq 1/3\) for all \(i\in[m]\).
Returning to shadow tomography, observe that there is no Alice, and Bob is replaced by a quantum learner. So, at the \(t\)th stage, without any assistance, the learner needs to figure out a \(j\in[m]\) for which \(|\mathsf{Tr}(E_{j}\rho_{t})-\mathsf{Tr}(E_{j}\rho)|\) is large. To this end, Aaronson used a variant of the Quantum OR lemma [14], which uses \(O(\log m)\) copies of \(\rho\) and outputs "yes" if there exists a \(j\in[m]\) for which \(|\mathsf{Tr}(E_{j}\rho)-\mathsf{Tr}(E_{j}\rho_{t})|\geq 2/3\) and outputs "no" if \(|\mathsf{Tr}(E_{j}\rho)-\mathsf{Tr}(E_{j}\rho_{t})|\leq 1/3\) for every \(j\in[m]\). However, in order to use the ideas from the communication protocol, in the "yes" instance of the OR lemma, Bob needs to know \(j\) (not just the existence of such a \(j\)) in order to update \(\rho_{t}\) to \(\rho_{t+1}\). Aaronson shows how to do this by using a simple binary search over \(\{E_{1},\ldots,E_{m}\}\) to find such a \(j\). Putting these ideas together, Aaronson obtains the sample complexity upper bound for shadow tomography.

### Max-entropy principle and Matrix Multiplicative Weight Update

Recall that to solve shadow tomography, the goal is to find a quantum state \(\sigma\) that satisfies \(\mathrm{Tr}(\sigma E_{i})\approx\mathrm{Tr}(\rho E_{i})\) for all \(i\). Further, one would like to minimize the number of copies of \(\rho\), suggesting that \(\sigma\) should be no more informative than matching the above expectations. This is an ideal ground to invoke the _max entropy principle_, which states that the quantum state \(\sigma\) maximizing \(S(\sigma)\) (maximum uncertainty) subject to the constraints \(\mathrm{Tr}(\sigma E_{i})=\mathrm{Tr}(\rho E_{i})\) is the Gibbs quantum state \(\frac{e^{-\sum_{i}\alpha_{i}E_{i}}}{\mathrm{Tr}(e^{-\sum_{i}\alpha_{i}E_{i}})}\). Here, the \(\alpha_{i}\)s are determined by the expectations \(\mathrm{Tr}(\rho E_{i})\). From here, an algorithm for shadow tomography can start with a trivial guess for \(\sigma\) - the maximally mixed state - which is then updated as new knowledge from \(\rho\) arrives. Since the maximally mixed state is the Gibbs state of the trivial Hamiltonian '0', the updates can be done directly to the Hamiltonian. To see how this update can be determined, consider a technical theorem from [11].

**Theorem 8**.: _Consider a Hamiltonian \(H\) and an operator \(E\) with \(\|E\|_{\infty}\leq 1\). For \(\eta\in\mathbb{R}\), consider the Gibbs states \(\sigma=\frac{e^{-\beta H}}{\operatorname{Tr}(e^{-\beta H})}\) and \(\sigma^{\prime}=\frac{e^{-\beta(H+\eta E)}}{\operatorname{Tr}(e^{-\beta(H+\eta E)})}\).
It holds that for any quantum state \(\rho\),_ \[\operatorname{S}\left(\rho\|\sigma^{\prime}\right)-\operatorname{S}\left(\rho\|\sigma\right)\leq\beta\cdot\eta\Big{(}\beta\eta e^{|\beta\eta|}+\operatorname{Tr}(E(\rho-\sigma))\Big{)}.\] _In particular, setting \(\eta=-\frac{\operatorname{Tr}(E(\rho-\sigma))}{4\beta}\), we find that_ \[\operatorname{S}\left(\rho\|\sigma^{\prime}\right)-\operatorname{S}\left(\rho\|\sigma\right)\leq-\operatorname{Tr}(E(\rho-\sigma))^{2}/8.\]

Proof.: To prove this result, a direct calculation reveals that \[\operatorname{S}\left(\rho\|\sigma^{\prime}\right)-\operatorname{S}\left(\rho\|\sigma\right)=\beta\eta\operatorname{Tr}(\rho E)+\log\frac{\operatorname{Tr}(e^{-\beta(H+\eta E)})}{\operatorname{Tr}(e^{-\beta H})}.\] Using the Golden-Thompson inequality, we find that \[\operatorname{S}\left(\rho\|\sigma^{\prime}\right)-\operatorname{S}\left(\rho\|\sigma\right)\leq\beta\eta\operatorname{Tr}(\rho E)+\log\frac{\operatorname{Tr}(e^{-\beta H}e^{-\beta\eta E})}{\operatorname{Tr}(e^{-\beta H})}=\beta\eta\operatorname{Tr}(\rho E)+\log\operatorname{Tr}(\sigma e^{-\beta\eta E}).\] Since \(\|E\|_{\infty}\leq 1\), we can estimate \(\operatorname{Tr}(\sigma e^{-\beta\eta E})\leq 1-\beta\eta\operatorname{Tr}(\sigma E)+\beta^{2}\eta^{2}e^{|\beta\eta|}\), which implies \[\operatorname{S}\left(\rho\|\sigma^{\prime}\right)-\operatorname{S}\left(\rho\|\sigma\right)\leq\beta\eta\operatorname{Tr}(\rho E)+\log(1-\beta\eta\operatorname{Tr}(\sigma E)+\beta^{2}\eta^{2}e^{|\beta\eta|})\leq\beta\eta\operatorname{Tr}((\rho-\sigma)E)+\beta^{2}\eta^{2}e^{|\beta\eta|}.\] This proves the theorem statement.

Thus, the alternate algorithm for shadow tomography proceeds by identifying an \(E_{i}\) that still does not satisfy \(\operatorname{Tr}(E_{i}\rho)\approx\operatorname{Tr}(E_{i}\sigma)\) and then updating the weight of such an \(E_{i}\) in \(\sigma\). In order to find such an \(E_{i}\) with \(\operatorname{poly}(\log m)\) sample complexity, one can use the 'quantum OR lemma' as described earlier. We also highlight that this procedure can be used to learn a Hamiltonian. Assuming that \(\rho\) itself is a Gibbs state, we update the weights of the basis operators \(E_{\alpha}\) until the expectation values are close. In such a case, the strong convexity from [1] ensures that the Hamiltonian is learned up to the desired error.

### Subsequent works building on shadow tomography

There have been several subsequent works that have built upon Aaronson's shadow tomography protocol, which we discuss in this section.

#### 4.4.1 Classical shadows

A subsequent work of Huang, Kueng and Preskill [14] presented an alternate protocol for a restricted version of shadow tomography that is more time-efficient than Aaronson's original shadow tomography protocol. In particular, they proved the following.

**Theorem 9**.: _Let \(B>0\) be an integer and \(\varepsilon,\delta\in[0,1]\). Given \(O(B/\varepsilon^{2}\log(1/\delta))\) copies of \(\rho\), there exists a procedure that satisfies the following: for every observable \(M\) that satisfies \(\mathsf{Tr}(M^{2})\leq B\), with probability \(\geq 1-\delta\), the quantity \(\mathsf{Tr}(\rho M)\) can be computed to error \(\varepsilon\)._

To compare this procedure with shadow tomography, suppose the algorithm needs to estimate \(m\) many expectation values; then by letting \(\delta\sim 1/m\), with success probability \(\geq 2/3\) the overall sample complexity scales as \(O((\log m)\cdot B/\varepsilon^{2})\).
Additionally, observe that the procedure above is _independent_ of the observables \(M\), unlike Aaronson's protocol [1] which used the observables \(E_{1},\ldots,E_{m}\) in a crucial way to learn \(\mathsf{Tr}(\rho E_{i})\). We now give a proof sketch of the theorem: they first give a polynomial-time procedure for generating _classical shadows_ of the unknown quantum state \(\rho\) using \(T=O(B/\varepsilon^{2}\log(1/\delta))\) copies of \(\rho\). These classical shadows are generated by running the following procedure: given a copy of \(\rho\), the algorithm samples a uniformly random Clifford \(C\), computes \(C\rho C^{\dagger}\) and measures the state in the computational basis to get an \(n\)-bit string \(b\). The classical shadow is then the set \(\{(C_{i},b_{i})\}_{i\in[T]}\). Using these classical shadows, [1] use a simple median-of-means estimation procedure to estimate \(\mathsf{Tr}(\rho M)\) for an arbitrary observable \(M\) satisfying \(\mathsf{Tr}(M^{2})\leq B\). Thus the sample complexity is \(O(B/\varepsilon^{2}\log(1/\delta))\).

#### 4.4.2 Improved shadow tomography and agnostic learning

Badescu and O'Donnell [1] improved the complexity of shadow tomography to \(\tilde{O}((n\cdot\log^{2}m)/\varepsilon^{2})\), which simultaneously obtains the best known dependence on each of the parameters \(n,m,\varepsilon\). We do not sketch their protocol, but remark that one interesting corollary of shadow tomography is a procedure which they call _quantum hypothesis selection_. Although not phrased in this language, quantum hypothesis selection can be viewed as _agnostic_ learning of quantum states. The setup for quantum agnostic learning of states is the following: \(\mathcal{C}\) is a collection of known quantum states \(\{\rho_{1},\ldots,\rho_{m}\}\), a learning algorithm is provided with copies of an unknown state \(\sigma\) and needs to find a \(\rho_{k}\in\mathcal{C}\) which is closest to \(\sigma\) in the following sense: output \(\rho_{k}\in\mathcal{C}\) such that \[\|\rho_{k}-\sigma\|_{1}\leq\alpha\cdot\min_{\rho\in\mathcal{C}}\|\rho-\sigma\|_{1}+\varepsilon, \tag{1}\] for some \(\alpha\). We briefly sketch the reduction from quantum agnostic learning to shadow tomography: for two states \(\rho_{i},\rho_{j}\) in the concept class \(\mathcal{C}\), by the Holevo-Helstrom theorem there exists an optimal measurement \(\{A_{ij},\mathbb{I}-A_{ij}\}\) such that \(\mathsf{Tr}(A_{ij}\cdot(\rho_{i}-\rho_{j}))=\|\rho_{i}-\rho_{j}\|_{tr}\). Now perform shadow tomography using \(\tilde{O}((n\cdot\log^{2}m)/\varepsilon^{2})\) copies of \(\sigma\) along with the operators \(\{A_{ij}\}_{i,j\in[m]}\) to obtain \(\alpha_{ij}\)s satisfying \(|\alpha_{ij}-\mathsf{Tr}(A_{ij}\sigma)|\leq\varepsilon/2\). At this point, [1] simply goes over all \(\rho\in\mathcal{C}\) to find a \(\rho_{k}\) that minimizes the quantity \(\max_{i,j}|\mathsf{Tr}(\rho_{k}A_{ij})-\alpha_{ij}|\) (this is inspired by classical hypothesis selection [20]). Let \(\eta=\min_{\rho\in\mathcal{C}}\|\rho-\sigma\|_{1}\) and \(i^{*}=\operatorname{argmin}_{i\in[m]}\|\rho_{i}-\sigma\|_{1}\). Observe that \[\|\rho_{k}-\sigma\|_{tr}\leq\eta+\|\rho_{k}-\rho_{i^{*}}\|_{tr}=\eta+|\mathsf{Tr}(A_{i^{*}k}\rho_{k})-\mathsf{Tr}(A_{i^{*}k}\rho_{i^{*}})|\leq\eta+|\mathsf{Tr}(A_{i^{*}k}\rho_{k})-\alpha_{i^{*}k}|+|\mathsf{Tr}(A_{i^{*}k}\rho_{i^{*}})-\alpha_{i^{*}k}|\leq 3\eta+\varepsilon.\] Hence the resulting \(\rho_{k}\) satisfies Eq. (1) with \(\alpha=3\).
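The following small numerical sketch (our own; the shadow-tomography estimates \(\alpha_{ij}\) are computed exactly here for simplicity) carries out this reduction: it builds the Helstrom measurements \(A_{ij}\) as projectors onto the positive part of \(\rho_{i}-\rho_{j}\), selects the hypothesis minimizing \(\max_{i,j}|\mathsf{Tr}(\rho_{k}A_{ij})-\alpha_{ij}|\), and checks the \(3\eta+\varepsilon\) guarantee.

```python
# Quantum hypothesis selection via Helstrom measurements -- a toy numerical sketch.
import numpy as np

rng = np.random.default_rng(1)
d, m = 4, 5

def random_state(dim):
    G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def trace_dist(a, b):                              # (1/2) * sum of |eigenvalues| of a - b
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

def helstrom(a, b):                                # projector onto the positive eigenspace of a - b
    w, V = np.linalg.eigh(a - b)
    Vp = V[:, w > 0]
    return Vp @ Vp.conj().T

candidates = [random_state(d) for _ in range(m)]     # the known class C = {rho_1, ..., rho_m}
sigma = 0.9 * candidates[2] + 0.1 * random_state(d)  # the unknown state (close to rho_3)

A = {(i, j): helstrom(candidates[i], candidates[j]) for i in range(m) for j in range(m) if i != j}
alpha = {ij: np.trace(M @ sigma).real for ij, M in A.items()}   # stand-in for shadow tomography

score = lambda k: max(abs(np.trace(A[ij] @ candidates[k]).real - alpha[ij]) for ij in A)
k = min(range(m), key=score)                       # hypothesis with smallest worst-case disagreement

eta = min(trace_dist(r, sigma) for r in candidates)
print(k, trace_dist(candidates[k], sigma) <= 3 * eta + 1e-9)    # satisfies the alpha = 3 guarantee
```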
As far as we are aware, [1, 1, 10, 11] are the only works so far to look at agnostic learning of quantum states. These works give rise to the following two interesting questions.

**Question 14**.: _What is the sample complexity of quantum agnostic learning if we require \(\alpha=1\)?14_

Footnote 14: In classical computational learning theory, reducing the value of \(\alpha\) to \(1\) has been resolved for certain Boolean function classes in the seminal works [13, 14].

**Question 15**.: _What classes of states can be agnostically learned time-efficiently? Can we learn stabilizer states efficiently in the quantum agnostic model?_

#### 4.4.3 Shadow tomography with separable measurements

Chen et al. [1] considered the problem of shadow tomography if one is only allowed _separable_ measurements. In this setting, they showed that \(\tilde{\Omega}(\min\{m,d\})\) many copies are necessary for shadow tomography, matching the upper bound of Huang et al. [12] of \(\tilde{O}(\min\{m,d\})\). They in fact show that, in order to estimate the expectation values of all \(4^{n}\) many \(n\)-qubit Pauli observables, one needs \(\Omega(2^{n})\) copies of \(\rho\) (given access to only separable measurements). The proof of this lower bound follows a three-step approach: (_i_) They first consider the learning tree framework that we discussed below Theorem 2, wherein there is a tree with each node corresponding to a measurement applied to the unknown state \(\rho\) and the leaves of the tree correspond to the \(m\) many expectation values. (_ii_) Using this learning tree technique, the main technical lemma they show is that, in order to prove the hardness of estimating \(\mathsf{Tr}(\rho Q_{i})\) for arbitrary \(Q_{i}\) using separable measurements, it suffices to upper bound \(\delta(Q_{1},\ldots,Q_{2^{n}})=\frac{1}{m}\sup_{|\psi\rangle}\sum_{i}\langle\psi|Q_{i}|\psi\rangle^{2}\). (_iii_) Finally, they show that for the Paulis \(P_{i}\), the quantity \(\delta(P_{1},\ldots,P_{2^{n}})\) is exactly \(1/(2^{n}+1)\), which immediately gives them their sample complexity lower bound of \(\Omega(2^{n})\). Additionally, they also consider settings wherein the learning algorithm is adaptive (i.e., the learner can perform measurements based on previous measurement outcomes) and non-adaptive (i.e., the learning algorithm needs to decide at the beginning on a sequence of measurements to carry out). Similarly, a followup work of Gong and Aaronson [1] showed how to perform shadow tomography when given \(m\) many \(k\)-outcome measurements, using \(\mathsf{poly}(k,\log m,n,1/\varepsilon)\) copies of \(\rho\).

### Equivalence between quantum learning models

So far, we saw many seemingly unrelated models of computation aimed at learning an unknown quantum state, such as shadow tomography, \(\mathsf{PAC}\) learning, communication complexity and online learning. Aaronson and Rothblum [1] also considered differential privacy in learning quantum states and used this notion to prove new bounds on online learning and shadow tomography.15 A natural question is, is there a connection between these models? In [1] they showed "equivalences" between all these models of computation. A high-level overview of the results in their work is summarized in the figure below. For technical reasons, we do not discuss pure and approximate \(\mathsf{DP}\) in detail; we simply remark that _pure_ \(\mathsf{DP}\) is a stronger requirement than _approximate_ \(\mathsf{DP}\) and refer the reader to [1] for more details.
In particular, these equivalences imply that algorithms in one framework give rise to quantum learning algorithms in the other frameworks. We remark that only a few of these arrows are efficient in both sample and time complexity; otherwise these implications are primarily information-theoretic.

Footnote 15: Classically, differential privacy was formalized in the seminal works by Dwork [14, 15]: we say a learning algorithm is differentially private if it behaves approximately the same when given two training datasets which differ in only a limited number of entries.

Although _a priori_ it seems that \(\mathsf{PAC}\) learning, online learning and \(\mathsf{DP}\) learning have little to do with one another, classically there has been a sequence of works establishing tight connections between these three fields [13]. The main centerpiece in establishing these connections is the notion of stability, which was introduced in a recent breakthrough work. In [1] they "quantize" these connections. Below we give a sketch of their proofs and refer to their work for a detailed overview. It is well-known classically that if there is a DP PAC learning algorithm for a class \(\mathcal{C}\) then the _representation dimension_ of the class is small. Representation dimension then upper-bounds classical communication complexity and \(\mathsf{sfat}(\mathcal{C})\). In [1] they show that this connection carries over in a simple way to the quantum setting.

_(1)_: Let \(\mathcal{C}\) be a concept class of states with finite \(\mathsf{sfat}(\mathcal{C})\). In order to describe an online learner for \(\mathcal{C}\) making at most \(\mathsf{sfat}(\mathcal{C})\) mistakes, in [1] they construct a _robust_ standard optimal algorithm (denoted RSOA) whose accuracy guarantees are robust to adversarial imprecision in the training feedback. The RSOA algorithm is inspired by the classical standard optimal algorithm for Boolean functions (however, in the quantum setting it needs to work for real-valued functions as well as with adversarial noise). Aaronson et al. [1] showed an upper bound of \(\mathsf{sfat}(\mathcal{C})\leq n\) on the number of mistakes in this setting, asking if there is an _explicit_ algorithm that achieves this bound (their RSOA makes this algorithm explicit).

_(2)_: Here, they show that a concept class \(\mathcal{C}\) with \(\mathsf{sfat}(\mathcal{C})=d\) can be learned by a stable algorithm. To prove this, they follow the technique of [1], which feeds a standard optimal algorithm (which they replace with RSOA) with a specially-tailored input sample. The tailoring algorithm deliberately injects "mistake" examples into the sample, each of which will force a prediction mistake in RSOA. Since the RSOA _completely_ identifies the target concept after making at most \(d\) prediction mistakes, the injection step allows the stable algorithm to output the correct hypothesis. In [1], their quantum-focused adaptation of this technique handles the twin challenges of accurately engineering the mistake examples for real-valued functions, and of having \(\varepsilon\)-uncertainty in the adversary's feedback (both of which are not present in the Boolean setting).

_(3)_: Now that one has the stable algorithm established above, one needs to make it differentially private. In the Boolean setting, given a stable algorithm \(\mathcal{A}\), a well-known "Stable Histograms" algorithm may be used as a 'wrapper' around \(\mathcal{A}\) to privately identify \(\mathcal{A}\)'s high-probability output functions.
This involves running \(\mathcal{A}\) many times and outputting its most frequent output, while adding Laplace noise to make it \(\mathsf{DP}\). However, they encounter an additional complication in the quantum setting: outputting the "most frequent" quantum state doesn't make sense, since two quantum states could be arbitrarily close and both qualify as valid outputs. Addressing this, [1] modify Stable Histograms and show that it can be used to make the quantum stable learning algorithm \(\mathsf{DP}\).

## 5 Learning classical functions through quantum encoding

### Learning Boolean functions

The _quantum_ PAC model for learning a concept class of Boolean functions \(\mathcal{C}\subseteq\{c:\{0,1\}^{n}\rightarrow\{0,1\}\}\) was introduced by Bshouty and Jackson [1]. In this model, instead of access to labelled examples \((x,c(x))\) where \(x\) is sampled from \(D\), the quantum learning algorithm is provided with copies of the _quantum example_ state \(|\psi_{c}\rangle=\sum_{x\in\{0,1\}^{n}}\sqrt{D(x)}|x,c(x)\rangle\). Quantum examples are a natural generalization of classical labelled examples (by measuring a single quantum example in the computational basis, we obtain a classical labelled example). A quantum PAC learner is given copies of the quantum example state, performs a POVM (where each outcome of the POVM is associated with a hypothesis) and outputs the resulting hypothesis. The sample complexity of the learner here is measured as the number of _copies_ of \(|\psi_{c}\rangle\), and the \((\varepsilon,\delta)\)-quantum sample complexity of learning \(\mathcal{C}\) is defined similarly to the classical PAC setting. There have been a few works that have looked at quantum PAC learning of function classes [1, 2, 1, 1] and that have shown some strengths and weaknesses of quantum examples in the PAC model of learning: in the distribution-independent setting, we know that quantum examples are not useful for learning [1]; for uniform and product distributions we know quantum examples are useful [2, 1, 1, 14];16 for the uniform distribution we know they are not useful for learning circuit families [1]; and for certain applications such as learning parities with noise, quantum examples are known to be useful [1]. For further details, we refer the reader to [1].

Footnote 16: We remark that almost all known quantum speedups are based on a version of quantum Fourier sampling.

**Question 16**.: _Almost all quantum learning speedups are in the uniform-distribution setting; is there a quantum learning speedup in the distribution-independent model in terms of sample or time complexity?_

In [1] they also considered two other models of learning (motivated by classical computational learning theory): \((i)\) _random classification noise learning_: here, the learner is given copies of \(\sum_{x}\sqrt{D(x)}|x\rangle\otimes(\sqrt{1-\eta}|c(x)\rangle+\sqrt{\eta}|\overline{c}(x)\rangle)\) and the goal of the learning algorithm is the same as for the PAC learner; \((ii)\) _agnostic learning_: here \(D:\{0,1\}^{n+1}\to[0,1]\) is an unknown distribution, the learner is given copies of \(\sum_{(x,b)\in\{0,1\}^{n+1}}\sqrt{D(x,b)}|x,b\rangle\) and needs to find the concept \(c\in\mathcal{C}\) that best approximates \(D\), i.e., output \(c\) that satisfies \(\operatorname{err}_{D}(c)\leq\min_{c^{\prime}\in\mathcal{C}}\{\operatorname{err}_{D}(c^{\prime})\}+\varepsilon\), where \(\operatorname{err}_{D}(c^{\prime})=\operatorname{Pr}_{(x,b)\sim D}[c^{\prime}(x)\neq b]\).
In both these distribution-independent learning models, [1] showed that the quantum sample complexity of learning is equal to the classical sample complexity of learning up to constant factors. A natural question is, what _can_ be learned in polynomial time in these models? As far as we are aware, only parities are known to be learnable in the classification noise model [1, 1] when \(D\) is the uniform distribution over \(\{0,1\}^{n}\), and agnostic learning of interesting concept classes has not received much attention in the literature.

**Question 17**.: _Can we learn DNF formulas in the quantum agnostic model in polynomial time?17_

Footnote 17: A positive answer to this question would imply a polynomial-time quantum algorithm for PAC learning depth-3 circuits in the uniform distribution model [11].

### Statistical query model

The quantum statistical query model was introduced in [1], inspired by the classical statistical query model introduced by Kearns [12]. Classically, it is well-known that _many_ algorithms used in practice can be implemented using a statistical query oracle, for example expectation maximization, simulated annealing, gradient descent, support vector machines, Markov chain Monte Carlo methods, principal component analysis and convex optimization (see [13, 14] for these applications). We first discuss the classical SQ model for learning an unknown concept \(c\) from the concept class \(\mathcal{C}\subseteq\{c:\{0,1\}^{n}\to\{-1,1\}\}\) under an unknown distribution \(D:\{0,1\}^{n}\to[0,1]\). The SQ learner has access to a _statistical query oracle_ which takes as input two quantities: a _tolerance_ \(\tau\geq 0\) and a function \(\phi:\{0,1\}^{n}\times\{-1,1\}\to\{-1,1\}\), and returns \(\alpha\in\mathbb{R}\) satisfying \(\left|\alpha-\mathbb{E}_{x\sim D}[\phi(x,c(x))]\right|\leq\tau\). The SQ learning algorithm adaptively chooses a sequence \(\{(\phi_{i},\tau_{i})\}\), and based on the responses \(\{\alpha_{i}\}_{i}\) of the statistical oracle, it outputs a hypothesis \(h:\{0,1\}^{n}\to\{-1,1\}\) that approximates \(c\), as in the setting of PAC learning.

The QSQ model is similar to the quantum PAC model, except that the learning algorithm isn't allowed entangled measurements on several copies of the quantum example state. More formally, let \(\mathcal{C}\subseteq\{c:\{0,1\}^{n}\to\{-1,1\}\}\) be a concept class, \(D:\{0,1\}^{n}\to[0,1]\) be a distribution and let \(|\psi_{c}\rangle=\sum_{x}\sqrt{D(x)}|x,c(x)\rangle\). In the \(\mathsf{QSQ}\) model, a learning algorithm specifies an operator \(M\) satisfying \(\|M\|\leq 1\) and a tolerance \(\tau\in[0,1]\), and obtains a number \(\beta\in[\langle\psi_{c}|M|\psi_{c}\rangle-\tau,\langle\psi_{c}|M|\psi_{c}\rangle+\tau]\). An intuitive way to think about the \(\mathsf{QSQ}\) model is that a learning algorithm can specify a two-outcome measurement \(\{M,\mathbb{I}-M\}\) and obtains a \(\tau\)-approximation of the probability of this measurement accepting \(|\psi_{c}\rangle\). Ideally, one would want a \(\mathsf{QSQ}\) algorithm for which \(\tau=1/\mathsf{poly}(n)\) and \(M\) can be implemented using \(\mathsf{poly}(n)\) gates. A \(\mathsf{QSQ}\) algorithm is amenable to near-term implementation since, unlike the quantum \(\mathsf{PAC}\) framework, it works only by making single-copy measurements on the quantum example state \(|\psi_{c}\rangle\).
Surprisingly, in [1], they show that positive results for quantum \(\mathsf{PAC}\) learning that we discussed in the previous section (such as learning parities, DNF formulas, \((\log n)\) juntas) can actually be implemented in the \(\mathsf{QSQ}\) framework. **Theorem 10**.: _The concept classes consisting of parities, juntas, DNF formulas, sparse functions, can be learned under the uniform distribution in the \(\mathsf{QSQ}\) model._ The crucial (and simple) observation in order to see this theorem is that computing the Fourier mass of a subset can be done in the \(\mathsf{QSQ}\) model. Given that learning algorithms for parities, juntas, DNF formulas, sparse functions are via Fourier sampling [1], this observation implies the theorem. To see the observation, let \(M=\sum_{S\in T}|S\rangle\langle S|\) and consider the observable \[M^{\prime}=\mathsf{H}^{\otimes(n+1)}\cdot\left(\mathbb{I}^{\otimes n}\otimes| 1\rangle\langle 1|\right)\cdot M\cdot\left(\mathbb{I}^{\otimes n}\otimes|1 \rangle\langle 1|\right)\cdot\mathsf{H}^{\otimes(n+1)}.\] Operationally, \(M^{\prime}\) corresponds to first applying the Fourier transform on \(|\psi_{f}\rangle\), post-selecting on the last qubit being \(1\) and finally applying \(M\) to the first \(n\) qubits. In order to see the action of \(M^{\prime}\) on \(|\psi_{f}\rangle\), first observe that \(\mathsf{H}^{\otimes(n+1)}|\psi_{f}\rangle\) yields \(\frac{1}{\sqrt{2^{n}}}\sum_{x}|x,f(x)\rangle\to\frac{1}{2^{n}}\sum_{x,y}\sum_ {b\in\{0,1\}}(-1)^{x\cdot y+b\cdot f(x)}|y,b\rangle\). Conditioned on the \((n+1)\)-th qubit being \(1\), we have that the resulting quantum state is \(|\psi^{\prime}_{f}\rangle=\sum_{Q}\widehat{f}(Q)|Q\rangle\). The expectation value of \(M\) with respect to the resulting state is given by \(\langle\psi^{\prime}_{f}|M|\psi^{\prime}_{f}\rangle=\sum_{S\in T}\widehat{f}( S)^{2}.\) Therefore, one quantum statistical query with measurement \(M^{\prime}\), tolerance \(\tau\) produces a \(\tau\)-approximation of \(\sum_{S\in T}\widehat{f}(S)^{2}\). In [1], they use this observation to prove Theorem10. We pose the following question, which would serve as a tool to understand the fundamental question "is entanglement needed for quantum learning Boolean functions?".18 Footnote 18: For learning the general class of quantum states, the recent work of Chen et al. [1] showed entanglement is needed for learning quantum states. **Question 18**.: _Is there a concept class separating \(\mathsf{QSQ}\) and quantum \(\mathsf{PAC}\) learning with separable measurements?_ More recently, there have been few works that considered the "diagonal-\(\mathsf{QSQ}\)" framework: here, the \(\mathsf{QSQ}\) learner can only specify a _diagonal_ measurement operator \(M\), i.e., the \(\mathsf{QSQ}\) learner specifies a \(\phi(x)\in[-1,1]\) and makes a \(\mathsf{QSQ}\) query with \(M=\sum_{x}\phi(x)|x\rangle\langle x|\) for the unknown state \(|\phi\rangle\). Recently [11, 12, 13] looked at learning unknown circuits \(U\) given diagonal-\(\mathsf{QSQ}\) access to \(|\psi_{U}\rangle=U|0^{n}\rangle\) (these learning algorithms allow to learn the output distributions \(\{\langle x|U|0^{n}\rangle^{2}\}_{x}\) of unknown quantum circuits \(U\) in the computational basis). In particular, [12] showed that distributions induced by Clifford circuits can be learned in the \(\mathsf{QSQ}\) framework, however, if we add a single \(T\) gate, then classical SQ learning the output distribution is as hard as learning parities with noise. 
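This observation is easy to verify numerically: the number returned by a quantum statistical query with the observable \(M^{\prime}\) is, up to the tolerance, the Fourier mass \(\sum_{S\in T}\widehat{f}(S)^{2}\). The sketch below computes that quantity classically for a small example function and two index sets (both are assumptions made purely for illustration).

```python
import numpy as np
from itertools import product

n = 4
def f(x):                                   # example function, ±1-valued: x1 XOR x3 (a 2-junta)
    return (-1) ** (x[1] ^ x[3])

xs = list(product([0, 1], repeat=n))
def f_hat(S):                               # Fourier coefficient under the uniform distribution
    return np.mean([f(x) * (-1) ** sum(s * xi for s, xi in zip(S, x)) for x in xs])

# A single quantum statistical query with the observable M' returns (up to the tolerance)
# the Fourier mass  sum_{S in T} f_hat(S)^2  for the chosen index set T.
T_low  = [S for S in product([0, 1], repeat=n) if sum(S) <= 1]   # degree <= 1 sets
T_pair = [S for S in product([0, 1], repeat=n) if sum(S) == 2]   # degree-2 sets
print("mass on degree<=1 sets:", sum(f_hat(S) ** 2 for S in T_low))    # 0.0 for this f
print("mass on degree-2 sets :", sum(f_hat(S) ** 2 for S in T_pair))   # 1.0 for this f
```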
Subsequent works [12, 13] looked at larger circuit families showing stronger lower bounds. One interesting question left open by their work is the following **Question 19**.: _What is \(\mathsf{QSQ}\) complexity of learning output distributions of constant-depth circuits in the diagonal-\(\mathsf{QSQ}\) framework?_ In another direction Du et al. [14] showed that the QSQ model can be effectively simulated by noisy quantum neural networks (QNN). Since we saw above that the QSQ model can learn certain concept classes in polynomial time, their result suggests that QNNs implemented on a noisy device could potentially retain the quantum speed-up. ### Kernel Methods So far we discussed a family of quantum algorithms that implicitly assumed the learning algorithm could learn a classical function by given access to quantum examples that encode classical information. Furthermore, these quantum examples use a number of qubits that is only logarithmic in the size of the unknown function. In this framework there have been several quantum machine learning algorithms that are able to achieve polynomial or even exponential speed-ups over classical approaches [14, 15, 16, 17, 18, 19, 2, 13, 14, 15, 16, 17, 18, 19, 20]. However, it is not known whether data can be efficiently provided this way in practically relevant settings. This raises the question of whether the advantage comes from the quantum algorithm, or from the way data is provided [1]. Indeed, recent works have shown that if classical algorithms have an analogous sampling access to data, then some of the proposed exponential speed-ups do no longer exist [13, 13, 14, 15, 16, 17, 18]. A natural question is, if we demand classical input and classical output, but let the intermediate operation be a quantum operations, can one hope for a quantum speedup? To this end, a powerful technique called the _quantum kernel method_ was introduced [13, 14]. These papers proposed obtaining a quantum speedup via the use of a _quantum-enhanced feature space_, where each data point is mapped non-linearly to a quantum state and then classified by a linear classifier in the high-dimensional Hilbert space. The advantage of the quantum learner stems from its ability to recognize classically intractable complex patterns using the quantum feature map, which maps each classical data point non-linearly through a parameterized family of unitary circuits to a quantum state, \(x\mapsto|\phi(x)\rangle=U(x)|0^{n}\rangle\), in both training and testing. The learning algorithm proceeds by finding the optimal separating hyperplane for the training data in the high-dimensional feature space. To do so efficiently, they use the standard kernel method in _support vector machines_ (SVMs), a well-known family of supervised classification algorithms [21]. More specifically, their algorithm only uses the quantum computer to estimate a kernel function and then implement a conventional SVM on a classical computer. In general, kernel functions are constructed from the inner products of the feature vectors for each pair of data points, which can be estimated as the transition amplitude of a quantum circuit as \(\left|\langle\phi(x_{j})|\phi(x_{i})\rangle\right|^{2}=\left|\langle 0^{n}|U^{ \dagger}(x_{j})U(x_{i})|0^{n}\rangle\right|^{2}\) (see Figure 6 for the quantum circuit implementation of this). One can therefore estimate each kernel entry up to a small additive error using the quantum computer - a procedure that is referred to as _quantum kernel estimation_ (\(QKE\)). 
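A minimal sketch of quantum kernel estimation, with a toy product-state feature map standing in for \(U(x)|0^{n}\rangle\); the feature map, data and labels are illustrative assumptions, and the kernel entries are computed exactly here, whereas on hardware each entry would be estimated from repeated runs of the corresponding circuit.

```python
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Toy feature map |phi(x)> = (tensor_k Ry(x_k)) |0...0>, a stand-in for U(x)|0^n>."""
    state = np.array([1.0])
    for theta in x:
        state = np.kron(state, np.array([np.cos(theta / 2), np.sin(theta / 2)]))
    return state

def quantum_kernel(X1, X2):
    """Kernel entries K_ij = |<phi(x_j)|phi(x_i)>|^2 (quantum kernel estimation, exact here)."""
    S1 = np.array([feature_state(x) for x in X1])
    S2 = np.array([feature_state(x) for x in X2])
    return np.abs(S1 @ S2.T) ** 2

rng = np.random.default_rng(2)
X = rng.uniform(0, np.pi, size=(20, 3))                       # 20 data points, 3 qubits
y = (X.sum(axis=1) > np.median(X.sum(axis=1))).astype(int)    # toy labels
K = quantum_kernel(X, X)

clf = SVC(kernel="precomputed").fit(K, y)                     # classical SVM on the Gram matrix
print("training accuracy:", clf.score(K, y))
```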
Then, the kernel matrix is given to a classical optimizer that efficiently finds the linear classifier that optimally separates the training data in feature space by running a convex quadratic program. Despite the popularity of these quantum kernel methods, it was unclear if, quantum algorithms using kernel methods could provide a provable advantage over classical machine learning algorithms. There have been several proposal for interesting feature maps [15, 16] based on group covariant maps, but their utility and implementability is unclear. In [14], they constructed a classification task based on the discrete logarithm problem and showed that given classical access to data, quantum kernel methods can provably solve a classically intractable learning problem, even in the presence of finite sampling noise. While the particular problem they consider does require a fault-tolerant quantum computer, the learning algorithm is general in nature which suggests potential to find near-term implementable problems and is suitable for error-mitigation techniques. Their result can be viewed as one of the first formal evidence of quantum advantage using _quantum kernel methods_, a widely-studied family of quantum learning algorithms that can be applied to a wide range of problems. **Question 20**.: _Can quantum kernel methods give an unconditional polynomial advantage over classical learning for a natural problem?_ ## 6 Perspective on other works The theory of quantum information and computation lies at the intersection of computer science and physics. Quantum learning theory has evolved in the same spirit, addressing questions that are native to both bodies of knowledge. The field is inspired, on one hand, from the notions of PAC learning and statistical query learning from theoretical computer science and on the other hand, from the experimental goal of learning physics of a system from natural quantum states. This survey adopts the view that the most exciting questions in the field lie precisely at this intersection. The success of learning theory lies in its adaptation of the 'number of samples' as a natural complexity measure - which is well motivated from the point of view of practical machine learning. It is remarkable that we can obtain sample efficient algorithms - in many cases even time efficient - for a wide class of learning problems. These successful results are accompanied by new insights into the structure of corresponding families of quantum states, such as phase states, Gibbs quantum states and Matrix product states. Here, we list several notable works related to learning quantum states that haven't been covered in this survey. 
These include results on learning quantum noise [10] in quantum experiments and tomography of quantum channels, among several other directions; we refer the interested reader to the references for more details. 
Sample and time complexity beyond learnability.Finally, we highlight that - beyond learnability - the notion of sample complexity is well motivated even in quantum information problems that are not canonical learning tasks. We discuss a few directions and questions here. _1. Sample complexity as a measure in quantum communication._ In the standard quantum communication complexity setting, Alice and Bob compute a classical function with classical inputs, using quantum resources. One can also define a model where inputs are quantum and functions of quantum inputs are to be computed. An example of this is: Alice's input is a quantum state \(|\psi\rangle\), Bob's input is a quantum state \(|\phi\rangle\), and they wish to estimate \(\langle\psi|M|\phi\rangle\) for a given \(M\). A single copy of each input is insufficient and unbounded number of inputs render the problem classical. An interesting intermediate regime is to allow several independent copies of inputs and minimize the sample complexity. The work [1] first considered this for \(M=\mathbb{I}\) and showed exponential separation in sample complexity between classically communicating Alice-Bob and quantumly communicating Alice-Bob. **Question 21**.: _What is the sample complexity of evaluating \(\langle\psi|M|\phi\rangle\) when Alice and Bob are only allowed classical communication, and how does it relate to the sample complexity when quantum communication is allowed?_ _2. Sample complexity as a measure in the Local Hamiltonian problem:_ A canonical Quantum Merlin-Arthur complete problem is the Local Hamiltonian Problem, with the goal of determining if the ground energy of a \(n\)-qubit local Hamiltonian is small or large. The proof - that certifies that the ground energy is small - is a quantum state, and there is some evidence that the proofs cannot be polynomial sized classical strings. In fact, the famous result of Marriot and Watrous shows that one copy of the witness suffices [14]. Now, let's restrict the proof to be a simple quantum state, such as a state that can be prepared by a low-depth circuit or a stabilizer state. We can find local Hamiltonians whose ground states have very small overlap with one such state [1]. Thus, many copies of the simple witness would be needed to eventually reach a complex witness of the ground state (via phase estimation algorithm). But it is not clear if such a simple witness could be useful in other ways to estimate the ground energy. **Question 22**.: _Can we provide a sample complexity lower bound for interesting class of simple witness states, when the goal is to use them to estimate the ground energy of a Hamiltonian? Is this problem easier if the Hamiltonian itself is a sparse Hamiltonian with oracle access?_ _3. Time-efficient learning coset states._ One way to view the Hidden subgroup problem (HSP) is in terms of sample complexity of learning the coset state. In the HSP, there is a group \(G\). Let \(\mathcal{H}(G)\) be the set of all subgroups \(H\leq G\) of \(G\). We say a function \(f_{H}:G\to S\) hides a subgroup \(H\) if \(f(x_{1})=f(x_{2})\) for all \(x_{1},x_{2}\in H\) and is distinct for different cosets. Given quantum query access to \(f\), the goal is to learn \(H\). 
The so-called _standard approach_ (which has been the focus of almost all known HSP algorithms) is the following: prepare \(\frac{1}{\sqrt{|G|}}\sum_{x\in G}\lvert x\rangle\), query \(f\) to produce \(\frac{1}{\sqrt{|G|}}\sum_{x\in G}\lvert x,f(x)\rangle\) and discard the second register to obtain the state \(\rho_{H}=\frac{\lvert H\rvert}{\lvert G\rvert}\sum_{g\in K}\lvert gH\rangle \langle gH\rvert\) where \(\lvert gH\rangle=\frac{1}{\sqrt{\lvert H\rvert}}\sum_{h\in H}\lvert gh\rangle\) and \(K\) is a complete set of left coset representatives of the subgroup \(H\leq G\). The state \(\rho_{H}\) is called the _coset state_ and the question is: what is the sample complexity and time complexity of learning \(H\) given copies of \(\rho_{H}\)? A well-known result [1] shows that the sample complexity of learning \(H\) is \(O(\log^{2}\lvert G\rvert)\). However, time-efficient learning \(H\) for arbitrary groups has remained a long-standing open question. We know time efficient implements for special groups [1, 14, 15, 16, 17]. Given that learning \(H\) given copies of \(\rho_{H}\) for arbitrary groups has been open for decades, this motivates the following questions. **Question 23**.: _For arbitrary groups, can we time-efficiently learn coset states in the alternate models of learning that we discussed in Section 4? What other groups can we time-efficiently learn \(H\) given copies of \(\rho_{H}\)?_ Additionally, we remark that all known \(\mathsf{HSP}\) algorithms following the standard approach where they measure the second register, which leads to the following question. **Question 24**.: _Does there exist a proposal for non-Abelian \(\mathsf{HSP}\) that doesn't measure the second register in \(|\psi_{f}\rangle=\frac{1}{\sqrt{|G|}}\sum_{x\in G}\lvert x,f(x)\rangle\) and takes advantage of the function register to learn the unknown subgroup \(H\)? Similarly, can we extend the lower bounds in [14] to the setting where the learning algorithm has access to copies of \(|\psi_{f}\rangle\)?_ _4. Sample complexity of generalizing LMR._ Lloyd, Mohseni, and Rebentrost [13] understood the following question (in the context of Hamiltonian simulation): How many copies of an unknown quantum state \(\rho\) are required to simulate a unitary \(U=e^{-i\rho t}\) which encodes \(\rho\) for some \(t\in\mathbb{R}\)? The LMR protocol [13] showed that the sample complexity of implementing \(U\) up to diamond norm \(\delta\) is \(O(t/\delta^{2})\), and has found several applications in quantum computing. Subsequently the sample complexity obtained by the LMR protocol was shown to be optimal [13]. A natural followup question is the following. **Question 25**.: _What is the sample complexity of approximately implementing \(e^{-if(\rho)t}\) for other functions \(f\) acting on density matrices?_ **Acknowledgements.** We thank Matthias Caro and the anonymous reviews of Nature Reviews Physics for several comments improving the presentation of this work and Abhinav Deshpande for useful comments. We thank Iulia Georgescu for commissioning this survey for the Nature Reviews Physics. AA acknowledges support through the NSF CAREER Award No. 2238836 and NSF award QCIS-FF: Quantum Computing & Information Science Faculty Fellow at Harvard University (NSF 2013303).
2309.14031
Phase-space iterative solvers
I introduce a new iterative method to solve problems in small-strain non-linear elasticity. The method is inspired by recent work in data-driven computational mechanics, which reformulated the classic boundary value problem of continuum mechanics using the concept of "phase space". The latter is an abstract metric space, whose coordinates are indexed by strains and stress components, where each possible state of the discretized body corresponds to a point. Since the phase space is associated to the discretized body, it is finite dimensional. Two subsets are then defined: an affine space termed "physically-admissible set" made up by those points that satisfy equilibrium and a "materially-admissible set" containing points that satisfy the constitutive law. Solving the boundary-value problem amounts to finding the intersection between these two subdomains. In the linear-elastic setting, this can be achieved through the solution of a set of linear equations; when material non-linearity enters the picture, such is not the case anymore and iterative solution approaches are necessary. Our iterative method consists on projecting points alternatively from one set to the other, until convergence. The method is similar in spirit to the "method of alternative projections" and to the "method of projections onto convex sets", for which there is a solid mathematical foundation that furnishes conditions for existence and uniqueness of solutions, upon which we rely to uphold our new method's performance. We present two examples to illustrate the applicability of the method, and to showcase its strengths when compared to the classic Newton-Raphson method, the usual tool of choice in non-linear continuum mechanics.
Joaquin Garcia-Suarez
2023-09-25T10:52:31Z
http://arxiv.org/abs/2309.14031v1
# Phase-space iterative solvers

###### Abstract

I introduce a new iterative method to solve problems in small-strain non-linear elasticity. The method is inspired by recent work in data-driven computational mechanics, which reformulated the classic boundary value problem of continuum mechanics using the concept of "phase space". The latter is an abstract metric space, whose coordinates are indexed by strains and stress components, where each possible state of the discretized body corresponds to a point. Since the phase space is associated to the discretized body, it is finite dimensional. Two subsets are then defined: an affine space termed "physically-admissible set" made up by those points that satisfy equilibrium and a "materially-admissible set" containing points that satisfy the constitutive law. Solving the boundary-value problem amounts to finding the intersection between these two subdomains. In the linear-elastic setting, this can be achieved through the solution of a set of linear equations; when material non-linearity enters the picture, such is not the case anymore and iterative solution approaches are necessary. Our iterative method consists of projecting points alternately from one set to the other, until convergence. The method is similar in spirit to the "method of alternating projections" and to the "method of projections onto convex sets", for which there is a solid mathematical foundation that furnishes conditions for existence and uniqueness of solutions, upon which we rely to uphold our new method's performance. We present two examples to illustrate the applicability of the method, and to showcase its strengths when compared to the classic Newton-Raphson method, the usual tool of choice in non-linear continuum mechanics.

_Keywords--_ Solvers Non-linearity Large systems

List of symbols

\begin{tabular}{c c c} **Symbol** & **Meaning** & **Comment** \\ \hline \(n_{e}\) & Number of strain/stress components per element & \\ \(n_{\text{dofs}}\) & Number of nodal degrees of freedom per node & \\ \(N_{n}\) & Number of nodes & \\ \(N_{e}\) & Number of elements & \\ \(N_{\text{dofs}}\) & Number of degrees of freedom & \(=N_{n}\cdot n_{\text{dofs}}\) \\ \(\mathbf{B}_{e}\) & Discrete gradient operator & \\ \(\mathbf{B}_{e}^{\top}\) & Discrete divergence operator & \\ \(\mathbf{\sigma}_{e}\) & Physically-admissible stress & (\(e\)-th element) \\ \(\mathbf{\varepsilon}_{e}\) & Physically-admissible strain & (\(e\)-th element) \\ \(\mathbf{\sigma}_{e}^{\prime}\) & Materially-admissible stress & (\(e\)-th element) \\ \(\mathbf{\varepsilon}_{e}^{\prime}\) & Materially-admissible strain & (\(e\)-th element) \\ \(\mathbf{m}\) & Constitutive-law function & \\ \(E\) & Physically-admissible set & Equilibrium and compatibility \\ \(P_{E}\) & Projection onto \(E\) & \\ \(D\) & Materially-admissible set & \\ \(P_{D}\) & Projection onto \(D\) & \\ \(Z_{e}\) & Local phase space & \\ \(Z\) & Global phase space & \\ \(\mathbf{z}_{e}\) & Point in local phase space & \\ \(\mathbf{z}\) & Point in global phase space & \\ \(\mathbf{F}_{int}\) & Internal force vector & \(\in\mathbb{R}^{N_{\text{dofs}}}\), it depends on stresses \\ \(\mathbf{F}_{ext}\) & External force vector & \\ \(\Delta\mathbf{F}\) & Force residual & \\ \(\mathbf{C}\) & Distance constant matrix & \\ \(\mathbf{K}\) & Zero-strain stiffness matrix & \\ \(\mathbb{D}\) & Zero-strain elastic moduli matrix & \\ \(Y\) & Young modulus & \\ \(Y_{0}\) & Zero-strain Young modulus & \\ \(\nu\) & Poisson's ratio & \\ \(p\) & Non-linearity parameter & \\ \end{tabular}

## 1 Introduction

Most practical problems in continuum mechanics lack a simple closed-form solution. Since the advent of modern computers, numerical methods have been developed to overcome this fact and to provide approximate solutions of particular problems for specific values of the relevant parameters [1]. The Newton-Raphson method (NR) has reigned supreme, due to its simplicity, ease of implementation and quadratic convergence [2, 3] (see the appendix for a brief refresher). However, the method is not free from limitations. For one, it requires recomputing the stiffness of the system at every deformation level, which in practical terms means that "tangent stiffness" matrices have to be reassembled at each iteration. This can be time-consuming, and can also entail numerical issues if there are elements in the mesh whose lower stiffness worsens the conditioning of the overall matrix.

A number of alternatives to the Newton-Raphson method have been proposed. Most of them are aimed at minimizing the "force residual", i.e., the imbalance between internal and external forces, just like NR does, while simultaneously trying to avoid some of its limitations. For instance, quasi-Newton methods approximate successive inverse tangent matrices using the zero-strain stiffness and rank-one corrections by means of the Sherman-Morrison formula [4]. Other methods iteratively scan the residual function locally, and search for the optimal direction, and increment, to minimize it. These are termed "search methods" [3, 4], the conjugate gradient method being one of the most popular flavors. Yet another approach, "gradient flow" (a.k.a. "dynamic relaxation"), transforms the elliptic problem of statics into an auxiliary parabolic problem [4]. This can be solved with time-marching iterations [5] until a steady-state solution is achieved, and that one is then taken to be the solution of the original static problem.

I introduce a new family of iterative solvers for boundary value problems in linear elasticity. The concept of "phase space" introduced in Ref. [6] in the context of data-driven solvers is leveraged. The phase space is \(Z=\mathbb{R}^{2n_{e}N_{e}}\), where \(N_{e}\) is the number of elements in the mesh and \(n_{e}\) is the number of independent components in either the strain or stress tensor in each element. Hence, each coordinate in the phase space corresponds to either a stress or a strain component in an element. We are implicitly assuming one integration point per element, so stresses and strains are computed at a single location, and that all elements in the mesh are similar (e.g., only 1D bar elements). The global phase space \(Z\) can be expressed as the Cartesian product of "local" phase spaces \(Z_{e}=\mathbb{R}^{2n_{e}}\) (\(e=1,\ldots,N_{e}\)), defined at the element level: \(Z=Z_{1}\times\ldots\times Z_{N_{e}}\). Inspired by this seminal work of Kirchdoerfer and Ortiz, we regard the exact solution of the problem as the one that simultaneously satisfies boundary conditions, physical balance constraints (Newton's second law, which in our case boils down to static equilibrium), kinematic considerations (the relation between strains and displacements), and a material-dependent relation between strain and stress. 
The former two combined defined an affine domain in the _global_ phase space that we term \(E\), the "physically-admissible set". In those authors' original work, the latter "materially-admissible set" was known as a discrete set of points in the _local_ phase space. In this text, we will assume that, in each element, there is a function \(\mathbf{m}:\mathbb{R}^{n_{e}}\rightarrow\mathbb{R}^{n_{e}}\) that defines a constitutive law \(\mathbf{\sigma}_{e}^{\prime}=\mathbf{m}(\mathbf{\varepsilon}_{e}^{\prime})\). As a pivotal assumption, furthermore, we assume that \(\{\mathbf{\varepsilon}_{e}^{\prime},\mathbf{m}(\mathbf{\varepsilon}_{e}^{\prime})\}\subset Z _{e}\) defines a curve in \(Z_{e}\) that defines limit points of a convex set, and that \(\mathbf{m}\) is \(C^{1}\)-continuous. We term these "phase-space iterative solvers" (PSIs), which include DDCM ones as a particular case in which the constitutive information is known only at discrete points. The mathematical characterization of this algorithm borrows much from previous work, starting from Von Neumann's [7, 8] on the methods of alternating projections. His iterative method was shown to be effective to solve large systems of equations [9]. In this context, the method of alternative projections has also been shown to be closely related with the method of subspace corrections [10, 11]. The same philosophy was used to solve a slightly different problem: finding the intersection between convex sets. Convexity happened to be key as it ensured the desired mathematical properties of the method, i.e., convergence, existence and uniqueness. The literature for the method of "projections onto convex sets" (POCS) is extensive, applications [12] ranging from general inverse analysis [13] to image restoration [14, 15]. NR is the alternative that the phase-space iterations' method has to contend with, so let us advance the advantages with respect to it: 1. Unlike Newton methods, PSI does not require assembling tangent stiffness matrices (or its inverse) at every step; an auxiliary matrix is assembled once, before any iteration, and it is reused later. This matrix is to be chosen by the user, but we will choose \(\mathbf{K}\), the stiffness matrix at zero strain, throughout this text. 2. Half of the algorithm is trivially parallelizable: solvers like NR that act at the structure level require domain partitions [16, 17, 18] to distribute the work between processors. Setting this up requires meticulous preparation. Conversely, phase-space solvers perform the projection onto the constitutive law element-wise, so this procedure can be divided among processors much more easily, and working on the "local" phase spaces is much less involved than doing so in the "global" one, even though we have to solve a low-dimensionality minimization. The other half of the algorithm is also parallelizable in the sense of classic "domain decomposition" method [19, 20], as it resembles a traditional finite-element algorithm, but in a much more complex manner that requires careful inter-processor communication [21, 22, 23, 16]. 3. The distance-minimization method is more general, as it allows handling constitutive laws with less continuity. NR demands derivatives of the constitutive law to be well-defined. But even when that is not the case, we can always solve the distance minimization problem with some of the classic techniques for that purpose (i.e., Brent's derivative-less method [24]). 
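Before formalizing the method, it may help to fix its overall structure in code: a PSI solver alternates the two projections and monitors both a force residual and a phase-space increment (the dual stop condition made precise in Section 2.1.3 below). The following Python skeleton is only a sketch of that structure; `project_E`, `project_D` and `internal_force` are hypothetical problem-specific callables, and the tolerance values are placeholders.

```python
import numpy as np

def psi_solve(z0, project_E, project_D, internal_force, F_ext,
              tol1=1e-2, tol2=1e-3, max_iter=500):
    """Alternating-projection driver: z'^{k+1} = P_D(P_E(z'^k)) with a dual stop condition.
    The three callables encode the specific discretization and constitutive law."""
    z_prev = np.asarray(z0, dtype=float)
    for k in range(1, max_iter + 1):
        z_phys = project_E(z_prev)        # enforce equilibrium + compatibility (affine step)
        z_mat = project_D(z_phys)         # enforce the constitutive law, element by element
        # (a) equilibrium convergence: force residual small relative to the external load
        if np.linalg.norm(internal_force(z_mat) - F_ext) < tol1 * np.linalg.norm(F_ext):
            return z_mat, k, "equilibrium"
        # (b) phase-space convergence: consecutive iterates nearly coincide
        if np.linalg.norm(z_mat - z_prev) < tol2 * np.linalg.norm(z_prev):
            return z_mat, k, "phase-space"
        z_prev = z_mat
    return z_prev, max_iter, "max_iter reached"
```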
## 2 General formulation Herein we present the new numerical method, leaving a brief presentation of its main competitor (NR) to the appendix. ### Introduction Consider a discretized body whose mesh is made up of \(N_{n}\) nodes and \(N_{e}\) elements, in which each node contains \(n_{\text{dofs}}\) degrees of freedom ("dofs"), and stresses and strains are computed at one quadrature point (constant stress elements). In total, there are \(N_{\text{dofs}}=n_{\text{dofs}}\times N_{e}\) degrees of freedom, some of them may be constrained while others are free or external forces are applied. Stresses and strains for each element are collected into vectors \(\mathbf{\sigma}_{e}\), \(\mathbf{\varepsilon}_{e}\in\mathbb{R}^{n_{e}}\), where \(n_{e}\) is the minimal number of components to be considered (\(n_{e}=1\) for 1D elements, 3 for plane elasticity, 6 for 3D isotropic elasticity and 9 for 3D generalized continua [25]). Thus, each element's local phase space is isomorphic to \(\mathbb{R}^{2n_{e}}\) while the global phase space is to \(\mathbb{R}^{2N_{e}n_{e}}\). We then define a (convex) norm for elements in both the global (\(\left\lVert\cdot\right\rVert\)) and local (\(\left\lVert\cdot\right\rVert_{e}\)) phase space: \[\left\lVert\mathbf{z}\right\rVert=\left(\sum_{e=1}^{N_{e}}w_{e}\left\lVert\mathbf{z} \right\rVert_{e}^{2}\right)^{1/2}=\left(\sum_{e=1}^{N_{e}}\frac{w_{e}}{2}\left[ \mathbf{\varepsilon}_{e}^{\top}\mathbf{C}\mathbf{\varepsilon}_{e}+\mathbf{\sigma}_{e}^{\top }\mathbf{C}^{-1}\mathbf{\sigma}_{e}\right]\right)^{1/2}\,, \tag{1}\] which naturally induces a metric and a distance \[d(\mathbf{z},\mathbf{z}^{\prime})=\left\lVert\mathbf{z}-\mathbf{z}^{\prime}\right\rVert=\left( \sum_{e=1}^{N_{e}}\frac{w_{e}}{2}\left[(\mathbf{\varepsilon}_{e}-\mathbf{\varepsilon} _{e}^{\prime})^{\top}\mathbf{C}(\mathbf{\varepsilon}_{e}-\mathbf{\varepsilon}_{e}^{\prime })+(\mathbf{\sigma}_{e}-\mathbf{\sigma}_{e}^{\prime})^{\top}\mathbf{C}^{-1}(\mathbf{\sigma}_{e }-\mathbf{\sigma}_{e}^{\prime})\right]\right)^{1/2}\,, \tag{2}\] where \(\mathbf{C}\) is a matrix with numbers with proper units so that the two addends are congruent, which is chosen to be both invertible and symmetric. \(\mathbf{C}\) can be selected by the user, but we will use (a) the zero-strain material moduli, \(\mathbf{C}=\mathbb{D}\), for the global space distance when projecting onto \(E\), (b) a diagonal distance matrix \(\mathbf{C}=\mathcal{C}\mathbf{I}\), for some \(\mathcal{C}>0\), for projecting onto \(D\). #### 2.1.1 Physical admissibility The static equilibrium, in discretized fashion, can be written as \[\mathbf{F}_{int}(\mathbf{\sigma})-\mathbf{F}_{ext}=\sum_{e=1}^{N_{e}}w_{e}\mathbf{B}_{e}^{\top }\mathbf{\sigma}_{e}-\mathbf{F}_{ext}=0\,, \tag{3}\] where \(\mathbf{F}_{ext}\in\mathbb{R}^{N_{\text{dots}}}\) is the nodal force vector (containing both external forces and reactions), \(w_{e}\) and \(\mathbf{\sigma}_{e}\) are the volume of each element and its stresses (written as in vector form, i.e., Voigt notation), respectively, while \(\mathbf{B}_{e}^{\top}\in\mathbb{R}^{N_{\text{dots}}\times n_{e}}\) is the discrete divergence operator. Likewise, kinematic compatibility between displacements and infinitesimal strains \(\mathbf{\varepsilon}_{e}\) (also a vector) in discrete form is \[\mathbf{\varepsilon}_{e}=\mathbf{B}_{e}\mathbf{u}\,, \tag{4}\] where \(\mathbf{u}\) is the nodal displacement field of that element and \(\mathbf{B}_{e}\in\mathbb{R}^{n_{e}\times N_{\text{dots}}}\) is the discrete gradient. 
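Before moving to the projections, here is a minimal numpy sketch of the global distance of eq. (2): per-element strain and stress mismatches are weighted by \(\mathbf{C}\) and \(\mathbf{C}^{-1}\) and by the element volumes \(w_{e}\). The array shapes and the numerical values below are assumptions chosen only for illustration.

```python
import numpy as np

def phase_distance(eps, sig, eps_p, sig_p, C, w):
    """Distance of eq. (2): eps, sig, eps_p, sig_p have shape (N_e, n_e),
    C is an (n_e, n_e) SPD matrix and w holds the element volumes."""
    Cinv = np.linalg.inv(C)
    de, ds = eps - eps_p, sig - sig_p
    per_elem = 0.5 * (np.einsum("ei,ij,ej->e", de, C, de) +
                      np.einsum("ei,ij,ej->e", ds, Cinv, ds))
    return np.sqrt(np.sum(w * per_elem))

# tiny illustration: four 1D elements (n_e = 1), C taken as the zero-strain modulus
C = np.array([[200e9]])
w = np.ones(4)
eps = np.array([[1.0e-3], [2.0e-3], [0.5e-3], [1.5e-3]])
sig = eps * C[0, 0]
print(f"d(z, z') = {phase_distance(eps, sig, eps + 1e-4, 0.95 * sig, C, w):.3e}")
```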
The first projection takes an initial phase-space point satisfying the constitutive law (i.e., \(\mathbf{z}^{\prime}\in D=\{[(\mathbf{\varepsilon}_{e}^{\prime},\mathbf{\sigma}_{e}^{\prime})]_{e=1}^{N_{e}}:\mathbf{\sigma}_{e}^{\prime}-\mathbf{m}(\mathbf{\varepsilon}_{e}^{\prime})=0\quad\forall e=1,\ldots,N_{e}\}\)) to the closest one, in the sense of eq. (2), belonging to the physically-admissible set \(E\), i.e., the set of points \(\mathbf{z}=\{[(\mathbf{\varepsilon}_{e},\mathbf{\sigma}_{e})]_{e=1}^{N_{e}}\}\) for which eqs. (3) and (4) hold. The second projection, conversely, returns to the materially-admissible set by minimizing, element by element, a local distance functional \(\Pi_{D_{e}}\); this minimization must be solved \(N_{e}\) times, one per element (each of those sub-problems is of much less complexity than the global one). Hence, the second projection is defined as

\[\mathbf{z}^{\prime}=\{[\mathbf{z}^{\prime}_{e}]_{e=1}^{N_{e}}\}=P_{D}(\mathbf{z})=\{[\text{argmin}\,\Pi_{D_{e}}(\cdot,\mathbf{z}_{e})]_{e=1}^{N_{e}}\}\,. \tag{9}\]

#### 2.1.3 Consecutive iterations

We envision an iterative method, geometric in nature, in which each iteration consists of two projections. After the two projections, the point obtained at the prior iteration (say, \(\mathbf{z}^{\prime(n)}\)) yields \(\mathbf{z}^{\prime(n+1)}=P_{D}P_{E}(\mathbf{z}^{\prime(n)})\). The method, under some conditions, is guaranteed to converge in norm, in the sense that the distance \(\left\|(P_{D}P_{E})^{k+1}\mathbf{z}_{0}-(P_{D}P_{E})^{k}\mathbf{z}_{0}\right\|\to 0\) as \(k\rightarrow\infty\). Note that convergence of the method in this sense does not automatically imply that the force residual (i.e., the difference between external and internal forces) goes to zero in the same way. This is a remarkable difference with respect to most solid mechanics solvers, which tend to be predicated on the minimization of the latter. For this reason, it is logical that PSI solvers be equipped with a dual stop condition, which simultaneously checks both "phase-space convergence" (in terms of the phase-space distance between iterations becoming smaller) and "equilibrium convergence" (in terms of the force residual). The solution procedure is as follows:

1. Choose a point \(\mathbf{z}^{\prime(0)}\in Z\) that satisfies the constitutive laws defined at each element level, or simply start from the origin (which is part of the constitutive law).
2. Apply \(\mathbf{z}^{\prime(k)}=(P_{D}P_{E})^{k}(\mathbf{z}^{\prime(0)})\) until either
   * (a) \(\left\|\mathbf{F}_{int}(\mathbf{\sigma}^{\prime})-\mathbf{F}_{ext}\right\|_{L^{2}}<\text{tol}_{1}\left\|\mathbf{F}_{ext}\right\|_{L^{2}}\), or
   * (b) \(\left\|\mathbf{z}^{\prime(k)}-\mathbf{z}^{\prime(k-1)}\right\|<\text{tol}_{2}\left\|\mathbf{z}^{\prime(k-1)}\right\|\).

Later in this text, we will argue that \(\text{tol}_{1}>\text{tol}_{2}\); in particular, we shall show \(\text{tol}_{2}=\text{tol}_{1}/10\) to be a satisfactory choice.

### Operative form of the projections

#### 2.2.1 Projection onto \(E\)

Assume given \(\mathbf{z}^{\prime}=\{(\mathbf{\varepsilon}^{\prime}_{e},\mathbf{\sigma}^{\prime}_{e})_{e=1}^{N_{e}}\}\). 
Enforcing the stationarity condition (\(\delta\Pi_{E}=0\)) to obtain, field by field, \[\delta\mathbf{u} \rightarrow\sum_{e=1}^{N_{e}}w_{e}\mathbf{B}_{e}^{\top}\mathbf{C}(\mathbf{B} _{e}\mathbf{u}-\mathbf{\varepsilon}^{\prime}_{e})=0\rightarrow\left(\sum_{e=1}^{N_{e }}w_{e}\mathbf{B}_{e}^{\top}\mathbf{C}\mathbf{B}_{e}\right)\mathbf{u}=\sum_{e=1}^{N_{e}}w_{e} \mathbf{B}_{e}^{\top}\mathbf{C}\mathbf{\varepsilon}^{\prime}_{e}\,, \tag{10a}\] \[\delta\mathbf{\sigma}_{e} \rightarrow w_{e}\mathbf{C}^{-1}\left(\mathbf{\sigma}_{e}-\mathbf{\sigma}^{\prime}_{e} \right)-w_{e}\mathbf{B}_{e}\mathbf{\eta}=0\rightarrow\mathbf{\sigma}_{e}=\mathbf{\sigma}^{ \prime}_{e}+\mathbf{C}\mathbf{B}_{e}\mathbf{\eta}\,,\] (10b) \[\delta\mathbf{\eta} \rightarrow\sum_{e=1}^{N_{e}}w_{e}\mathbf{B}_{e}^{\top}\sigma_{e}-\mathbf{ F}_{ext}=0\,. \tag{10c}\] Upon combination of eq. (10b) with eq. (10c), one obtains: \[\left(\sum_{e=1}^{N_{e}}w_{e}\mathbf{B}_{e}^{\top}\mathbf{C}\mathbf{B}_{e} \right)\mathbf{\eta}=\mathbf{K}\mathbf{\eta} =\mathbf{F}_{ext}-\sum_{e=1}^{N_{e}}w_{e}\mathbf{B}_{e}^{\top}\sigma^{ \prime}_{e}=\mathbf{F}_{ext}-\mathbf{F}_{int}\] \[\rightarrow\mathbf{K}\mathbf{\eta}=\mathbf{F}_{ext}-\mathbf{F}_{int}(\mathbf{\sigma}^{ \prime})=\Delta\mathbf{F}(\mathbf{\sigma}^{\prime})\,, \tag{11}\] wherefrom it becomes apparent that \(\mathbf{\eta}\) can be understood as a measure of the imbalance between external forces and internal forces _for a pre-assumed value of materially-admissible stresses_\(\mathbf{\sigma}^{\prime}\). In practice, the equation is solved only for those degrees of freedom that come not imposed by BCs. For the latter, it is set from the start that \(\eta_{i}=0\). Once the nodal imbalance variables \(\mathbf{\eta}\) are available, from eq. (10b) we find \[\mathbf{\sigma}_{e}=\mathbf{\sigma}_{e}^{\prime}+\mathbf{C}\mathbf{B}_{e}\mathbf{K}^{-1}\left(\mathbf{F} _{ext}-\mathbf{F}_{int}(\mathbf{\sigma}^{\prime})\right)\,, \tag{12}\] for each element. The essential boundary conditions must be enforced as part of finding the physically-admissible solution. We can set another system of linear equations from eq. (10a) \[\mathbf{K}\mathbf{u}=\sum_{e=1}^{N_{e}}w_{e}\mathbf{B}_{e}^{\top}\mathbf{C}\mathbf{\varepsilon}^{ \prime}\,, \tag{13}\] and then enforce the essential BCs (\(u_{i}=\hat{u}_{i}\)) for some prescribed value \(\hat{u}_{i}\). In the current version of the code, this is done by "condensing" the imposed displacements, substituting them directly into the vector \(\mathbf{u}\) and solving eq. (13) for a reduced system that includes the "free" degrees of freedom and forces arising from condensation [1, 5]. Summarizing, the first projection goes from \(\mathbf{z}^{\prime}=\{[(\mathbf{\varepsilon}_{e}^{\prime},\mathbf{\sigma}_{e}^{\prime})]_{ e=1}^{N_{e}}\}\in D\) to \[\mathbf{z} =\{[(\mathbf{\varepsilon}_{e},\mathbf{\sigma}_{e})]_{e=1}^{N_{e}}\}=P_{E} (\mathbf{z}^{\prime})=\{[(\mathbf{\varepsilon}_{e}^{\prime},\mathbf{\sigma}_{e}^{\prime})] _{e=1}^{N_{e}}\}\] \[=\{[\mathbf{B}_{e}\mathbf{K}^{-1}\sum_{e=1}^{N_{e}}w_{e}\mathbf{B}_{e}^{\top} \mathbf{C}\mathbf{\varepsilon}^{\prime},\mathbf{\sigma}_{e}^{\prime}+\mathbf{C}\mathbf{B}_{e}\mathbf{ K}^{-1}\Delta\mathbf{F}(\mathbf{\sigma}^{\prime})]_{e=1}^{N_{e}}\}\in E\,. \tag{14}\] See that the matrix \(\mathbf{K}\) could be anything, depending on the choice of \(\mathbf{C}\). For instance, when \(\mathbf{C}=\mathbb{D}\), \(\mathbf{K}\) becomes the tangent matrix at the origin, which is computed and stored once and needs no updating across iterations. 
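As a concrete illustration of eqs. (10)-(14), the sketch below implements the projection onto \(E\) for a chain of 1D bar elements fixed at node 0 and loaded axially at the free end. The geometry, modulus and load are assumptions chosen for the example; \(\mathbf{C}\) is taken as the scalar zero-strain modulus, so \(\mathbf{K}\) is the usual linear-elastic stiffness matrix, assembled once and reused.

```python
import numpy as np

# chain of N_e bar elements along a line; all numbers are illustrative assumptions
N_e, L, A, Y0, F = 4, 1.0, 1e-4, 200e9, 1e4
n_nodes = N_e + 1
w = A * L * np.ones(N_e)                     # element "volumes" w_e
C = Y0                                       # scalar distance constant (n_e = 1)

B = np.zeros((N_e, n_nodes))                 # discrete gradient operators B_e (one row each)
for e in range(N_e):
    B[e, e], B[e, e + 1] = -1.0 / L, 1.0 / L

K = sum(w[e] * C * np.outer(B[e], B[e]) for e in range(N_e))   # assembled once, reused
free = np.arange(1, n_nodes)                 # essential BC: u_0 = 0
F_ext = np.zeros(n_nodes)
F_ext[-1] = F

def project_E(eps_p, sig_p):
    """Map a materially-admissible (eps', sig') to the closest physically-admissible (eps, sig)."""
    # stresses, eqs. (11)-(12):  K eta = F_ext - F_int(sig'),   sig_e = sig'_e + C B_e eta
    F_int = B.T @ (w * sig_p)
    eta = np.zeros(n_nodes)
    eta[free] = np.linalg.solve(K[np.ix_(free, free)], (F_ext - F_int)[free])
    sig = sig_p + C * (B @ eta)
    # strains, eqs. (13) and (4):  K u = sum_e w_e B_e^T C eps'_e,   eps_e = B_e u
    rhs = B.T @ (w * C * eps_p)
    u = np.zeros(n_nodes)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], rhs[free])
    return B @ u, sig, u

eps, sig, u = project_E(np.zeros(N_e), np.zeros(N_e))
print("element stresses after one projection:", sig)   # each equals F/A here
```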
#### 2.2.2 Projection onto \(D\) Assume given \(\mathbf{z}=\{(\mathbf{\varepsilon}_{e},\mathbf{\sigma}_{e})_{e=1}^{N_{e}}\}\). When it comes to the projection over the constitutive law, the stationarity condition for eq. (7) yield the following E-L equations \[\delta\mathbf{\varepsilon}_{e}^{\prime} \to\mathbf{C}\left(\mathbf{\varepsilon}_{e}^{\prime}-\mathbf{\varepsilon}_{ e}\right)+\mathbf{\Lambda}_{e}\nabla\mathbf{m}(\mathbf{\varepsilon}_{e}^{\prime})=0 \to\mathbf{\varepsilon}_{e}^{\prime}=\mathbf{\varepsilon}_{e}-\mathbf{C}^{-1}\mathbf{\Lambda }_{e}\nabla\mathbf{m}(\mathbf{\varepsilon}_{e}^{\prime})\,, \tag{15a}\] \[\delta\mathbf{\sigma}_{e}^{\prime} \to\mathbf{C}^{-1}\left(\mathbf{\sigma}_{e}^{\prime}-\mathbf{\sigma}_{e} \right)-\mathbf{\Lambda}_{e}=0\to\mathbf{\Lambda}_{e}=\mathbf{C}^{-1}\left(\mathbf{\sigma}_{e }^{\prime}-\mathbf{\sigma}_{e}\right)\,,\] (15b) \[\delta\mathbf{\Lambda}_{e} \to\mathbf{\sigma}_{e}^{\prime}-\mathbf{m}(\mathbf{\varepsilon}_{e}^{\prime} )=0\to\mathbf{\sigma}_{e}^{\prime}=\mathbf{m}(\mathbf{\varepsilon}_{e}^{\prime})\,. \tag{15c}\] Enough continuity has to be assumed so that the gradient of the constitutive law (\(\nabla\mathbf{m}\)) is well-defined. See that we can minimize the distance directly, thorough some numerical approach, after enforcing the constitutive law relation eq. (15c). We will do so, further simplifying the setting by assuming a diagonal matrix for the distance coefficients, i.e., \(\mathbf{C}=\mathcal{C}\mathbf{I}\), hence \[\mathbf{z}_{e}^{\prime}=\operatorname{argmin}\bar{\Pi}_{D}=\operatorname{argmin} \left\{\frac{\mathcal{C}}{2}(\mathbf{\varepsilon}_{e}^{\prime}-\mathbf{\varepsilon}_ {e})\cdot(\mathbf{\varepsilon}_{e}^{\prime}-\mathbf{\varepsilon}_{e})+\frac{1}{2 \mathcal{C}}(\mathbf{m}(\mathbf{\varepsilon}_{e}^{\prime})-\mathbf{\sigma}_{e})\cdot(\mathbf{ m}(\mathbf{\varepsilon}_{e}^{\prime})-\mathbf{\sigma}_{e})\right\}\,. \tag{16}\] Minimizing this distance is precisely the traditional approach in DDCM [6], but notice that in this case, since \(\mathbf{m}(\varepsilon_{e}^{\prime})\) is known, one does not have to scan a discrete dataset, but functional minimization of the objective function, based on its derivatives or not, can be leveraged to find the minimizer directly, dodging the need for Euler-Lagrange equations. Conversely, combining eqs. (15a) to (15c) yields a non-linear vector equation for \(\mathbf{\varepsilon}_{e}^{\prime}\): \[\mathbf{\varepsilon}_{e}^{\prime}=\mathbf{\varepsilon}_{e}+\mathbf{C}^{-1}\mathbf{C}^{-1}(\bm {\sigma}_{e}-\mathbf{m}(\mathbf{\varepsilon}_{e}^{\prime}))\nabla\mathbf{m}(\mathbf{ \varepsilon}_{e}^{\prime})=\mathbf{\varepsilon}_{e}+\frac{(\mathbf{\sigma}_{e}-\mathbf{m}( \mathbf{\varepsilon}_{e}^{\prime}))}{\mathcal{C}^{2}}\nabla\mathbf{m}(\mathbf{\varepsilon }_{e}^{\prime})\,. \tag{17}\] This equation is referred to as "the Euler-Lagrange equation of \(P_{D}\)" hereafter, as it combines eqs. (15a) to (15c) into one. It represents a vector equation whose complexity depends on the form of the function \(\mathbf{m}\), and on the number of components of the vectors (\(n_{e}\)), which depends on the problem: for instance, 3 components for plane stress or strain, and for 1D-element meshes it boils down to a scalar equation. ``` Require: Connectivity, \(\forall e=1,\ldots,N_{e}\), compatibility matrices \(\mathbf{B}_{e}\); zero-strain moduli's matrix \(\mathbb{D}\); matrix of distance constants \(\mathbf{C}\); \(\forall i=1,\ldots,N_{\text{dof}}\), external forces \(F_{ext}\) or boundary conditions. 
(i) Set \(k=0\). Initial data assignation: for\(e=1,\ldots,N_{e}\)do Set \({\mathbf{z}^{\prime}}_{e}^{(0)}=(0,0)\), unless there being essential boundary conditions in the nodes of the said element, in that case set \(\mathbf{\varepsilon}^{\prime}_{e}=\mathbf{B}_{e}\hat{\mathbf{u}}\). endfor (ii) Project onto E: Given \({\mathbf{z}^{\prime}}^{(k)}\), find \({\mathbf{z}^{(k+1)}}=\{{\mathbf{z}^{(k+1)}_{e}}\}_{e=1}^{N}=\{(\varepsilon_{e}, \sigma_{e})\}_{e=1}^{N}\) from solving eq. (10) to get \(\mathbf{u}^{(k+1)}\) and \(\mathbf{\eta}^{(k+1)}\) for\(e=1,\ldots,N_{e}\)do Compute \(\mathbf{\varepsilon}_{e}\) from eq. (4) and \(\mathbf{\sigma}_{e}\) from eq. (12) endfor (iii) Project onto D: for\(e=1,\ldots,N_{e}\)do Given \({\mathbf{z}^{(k+1)}}=\{{\mathbf{z}^{(k+1)}_{e}}\}_{e=1}^{N}\), find \({\mathbf{z}^{\prime}}_{e}^{(k+1)}=(\mathbf{\varepsilon}^{\prime}_{e},\mathbf{\sigma}^{ \prime}_{e})\), where \(\mathbf{\varepsilon}^{\prime}_{e}\) is got from eq. (16) and then \(\mathbf{\sigma}^{\prime}_{e}=\mathbf{m}(\mathbf{\varepsilon}^{\prime}_{e})\). endfor (iv) Convergence test: if\(\mathbf{K}\mathbf{\eta}^{(k+1)}<\operatorname{tol}_{1}\left\|\mathbf{F}_{ext}\right\|_{L^{2}}\)then Final displacement field \(\mathbf{u}=\mathbf{u}^{(k+1)}\)exit elseif\(\left\|\mathbf{z}^{(k+1)}-\mathbf{z}^{(k)}\right\|<\operatorname{tol}_{2}\left\|\mathbf{z}^{(k)}\right\|\) Final displacement field \(\mathbf{u}=\mathbf{u}^{(k+1)}\)exit else \(k\gets k+1\), goto(ii) endif ``` **Algorithm 1** Phase-space iterative projections ## 3 2D plane-strain example: plate with hole We resort to a 2D benchmark exercise: a thick square plate (side length \(L=0.2\,\mathrm{m}\)) with a circular hole (radius \(r=0.02\,\mathrm{m}\)) loaded in tension on two opposite edges, see Figure 1(a). The applied traction \(\sigma_{0}\) equals \(100\,\mathrm{MPa/m}\). The material the plate is made of is assumed to lose stiffness with increasing mean strain (\(\varepsilon_{m}=(\varepsilon_{\text{I}}+\varepsilon_{\text{II}}+\varepsilon_{ \text{III}})/3=(\varepsilon_{\text{xx}}+\varepsilon_{\text{yy}})/3\)). No inelastic behavior is considered in these simulations. The Young's modulus of the material is assumed to change according to (Figure 1b), thus: \[\mathbf{\sigma}=\mathbb{D}(\varepsilon_{m})\mathbf{\varepsilon}=\frac{Y( \varepsilon_{m})}{(1+\nu)(1-2\nu)}\begin{bmatrix}1-\nu&\nu&0\\ \nu&1-\nu&0\\ 0&0&\frac{1-2\nu}{2}\end{bmatrix}\mathbf{\varepsilon}\,, \tag{18}\] where \[Y(x)=pY_{0}(|x|+c)^{p-1}\,, \tag{19}\] \(Y_{0}\) being the zero-strain modulus. The Poisson's ratio is assumed to remain constant. For the simulations we are to show, \(Y_{0}=200\,\mathrm{GPa}\), \(\nu=0.33\). Figure 1: System geometry and material. (a) Scheme of the system, including loading and reduced mesh (taking advantage of symmetry). (b) Young modulus as a function of mean strain for different values of \(p\). Figure 2: Results: deformation with both methods (\(\times 250\) magnification), material corresponding to \(p=2\cdot 10^{-4}\). The mesh is generated with constant-stress triangular elements (CSTs). The total external force is applied over one step, and NR's convergence condition is \(\left\|\boldsymbol{F}_{int}(\boldsymbol{\sigma}^{\prime})-\boldsymbol{F}_{ext} \right\|_{L^{2}}<0.01\left\|\boldsymbol{F}_{ext}\right\|_{L^{2}}\). 
Thus, \(\text{tol}_{1}=0.01\); for the PSI, the computation can also stop if \(\left\|\boldsymbol{z}^{\prime(k)}-\boldsymbol{z}^{\prime(k-1)}\right\|<0.001 \left\|\boldsymbol{z}^{\prime(k-1)}\right\|\), i.e., \(\text{tol}_{2}=\text{tol}_{1}/10\). See Figure 3 for a partial depiction of phase-space projections. When properly solved (i.e., in absence of numerical artifacts), both methods yield the same solution, see Figure 2. In the following sections, I will consider the influence of different parameters on the performance of the method. ### Changing the distance function: influence of \(\mathcal{C}\) I first explore the influence of the distance function used. This is tantamount to changing the constants that appear in eq. (2) for the projection \(P_{D}\), since we have established from the get-go that for \(P_{E}\) the constant matrix is \(\boldsymbol{C}=\mathbb{D}\), so the projection matrix \(\boldsymbol{K}\) is the zero-strain stiffness matrix. For simplicity, we chose \(\boldsymbol{C}=\mathcal{C}\boldsymbol{I}\), rendering eq. (16), and express it as ratio with respect to the zero-strain Young modulus, i.e., \(\mathcal{C}/Y_{0}\). This minimization problem has to be solved element by element. For now, we will use Brent's derivative-less method [24] and four CPUs (kernels) to perform the projections onto the constitutive law. We are using Figure 3: Local phase-space iterations for a particular element (showing only strain components). Blue (red) arrows represent successive projections onto \(E\) (\(D\)). Mathematica's function FindMinimum with default parameters, providing the previous physically-admissible strain (\(\mathbf{\varepsilon}_{e}\)) as starting point to the search. For \(p=2.5\cdot 10^{-4}\), see Table 1. Both \(\mathcal{C}/Y_{0}=0.01\) and \(\mathcal{C}/Y_{0}=10\) converge quickly in phase-space distance (\(\mathrm{tol}_{2}\)) but with 95% error \(\mathrm{tol}_{1}\), i.e., equilibrium far from being satisfied even though the phase space distance converged. That is why they are not even shown in the table. ### How does non-linearity affect relative performance? We observe that PSI fares better than NR when most material response is non-linear, i.e., low value of \(p\) (see Table 2). This is attributed to PSI enforcing the constitutive law during a dedicated step (\(P_{D}\)) and element-wise, while NR enforces equilibrium and the constitutive law simultaneously when solving the residual equations. NR relies on the tangent stiffness matrix, so there being elements that accumulate deformation can lead to entries of disparate magnitude and poor conditioning of the matrices. Table 2 suggests that PSI can be especially competitive when it comes to finding approximate solutions of problems featuring widespread non-linear behavior. The \({}^{*}\) denotes that the calculation ends because of \(\mathrm{tol}_{2}\) (phase-space convergence), \(\mathrm{tol}_{1}\) is not met (equilibrium tolerance), the final equilibrium error being 2.6%, i.e., the final stresses satisfy \(\left\|\mathbf{F}_{int}(\mathbf{\sigma}^{\prime})-\mathbf{F}_{ext}\right\|_{L^{2}}=0.026 \left\|\mathbf{F}_{ext}\right\|_{L^{2}}\). I observe that when \(\mathrm{tol}_{2}=\mathrm{tol}_{1}/10\), if the method converges first in phase-space distance, it always yields also a small force residual, less than 4%, despite not reaching the more constrained target. ### Influence of number of kernels Table 3 shows the impact of parallelizing \(P_{D}\), the element-by-element projections onto the material set. 
The scaling is good up to four kernels, at which point it saturates. This seems to be due to limitations in the parallelization strategy employed by the software where computations are carried out (Mathematica). \begin{table} \begin{tabular}{|l|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{Degree of non-linearity, \(p/10^{-4}\)} \\ \cline{2-4} \multicolumn{1}{c|}{} & 2.5 (stiff) & 2.0 & 1.5 (compliant) \\ \hline NR running time [s] & 8.25 & 9.96 & 93.10 \\ \hline PSI running time [s] & 8.48 & 15.29 & 33.97\({}^{*}\) \\ \hline \end{tabular} \end{table} Table 2: Wallclock time comparisons using distance minimization for \(P_{D}\). Using \(\mathcal{C}=0.1Y_{0}\). Mesh made up by 2872 CST elements. Asterisk (\({}^{*}\)) on PSI results to stress that \(\mathrm{tol}_{2}\) was met before \(\mathrm{tol}_{1}\). ### Distance minimization: method comparison Another advantage of the PSI solver is its adaptability when it comes to choosing an approach to numerically minimize the distance in eq. (16). We have tried four different methods, three that rely on derivatives of the objective function (Newton, quasi-Newton and conjugate gradient [2]) and one that does not (Brent's principal axis [24]). For this example, the derivative-less method is about 50% slower, but this situation will reverse when considering simpler local phase spaces. ### Performance as function of mesh size The effect of refining the mesh is examined next. The original mesh features 2872 CST elements, corresponding to a target triangle length \(\ell=0.5\,\mathrm{cm}=L/20\) (\(2L\) is the length of the plate, Figure 1(a)). Reducing to \(\ell/L=1/33.33\) and \(\ell/L=1/50\) yields 7881 and 17540 elements, respectively. In the 2D setting, the number of elements scales quadratically with the inverse of the element length, i.e., reducing the side length by a factor \(\ell_{2}/\ell_{1}\) yields a mesh containing approximately \((\ell_{1}/\ell_{2})^{2}\) times the starting number of elements. Table 5 shows a remarkable scaling of PSI, significantly better than NR's. As the mesh is refined, the number of elements and degrees of freedom to consider also increases. For NR, this means substantially larger sparse matrices to handle: the size of the tangent matrix also scales quadratically with the characteristic element size. Contrariwise, for PSI, it only means more independent resolutions of eq. (16). Consequently, the running time appears to scale linearly with the number of elements. Note that, if the scaling with the number of processors was optimal, doubling the number of processors would cancel the time increase associated to doubling the number of elements in the mesh. 
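To make the element-wise, embarrassingly parallel nature of \(P_{D}\) explicit, here is a sketch of the local minimization of eq. (16) dispatched over worker processes. It is written in Python (scipy and `concurrent.futures`) rather than the Mathematica kernels used for the timings above; the constitutive law is the 1D one of eq. (20) from Section 4, and all numerical values are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.optimize import minimize_scalar

Y0, p = 200e9, 2.0e-4
c = p ** (1.0 / (1.0 - p))
C = 0.1 * Y0                                          # distance constant, C = 0.1 Y0

def m(eps):                                           # 1D constitutive law of eq. (20)
    return Y0 * ((abs(eps) + c) ** p - c ** p) * np.sign(eps)

def project_element(z_e):
    """Projection onto D for one element: minimize the local distance of eq. (16)."""
    eps, sig = z_e
    obj = lambda e: 0.5 * C * (e - eps) ** 2 + 0.5 / C * (m(e) - sig) ** 2
    eps_p = minimize_scalar(obj, bracket=(eps - 1e-3, eps + 1e-3)).x
    return eps_p, m(eps_p)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    z = list(zip(rng.uniform(-2e-3, 2e-3, 1000), rng.uniform(-2e8, 2e8, 1000)))
    with ProcessPoolExecutor(max_workers=4) as pool:  # one independent sub-problem per element
        projected = list(pool.map(project_element, z, chunksize=64))
    print("first element projected onto D:", projected[0])
```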
\begin{table} \begin{tabular}{|l|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & Principal Axis & Conjugate & Quasi-Newton & Newton \\ & (derivative-free) & gradient & & \\ \hline Running time [s] & 15.29 & 10.60 & 10.70 & 10.82 \\ \hline \end{tabular} \end{table} Table 4: Wallclock time comparisons using distance minimization for \(P_{D}\) (different methods). Material defined by \(p=2.0\cdot 10^{-4}\). Using \(\mathcal{C}=0.1Y_{0}\). Mesh made up by 2872 CST elements. \begin{table} \begin{tabular}{|l|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{Phase-space iterative solver} \\ \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & serial & 2 kernels & 4 kernels & 8 kernels \\ \hline Running time [s] & 9.96 & 43.47 & 25.50 & 15.29 & 14.08 \\ \hline \end{tabular} \end{table} Table 3: Wallclock time comparisons using distance minimization for \(P_{D}\). Material defined by \(p=2.0\cdot 10^{-4}\). Using \(\mathcal{C}=0.1Y_{0}\). Mesh made up by 2872 CST elements. \begin{table} \begin{tabular}{|l|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{\# elements} \\ \cline{2-5} \multicolumn{1}{c|}{} & 2872 & 7881 & 17540 \\ \hline NR running time [s] & 7.73 & 39.16 & 158.93 \\ \hline PSI running time [s] & 8.30 & 21.04 & 54.5 \\ \hline \end{tabular} \end{table} Table 5: Wallclock time comparisons using distance minimization for \(P_{D}\). Using \(\mathcal{C}=0.1Y_{0}\). #### 3.5.1 Plate without hole Interestingly enough, simplifying the geometry of the system also has a strong impact over the performance of PSI. We re-run the plate simulation but removing the hole from its midst, recall that we set \(\text{tol}_{2}=\text{tol}_{1}/10=10^{-3}\). NR still suffers from the same difficulties associated to having to handle ever-larger matrices. It is acknowledged that for this simpler geometry and the two smaller meshes, PSI converged in phase-space norm in just two iterations, while having a relatively small residual (3.73% instead of the target 1%). I reckon that the convergence is faster when the stress state is simpler: removing the hole also removes the need to find/project over complex stress/deformation states. ## 4 1D-elements in 3D space: Kirchdoerfer's truss The special interest of this application is to work with the simplest phase-space possible. We have found, particularly, the projection onto the materially-admissible set using the Euler-Lagrange equations to be more efficient in this context, both in terms of implementation and actual performance. This simple setting allows us to explore also the possibility of introducing an "optimal" distance function [27]. The structure we focus on is "Kirchdoerfer's truss", a simple truss system introduced in [6]. It features 1246 elements and 1128 nodes. The non-linear elastic law presented in [6] is equipped on all the elements, and it is given by \[\sigma=m(\varepsilon)=Y_{0}\left[(|\varepsilon|+c)^{p}-c^{p}\right]\text{sign} (\varepsilon)\,, \tag{20}\] where \(c=p^{(1-p)^{-1}}\), \(Y_{0}\) represents the axial stiffness at zero strain, \(|\cdot|\) is the absolute value function and \(\text{sign}(\cdot)\) is the sign function. See Figure 3(b). In this example, \(Y_{0}=200\)GPa and \(p\) will be changed from \(5\cdot 10^{-5}\) to \(5\cdot 10^{-4}\). We compare the new solver to a damped Newton-Raphson (see appendix), which is more adequate to handle the forced displacement conditions. We will use a damping parameter of 0.8, i.e., the zero-strain stiffness is 20% of each iteration stiffness matrix. 
### Simplifying the formulation for 1D elements

By virtue of the type of connection among bar elements, they work primarily by stretching, so the mechanical state of the \(e\)-th element corresponds to the simplest local phase space, i.e., \(Z_{e}=\mathbb{R}^{2}\) (\(n_{e}=1\)), each point defined by only two coordinates \((\sigma_{e},\varepsilon_{e})\). Let us particularize Section 2 for this, the simplest phase space.

\begin{table} \begin{tabular}{|l|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{\# elements} \\ \cline{2-4} \multicolumn{1}{c|}{} & 728 (3.73\%) & 2886 (3.73\%) & 11708 (1.00\%) \\ \hline NR running time [s] & 0.58 & 4.88 & 95.4 \\ \hline PSI running time [s] & 0.42\({}^{*}\) & 1.33\({}^{*}\) & 8.66 \\ \hline \end{tabular} \end{table} Table 6: Plate _without_ hole. Wallclock time comparisons using distance minimization for \(P_{D}\). Material \(p=2\cdot 10^{-4}\). Using \(\mathcal{C}=0.1Y_{0}\), distance minimization (conjugate gradient) and 4 kernels. In parentheses, the final residual error. Asterisk (\({}^{*}\)) on PSI results to stress that \(\text{tol}_{2}\) was met before \(\text{tol}_{1}\).

In this case, the norms and distances simplify considerably: \[\|\mathbf{z}\|=\left(\sum_{e=1}^{N_{e}}w_{e}\left\|\mathbf{z}\right\|_{e}^{2}\right)^{1/2}=\left(\sum_{e=1}^{N_{e}}w_{e}\left[\frac{\varepsilon_{e}^{2}}{2\mathcal{C}^{-1}}+\frac{\sigma_{e}^{2}}{2\mathcal{C}}\right]\right)^{1/2}\,, \tag{21}\] which naturally induces a metric and a distance \[d(\mathbf{z},\mathbf{z}^{\prime})=\|\mathbf{z}-\mathbf{z}^{\prime}\|=\left(\sum_{e=1}^{N_{e}}w_{e}\left[\frac{(\sigma_{e}-\sigma_{e}^{\prime})^{2}}{2\mathcal{C}}+\frac{(\varepsilon_{e}-\varepsilon_{e}^{\prime})^{2}}{2\mathcal{C}^{-1}}\right]\right)^{1/2}\,, \tag{22}\] where \(\mathcal{C}\) is but a number with proper units so that the two addends are dimensionally consistent. For \(P_{E}\), we shall use \(\mathcal{C}=Y_{0}\), while for \(P_{D}\), the "optimal" value will be used. The projection onto \(E\) does not change qualitatively, but when it comes to \(D\) (enforcing the constitutive law relation), we obtain a different expression in which inner products (norms) do not appear: \[\mathbf{z}_{e}^{\prime}=\operatorname{argmin}\bar{\Pi}_{D}=\operatorname{argmin}\left\{\frac{(\varepsilon_{e}^{\prime}-\varepsilon_{e})^{2}}{2\mathcal{C}^{-1}}+\frac{(m(\varepsilon_{e}^{\prime})-\sigma_{e})^{2}}{2\mathcal{C}}\right\}\,, \tag{23}\] and the Euler-Lagrange equation, similarly to eq. (17), yields \[\varepsilon_{e}^{\prime}=\varepsilon_{e}+\frac{(\sigma_{e}-m(\varepsilon_{e}^{\prime}))}{\mathcal{C}^{2}}\frac{dm(\varepsilon_{e}^{\prime})}{d\varepsilon_{e}^{\prime}}\,. \tag{24}\] We can consider the metric constant \(\mathcal{C}\) as an independent field in eq. (23), and take variations with respect to it [27]: \[\delta\mathcal{C}\to\frac{(\varepsilon_{e}^{\prime}-\varepsilon_{e})^{2}}{2}-\frac{(m(\varepsilon_{e}^{\prime})-\sigma_{e})^{2}}{2\mathcal{C}^{2}}=0\to\mathcal{C}^{*}=\frac{m(\varepsilon_{e}^{\prime})-\sigma_{e}}{\varepsilon_{e}^{\prime}-\varepsilon_{e}}\,. \tag{25}\] Using this optimal value, \[\mathbf{z}_{e}^{\prime}=\operatorname{argmin}\bar{\Pi}_{D}^{\operatorname{opt}}=\operatorname{argmin}\left\{(\varepsilon_{e}^{\prime}-\varepsilon_{e})(m(\varepsilon_{e}^{\prime})-\sigma_{e})\right\}\,.
\tag{26}\] The stationarity condition for this functional yields \[\varepsilon_{e}^{\prime}=\varepsilon_{e}+\frac{(\sigma_{e}-m(\varepsilon_{e}^{\prime}))}{dm(\varepsilon_{e}^{\prime})/d\varepsilon_{e}^{\prime}}\,, \tag{27}\] where the gradient appears in the denominator, unlike in eqs. (17) and (24).

### Detailed analysis of one run

The material model corresponds to \(p=10^{-4}\), see Figure 3(b). The tolerance is initially set to be \(\operatorname{tol}_{1}=5\cdot 10^{-2}=5\%\). We employ the optimal phase-space distance for projections onto \(D\), i.e., we are using Mathematica's function FindRoot to solve eq. (27), with Newton's method, step control ("trust region") and computing the necessary Jacobians with finite differences, while also providing the previous physically-admissible strain (\(\mathbf{\varepsilon}_{e}\)) as the starting point for the search.

#### 4.2.1 Using Euler-Lagrange equation (optimal distance)

For a single run, the NR solver took \(2.52\,\mathrm{s}\) and \(27\) iterations to achieve the desired tolerance. Conversely, PSI converged after \(13\) iterations (each iteration here concerns two projections) in \(2.11\,\mathrm{s}\), employing eight parallel processors to perform the projections onto the constitutive law. Qualitatively, Figure 5(a) reveals that the NR solver's initial guess leads to a large initial residual, due to the inclusion of forced displacements into the initial guess (this is a well-known difficulty). This imbalance is brushed away over subsequent iterations, and the solver converges much more quickly (quadratically) in the vicinity of the solution. Conversely, the PSI solution starts from zero initial forces, as we set \({\boldsymbol{z^{\prime}}}^{(0)}=\mathbf{0}\). The driving force behind PSI is not residual minimization, but minimization of the phase-space distance. The residual increases during the first iterations, but then quickly stabilizes and starts to decrease; once the iterate closes on the solution, it decreases more slowly. Let us analyze the time breakdown of one PSI run. There are two tasks per iteration: solving two linear systems (for \(\boldsymbol{u}\) and \(\boldsymbol{\eta}\)) to then evaluate \(\boldsymbol{\sigma}\) and \(\boldsymbol{\varepsilon}\) in the projection onto \(E\), \(P_{E}\), followed by getting \(\boldsymbol{\sigma}^{\prime}\) and \(\boldsymbol{\varepsilon}^{\prime}\) through the projection onto \(D\), \(P_{D}\), which is performed by means of local phase-space distance minimization, i.e., either solving a non-linear equation for each element, eq. (27) (note that in this case it is a scalar equation, which will not be the case in general), or minimizing a distance (Section 3). Figure 5(b) consistently shows that the \(P_{D}\) step consumes about twice as much time as \(P_{E}\). In absolute terms, \(P_{D}\) takes \(0.11\) seconds on average, while \(P_{E}\) takes \(0.06\) s. Logically, the former task would be the first to address if running time were to be reduced, which could be done in two ways: (a) by improving the performance of the root-finding method, or (b) by improving the parallelization, i.e., bringing more processors to share the burden of solving one non-linear algebraic equation per element.

#### 4.2.2 Solving without Euler-Lagrange equations: distance minimization

Instead of using the Euler-Lagrange equation, we could choose to minimize the distance directly during the \(P_{D}\) step. This can be done in two ways: one in which a distance is specified based on a value of \(\mathcal{C}\) for eq.
(23), or another one in which the optimal value is always used, eq. (26). The different performances can be seen in Figure 6: (a) shows the iterations leading to the solution when the optimal distance is used (13 iterations), while (b) evinces the slow convergence when \(\mathcal{C}=0.1Y_{0}\) is used (112 iterations).

Figure 4: Truss example. (a) Structure: red arrows indicate applied forces, blue ones correspond to imposed displacements. (b) Constitutive law used for all the bars in the truss.

So, ideally, one would use the optimal value \(\mathcal{C}^{*}\) as this translates into an optimized distance function for each iteration, which seems to boost convergence (compare Figure 6(a) to (b)), even though the fitting of this "optimal-distance" approach into the context of projections onto convex sets needs to be mathematically validated. Table 7 reveals that the performance using the optimal distance does not change much when changing the method. Interestingly, cf. Table 4, the derivative-less method for solving this scalar minimization problem is faster than those that compute gradients and/or Hessians. In contrast, the derivative-less method was slower when the phase space was larger and the minimization thus required working with vectors (recall Section 3.4). BFGS refers to the Broyden-Fletcher-Goldfarb-Shanno method, a quasi-Newton method that does not require re-computing the Hessian [28].

\begin{table} \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & Principal Axis & Conjugate & Quasi-Newton & Newton \\ & (derivative-free) & gradient & (BFGS) & \\ \hline Running time [s] & 2.41 & 3.13 & 3.11 & 3.13 \\ \hline \end{tabular} \end{table} Table 7: Wallclock time comparisons using distance minimization for \(P_{D}\) (different methods). Material defined by \(p=2.0\cdot 10^{-4}\). Using \(\mathcal{C}=\mathcal{C}^{*}\).

Figure 5: Truss example: (a) Evolution of the residual over iterations, Newton-Raphson solver (NR) v. phase-space iterative (PSI) solver. (b) PSI time breakdown in each iteration: time invested in performing the projection onto the physically-admissible set (\(P_{E}\)) and onto the materially-admissible set (\(P_{D}\)).

### Influence of the number of kernels

Similarly to the more complex phase space, Section 3.3, the scaling is good up to four processors. Afterward, Mathematica does not take advantage of the extra resources.

## 5 Discussion

Based on the examples analyzed above, there are three salient advantages in using PSI, all derived from the different treatment of the constitutive law: numerical handling of non-linearity, scalability, and adaptability.

* Handling of non-linearity: since PSI does not update the stiffness matrix (nor its inverse), it avoids the poor-conditioning problem associated with local loss of stiffness in some elements [3, 5, 29].
* Scalability: whenever the \(P_{D}\) projection is the performance bottleneck, we expect almost-linear time reduction when parallelizing this step and increasing the number of processors. Moreover, since this step is carried out on a per-element basis, the solver scales linearly with mesh size (number of elements).

Figure 6: Iterations for one element, arrows represent successive projections. (a) Using distance minimization with optimal \(\mathcal{C}\) and (b) using \(\mathcal{C}=0.1Y_{0}\). First projection yields strain to satisfy essential BCs.
The position of the physically-admissible set evolves as \(E_{e}\) depends on \([\sigma_{e}]_{e=1}^{N_{e}}\), which change each time \(P_{E}\) is applied at the structure level.

* Adaptability: the two projections are but two minimizations. For the projection onto the physically-admissible set, enough continuity can always be assumed so that the corresponding Euler-Lagrange equations can be found, which yield a simple linear system in all cases. In contrast, for non-linear constitutive laws, the per-element projection onto the materially-admissible set is intrinsically non-linear, so the user can opt among different ways of proceeding. Depending on the dimensionality of the local phase space or the performance of the available libraries [30], one could choose among the multiple ways to carry out the projection onto the constitutive law.

The mathematical theory of alternating projections goes beyond the simple two-projection setting we have been using [31]. Some preliminary tests point to adding an intermediate third projection, consisting simply of evaluating the constitutive law, as a direct means to speed convergence up. We also point out the possibility of splitting the system into two regions: one that remains linear-elastic and another where inelastic behavior manifests. A classic linear-elastic solver for the former can be combined with phase-space iterations in the latter [32]; this could lead to even faster simulations of the kind of systems where non-linear material behavior is localized in space. Going one more step forward to an incremental-load scenario, we can envision localizing non-linearity both in space and time, where the parts of the domain that are solved in the traditional way or with phase-space iterations evolve dynamically with the load [33].

We also envision PSI solvers contributing to the adoption of neural-network based constitutive laws. In recent years, there has been a continuous stream [34, 35, 36, 37, 38] of excellent work that has proved the capacity of deep learning to represent complex, history-dependent, non-linear material behavior. However, the computation of gradients with respect to the input entries (strain components) via "automatic differentiation" [39] can be onerous and poorly-defined at times [40, 41, 42], e.g., since neural networks tend to feature activation functions with limited continuity [43] (for instance, "ReLU"). This logically hinders the teaming up of neural-network constitutive laws with Newton-Raphson solvers. Our new solver can work without ever having to compute derivatives of the constitutive law (see the exercise in Section 3), thus avoiding the issue altogether. This renders the PSI solver an ideal candidate to partner with this new generation of material constitutive models in non-linear simulations, provided that the convexity condition on the constitutive law can be relaxed in some way.
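To illustrate this adaptability, and the derivative-free operation just mentioned, the following is a minimal sketch (ours, not the paper's Mathematica implementation) of the per-element projection onto the materially-admissible set by direct minimization of the distance in eq. (23), using the truss law of eq. (20) and Brent's bounded, derivative-free scalar minimizer; the search half-width and the trial state in the example are assumed values for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

Y0, P = 200e9, 1e-4                        # material constants from the truss example

def m(eps):
    # Constitutive law of eq. (20), with c = p^(1/(1-p)).
    c = P ** (1.0 / (1.0 - P))
    return Y0 * ((np.abs(eps) + c) ** P - c ** P) * np.sign(eps)

def project_onto_D(eps_e, sig_e, C=0.1 * Y0, half_width=5e-2):
    """Project a physically-admissible pair (eps_e, sig_e) onto the material curve by
    minimizing eq. (23) over eps'; returns the materially-admissible pair (eps', m(eps'))."""
    def distance_sq(eps_p):
        return 0.5 * C * (eps_p - eps_e) ** 2 + 0.5 * (m(eps_p) - sig_e) ** 2 / C
    res = minimize_scalar(distance_sq, method="bounded",
                          bounds=(eps_e - half_width, eps_e + half_width))
    return res.x, m(res.x)

eps_new, sig_new = project_onto_D(eps_e=2e-3, sig_e=8e7)   # one element's trial state
print(f"eps' = {eps_new:.4e}, sigma' = {sig_new:.4e} Pa")
```

No derivative of \(m\) is ever evaluated here, which is what makes the same loop usable with black-box (e.g. neural-network) constitutive laws.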
## 6 Final remarks

We have introduced the concept of phase-space solvers, a new kind of iterative algorithm for solving problems in non-linear mechanics, and have shown its capacity to outperform traditional solvers when it comes to solving large systems with severe non-linearity. The method relies on projecting onto convex sets whose frontier is defined either by the material constitutive law or Newton's second law. We have shown that, while the latter is achieved through solving a linear system of equations, the projection onto the constitutive law demands the element-wise resolution of either a non-linear optimization problem or a non-linear algebraic equation; this step, however, can be trivially parallelized, which renders the algorithm very efficient. It remains to formally characterize errors [44] and convergence rates [45, 46, 47], but this seems a feasible task since much inspiration can be drawn from previous work on the method of alternating projections or the method of projections onto convex sets. Natural avenues for future practical work include dynamics [48] and finite kinematics [3, 4], apart from the aforementioned inelastic problems with complex loading paths. Finally, we highlight that this approach can also be used to solve non-linear problems in other branches of physics, as long as they feature a constitutive law and some physical balance conditions (Newton's second law, first law of thermodynamics...). As examples, let us mention heat transfer problems in complex media and the flow of non-Newtonian fluids.

## Acknowledgment

The author is thankful to Dr. Trenton Kirchdoerfer for sharing information about the homonymous truss. The financial support of EPFL and useful discussions with Prof. J.-F. Molinari are gratefully acknowledged.
2309.04222
Offline Recommender System Evaluation under Unobserved Confounding
Off-Policy Estimation (OPE) methods allow us to learn and evaluate decision-making policies from logged data. This makes them an attractive choice for the offline evaluation of recommender systems, and several recent works have reported successful adoption of OPE methods to this end. An important assumption that makes this work is the absence of unobserved confounders: random variables that influence both actions and rewards at data collection time. Because the data collection policy is typically under the practitioner's control, the unconfoundedness assumption is often left implicit, and its violations are rarely dealt with in the existing literature. This work aims to highlight the problems that arise when performing off-policy estimation in the presence of unobserved confounders, specifically focusing on a recommendation use-case. We focus on policy-based estimators, where the logging propensities are learned from logged data. We characterise the statistical bias that arises due to confounding, and show how existing diagnostics are unable to uncover such cases. Because the bias depends directly on the true and unobserved logging propensities, it is non-identifiable. As the unconfoundedness assumption is famously untestable, this becomes especially problematic. This paper emphasises this common, yet often overlooked issue. Through synthetic data, we empirically show how na\"ive propensity estimation under confounding can lead to severely biased metric estimates that are allowed to fly under the radar. We aim to cultivate an awareness among researchers and practitioners of this important problem, and touch upon potential research directions towards mitigating its effects.
Olivier Jeunen, Ben London
2023-09-08T09:11:26Z
http://arxiv.org/abs/2309.04222v1
# Offline Recommender System Evaluation under Unobserved Confounding ###### Abstract. Off-Policy Estimation (OPE) methods allow us to learn and evaluate decision-making policies from logged data. This makes them an attractive choice for the offline evaluation of recommender systems, and several recent works have reported successful adoption of OPE methods to this end. An important assumption that makes this work is the absence of unobserved confounders: random variables that influence both actions and rewards at data collection time. Because the data collection policy is typically under the practitioner's control, the unconfoundedness assumption is often left implicit, and its violations are rarely dealt with in the existing literature. This work aims to highlight the problems that arise when performing off-policy estimation in the presence of unobserved confounders, specifically focusing on a recommendation use-case. We focus on policy-based estimators, where the logging propensities are learned from logged data. We characterise the statistical bias that arises due to confounding, and show how existing diagnostics are unable to uncover such cases. Because the bias depends directly on the _true_ and unobserved logging propensities, it is non-identifiable. As the unconfoundedness assumption is famously untestable, this becomes especially problematic. This paper emphasises this common, yet often overlooked issue. Through synthetic data, we empirically show how naive propensity estimation under confounding can lead to severely biased metric estimates that are allowed to fly under the radar. We aim to cultivate an awareness among researchers and practitioners of this important problem, and touch upon potential research directions towards mitigating its effects. 
or instrumental variables (Koskov and Koskov, 2017) or make assumptions about the nature of confounding variables (Koskov and Koskov, 2017; Koskov, 2017; Koskov, 2017). Analogously, instrumental variables have been leveraged to test for the unconfoundedness assumption (Koskov and Koskov, 2017), and other statistical methods have been proposed to assess the sensitivity of results to potential confounders (Koskov, 2017). Nevertheless, in the absence of additional tools, unconfoundedness is a famously _untestable_ assumption, rendering its effects especially troublesome. Focusing on the off-policy bandit setting with a guiding example in recommendation, we aim to answer: "_Can we reliably select the optimal policy from a set of competing policies, under unobserved confounding?_"

## 2. Off-policy estimation in the presence of unobserved confounders

Throughout this work, we denote random variables as \(X\), with specific instances as \(x\in\mathcal{X}\). A contextual bandit problem consists of _contexts_ \(X\) (e.g. user and context features), _actions_ \(A\) (e.g. item recommendations), and _rewards_ \(R\) (e.g. clicks, streams, revenue). Rewards are causally influenced by both contexts and actions, as illustrated by the edges in the causal graph shown in Figure 1. A contextual _policy_ \(\pi\) determines which actions are selected (or sampled), thereby inducing a probability distribution over \(A\), which is often denoted with the shorthand \(\pi(a|x)\coloneqq\mathrm{P}(A=a|X=x;\Pi=\pi)\). A policy's effectiveness is measured by the expected reward obtained when selecting actions according to that policy: \(\mathbb{E}_{x}\mathbb{E}_{a\sim\pi(\cdot|x)}\left[R\right]\). In a recommendation application, this value can be estimated by deploying the policy in an online experiment. However, since such experiments are typically costly, and we may have many policies to evaluate, we would rather obtain reward estimates by other means. Suppose there is an existing deployed policy, called the _logging policy_ \(\pi_{0}\), with which we collect a dataset \(\mathcal{D}_{0}\coloneqq\{(a_{i},r_{i})_{i=1}^{N}\}\). We will assume, as is often the case, that the logging policy, and the contextual covariates it uses, are unobservable, as indicated by the dashed nodes and edges in Figure 1. Our goal is to leverage data logged under \(\pi_{0}\) to estimate the expected reward under \(\pi\), a problem often referred to as _off-policy_ estimation (Koskov and Koskov, 2017; Koskov, 2017). One simple off-policy estimator is the _Direct Method_ (DM). DM computes the expected reward under \(\pi\) (Eq. 1) using a model \(\widehat{R}^{A}_{\mathrm{DM}}(a)\) that estimates the reward for every available action. Since we assume that contextual covariates are unavailable, the best we can do for \(\widehat{R}^{A}_{\mathrm{DM}}(a)\) is to naively count the observed rewards for every action (Eq. 2).
\[\widehat{R}_{\mathrm{DM}}(\pi)=\sum_{a\in\mathcal{A}}\widehat{R}^{A}_{\mathrm{DM}}(a)\pi(a)\qquad\qquad(1)\qquad\qquad\qquad\widehat{R}^{A}_{\mathrm{DM}}(a)=\frac{\sum_{(a_{i},r_{i})\in\mathcal{D}_{0}}\mathbb{1}\left\{a_{i}=a\right\}\cdot r_{i}}{\sum_{(a_{i},r_{i})\in\mathcal{D}_{0}}\mathbb{1}\left\{a_{i}=a\right\}} \tag{2}\] Unfortunately, this estimator is biased, for two reasons: (a) it does not take into account the covariates \(X\) (i.e., the model is mis-specified), and (b) it ignores the selection bias from the logging policy \(\pi_{0}\), influencing the estimates in Eq. 2. In theory, we can bypass both the model mis-specification and selection bias problems by leveraging the _ideal_ IPS estimator (Eq. 3), which is provably unbiased. Importantly, ideal IPS requires access to both the contextual covariates and the exact action probabilities (_propensities_) under \(\pi_{0}\), which we assume are unavailable. Accordingly, we will adopt the common practice of using estimated logging propensities \(\widehat{\pi}_{0}\) for IPS (Eq. 4). As the estimated propensities cannot properly consider all covariates, this leads to unobserved confounding. \[\widehat{R}_{\mathrm{ideal-IPS}}(\pi)=\frac{1}{|\mathcal{D}_{0}|}\sum_{(x_{i},a_{i},r_{i})\in\mathcal{D}_{0}}r_{i}\frac{\pi(a_{i})}{\pi_{0}(a_{i}|x_{i})}\quad(3)\qquad\qquad\qquad\widehat{R}_{\mathrm{estim-IPS}}(\pi)=\frac{1}{|\mathcal{D}_{0}|}\sum_{(a_{i},r_{i})\in\mathcal{D}_{0}}r_{i}\frac{\pi(a_{i})}{\widehat{\pi}_{0}(a_{i})} \tag{4}\] Using the fact that the ideal IPS estimator is unbiased, we can quantify the bias of the estimated IPS estimator as: \[\mathbb{E}[\widehat{R}_{\mathrm{estim-IPS}}(\pi)]-\mathbb{E}_{a\sim\pi}[R]=\mathbb{E}[\widehat{R}_{\mathrm{estim-IPS}}(\pi)]-\mathbb{E}[\widehat{R}_{\mathrm{ideal-IPS}}(\pi)]=\mathbb{E}\left[R\,\pi(A|X)\left(\frac{1}{\widehat{\pi}_{0}(A)}-\frac{1}{\pi_{0}(A|X)}\right)\right]. \tag{5}\]

Figure 1. Probabilistic Graphical Model (PGM) for our setup.

To further illustrate our point, we resort to Pearl's do-calculus framework (Pearl, 1974). What OPE methods wish to estimate is the expected value of the reward given that a new policy _intervenes_ on the action distribution. When unobserved confounders are present, this _interventional_ quantity is _not_ equal to the _observational_ quantity we can estimate from logged data: \(\mathbb{E}\left[R|A=a\right]\neq\mathbb{E}\left[R|\text{do}(A=a)\right]\). Instead, we would require the "backdoor adjustment" to obtain: \[\mathbb{E}\left[R|\text{do}(A=a)\right]=\sum_{x\in X}\mathbb{E}\left[R|A=a,X=x\right]\mathrm{P}(X=x). \tag{6}\] It should be clear that without access to \(X\), this estimand is non-identifiable, and this problem is not easily solved.

## 3. Existing Diagnostics for Logging Propensities do Not Uncover Confounding Bias

Several diagnostics have been proposed in the literature to detect data quality issues with logged bandit feedback. In particular, they try to uncover cases where the two classical assumptions of the IPS estimator do not hold (Han and Recht, 1974; Han and Recht, 1974): (1) either the empirical action frequencies in the data do not match those implied by the logged propensities, or (2) the logging policy does not have full support over the action space. Note that the presence of unobserved confounders does _not_ automatically violate these assumptions. As a result, the diagnostics that were proposed will _not_ detect confounding bias. Logging propensities can be estimated by empirically counting logged actions, as shown in Eq.
7. In doing so, we obtain unbiased estimates of the true marginal action probabilities. Indeed, \(\lim_{N\to\infty}\widehat{\pi}_{0}(a)=\text{P}(A=a|\Pi=\pi_{0})\). Li et al. propose the use of _arithmetic_ and _harmonic_ mean tests to compare empirical action frequencies with the logging propensities (Han and Recht, 1974). As we _define_ the logging propensities to be equal to the empirical action frequencies, it should be clear that this test will trivially pass. Alternatively, London and Joachims propose to use the average importance weight as a control variate, whose expectation should equal 1 for any target policy \(\pi\) (Han and Recht, 1974). Here as well, because the marginal propensities are unbiased (Eq. 7), we can show that the control variate remains unbiased as well (Eq. 8). \[\widehat{\pi}_{0}(a)=\frac{1}{|\mathcal{D}_{0}|}\sum_{(a_{i},r_{i})\in\mathcal{D}_{0}}\mathbb{1}\{a_{i}=a\}\;\xrightarrow[N\to\infty]{}\;\text{P}(A=a|\Pi=\pi_{0})=\sum_{x\in X}\text{P}(A=a|X=x,\Pi=\pi_{0})\text{P}(X=x). \tag{7}\]

Theorem 3.1.: _When unobserved confounders are present and logging propensities are estimated from empirical data (thus ignoring the confounders), the expected value of the importance weights equals 1 for any target policy:_ \[\underset{\begin{subarray}{c}x\sim\text{P}(X)\\ a\sim\text{P}(A|X=x,\Pi=\pi_{0})\end{subarray}}{\mathbb{E}}\left[\frac{\pi(a)}{\widehat{\pi}_{0}(a)}\right]=1.\]

Proof.: \[\underset{\begin{subarray}{c}x\sim\text{P}(X)\\ a\sim\text{P}(A|X=x,\Pi=\pi_{0})\end{subarray}}{\mathbb{E}}\left[\frac{\pi(a)}{\widehat{\pi}_{0}(a)}\right] =\sum_{a\in\mathcal{A}}\sum_{x\in X}\frac{\pi(a)}{\widehat{\pi}_{0}(a)}\text{P}(A=a|X=x,\Pi=\pi_{0})\text{P}(X=x)\] \[=\sum_{a\in\mathcal{A}}\frac{\pi(a)}{\widehat{\pi}_{0}(a)}\sum_{x\in X}\text{P}(A=a|X=x,\Pi=\pi_{0})\text{P}(X=x)\] (8) \[\xrightarrow[N\to\infty]{}\sum_{a\in\mathcal{A}}\pi(a)\frac{\sum_{x\in X}\text{P}(A=a|X=x,\Pi=\pi_{0})\text{P}(X=x)}{\sum_{x\in X}\text{P}(A=a|X=x,\Pi=\pi_{0})\text{P}(X=x)}=\sum_{a\in\mathcal{A}}\pi(a)=1\quad\Box\]

As such, existing diagnostics are unable to detect issues of unobserved confounding. This implies that the self-normalised IPS (SNIPS) estimator and its extensions that adopt the above control variate to reduce the variance of the IPS estimator would exhibit the same bias as estimated IPS when unobserved confounders are present (Han and Recht, 1974; Han and Recht, 1974).

## 4. Empirical validation of the effects of unobserved confounding on synthetic data

We now describe a guiding example, and provide a notebook that implements the methods described earlier at github.com/olivierjeunen/confounding-consequences-2023/. Consider a setting with two possible actions and a binary covariate \(X=\{x_{0},x_{1}\}\), following the distribution in Table 1(a) (parameterised with \(\alpha\in\left[\frac{1}{2},1\right]\)). Rewards are Bernoulli-distributed (Table 1(b)). The logging policy is contextual, taking a suboptimal action with probability \(\epsilon\in[0,1]\) (Table 1(c)). We can map this to an intuitive setting: action \(a_{1}\) is of general appeal to the entire population (i.e. \(R\perp\!\!\!\perp X|A=a_{1}\)); whereas action \(a_{0}\) is specifically appealing to a more niche user-base (i.e. \(\mathbb{E}[R|X=x_{0},A=a_{0}]>\mathbb{E}[R|X=x_{0},A=a_{1}]\), but \(\mathsf{P}(X=x_{0})<\mathsf{P}(X=x_{1})\)). Estimates for logging propensities can be obtained by empirical counting, as in Eq. 7.
The expected value for these estimated context-independent propensities is shown in Table 1(d).

_Naive propensity estimation methods suffer from confounding bias._ We simulate an off-policy estimation setup where we wish to evaluate deterministic policies \(\pi_{\mathsf{a}}(a)\equiv 1\). We obtain \(N=2\cdot 10^{6}\) samples from the synthetic distribution described in Table 1, and compute the confounded estimate \(\widehat{R}_{\text{estim-IPS}}\), as well as the unobservable ideal IPS estimate \(\widehat{R}_{\text{ideal-IPS}}\). We vary both the level of selection bias \(\epsilon\) (over the x-axis), and the confounding distribution \(\alpha\) (over columns) in Fig. 2, where the y-axis shows the estimated difference in rewards from policies \(\pi_{a_{1}}\) and \(\pi_{a_{0}}\). We shade the positive and negative regions in the plot to clearly visualise when an off-policy estimator allows us to _correctly_ identify the optimal policy, and when it does not. We observe that the IPS estimator with estimated propensities fails considerably, in that it will incorrectly identify \(\pi_{a_{0}}\) as the reward-maximising policy. Only when \(\epsilon\) is sufficiently high (i.e. approaching a uniform logging policy at \(\epsilon=0.5\), where no confounding is present) is \(\widehat{R}_{\text{estim-IPS}}\) able to correctly identify \(\pi_{a_{1}}\). This shows that, even in simplified settings, the estimates we obtain from IPS with confounded propensity estimates lead to misleading conclusions. Furthermore, existing diagnostics cannot detect these problems when they occur.

## 5. Conclusions & outlook

Unobserved confounders lead to biased estimates, both for DM- and IPS-based methods. This problem has received considerable attention in the research literature for general offline reinforcement learning use-cases, but the literature dealing with these issues in recommendation settings remains scarce. Our work highlights that this is problematic, especially in cases where propensities are estimated under simplifying independence assumptions. In doing so, we add to the literature identifying problematic practices that might hamper progress in the field (Bouquet et al., 2017; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018).
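The authors' full notebook is at the GitHub link given in Section 4; as a complement, the following is an independent, minimal sketch of the same qualitative setup. The Bernoulli reward means (and the particular \(\alpha\), \(\epsilon\)) below are assumptions chosen to mimic the structure described in Section 4, since Table 1's values are not reproduced in the text. It illustrates both the bias of estimated-propensity IPS and the fact that the Theorem 3.1 diagnostic still reports a mean importance weight of 1.

```python
# Illustrative sketch only (not the authors' notebook): two actions, binary covariate,
# contextual logging policy, context-free (confounded) propensity estimates.
import numpy as np

rng = np.random.default_rng(0)
N, alpha, eps = 2_000_000, 0.75, 0.1           # sample size, P(X=x1), logging exploration
means = np.array([[0.8, 0.5],                  # assumed E[R | X=x0, A=a0], E[R | X=x0, A=a1]
                  [0.2, 0.5]])                 # assumed E[R | X=x1, A=a0], E[R | X=x1, A=a1]

x = rng.binomial(1, alpha, size=N)             # covariate (unobserved downstream)
best = np.where(x == 0, 0, 1)                  # context-optimal action
a = np.where(rng.random(N) < 1 - eps, best, 1 - best)   # contextual logging policy
true_prop = np.where(a == best, 1 - eps, eps)  # pi_0(a | x), unavailable in practice
r = rng.binomial(1, means[x, a])               # Bernoulli rewards

est_prop = np.bincount(a, minlength=2) / N     # Eq. (7): confounded, context-free estimate

for target in (0, 1):                          # evaluate deterministic policies pi_a
    w_ideal = (a == target) / true_prop
    w_estim = (a == target) / est_prop[a]
    print(f"pi_a{target}: ideal IPS = {np.mean(r * w_ideal):.3f}, "
          f"estimated IPS = {np.mean(r * w_estim):.3f}, "
          f"mean importance weight = {np.mean(w_estim):.3f}")   # diagnostic of Thm 3.1, ~1
```

With these assumed parameters the estimated-propensity IPS values rank \(\pi_{a_{0}}\) above \(\pi_{a_{1}}\) even though the ideal IPS values (and the true expected rewards) rank them the other way round, while the control-variate diagnostic remains indistinguishable from the unconfounded case.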
2309.16626
Chemical evolution of local post-starburst galaxies: Implications for the mass-metallicity relation
We use the stellar fossil record to constrain the stellar metallicity evolution and star-formation histories of the post-starburst (PSB) regions within 45 local post-starburst galaxies from the MaNGA survey. The direct measurement of the regions' stellar metallicity evolution is achieved by a new two-step metallicity model that allows for stellar metallicity to change at the peak of the starburst. We also employ a Gaussian process noise model that accounts for correlated errors introduced by the observational data reduction or inaccuracies in the models. We find that a majority of PSB regions (69% at $>1\sigma$ significance) increased in stellar metallicity during the recent starburst, with an average increase of 0.8 dex and a standard deviation of 0.4 dex. A much smaller fraction of PSBs are found to have remained constant (22%) or declined in metallicity (9%, average decrease 0.4 dex, standard deviation 0.3 dex). The pre-burst metallicities of the PSB galaxies are in good agreement with the mass-metallicity relation of local star-forming galaxies. These results are consistent with hydrodynamic simulations, which suggest that mergers between gas-rich galaxies are the primary formation mechanism of local PSBs, and rapid metal recycling during the starburst outweighs the impact of dilution by any gas inflows. The final mass-weighted metallicities of the PSB galaxies are consistent with the mass-metallicity relation of local passive galaxies. Our results suggest that rapid quenching following a merger-driven starburst is entirely consistent with the observed gap between the stellar mass-metallicity relations of local star-forming and passive galaxies.
Ho-Hin Leung, Vivienne Wild, Michail Papathomas, Adam Carnall, Yirui Zheng, Nicholas Boardman, Cara Wang, Peter H. Johansson
2023-09-28T17:30:59Z
http://arxiv.org/abs/2309.16626v3
# Chemical evolution of local post-starburst galaxies: Implications for the mass-metallicity relation ###### Abstract We use the stellar fossil record to constrain the stellar metallicity evolution and star-formation histories of the post-starburst regions within 45 local post-starburst galaxies from the MaNGA survey. The direct measurement of the regions' stellar metallicity evolution is achieved by a new two-step metallicity model that allows for stellar metallicity to change at the peak of the starburst. We also employ a Gaussian process noise model that accounts for correlated errors introduced by the observational data reduction or inaccuracies in the models. We find that a majority of post-starburst regions (69% at \(>1\sigma\) significance) increased in stellar metallicity during the recent starburst, with an average increase of 0.8 dex and a standard deviation of 0.4 dex. A much smaller fraction of PSBs are found to have remained constant (22%) or declined in metallicity (9%, average decrease 0.4 dex, standard deviation 0.3 dex). The pre-burst metallicities of the post-starburst galaxies are in good agreement with the mass-metallicity relation of local star-forming galaxies. These results are consistent with hydrodynamic simulations, which suggest that mergers between gas-rich galaxies are the primary formation mechanism of local PSBs, and rapid metal recycling during the starburst outweighs the impact of dilution by any gas inflows. The final mass-weighted metallicities of the post-starburst galaxies are consistent with the mass-metallicity relation of local passive galaxies. Our results suggest that rapid quenching following a merger-driven starburst is entirely consistent with the observed gap between the stellar mass-metallicity relations of local star-forming and passive galaxies. keywords: galaxies: evolution - galaxies: abundances - galaxies: starburst - galaxies: stellar content - methods: statistical ## 1 Introduction Since the advent of the first large-scale galaxy surveys such as the 2dF Galaxy Redshift Survey (Colless et al., 2001) and the Sloan Digital Sky Survey (York et al., 2000), galaxies have been observed to fall into a bimodal distribution in photometric colours in the local Universe (Strateva et al., 2001; Baldry et al., 2004; Bell et al., 2004; Gavazzi et al., 2010). The two sub-populations are found to exhibit different distributions across many other properties, including total stellar mass (Vulcani et al., 2013), star-formation history (SFH) (Kauffmann et al., 2003), kinematics (Graham et al., 2018), stellar metallicity (Gallazzi et al., 2005; Peng et al., 2015), radial concentration (Hogg et al., 2002), and environment (Balogh et al., 2004; Gavazzi et al., 2010). The red sequence consists of quenched, mostly dispersion-dominated galaxies, whilst the blue cloud consists of star-forming, mostly rotationally-supported galaxies. The former also have higher stellar metallicity at a given stellar mass than the latter, which can be used to understand the origin of galaxy bimodality by probing the mechanisms of galaxy formation and quenching (Peng et al., 2015; Trussler et al., 2020). Metallicity is the measurement of the mass of all elements heavier than hydrogen and helium, relative to the total mass of baryons. The vast majority of metals are produced through stellar processes, including a combination of stellar nucleosynthesis, type Ia and core collapse supernovae (for a review, see Nomoto et al., 2013 and more recently Maiolino and Mannucci, 2019). 
These metals are then released into a galaxy's inter-stellar medium (ISM) through mass loss during the red giant phase in lower mass stars (\(\approx 2-8\)M\({}_{\odot}\)) and supernovae in higher mass stars (\(\gtrsim 8\)M\({}_{\odot}\)). In a closed box system (e.g. Tinsley, 1980) the recycling of this gas into new stars leads to the next generation of stars formed having a higher stellar metallicity than the previous. However, the closed box model is an unrealistic approximation of galaxies, as interactions with the medium outside the galaxy through inflows and outflows are omitted. Inflows from the galaxy's circum-galactic medium (CGM) bring in metal-poor gas, diluting the gas reservoir and lowering both the gas-phase and subsequently stellar metallicity. Outflows remove gas, slowing down star formation to produce fewer metals. Additionally, outflows that originate from stellar feedback might preferentially remove high metallicity ISM gas from systems, further strengthening the role of outflows in lowering metallicity, particularly in lower mass galaxies (Chisholm et al., 2018). Therefore, the stellar metallicity of a galaxy is a result of the net sum of three processes: enrichment through stellar processes, inflows, and outflows. These processes are key components of the baryonic cycle in galaxies, which is intrinsically linked to mechanisms that cause galaxy properties to vary with time, including the quenching of star formation. A key piece of the puzzle to understand the baryonic cycle and the evolution of galaxies is provided by higher redshift galaxy surveys such as UltraVISTA (McCracken et al., 2012). The surveys found that red quiescent galaxies grow in both number and total stellar mass since \(z=4\)(Ilbert et al., 2013; Muzzin et al., 2013), implying star-forming blue cloud galaxies must shut down (quench) their star formation to form quiescent red-sequence galaxies. However, the demographics of red and blue galaxy populations alone are unable to inform on the timescales of these quenching events: the steady growth in quenched galaxies could arise from the average over many individual galaxies with a wide range of different quenching timescales. As stars form in molecular clouds, the quenching of star formation can be achieved in two ways. The first is the complete consumption of gas following the (likely gradual) termination of the supply of cold gas into the regions of star formation. The second is the sudden heating and/or disruption of the molecular clouds due to disruptive events originating from either within or outwith the galaxy. These two processes are expected to act on different timescales (e.g. Schawinski et al., 2014), which is consistent with observational findings that quenching of star formation occurs over varying timescales, ranging from \(>5\) Gyr to \(<1\) Gyr (Heavens et al., 2004; Pacifici et al., 2016; Rowlands et al., 2018; Carnall et al., 2018). 
Mechanisms proposed for the slow termination of star formation include natural depletion of gas reservoirs over time through the gradual locking up of gas into stars, the "maintenance" of hot gas reservoirs by active galactic nucleus (AGN) feedback preventing cooling of the CGM (Croton et al., 2006), morphological quenching due to the stabilising force of a central spheroid (Martig et al., 2009; Ceverino et al., 2010), shock heating of higher mass halo gas preventing cooling of gas onto galaxies (Dekel and Birnboim, 2006), the inhibition of radial inflows of cold gas by the increase in angular momentum of accreted gas due to disc growth (Renzini et al., 2018; Peng and Renzini, 2020) and the restriction and/or stripping of galaxy gaseous envelopes by tidal forces in clusters (Balogh et al., 2000; Boselli and Gavazzi, 2006). Peng et al. (2015) and Trussler et al. (2020) have argued that slow quenching mechanisms are the main driver of intermediate and low stellar mass (\(M_{*}<10^{11}M_{\odot}\)) galaxy quenching at \(z<1\) due to the higher metallicity of quenched galaxies compared to star-forming galaxies in the local Universe. In this model, the slow decrease in cold gas supply leads to gradual quenching, which allows for star formation to continue with the remaining gas in the galaxy while a lack of continued inflow of low metallicity CGM gas brings reduced dilution effects. The combined effect enhances the metallicity of quenched galaxies with respect to star-forming galaxies. Trussler et al. (2020) further concluded that, although the decrease in gas supply is the main driver for quenching, a continuous secondary contribution from gas ejection through outflows is required to match the star-formation rates (SFRs) of local passive galaxies particularly at lower stellar masses. On the other hand, studies that analysed large scale cosmological hydrodynamical simulations have found an important contribution to the build up of the red sequence from rapidly-quenched galaxies (SIMBA, \(\approx 50\%\) contribution of total stellar mass at \(z\sim 1\): Rodriguez Montero et al., 2019; Zheng et al., 2022; IllustrisTNG, \(\approx 40\%\) of galaxies over all redshifts: Walters et al., 2022). Suggested mechanisms that could lead to this rapid quenching of star formation include feedback in the form of violent ejection of gas from the central regions of a galaxy powered by AGN outflows (Feruglio et al., 2010; Cicone et al., 2014). Stellar sources such as supernovae and stellar winds could similarly provide substantial feedback, particularly in dense starburst regions (Martin, 1998, 2005; Bolatto et al., 2013; Molero et al., 2023). In clusters, infalling star-forming satellites can experience processes such as ram pressure stripping, thermal evaporation and viscous stripping, which may be powerful enough to remove cold gas directly from star-forming regions (Boselli and Gavazzi, 2006). Several approaches have been used to measure the relative importance of various quenching mechanisms observationally. This includes, but is not limited to, fitting for the SFHs of quiescent galaxies to obtain their quenching timescales (e.g. Pacifici et al., 2016), identifying star-forming galaxies with unusually low molecular gas fractions and short depletion times (e.g. Gomez-Guijarro et al., 2022), and the aforementioned difference in mass-metallicity (MZ) relations between star-forming and quenched galaxies (Peng et al., 2015; Trussler et al., 2020). 
Despite the substantial work in recent years, the various approaches lead to conflicting results on the relative importance of fast and slow quenching mechanisms. One promising avenue towards resolving this confusion in the literature is the study of post-starburst (PSB) galaxies, which have experienced a recent (\(<2\) Gyr), rapid drop in star formation activity (e.g. Wild et al., 2020). Studying the prevalence and properties of such objects has the potential to constrain both the contribution of rapid quenching to the growth of the red sequence and the physical mechanisms responsible for such rapid quenching events (e.g. Wild et al., 2009; Rowlands et al., 2018; Davis et al., 2019; Li et al., 2019; Zheng et al., 2020). Historically these were first identified as "E+A" or "K+A" galaxies due to their strong Balmer absorption lines and a lack of nebular emission lines (Dressler and Gunn, 1983). As a result of their SFH, PSBs exhibit an abundance of A and F type stars, while the shorter-lived O and B stars are largely absent, allowing the pre-burst stellar population to not be heavily outshone (see French, 2021, for a recent review). PSBs typically display compact morphologies, in both the local Universe and at higher redshifts (e.g. Almaini et al., 2017; Chen et al., 2022). Some studies have suggested that high redshift starburst galaxies such as sub-millimetre galaxies are progenitors of high-redshift PSBs (Toft et al., 2014; Wild et al., 2016, 2020; Wilkinson et al., 2021) and that low redshift starburst or ultraluminous infrared galaxies (ULIRGs) are progenitors of low-redshift PSBs (Hopkins et al., 2008; Cales et al., 2011; French et al., 2015; Pawlik et al., 2018). The initial quenching that transitions PSBs away from the starburst phase is expected to be mainly driven by stellar feedback (see e.g. Wild et al., 2009), but current-generation simulations require AGN mechanical feedback (outflows) to completely halt star formation and sustain the reduced SFR after the starburst (e.g. Zheng et al., 2020). Although PSBs account for only a minor fraction (\(<1\%\)) of the galaxy population at redshift \(z\sim 0\) (Pawlik et al., 2016), the short visibility window of the spectral features means that a considerable fraction of all quenched galaxies could have passed through a PSB phase, particularly at higher redshift (Wild et al., 2009, 2016, 2020; Whitaker et al., 2012; Belli et al., 2019; Taylor et al., 2023). Therefore, PSBs provide a key testing ground to study the effects of fast quenching mechanisms. Measuring the gas-phase metallicity of PSBs is challenging due to the weakness of nebular emission lines and contamination by AGN, shock or diffuse interstellar excitation mechanisms, and can only be achieved in some cases (see Rowlands et al., 2018; Boardman et al. submitted). However, we might expect substantial chemical evolution to occur during such extreme changes in star formation rate. Given the negative radial metallicity gradients of star-forming galaxies (e.g. Matteucci and Francois, 1989; Zaritsky et al., 1994), the inflow of gas required to drive the centralised starburst common to many PSBs might be expected to pull in lower metallicity gas from the outskirts of the galaxies, reducing metallicity. On the other hand, the very high star formation rates over a short period of time will lead to repeated recycling of gas released from evolved stars and a rapid build-up of metals.
Given the higher metallicity of quiescent galaxies than star-forming galaxies at a given stellar mass (Gallazzi et al., 2005; Peng et al., 2015), which of these processes dominates in reality has important implications for how significantly post-starburst galaxies, and rapid quenching more generally, can contribute to the build-up of the quiescent population. A systematic characterisation of the stellar metallicity evolution of PSBs has not, to our knowledge, been attempted previously. In this study, we aim to measure this by taking advantage of the fact that both the pre-burst and starburst stellar populations are visible in PSBs' integrated light spectra. To draw a more direct comparison with simulations that focus on the chemical evolution in the cores of starburst galaxies, we focus this study on analysing galaxies with PSB-like centres. In Section 2, we describe our data and sample selection. In Section 3, we present our method of spectral fitting of the optical continuum through stellar population synthesis models. We test the method with both "self-consistent" and simulation-based parameter recovery in Section 4, to verify that we can recover the SFH and chemical history of PSBs. We then apply the method to MaNGA galaxies, present the results in Section 5, and discuss them in Section 6. Where necessary, we assume a cosmology with \(\Omega_{M}=0.3\), \(\Omega_{\Lambda}=0.7\) and \(h=0.7\). All magnitudes are in the AB system (Oke and Gunn, 1983). We assume a Kroupa (2001) stellar initial mass function (IMF), and take solar metallicity \(Z_{\odot}=0.0142\) (Asplund et al., 2009). We re-scale all metallicity measurements quoted from the literature to this solar metallicity for direct comparison. Throughout, we denote lookback time as \(t\) and ages of the Universe as \(t^{\prime}\), such that \(t^{\prime}=t_{H}-t\) where \(t_{H}\) is the age of the Universe.

## 2 Data

MaNGA (Bundy et al., 2015) is an integral field spectrograph (IFS) survey of \(\approx 10000\) \(M_{*}>10^{9}M_{\odot}\) galaxies (11273 datacubes) in the local \(z<0.2\) neighbourhood, part of the fourth-generation Sloan Digital Sky Survey (SDSS-IV, Blanton et al., 2017) that ran from 2014 to 2020. It used the Sloan Foundation Telescope at Apache Point Observatory (Gunn et al., 2006) to collect spatially-resolved spectra using hexagonal bundles of 19 to 127 optical fibres, depending on the apparent size of the target. The BOSS spectrographs (Smee et al., 2013) provide high-quality spectra in the wavelength range \(3622-10354\) Å at a spectral resolution of \(R\sim 2000\)1. We access MaNGA data through both the web interface and the python package Marvin (Cherinka et al., 2019).

Footnote 1: \(R=\lambda/\Delta\lambda_{\rm FWHM}\)

For all MaNGA galaxies in the full data release DR17 (Abdurro'uf et al., 2022), we obtain redshifts from the MaNGA data reduction pipeline (Law et al., 2016, 2021) and galaxy stellar masses from the NASA-Sloan Atlas (NSA_ELPETRO_MASS, a K-correction fit to elliptical Petrosian fluxes, see Blanton et al., 2011). We obtain spectral indices along with other necessary properties from the MaNGA data analysis pipeline (Westfall et al., 2019; Belfiore et al., 2019). We adjust the stellar masses from NSA for Hubble constant \(h=0.7\). Other stellar mass estimates from SDSS-MPA/JHU2 and the Wisconsin method (Chen et al., 2012) were also considered, but provided no qualitative changes to the conclusions.
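As a small aside (not part of the authors' pipeline), the adopted cosmology and the lookback-time convention \(t^{\prime}=t_{H}-t\) can be made concrete with astropy; the snippet also evaluates the \(z<0.06\) sample limit used in the selection described next, together with the full \(z<0.2\) MaNGA range.

```python
from astropy.cosmology import FlatLambdaCDM

# Cosmology stated in the text: Omega_M = 0.3, Omega_Lambda = 0.7, h = 0.7.
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

t_H = cosmo.age(0)                           # age of the Universe, t_H
for z in (0.06, 0.2):                        # sample limit and full MaNGA redshift range
    t = cosmo.lookback_time(z)               # lookback time t
    t_prime = t_H - t                        # age of the Universe at z, t' = t_H - t
    scale = cosmo.kpc_proper_per_arcmin(z)   # spatial scale, relevant for resolution
    print(f"z = {z}: t = {t:.2f}, t' = {t_prime:.2f}, scale = {scale:.1f}")
```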
We limit the sample to \(z<0.06\) in favour of local PSBs with good spatial resolution, leaving 7971 galaxies.

Footnote 2: J. Brinchmann: [http://www.mpa-garching.mpg.de/SDSS](http://www.mpa-garching.mpg.de/SDSS) and [http://home.strw.leidenuniv.nl/~jarle/SDSS/](http://home.strw.leidenuniv.nl/~jarle/SDSS/)

Within each MaNGA galaxy's datacube3, spaxels4 marked with the NOCOV, LOWCOV or DONOTUSE flags are removed. To identify PSB spaxels, we broadly follow the methods in Chen et al. (2019), specifically requiring the spaxels' median spectral SNR \(>8\) per pixel, a strength of the H\(\delta\) Balmer absorption line after accounting for emission infilling of H\(\delta_{A}>3\)A (Worthey and Ottaviani, 1997), an equivalent width of the H\(\alpha\) nebular emission line after accounting for underlying absorption of W(H\(\alpha\)) \(<10\)A5, and \(\log{\rm W(H\alpha)}<0.23\times{\rm H}\delta_{A}-0.46\).

Footnote 3: The main derived data product of the MaNGA survey; a 3D array with two spatial dimensions and one wavelength dimension. See Law et al. (2016) for details.

Selecting only galaxies with a PSB spaxel fraction \(>0.05\) among all classifiable spaxels (spaxels not excluded by the previous flags or by the SNR threshold that we impose), we slice the galaxies into three elliptical annuli with \(0<R/R_{e}<0.5\), \(0.5<R/R_{e}<1\) and \(1<R/R_{e}<1.5\), where \(R_{e}\) is the \(r\)-band elliptical-Petrosian effective radius, using the elliptical polar distance of each spaxel from the galaxy centre. Our galaxy sample is selected to have \(>50\%\) of the inner annulus spaxels classifiable, and \(>50\%\) of these spaxels classified as PSB, yielding 54 candidates. This sample selection is qualitatively similar to the Chen et al. (2019) selection of MaNGA galaxies with "central" post-starburst regions. After the removal of candidates with faulty MaNGA observations (e.g. mismatched redshift and obvious foreground stars; removed 2: 8248-6104, 8601-12703), active galactic nuclei (AGN) broad emission (removed 1: 11004-6104) and datacubes flagged as BADFLUX by the MaNGA DRP (removed 1: 8944-1902, whose spectrum also appears to be faulty upon visual inspection), the final sample contains 50 PSBs. They span a total stellar mass range of \(8.55<\log_{10}M_{*}/M_{\odot}<10.63\), as listed in Table 1 together with other properties. We form a stacked PSB-only spectrum for each galaxy, only including contributions from the PSB-classified spaxels. To ensure spaxel quality, we remove spaxels marked with the quality flags DEADFIBER or FORESTAR in MaNGA's H\(\alpha\) emission line maps. Spectra are summed unweighted, while uncertainties are summed in quadrature. The stacking of many spaxels for each galaxy allows a very high SNR to be reached across the full MaNGA wavelength range, with the mean SNR per pixel ranging from 95 to \(>1200\). The SNR of the sample is listed in Table 1. After correcting for Milky Way dust reddening, we further mask major nebular emission lines, residuals of strong skylines (central flux \(>5\times 10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\) in Hanuschik, 2003) and Balmer infilling. Since Pawlik et al. (2018) previously showed that stellar population synthesis models based on the MILES stellar library (Falcon-Barroso et al., 2011) provide improved recovery of the SFH of PSBs compared to models based on other libraries, we limit the spectra to rest frame \(\lambda<7500\)A to be fully within the MILES range when fitting. Figure 1 demonstrates the stacking process with two PSBs as examples.
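For clarity, the spaxel cuts and the stacking described above amount to a few lines of array logic. The following is a schematic numpy sketch; the array names are placeholders, and the measured index values are assumed to come from the MaNGA DAP maps.

```python
import numpy as np

def is_psb_spaxel(snr, hd_a, w_ha):
    """PSB spaxel cuts used above (broadly following Chen et al. 2019).

    snr  : median spectral S/N per pixel
    hd_a : emission-corrected Hdelta_A absorption index [Angstrom]
    w_ha : absorption-corrected H-alpha emission equivalent width [Angstrom]
    """
    w_ha_safe = np.clip(w_ha, 1e-3, None)  # guard the log for non-detections
    return (snr > 8) & (hd_a > 3.0) & (w_ha < 10.0) & \
           (np.log10(w_ha_safe) < 0.23 * hd_a - 0.46)

def stack_psb_spectra(flux, err, psb_mask):
    """Unweighted stack of PSB spaxel spectra; uncertainties added in quadrature.

    flux, err : (n_spaxel, n_wave) arrays; psb_mask : boolean selection per spaxel.
    """
    return flux[psb_mask].sum(axis=0), np.sqrt((err[psb_mask] ** 2).sum(axis=0))
```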
Limiting the spectra to rest frame \(\lambda<7500\)A potentially loses valuable constraining power on the older stellar population. Spectral information from longer wavelengths can provide a longer wavelength baseline to minimise any age-dust-metallicity degeneracy (see Sections 5.1 and 6.1 of Conroy, 2013). Hence, the flux in the portions with observed-frame wavelength \(7500<\lambda<9500\)A was summed into a single, artificial photometric data point and passed jointly with the trimmed spectra to the fitting framework (Section 3). However, no significant differences in the estimated stellar and dust properties were observed with or without this photometric data point; therefore, we limit our analysis to the trimmed spectra for the rest of this study.

## 3 Optical continuum spectral fitting

To fully utilise the fossil record stored in the high-quality MaNGA spectra, we employ the fully Bayesian spectral energy distribution (SED) fitting code Bagpipes (Carnall et al., 2018, 2019). In this section, we describe in detail our spectral fitting procedure for the optical continuum, which includes the assumed parametric SFH model for PSBs from Wild et al. (2020) (Section 3.1). Motivated by a suite of gas-rich binary major merger simulations that create PSB signatures (Zheng et al., 2020), we introduce a novel two-step metallicity model which decouples the metallicity before and after the starburst, allowing any change in stellar metallicity during the starburst to be recovered (Section 3.2). Additionally, we employ a Gaussian process correlated noise model as an additive term to the physical model's predicted spectrum, to account for correlated observational uncertainties and imperfect spectral models (Section 3.3). The sampling of the posterior surface is done using the MultiNest nested sampling algorithm (Feroz and Hobson, 2008) and its python interface (Buchner et al., 2014). As shown in Section 4 below, our two-step metallicity model also recovers SFH-related parameters more accurately. Within Bagpipes, we utilise the Bruzual and Charlot (2003) stellar population synthesis models (2016 version), and assume the initial mass function from Kroupa (2001). We apply the two-component dust attenuation law from Wild et al. (2007) and da Cunha et al. (2008), with a fixed power-law exponent \(n=0.7\) for the interstellar medium (ISM). The dust law asserts that stars younger than 10 Myr have a steeper power-law exponent \(n=1.3\) and are more attenuated than older stars by a factor \(\eta\) (\(=1/\mu\) in Wild et al., 2007; da Cunha et al., 2008), as they are assumed to be surrounded by their birth clouds. Overall, our model has 18 parameters, as listed in Table 2: 3 are fixed and 15 are free to be estimated. As we follow the Bayesian paradigm, prior distributions are placed on the 15 free parameters. It is important to also be aware of the prior probability densities imposed on derived physical properties, for example the specific SFR (sSFR) and mass-weighted formation age (\(t_{\rm M}\)), as they can impact the estimated galaxy properties and their uncertainties (Carnall et al., 2019). These are shown alongside SFH draws from the SFH prior in Figure 3 of Wild et al. (2020).

### The star-formation history model

The star-formation history traces the rate of star formation in a galaxy and all of its progenitors back in time, typically expressed in lookback time. To model both the recent starburst and the underlying older stellar population expected in most local PSBs, we adopt the two-component parametric SFHs of Wild et al.
(2020), which provides a good fit to combined spectra and photometry of \(z\sim 1\) PSBs:

\[{\rm SFR}(t)\propto\frac{1-f_{\rm burst}}{\int\psi_{e}\,{\rm d}t}\times\psi_{e}(t)\Big{|}_{t_{\rm form}>t>t_{\rm burst}}+\frac{f_{\rm burst}}{\int\psi_{\rm burst}\,{\rm d}t}\times\psi_{\rm burst}(t). \tag{1}\]

This is made up of the older, exponentially declining component \(\psi_{e}\) and the double power-law starburst component \(\psi_{\rm burst}\), both functions of lookback time \(t\). The lookback time when the older population began to form is denoted \(t_{\rm form}\), while the time since the peak of the starburst is denoted \(t_{\rm burst}\). The fraction \(f_{\rm burst}\) controls the proportion of mass formed during the starburst. The two components have the forms:

\[\psi_{e}(t^{\prime})=\exp\left(-\frac{t^{\prime}}{\tau_{e}}\right), \tag{2}\]

\[\psi_{\rm burst}(t^{\prime})=\left[\left(\frac{t^{\prime}}{t^{\prime}_{\rm burst}}\right)^{\alpha}+\left(\frac{t^{\prime}}{t^{\prime}_{\rm burst}}\right)^{-\beta}\right]^{-1}. \tag{3}\]

All times in Equations 2 and 3 are ages of the Universe; therefore, unlike \(t_{\rm burst}\), the quantity \(t^{\prime}_{\rm burst}\) in the starburst component represents the age of the Universe at the peak of the starburst. \(\tau_{e}\) is the older population's exponential decay timescale, while \(\alpha\) and \(\beta\) control the declining and rising timescales of the burst, respectively, with larger values corresponding to steeper slopes. Using the fraction \(f_{\rm burst}\), rather than parameterising the stellar mass formed in each component individually, allows a flat prior to be placed on \(f_{\rm burst}\). This admits not only SFH shapes with a strong starburst, but also rapid quenching of the existing star formation when \(f_{\rm burst}\sim 0\).

\begin{table}
\begin{tabular}{l l l l l l l l l}
\hline
Plate-IFU (1) & MaNGA ID (2) & RA (deg) (3) & Dec. (deg) (4) & Redshift (5) & \(\log_{10}M_{\star}/M_{\odot}\) (6) & PSB spaxel fraction (7) & Number of stacked spaxels (8) & Stacked mean SNR (9) \\
\hline
7961-1901 & 1-178035 & 259.53275 & 30.12902 & 0.0296 & 9.68 & 0.45 & 162 & 335.8 \\
7964-1902 & 1-179682 & 317.42261 & 0.62777 & 0.0242 & 9.42 & 0.07 & 28 & 169.6 \\
7965-1902 & 1-653485 & 318.50227 & 0.53509 & 0.0269 & 10.10 & 0.91 & 356 & 843.5 \\
8080-3702 & 1-38062 & 49.22887 & -0.04201 & 0.0231 & 9.88 & 0.39 & 285 & 553.1 \\
8081-3702 & 1-38166 & 49.94685 & 0.62382 & 0.0247 & 9.14 & 0.12 & 92 & 112.0 \\
\hline
\end{tabular}
\end{table}
Table 1: List of the 50 studied post-starburst galaxies and their properties: (1) MaNGA Plate-IFU identifier; (2) MaNGA identifier; (3) R.A. (J2000); (4) Declination (J2000); (5) Redshift; (6) \(\log_{10}\) total stellar mass fitted from K-corrected elliptical Petrosian photometric fluxes in GALEX/SDSS _FNugriz_ bands from the NSA catalogue, adjusted for \(h=0.7\); (7) Number fraction of PSB-classified spaxels among all spaxels not marked with the NOCOV or LOWCOV flags in the MaNGA datacubes; (8) Final number of stacked spaxels, after excluding spaxels marked with DEADFIBER or FORESTAR; (9) Mean SNR of the stacked optical spectrum over the full MaNGA wavelength range. The full table is available as supplementary online material.

We investigated allowing the rising starburst slope \(\beta\) to vary freely, with a similar prior to \(\alpha\).
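To make the parameterisation concrete, the short sketch below evaluates Equations 1-3 on a grid of ages of the Universe. It is an illustrative stand-alone implementation with our own variable names, not the internal Bagpipes code.

```python
import numpy as np

def sfh_two_component(t_prime, t_form_prime, t_burst_prime, tau_e, alpha, beta, f_burst):
    """Un-normalised two-component SFR of Equations 1-3 on a grid of ages of the Universe.

    t_prime       : numpy array of ages of the Universe [Gyr], strictly positive and increasing
    t_form_prime  : age of the Universe when the old component starts forming [Gyr]
    t_burst_prime : age of the Universe at the peak of the starburst [Gyr]
    tau_e         : e-folding timescale of the old, exponentially declining component [Gyr]
    alpha, beta   : falling and rising slopes of the double power-law burst
    f_burst       : fraction of the total stellar mass formed in the burst
    """
    # Old component (Eq. 2), only active between t'_form and the burst peak (Eq. 1).
    psi_e = np.exp(-t_prime / tau_e)
    psi_e[(t_prime < t_form_prime) | (t_prime > t_burst_prime)] = 0.0

    # Burst component (Eq. 3): double power law peaking near t'_burst.
    x = t_prime / t_burst_prime
    psi_b = 1.0 / (x**alpha + x**(-beta))

    # Mass-fraction weighting of Eq. 1; simple numerical integrals normalise each component.
    dt = np.gradient(t_prime)
    return (1.0 - f_burst) * psi_e / np.sum(psi_e * dt) + f_burst * psi_b / np.sum(psi_b * dt)
```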
However, parameter recovery tests performed using \(\mathrm{SNR}=100\), at the lower end of our observations, showed that \(\beta\) is poorly constrained in older starbursts (\(t_{\mathrm{burst}}>1\) Gyr). Therefore, we fix \(\beta=250\), consistent with the typical value found from fits to younger starbursts. A common alternative SED fitting method avoids assuming parametric forms for the star-formation history and instead allows stars to form in fixed or variable bins in time (e.g. Cid Fernandes et al., 2005; Tojeiro et al., 2007; Iyer and Gawiser, 2017; Johnson et al., 2021). In general these models do well with smooth SFHs, but are less well suited to galaxies which undergo a rapid change in SFR, due to the need for adaptive variability of the number of time bins. However, both Pawlik et al. (2018) and Suess et al. (2022) have successfully employed such methods, often referred to as non-parametric, to fit PSBs. Suess et al. (2022) increased the number of time bins around the time of the starburst, successfully recovering the rapid rise and fall in SFR of mock PSBs.

\begin{table}
\begin{tabular}{l l l l l}
\hline
Type & Parameter & Form & Min & Max \\
\hline
SFH & \(\log_{10}(M_{*}/M_{\odot})\) & Uniform & 6 & 13 \\
 & \(t_{\mathrm{form}}\) / Gyr & Uniform & 4 & 14 \\
 & \(\tau_{e}\) / Gyr & Uniform & 0.3 & 10 \\
 & \(t_{\mathrm{burst}}\) / Gyr & Uniform & 0 & 4 \\
 & \(\alpha\) & \(\log_{10}\) Uniform & 0.01 & 1000 \\
 & \(\beta\) & Fixed = 250 & - & - \\
 & \(f_{\mathrm{burst}}\) & Uniform & 0 & 1 \\
Metallicity & \(Z_{\mathrm{old}}/Z_{\odot}\) & \(\log_{10}\) Uniform & 0.014 & 3.52 \\
 & \(Z_{\mathrm{burst}}/Z_{\odot}\) & \(\log_{10}\) Uniform & 0.014 & 3.52 \\
Dust & \(A_{V}\) / mag & Uniform & 0 & 2 \\
 & birth-cloud factor \(\eta\) & Uniform & 1 & 5 \\
 & \(t_{\mathrm{birthcloud}}\) / Gyr & Fixed = 0.01 & - & - \\
GP noise & uncorrelated amplitude \(s\) & \(\log_{10}\) Uniform & 0.1 & 10 \\
 & correlated amplitude \(\sigma\) & \(\log_{10}\) Uniform & \(10^{-4}\) & 1 \\
 & period/length scale \(\rho\) & \(\log_{10}\) Uniform & 0.04 & 1.0 \\
 & damping quality factor \(Q\) & Fixed = 0.49 & - & - \\
Miscellaneous & redshift & Uniform & 0.8 z & 1.2 z \\
 & \(\sigma_{\mathrm{disp}}\) / km/s & \(\log_{10}\) Uniform & 40 & 4000 \\
\hline
\end{tabular}
\end{table}
Table 2: Model priors used for fitting PSB SEDs. The parameter symbols are described in Sections 3 to 3.3, or otherwise have their usual meanings. Parameters with a \(\log_{10}\) uniform form have a prior that is flat in log space, \(\log(X)\sim U(\log(\mathrm{min}),\log(\mathrm{max}))\). Redshift is given a uniform prior ranging from 80% to 120% of the target's MaNGA redshift (\(z\)). Note that \(\sigma_{\mathrm{disp}}\) is not the intrinsic velocity dispersion of the galaxy, as it does not account for the finite resolution of the spectral templates or observational data.

Figure 1: Two typical PSBs from our sample. The top row represents PSBs with the vast majority of classifiable spaxels classified as PSB, while the bottom row represents PSBs with only a core PSB region. The left panels show the SDSS 3-colour image with the galaxy's Plate-IFU marked in the top right corner. The MaNGA field of view is marked as the pink hexagon. The middle panels show the spaxel selection (broadly following Chen et al., 2019), displaying regions with no/faulty observations (transparent), with median spectral \(\mathrm{SNR}<5\), too low to be classified (grey), classified as PSB (blue) and classified as non-PSB (red). The right panels show the stacked observed-frame spectrum of the PSB-classified spaxels (black), the stacked \(1\sigma\) observational uncertainty (red, multiplied by \(10\times\) to make it visible) and the spectral ranges masked during the fitting process (grey bands), including major nebular emission lines, skyline residuals and Balmer infilling. The resulting stacked spectra have a mean SNR of 274 and 482, respectively.
While this can provide more flexibility in theory, in practice the need to define time bins and, in some cases, the inclusion of some form of regularisation to smooth between time bins makes the method more model dependent than it first seems. Additionally, no code currently exists which can implement both non-parametric SFHs and a Gaussian process (GP) model to account for correlated noise, which we found crucial for our fitting (see Section 3.3). Therefore, we opt for a parametric SFH approach, noting that the GP noise component is able to account for any slight imperfections in the assumed SFH.

### Two-step metallicity: insight from PSB merger simulations

During integrated light SED fitting, stellar metallicity is often assumed to be constant (e.g. Onodera et al., 2012; Gallazzi et al., 2014; Carnall et al., 2018; French et al., 2018; Wild et al., 2020; Suess et al., 2022). This is done mainly to limit the dimensionality of the problem, sacrificing the effects of chemical evolution on the observations, which are second order compared to those of a varying SFH, especially for broad-band photometry. This work aims to explore whether this simplification can be removed, and the chemical evolution of PSBs recovered. To propose a simple yet representative metallicity evolution model for PSBs, we consult the suite of gas-rich binary major merger smoothed-particle hydrodynamics (SPH) simulations that create PSB signatures in Zheng et al. (2020). The simulations were performed using the SPH code SPHGal (Hu et al., 2014; Eisenreich et al., 2017), which is an updated version of the Gadget-3 code (Springel, 2005). SPHGal implements sub-resolution astrophysics models from Scannapieco et al. (2005, 2006), updated by Aumer et al. (2013), and includes gas cooling rates following Wiersma et al. (2009). Chemical evolution and stellar feedback from type Ia and type II supernovae and AGB stars are accounted for (for details, see Section 3.1 of Zheng et al., 2020). The merger progenitor galaxies were set up following Johansson et al. (2009) with modifications to the SFR adapted from Lahen et al. (2018), and initial orbital configurations following Naab & Burkert (2003). The AGN feedback models are from Choi et al. (2012) and Choi et al. (2014). The galaxy models have a baryonic particle mass of \(1.4\times 10^{5}\)M\({}_{\odot}\) for both gas and stars, and a gravitational softening length of 28 pc for all baryonic particles. For our fiducial model we use the retrograde-prograde orbit merger simulation of two identical progenitor galaxies with initial gas mass fractions of \(f_{\rm gas}=0.22\) (2xSc07), simulated with mechanical black hole feedback but no radiative feedback, because it results in strong PSB spectral features. Figure 2 plots the stellar metallicity of simulation particles against their lookback times of formation, together with the simulated SFH.
When the merger-triggered starburst occurs at \(\sim 550\) Myr in lookback time, the newly formed stars have significantly higher stellar metallicity than previous star formation due to rapid recycling of gas to form many generations of stars, and the trend settles on more than twice the pre-burst metallicity after the starburst ends. Similar patterns are seen in other gas-rich merger simulations (Perez et al., 2011; Torrey et al., 2012). We approximate the rapid metallicity increase with a step function and introduce a two-step metallicity model with the time of transition fixed at the peak of the starburst \(t_{\rm burst}\): \[Z(t)=\begin{cases}Z_{\rm old}&t>t_{\rm burst}\\ Z_{\rm burst}&t\leq t_{\rm burst}\end{cases}. \tag{4}\] Both \(t\) and \(t_{\rm burst}\) are in lookback times. The two metallicity levels \(Z_{\rm old}\) and \(Z_{\rm burst}\) are independent and have identical priors, to ensure the model is equally able to fit an increase, decrease or no change in stellar metallicity during the starburst. We experimented with several more complex metallicity evolution models: a three-step model (pre-burst, during burst, after burst); a gradual increase in metallicity prior to the burst; a two-step metallicity with scatter in the metallicity of coeval stars, following a log-normal or exponential distribution. None provided significantly improved parameter recovery, and given that we do not expect the simulations to be a perfect representation of the real Universe, we felt that any additional model complexity was not justifiable. ### Treatment of correlated errors When fitting photometric data, it is safe to assume the observational uncertainties in the bands are uncorrelated, due to individual photometric bands being observed at different time points, with different instrument set ups. However, when working with spectra consecutive pixels are not independent, due to the many processing steps involved in translating the raw spectroscopic observations into 1D spectral arrays. Following the methods in Carnall et al. (2019a, see Section 4 for a detailed discussion regarding the treatment of spectroscopic uncertainties), we introduce an additive, Gaussian process (GP) correlated noise component. As well as allowing for correlated uncertainties that stem from the standard data reduction of spectra, this component also serves to account for model-data mismatch that originates from assumptions and approximations involved at all stages of stellar population synthesis: isochrones, stellar spectral templates, SFH, chemical evolution and dust models (see Conroy, 2013, for a review). A Gaussian process (GP) can be visualised as a series of random variables along one or more continuous axes that represents some physical property. It is a general technique, that has been used to model data in various sub-fields of astronomy, including light curves of X-ray binaries and AGNs (Kelly et al., 2014), asteroseismic data of stars (Brewer & Stello, 2009; Foreman-Mackey et al., 2017), exoplanets (Barclay et al., 2015; Foreman-Mackey et al., 2017; Chakrabarty & Sengupta, 2019) and radial velocity measurements (Czekala et al., 2017), and the cosmic microwave background (Bond et al., 1999). In the case of spectral fitting, the random variables model a series of spectral flux densities along an array of wavelengths, which forms an SED. Each variable is modelled with a Gaussian distribution, such that for a dataset with \(N\) values, an N-dimensional Gaussian distribution is constructed. 
Before the variables are conditioned on the observed data, the prior mean of the Gaussian distributions is typically set as a vector of zeros. This is also adopted in this study. The covariance matrix describes the relationship between each one of the random variables with all other random variables. Each covariance is described by a kernel function that depends on the separation between two observations considering their physical properties. For an in-depth description of GP modelling, see Rasmussen & Williams (2006). For the fitting of spectra, the GP's covariance matrix allows us to quantify the correlated noise between the measured flux density of any wavelength bin with all other bins. This is useful since it can account for correlated noise on different wavelength scales, where measurements at close-by wavelength bins are expected to correlate more strongly than measurements separated by longer distances. Hence, the close-to-diagonal terms of the covariance matrix will likely have a larger magnitude than off-diagonal terms. To reduce computational time, we replace the squared exponential kernel used in Carnall et al. (2019a) with a stochastically-driven damped simple harmonic oscillator (SHOTerm), implemented through the celerite2 python package (Foreman-Mackey et al., 2017; Foreman-Mackey, 2018). The GP model of Carnall et al. (2019) used a covariance matrix describing the covariance between two wavelength bins \(j\) and \(k\): \[\mathrm{C}_{jk}(\mathbf{\Phi})=s^{2}\sigma_{j}\sigma_{k}\delta_{jk}+b^{2}\exp\left( -\frac{(\lambda_{j}-\lambda_{k})^{2}}{2l^{2}}\right)\, \tag{5}\] with parameters \(\mathbf{\Phi}=(s,b,l)\), where \(s\) scales the observational uncertainties \(\sigma_{j,k}\) on the SED fluxes, \(b\) is the amplitude of the correlated noise and \(l\) is the lengthscale of the squared exponential kernel in units of wavelength. \(\lambda_{j}\) and \(\lambda_{k}\) are the wavelengths at indices \(j\) and \(k\), and \(\delta_{jk}\) is the Kronecker delta function. The first term allows for scaling of the uncorrelated input observational noise while the second term is the GP kernel function for correlated noise. In this study, we replace the second term with the celerite SHOTerm kernel function \(K\), which is a sum of exponential terms: \[K_{\alpha}(|\lambda_{j}-\lambda_{k}|)=\sum_{m=1}^{M}a_{m}\exp\left(-c_{m}(| \lambda_{j}-\lambda_{k}|)\right)\, \tag{6}\] where \(\alpha=(\mathbf{a},\mathbf{c})\), with \(\mathbf{a}\) and \(\mathbf{c}\) vectors with elements \(a_{m}\) and \(c_{m}\) respectively. For a single exponential term of this form, the corresponding inverse covariance matrix is tri-diagonal, which can be computed with a small number of evaluations (Rybicki and Press, 1992; Kelly et al., 2011), facilitating a reduction in computation time. To allow for easier usage of the kernel function, we follow Foreman-Mackey et al. (2017) to take the Fourier transform of equation (6), with the power spectral density \[S(\omega)=\sqrt{\frac{2}{\pi}}\frac{S_{0}\omega_{0}^{4}}{(\omega^{2}-\omega_{ 0}^{2})^{2}+\omega_{0}^{2}\omega^{2}/Q^{2}} \tag{7}\] where \(\omega_{0}\) is the frequency of the undamped oscillator, \(Q\) is the quality factor of the oscillator, and \(S_{0}\) is proportional to the power of the oscillator at \(\omega=\omega_{0}\): \(S(\omega_{0})=\sqrt{2/\pi}S_{0}Q^{2}\). 
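In code, the correlated-noise term of Equation 5, with the squared exponential kernel replaced by the SHO kernel of Equation 7, can be evaluated with celerite2 roughly as follows. This is a schematic sketch using the public celerite2 interface; the toy arrays, the parameter values and the way the residual spectrum is formed are placeholders rather than our exact implementation.

```python
import numpy as np
import celerite2
from celerite2 import terms

# Toy stand-ins: wavelength grid, residual spectrum (data minus physical model),
# per-pixel observational uncertainties, and noise parameters (Q fixed as in Table 2).
wave = np.linspace(3622.0, 7500.0, 500)
resid = np.zeros_like(wave)
sigma_obs = np.full_like(wave, 0.01)
s, S0, w0, Q = 1.0, 1e-4, 2.0 * np.pi / 100.0, 0.49

kernel = terms.SHOTerm(S0=S0, w0=w0, Q=Q)          # correlated-noise kernel (Eq. 7)
gp = celerite2.GaussianProcess(kernel, mean=0.0)   # zero prior mean, as adopted above
gp.compute(wave, yerr=s * sigma_obs)               # scaled uncorrelated noise on the diagonal (Eq. 5)
log_like = gp.log_likelihood(resid)                # GP likelihood of the residual spectrum
```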
To make the function more intuitive, celerite2 allows for the option to swap the frequency \(\omega_{0}\) with a period \(\rho\) via the relationship \(\rho=2\pi/\omega_{0}\), such that the period \(\rho\) is proportional to a typical lengthscale over which fluxes at \(\lambda_{j}\) and \(\lambda_{k}\) correlate by a standard degree. celerite2 also allows for swapping \(S_{0}\) with the standard deviation of the GP realisations \(\sigma\) via the relationship \(\sigma=\sqrt{S_{0}\omega_{0}Q}\), such that this amplitude parameter is independent of the other parameters \(\omega_{0}\) and \(Q\). Aiming to emulate the behaviour of the squared exponential kernel, which works well on spectral fitting problems, we match its autocorrelation curves with those from the SHOTerm kernel. This process is described in Appendix A. This replacement of kernels allowed for a \(\sim 100\)-fold reduction in computational time.

## 4 Testing of fitting methods

In this section we demonstrate that the combination of the two-component SFH model with the two-step metallicity model can recover relevant galaxy parameters when presented with spectra of PSBs, with low systematic biases. We perform two types of parameter recovery tests: a "self-consistent" test is described in Section 4.1, and a smoothed-particle hydrodynamics (SPH) simulation test is described in Section 4.2.

### Self-consistent parameter recovery

The "self-consistent" test involves generating mock PSB spectra using the functional forms for all parameters, including the SFH, metallicity evolution and dust, then fitting the mock spectra with the same functional forms and spectral templates used for the mock spectra generation. This setup ensures there is no model-data mismatch nor correlated errors. If the parameter recovery is successful across a large range of input values, it indicates the fitting process can recover the required parameters with minimal degeneracies when the model can perfectly describe the data. We generate spectra using Bagpipes, with the same wavelength range and spectral resolution as our real MaNGA spectra, perturbing the generated spectra with uncorrelated Gaussian errors assuming \(\mathrm{SNR}=100\), similar to the minimum SNR of our observed spectra. Typical dust and dispersion values were assumed, based on the results from our observed sample (\(A_{V}=0.6\), \(\eta=3\), \(\sigma_{\rm disp}=80\) km/s). Since we do not inject correlated errors, there is no need to include the GP noise component during fitting.

Figure 2: Star-formation history (bottom) and stellar metallicities (\(Z_{\star}/Z_{\odot}\)) of the stars formed (top) in the binary gas-rich major merger simulation 2xSc07 that creates PSB signatures in Zheng et al. (2020). The full SFH is shown in the inset panel, where the shaded region indicates the assumed SFH of the progenitor galaxies. Stellar metallicity increases from \(\sim\)solar levels to more than twice solar not long after the peak of the starburst at \(\sim 550\) Myr lookback time.

Figure 3 shows the recovery performance of a self-consistent test with mock input parameter values similar to the SPH-simulated PSB in Figure 2, using the two-step metallicity model. The left panels demonstrate that we are able to recover the input SFH to within \(1\sigma\) for nearly all lookback times. In the top left panel, the apparent mismatch between the posterior median SFH (solid orange) and input SFH (solid black) before \(z=1\) is partly a plotting artefact.
Since each posterior sample is an exponential decay function with an abrupt increase in SFR at \(t_{\rm form}\), the median SFR includes a steadily decreasing fraction of models with no star formation. Hence we also plot the SFHs of 15 posterior samples in the same panel, and show the cumulative fraction of the total stellar mass formed against the age of the Universe in the bottom left panel as an alternative visualisation. In the cumulative SFH, it is easier to see that the discrepancy between the fitted median and input curves is \(<1\sigma\). The right panels of Figure 3 show violin representations of the posterior distributions for seven key parameters, demonstrating that they are recovered to within \(2\sigma\) of the input truths (solid bars). In particular, we are able to recover the difference between the pre-burst and starburst stellar metallicities with a high degree of confidence.

#### 4.1.1 Sample of self-consistent model tests

To understand whether the offsets observed between true values and posterior median estimates in Figure 3 are systematic, we repeated the recovery test 100 times with randomly drawn input values6 from the priors in Table 2. Variations in dust and velocity dispersion are omitted due to computational time limitations (although see below for a comparison when these are included). We only fit mock spectra that are classified as PSBs under our selection criteria using H\(\delta_{\rm A}\) and W(H\(\alpha\)) (Section 2). The mean offset (median of posterior - input truth) and fitting uncertainty for all tests are listed in Table 3. Identifying parameters where the mean offset is greater than the mean uncertainty, we find a very slight average overestimation of the burst age. However, this is two orders of magnitude smaller than the range of our sample's estimated burst ages (Section 5), thus this does not impact our main results.

Footnote 6: The total stellar mass formed and redshift are fixed, as these do not alter the shape of the spectrum, and varying them will not provide additional insight.

In addition to the test shown in Figure 3, five self-consistent parameter recovery tests with dust and velocity dispersion are performed based on randomly drawn input values, including \(A_{V}\), \(\eta\) and \(\sigma_{\rm disp}\). Comparing the five test results to the 100 above, which did not include the dust component or the added velocity dispersion, a \(\sim 40\%\) increase in estimation uncertainty is seen across the individual SFH properties and metallicity.

\begin{table}
\begin{tabular}{l l l}
\hline
Parameter & Mean offset (\(\overline{\Delta}\)) & Mean \(1\sigma\) uncertainty \\
\hline
\(\log_{10}({\rm M}_{*}/{\rm M}_{\odot})\) & -0.002 & 0.011 \\
\(t_{\rm form}\) / Gyr & 0.20 & 1.33 \\
\(t_{\rm burst}\) / Gyr & 0.049 & 0.037 \\
\(f_{\rm burst}\) & 0.017 & 0.022 \\
\(Z_{\rm old}\) / \(Z_{\odot}\) & -0.001 & 0.019 \\
\(Z_{\rm burst}\) / \(Z_{\odot}\) & -0.023 & 0.038 \\
\hline
\end{tabular}
\end{table}
Table 3: The offsets (\(\Delta\), median of posterior – input truth) and mean uncertainties from 100 self-consistent parameter recovery tests using the two-step metallicity model. The input values are randomly drawn from the priors given in Table 2, but were then checked to ensure the resulting system satisfied our PSB selection criteria. We list here the mean offset and fitting uncertainty averaged across all 100 tests. All symbols follow the definitions in Section 3.

Figure 3: Self-consistent parameter recovery test with the two-step metallicity model.
**Left**: SFH (top) and fractional cumulative SFH (bottom), showing the input truth (black line), the posterior median (solid orange line) and its \(1\sigma\) region (shaded), and 15 random draws from the posterior fit (dashed lines). **Right**: Violin plots showing posterior distributions of the total stellar mass formed (\(\log_{10}M_{*}/M_{\odot}\)), extinction in the V band (\(A_{V}\)), velocity dispersion (\(\sigma_{\rm disp}\)), burst mass fraction (\(f_{\rm burst}\)), age of the burst (\(t_{\rm burst}\)) and metallicity levels. The height corresponds to the distribution's density, while the central box plot marks its median (white dot), \(1\sigma\) region (thick brown bar) and \(2\sigma\) region (thin brown line). The vertical black lines indicate the input truths. In the lower right panel, the lighter and darker shaded violins correspond to the posteriors of the burst and older metallicities, respectively. All parameters are estimated with accuracy within \(2\sigma\), and the metallicity change is recovered.

Despite this, with dust and velocity dispersion, the recovered values remain within \(2\sigma\) of the input truths (see Figure 3). These tests show that, in the absence of model-data mismatch and correlated noise, we can recover the input parameters of the two-step metallicity model, via integrated light MaNGA-like spectra, for a wide range of PSBs with varying stellar and metallicity histories.

### SPH Simulation parameter recovery

The second parameter recovery test involved generating mock PSB spectra from the stellar particles of the SPH simulations in Zheng et al. (2020), and fitting them with our assumed models to see whether we can recover the underlying galaxy properties. Unlike in the "self-consistent" tests above, the star formation and chemical evolution history of the SPH simulations is complex, and cannot be perfectly described by the simple functional forms of our model. Additionally, stars formed coevally in the simulation can have a range of metallicities, which is not possible in our model. Thus, the mock spectra are created from galaxy properties that do not exist within the prior model space, so parameters can only be approximately recovered. Any inaccuracies and biases found during this test allow conclusions to be drawn concerning the models' performance when tasked with real data, which will exhibit similar characteristics. While investigating the cause of the small number of self-consistent parameter recovery tests with large discrepancies between estimated and true values, we discovered that many occur as the mock galaxy's \(t_{\rm form}\) approaches the age of the Universe. The rate of change in the flux of a galaxy spectrum with time decreases with increasing stellar age; hence, errors on \(t_{\rm form}\) increase for the oldest galaxies. This issue is not due to the changing metallicity, as it is seen in the parameter recovery tests of both the constant and two-step metallicity models. Unfortunately, all the SPH simulations in Zheng et al. (2020) were initialised with analytic templates that began their star formation at an age of the Universe of 0.5 Gyr. Therefore, to enable a better insight into the recovery of star formation and metallicity histories in PSBs, we scale the ages of all simulated stellar particles down by 15%, preserving the total stellar mass formed and the shape and chronology of the SFH.
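In practice this rescaling is a one-line operation on the particle arrays; the sketch below, with placeholder variable names, is included only to make the transformation unambiguous.

```python
import numpy as np

ages = np.array([0.3, 1.2, 5.0, 9.8])   # toy lookback times of particle formation [Gyr]
ages_rescaled = 0.85 * ages             # shrink all formation ages by 15%
# Particle masses and metallicities are untouched, so the total stellar mass formed and
# the ordering/shape of the SFH and chemical history are preserved; only the time axis
# is compressed towards the present day.
```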
The shift away from simulated SFHs that began at very low age of the Universe does not impact our results, as the typical estimated age of the Universe when star formation began for our sample (\(t^{\prime}_{\rm form}\)) is \(>3\) Gyr. Mock spectra are generated exactly as described in Section 4.1, with the same dust properties, velocity dispersion, SNR and uncorrelated Gaussian noise. Due to model-data mismatch caused by the simulations' parameters lying outside of the prior space, the GP noise component is included when fitting. Figure 4 compares the results of fitting spectra constructed from the binary merger simulation shown in Figure 2 with the constant and two-step metallicity models. Due to the model no-longer existing within the model prior space, we no longer expect perfect parameter recovery. The top left panels show that the two-step metallicity model outperforms the constant model when recovering the SFH of simulated PSBs that underwent changes in stellar metallicity. The bottom left panel shows the fitted two-step metallicity model closely follows the simulation's metallicity evolution7. Footnote 7: The sudden drop in metallicity of the simulation at 10.5 Gyr is a result of the switch from analytic progenitor galaxies to SPH simulation and is not physical. As there is no direct input "truth" corresponding to many of the fitted parameters, in the right hand violin plots we instead compare the fraction of mass formed within \(t<1.5\) Gyr (\(f_{\rm young}\)), mass-weighted mean stellar age within lookback time \(t<1.5\) Gyr (\(t_{\rm M,young}\)), and the mass-weighted mean stellar metallicity throughout the entire simulation, as well as before and after the peak SFR of the starburst. In all cases, the two-step metallicity model substantially outperforms the constant metallicity model in recovering the underlying galaxy properties. In the bottom right panel, we see that the fitted metallicity of the constant metallicity model is \(>5\sigma\) higher than the true overall mass-weighted metallicity. The over-estimation of the older stellar population's metallicity results in a redder old-star spectrum, leading to a younger fitted \(t_{\rm form}\). The failure to recover the light from old stars (formed before 6 Gyr), leads to an underestimation of total stellar mass formed by \(>0.2\) dex. On the other hand, the underestimation of the burst population's metallicity results in a bluer young-star spectrum, leading to an overestimation of the burst age to compensate. The flexibility of the two metallicity levels allows for these problems to be mitigated. As a result, the violin plots show a significantly more accurate recovery of all parameters displayed. To verify that the two-step metallicity model also enables good recovery of input true values when metallicity declines during the starburst, we artificially flip the metallicity of stellar particles in the simulation, to simulate a decrease in metallicity. We found the two-step model again results in a superior recovery of the SFH compared to the constant model. #### 4.2.1 Sample of simulation recovery tests To investigate the possible bias on recovered parameters, we expand the simulation parameter recovery test to a suite of 46 tests performed on PSB spectra predicted from the simulations in Zheng et al. (2020). All tests assumed the same dust properties, velocity dispersion, SNR and perturbation of the generated spectrum as previous tests. 
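The per-pixel perturbation is the same simple operation used for the self-consistent tests; a minimal sketch (our own helper function, not a Bagpipes routine) is:

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb_to_snr(model_flux, snr=100.0):
    """Add uncorrelated Gaussian noise so that each pixel has S/N = snr."""
    sigma = np.abs(model_flux) / snr
    return model_flux + rng.normal(0.0, sigma), sigma
```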
We only use the simulation runs that resulted in an obvious starburst followed by rapid quenching, i.e. prograde-prograde and retrograde-prograde orbits, the more gas-rich progenitors, and runs with mechanical black hole feedback but without radiative feedback. The inclusion of radiative feedback was found to be too effective at suppressing the increased star formation after the merger, leading to no or very weak PSB signatures in the resulting galaxy (see Zheng et al., 2020). Specifically, these were the simulations SaSc00, SaSd00, 2xSc00, ScSd00, SaSc07, SaSd07, 2xSc07, ScSd07 and 2xSd07. The initial gas mass fractions of the progenitors are 0.17, 0.22 and 0.31 for Sa, Sc and Sd, respectively. From each of the 10 SPH simulations, we extract 10 post-burst spectra equally spaced in time from the peak of the starburst to the end of the simulation. We do this by discarding star particles formed after each time point and inputting the remaining particles' stellar metallicities and ages (shifted to the new time of observation) into Bagpipes, which then constructs the integrated spectrum from SSPs in the same way as our models. We then measure H\(\delta_{\rm A}\) and W(H\(\alpha\)) from the integrated spectra, and check whether they pass our selection criteria (Section 2). This results in 46 simulated PSB spectra observed \(0.11-0.71\) Gyr after the peak SFR of the starburst. Similar to the example shown in Figure 2, all chosen simulations exhibit a rapid stellar metallicity increase during the starburst, leading to a much higher recent (\(t<1.5\) Gyr) mass-weighted metallicity than before the starburst (\(t>1.5\) Gyr). The mean offset and fitted uncertainty for both the constant and two-step metallicity models are presented in Table 4. The two-step metallicity model achieves less bias in the SFH-related parameters (total mass formed, \(t_{\rm M,young}\) and \(f_{\rm young}\)), \(A_{V}\) and \(\sigma_{\rm disp}\) than the constant model, for a wide range of PSBs with varying SFHs, chemical evolution, ages and scatter. The two-step metallicity model returns a small mean offset in both metallicity measurements, which indicates the model is able to accurately recover the metallicity change for a broad range of simulated PSBs. We note that among the suite of recovery tests there are several outliers with larger offsets in metallicity estimated with the two-step metallicity model than with the constant model. These are limited to models with a starburst peaking recently (0.2 - 0.4 Gyr in lookback time). For these to be selected as PSBs, they have a correspondingly rapid quenching time (e-folding timescale \(\tau\sim 50\) Myr). In this case, the two-step metallicity model can suffer from a degeneracy between the true solution and a slightly older starburst with a higher burst mass, longer quenching timescale and a declining stellar metallicity. In most cases where the two-step model fails to recover the correct metallicity evolution, the constant model also suffers from a similar older, more massive starburst degeneracy, albeit to a less severe degree. PSBs with such recent starbursts are not found in our observed sample (Table 5). Therefore, we do not expect this to be a significant concern for our results.
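The "true" values entering this comparison are simple mass-weighted summaries of the particle data. The sketch below shows how such summaries can be computed; the array names are placeholders, and the default burst-peak lookback time of \(\sim 0.55\) Gyr (the fiducial 2xSc07 value) is an assumption for illustration only.

```python
import numpy as np

def mass_weighted_summaries(ages, masses, zs, t_split=1.5, t_peak=0.55):
    """Mass-weighted summary statistics used to compare fits with the simulations (cf. Table 4).

    ages    : lookback times of formation of the star particles [Gyr]
    masses  : initial stellar masses of the particles [Msun]
    zs      : stellar metallicities of the particles [Zsun]
    t_split : young/old split in lookback time [Gyr]
    t_peak  : lookback time of the peak SFR of the starburst [Gyr] (assumed here)
    """
    young = ages < t_split
    f_young = masses[young].sum() / masses.sum()
    t_m_young = np.average(ages[young], weights=masses[young])
    t_m_old = np.average(ages[~young], weights=masses[~young])
    z_before_peak = np.average(zs[ages > t_peak], weights=masses[ages > t_peak])
    z_after_peak = np.average(zs[ages <= t_peak], weights=masses[ages <= t_peak])
    return f_young, t_m_young, t_m_old, z_before_peak, z_after_peak
```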
\begin{table}
\begin{tabular}{l l l l l}
\hline
 & \multicolumn{2}{c}{Constant metallicity model} & \multicolumn{2}{c}{Two-step metallicity model} \\
\cline{2-5}
Parameter & Mean offset (\(\Delta\)) & Mean \(1\sigma\) uncertainty & Mean offset (\(\Delta\)) & Mean \(1\sigma\) uncertainty \\
\hline
\(\log_{10}(\mathrm{M_{*}/M_{\odot}})\) (1) & -0.186 & 0.013 & -0.048 & 0.012 \\
\(t_{\mathrm{M,old}}\) / Gyr (2) & -3.26 & 0.19 & -1.55 & 0.34 \\
\(t_{\mathrm{M,young}}\) / Gyr (3) & 0.153 & 0.009 & 0.027 & 0.015 \\
\(f_{\mathrm{young}}\) (4) & 0.108 & 0.008 & 0.019 & 0.005 \\
\(Z_{\mathrm{M,before\ peak}}\) / \(Z_{\odot}\) (5) & 0.463 & 0.041 & 0.007 & 0.020 \\
\(Z_{\mathrm{M,after\ peak}}\) / \(Z_{\odot}\) (6) & -0.940 & 0.041 & -0.276 & 0.040 \\
\(A_{V}\) / mag (7) & -0.067 & 0.021 & 0.025 & 0.009 \\
\(\sigma_{\mathrm{disp}}\) / km/s (8) & 1.48 & 0.64 & 0.21 & 0.60 \\
\hline
\end{tabular}
\end{table}
Table 4: The offsets (\(\Delta\), median of posterior - input truth) from 46 parameter recovery tests fitting SPH-derived PSB spectra with the constant and two-step metallicity models. We list here the mean offset and fitting uncertainty averaged across all 46 simulated spectra. Parameters (1), (7) and (8) follow the meanings in Section 3, while the other parameters are (2) the mass-weighted mean stellar age of stars formed earlier than lookback time \(t=1.5\) Gyr, (3) the mass-weighted mean stellar age of stars formed within \(t<1.5\) Gyr, (4) the fraction of mass formed within \(t<1.5\) Gyr, (5) the mass-weighted mean stellar metallicity of stars formed before the corresponding simulation's peak SFR of the starburst, and (6) the mass-weighted mean stellar metallicity of stars formed after the corresponding simulation's peak SFR of the starburst. For a wide range of SFHs, metallicity evolution and dust properties, the two-step metallicity model shows smaller offsets than the constant model.

Figure 4: Simulation-based parameter recovery test, comparing the results of fitting the constant (blue) and two-step (orange) metallicity models to star particles generated from an SPH merger simulation (from Zheng et al. 2020, see Figure 2). Most panels are as per Figure 3. Panel (C) additionally presents the stellar metallicity evolution; lines and shaded regions have the same meaning as for the SFH panels. The vertical magenta line marks a lookback time of 1.5 Gyr, before and after which we calculate the fraction of mass formed within \(t<1.5\) Gyr (\(f_{\rm young}\)) and the mass-weighted mean stellar age within \(t<1.5\) Gyr (\(t_{\rm M,young}\)) shown in panels (G) and (H). In panel (I), the top-most black vertical line indicates the mass-weighted metallicity throughout the simulation, while the following two are the mass-weighted metallicities of stars formed before and after the peak of the starburst. The two-step metallicity model outperforms the constant model in the recovery of all parameters.

To understand the spectral features that drive the better parameter recovery by the two-step metallicity model, Figure 5 compares the fitted spectra from the two metallicity models. The lower three plots in each panel of Figure 5 show the fitting residuals, the residuals smoothed by a mean boxcar of width 11 pixels, and the fitted spectra's GP noise contributions. Vertical coloured regions mark the wavelengths of the Lick indices (Worthey et al., 1994; Worthey & Ottaviani, 1997) and the indices CNB and H+K from Brodie & Hanes (1986). A goodness-of-fit measure, the reduced chi-squared value \((\chi^{2}_{\nu})^{8}\), is noted in the lower panels for both models.
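For reference, the smoothed residuals and the reduced chi-squared shown in Figure 5 correspond to the following simple operations. This is a generic numpy sketch; in particular, the default number of free parameters used for the degrees of freedom is our assumption, not a value quoted above.

```python
import numpy as np

def smoothed_residual(obs_flux, model_flux, width=11):
    """Fitting residual smoothed with an 11-pixel mean boxcar, as plotted in Figure 5."""
    kernel = np.ones(width) / width
    return np.convolve(obs_flux - model_flux, kernel, mode='same')

def reduced_chi_squared(obs_flux, model_flux, sigma, n_free=15):
    """Reduced chi-squared of a model spectrum given (scaled) 1-sigma uncertainties."""
    chi2 = np.sum(((obs_flux - model_flux) / sigma) ** 2)
    return chi2 / (obs_flux.size - n_free)
```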
At first sight the fitted spectra from both metallicity models appear to be very well matched with the mock spectrum, both with reduced chi-squared values close to unity. However, the smoothed residual reveals differences at all wavelengths. The two-step model's smoothed residual is smaller particularly within the iron and magnesium metallicity indices and the calcium H+K lines. This is consistent with the better fit to stellar metallicities obtained when using the two-step model. The constant metallicity model's GP noise component led to a best fit model with a prominent slope (the red-end is \(2.51^{+0.38}_{-0.36}\times 10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\) higher than the blue-end), while the physical spectrum is significantly bluer than the input mock spectrum. This could arise from a combination of incorrectly estimated dust attenuation curve, incorrect metallicity or SFH properties, leading to the larger offsets for all properties in Table 4. The two-step metallicity model's GP noise component has a much smaller amplitude at all wavelengths (amplitude \(RMS=4.2^{+8.4}_{-2.8}\times 10^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\), small enough to be hidden behind the dashed line), and no overall slope. This indicates that the two-step model's higher degree of flexibility allows for a better approximation of the mock spectrum generated from simulation, and only minor corrections from the GP noise are required. To summarise, the two-step metallicity model allows for metallicity evolution during the starburst to be traced without significant estimation biases or reduction in fitting precision due to the increased complexity, while allowing for better recovery of the SFH and its parameters of the galaxy. ## 5 Results We fit all 50 PSBs with the two-step metallicity model described in Section 3, including the new GP correlated noise model. 5/50 (10%) resulted in fitted GP noise components showing obvious trends across the fitted spectral range, with amplitudes much larger than the observational uncertainty scaled by the posterior median uncorrelated noise amplitude (\(s\) in Equation 5). This can potentially indicate additional sources, such as AGN or foreground/background sources in their fitted spectra, or complex stellar/dust properties that cannot be adequately fit with the model. Due to these considerations, their results are excluded from further analysis (Plate-IFU: 7965-1902, 8080-3702, 9494-3701, 9495-3702, 9507-12704). All remaining 45 were found to have clear PSB SFHs, where rapid quenching follows an episode of strong starburst. In general, the PSB regions underwent starburst \(\sim 1\) Gyr before observation, with the youngest starburst occurring \(\approx 0.45\) Gyr ago. The starbursts have a wide range in mass fraction (\(\approx 0.06-0.94\)). The fitted SFH properties, metallicity levels, \(A_{V}\) and reduced chi-squared of the maximum likelihood posterior sample spectrum are reported in Table 5 and discussed in the following sections. All fitted SFHs and metallicity evolution are plotted in Appendix B. Figure 6 shows an example fit. In the top panel, the posterior best fit spectrum (orange) provides a visibly good fit to the observed PSB-region-stacked MaNGA spectrum (black), as is also seen by the small and normally distributed fitting residuals in the middle panel and the near unity reduced chi-squared. The posterior spectrum contains the median physical model spectrum (blue) and the additive GP noise component (bottom panel). 
The latter does not exhibit an overall slope and has an amplitude comparable to the observational uncertainty scaled by the posterior median uncorrelated noise amplitude (dark blue dashed line), which are signs of a well-behaved model. In the top panel, the physical spectrum is further separated to show the dust-attenuated light contributions from the starburst (lime) and the older components (red). The split is placed at the time of minimum SFR between the old population and the starburst. This PSB has a burst mass fraction of \(f_{\rm burst}=0.24^{+0.05}_{-0.03}\) and a burst age of \(t_{\rm burst}=0.61^{+0.12}_{-0.03}\) Gyr, leading to a light contribution from the starburst that dominates marginally over the more massive older population at the red end of the optical spectrum, but more significantly at bluer wavelengths.

### Most PSBs increase in stellar metallicity during starbursts

In Figure 7 we present the fitted posterior median metallicity levels before and after the starburst, with \(1\sigma\) contours to indicate the posterior uncertainties. Most PSBs (31/45, 69%) lie above the diagonal constant metallicity line at \(>1\sigma\) (20/45, 44% at \(>3\sigma\)), indicating these PSBs experienced a significant increase in stellar metallicity during the starburst, many of which increased by more than a factor of 5 (galaxies lying above the upper dotted line). A smaller fraction (4/45, 9%) of PSBs are found to instead drop in metallicity at \(>1\sigma\) (1/45, 2% at \(>3\sigma\)), while the remaining portion (10/45, 22%) have constant metallicity within the \(1\sigma\) errors (24/45, 53% within \(3\sigma\)). Since estimating the properties of the older stellar population is usually more challenging, the pre-burst stellar metallicity tends to have a larger uncertainty. This uncertainty further increases where PSBs are found to have high burst light fractions (\(f_{\rm burst,L}\)), due to heavy outshining of the older population's light contribution. We have therefore separated the sample at \(f_{\rm burst,L}=0.9\) (calculated by integrating over the full fitted wavelength range) in Figure 7. The objects with high burst light fraction are seen to cluster around the line of constant metallicity (thick dashed line) but with large uncertainty, consistent with no change in metallicity, which is the a priori expectation given the two-step metallicity model's independent and identical priors on the two metallicities. Excluding these 13, the proportion of PSBs that experience a net positive change in stellar metallicity at \(>1\sigma\) is 82% (27/33).

### A recovered mass-metallicity relation, both before and after the starburst

As reviewed in Section 1, the mass-metallicity (MZ) relation has been used in the literature to infer the dominant quenching timescales that build up the red sequence. It is interesting to compare the stellar mass and metallicity properties of our sample of PSBs to the MZ relation of local star-forming and quiescent galaxies: since PSBs are observed soon after rapid quenching, such a comparison can provide insight into the impact of these quenching processes on chemical evolution. The top left panel of Figure 8 shows our PSB sample's mass-weighted stellar mass-metallicity relation before the starburst occurred. Also shown are the MZ relations from three studies of local galaxies: star-forming SDSS galaxies, mass-weighted (Panter et al., 2008); SDSS, all types, light-weighted (Gallazzi et al., 2005); star-forming and passive SDSS galaxies, light-weighted (Peng et al., 2015).
Our PSBs broadly follow the known MZ relation where metallicity increases with mass, especially when we consider only the more reliable lower burst light fraction galaxies (magenta dots). This indicates that prior to the starburst, the PSB progenitors are consistent with being drawn from the underlying star-forming population, exhibiting no atypical chemical properties. The top right panel of Figure 8 shows our PSB sample's mass-weighted stellar metallicity during the starburst, which shows no observable correlation with total stellar mass, suggesting starbursts disrupt the MZ relation. In the bottom left panel, we show the overall mass-weighted stellar metallicity of the PSBs, which exhibit a remarkable agreement with the light-weighted mass-metallicity relation of passive galaxies from Peng et al. (2015). It is remarkable since local PSBs, being only recently quenched quiescent galaxies, might not be expected to be representative of the galaxy population at \(z=0\). The difference between mass-weighted and light-weighted MZ relations should not affect our conclusions, since the difference is minor for quiescent galaxies (Trussler et al., 2020). In the bottom right panel of Figure 8 we compare the difference between the overall mass-weighted metallicity and the pre-burst metallicity with the difference between passive and star-forming galaxies from Peng et al. (2015). In both cases the differences decrease with increasing \(M_{\star}\). The matching trends point towards the PSB phase as a valid process that can create the large gap found between the star-forming and passive MZ relations reported in the literature. The implications are discussed in Section 6.

Figure 5: Comparing the spectral fitting performance of the constant (blue) and the two-step (orange) metallicity models applied to a mock PSB spectrum generated from an SPH merger simulation (Figure 4). The spectrum is split into two rows for clarity. In each row, the main panel shows the input mock spectrum (black) and the best-fit spectra from the two metallicity models (blue and orange). The lower panels show the fitting residuals, the residuals smoothed by a mean boxcar of width 11 pixels, and the GP noise contribution to each metallicity model's best-fit spectral model, respectively. The orange GP noise curve has a much smaller amplitude compared to the blue curve and largely lies along the black dashed line. All \(y\)-axes have the same units. The vertical grey bars indicate masked regions where emission lines would appear in the MaNGA data. Coloured bars mark the wavelengths of common spectral indices. Reduced chi-squared values of the maximum likelihood posterior sample spectrum (including GP noise) for both fits, \(\chi^{2}_{\nu}\), are also shown. The two-step model achieves a better fit to the SPH PSB mock, particularly in the spectral regions of the iron and magnesium metallicity indices and the calcium H+K lines.

## 6 Discussion

Our results show that most PSBs in our sample underwent an increase in stellar metallicity during the starburst phase, some very substantially. This indicates that the effect of stellar enrichment from the rapid recycling of gas through multiple generations of star formation usually outweighs the combined effects of metal dilution from gas inflow and metal removal via outflow.
In this section, we draw together studies from the literature to explain the metallicity changes we observe in PSB galaxies, and discuss the implications of our results for the role of post-starburst galaxies in galaxy evolution more generally.

Figure 6: Example of a fitted stacked MaNGA spectrum and the contributing components (10838-1902, the top galaxy in Figure 1). **Top:** The stacked observed spectrum created by combining only spaxels classified as PSB (black), and the posterior best fit spectrum and its distribution (orange line and orange shaded region), which includes contributions from both the physical model and the GP noise. The physical model (blue) can be separated to show the light contributions from the starburst (lime line and lime shaded region) and the older components (red line and red shaded region). The reduced chi-squared value of the maximum likelihood posterior sample spectrum (including GP noise), \(\chi^{2}_{\nu}\), is shown. **Middle:** The fitting residual (orange), defined as the observed stacked spectrum (black curve) minus the posterior best-fit spectrum (orange curve). The light blue line and the blue dashed line show the input observational uncertainty before and after scaling by the fitted noise scaling factor \(s\), respectively. An increase of around \(\times 3-5\) is typically required. **Bottom:** The fitted GP noise component and its distribution in orange, with blue curves as above. The majority of the fitted GP noise flux lies below the scaled observational uncertainty (blue dashed) and there is no obvious global trend; thus, this galaxy is recognised as a good fit. Note that the y-axes have the same units, but the three panels vary in scaling. In all panels, vertical grey bands indicate regions masked due to skyline residuals, strong nebular emission lines or Balmer infilling.

\begin{table}
\begin{tabular}{l l l l l l l l l l l}
\hline
Plate-IFU (1) & \(\log_{10}M_{\star,\mathrm{PSB}}/M_{\odot}\) (2) & \(A_{V}\) (mag) (3) & \(\log_{10}\) SFR\({}_{100\mathrm{Myr}}\) (M\({}_{\odot}\) yr\({}^{-1}\)) (4) & \(t_{\mathrm{burst}}\) (Gyr) (5) & \(f_{\mathrm{burst}}\) (6) & \(\tau_{1/2}\) (Myr) (7) & \(Z_{\mathrm{old}}/Z_{\odot}\) (8) & \(Z_{\mathrm{burst}}/Z_{\odot}\) (9) & \(Z_{\mathrm{diff}}/Z_{\odot}\) (10) & \(\chi^{2}_{\nu}\) (11) \\
\hline
7961-1901 & \(9.75^{+0.03}_{-0.03}\) & \(0.85^{+0.03}_{-0.03}\) & \(-0.65^{+0.07}_{-0.06}\) & \(1.69^{+0.09}_{-0.09}\) & \(0.81^{+0.11}_{-0.12}\) & \(483^{+56}_{-35}\) & \(1.59^{+1.07}_{-0.26}\) & \(0.95^{+0.14}_{-0.15}\) & \(-0.66^{+0.92}_{-0.15}\) & 0.950 \\
7964-1902 & \(9.19^{+0.08}_{-0.04}\) & \(1.27^{+0.04}_{-0.04}\) & \(-1.45^{+0.12}_{-0.14}\) & \(1.86^{+0.29}_{-0.23}\) & \(0.61^{+0.26}_{-0.19}\) & \(499^{+12}_{-72}\) & \(1.00^{+0.72}_{-0.40}\) & \(1.27^{+0.26}_{-0.22}\) & \(0.27^{+0.61}_{-0.92}\) & 0.965 \\
7965-1902 & – & – & – & – & – & – & – & – & – & 0.878 \\
8080-3702 & – & – & – & – & – & – & – & – & – & 0.928 \\
8081-3702 & \(8.76^{+0.06}_{-0.07}\) & \(0.45^{+0.10}_{-0.08}\) & \(-1.02^{+0.08}_{-0.08}\) & \(0.93^{+0.14}_{-0.12}\) & \(0.35^{+0.10}_{-0.08}\) & \(535^{+167}_{-110}\) & \(0.22^{+0.06}_{-0.09}\) & \(3.19^{+0.22}_{-0.40}\) & \(2.96^{+0.22}_{-0.35}\) & 0.971 \\
\hline
\end{tabular}
\end{table}
Table 5: Posterior estimated properties of the 50 PSBs from the spectral fitting of stacked MaNGA spaxels. The 5 PSBs marked by dashes were poorly fit and are not considered in further analysis.
Columns are (1) MaNGA Plate-IFU, (2) stellar mass within the stacked PSB spaxels, (3) ISM dust attenuation at 5500Å (\(V\) band), (4) the \(\log_{10}\) SFR within the stacked PSB spaxels averaged over the last 100 Myr, (5) time since the peak of the starburst, (6) fraction of mass formed during the starburst, (7) SFR halving timescale of the starburst, (8) stellar metallicity before the burst, (9) stellar metallicity during and after the burst, (10) change in metallicity, and (11) reduced chi-squared value of the maximum likelihood posterior sample spectrum. The full table is available as supplementary online material.

### On the implications of our results for the origin of PSBs

Galaxy mergers have been a popular suggested trigger for local PSBs, supported by the large fraction of faint tidal features or companion galaxies (Zabludoff et al., 1996; Chang et al., 2001), high shape asymmetry (Pawlik et al., 2016; Wilkinson et al., 2022), neural network determined post-merger classification (Wilkinson et al., 2022) and unusual features in high spatial resolution images (Sazonova et al., 2021). Recent mergers also exhibit a higher fraction of PSBs than non-merging galaxies (Ellison et al., 2022; Li et al., 2023). Observationally, a lowering of gas-phase metallicity is observed in both starbursts and pairs of interacting galaxies (Kewley et al., 2006; Rupke et al., 2008; Ellison et al., 2008), apparently in contradiction to the results in this study. We explore this further below. Simulations demonstrate how the disruption of the gravitational potential caused by a galaxy merger leads to strong torques that can drive a rapid gas inflow to the central regions, compress it and fuel a strong starburst (Barnes & Hernquist, 1991, 1996). Forward modelling of these simulations has consistently shown this to be a reliable way to create galaxies with post-starburst spectral features and morphologies (Bekki et al., 2005; Wild et al., 2009; Snyder et al., 2011; Davis et al., 2019; Pawlik et al., 2019; Zheng et al., 2020). Comparatively fewer studies have focused on the chemical evolution of galaxies during gas-rich merger-induced starbursts. Since the outer regions of a star-forming galaxy are typically more metal-poor (e.g. Matteucci & Francois, 1989; Zaritsky et al., 1994), the inflow of substantial gas driven by the disrupted gravitational potential would lead to a net decrease in central stellar metallicity, so long as the impact of stellar enrichment from the starburst is sufficiently weak. Initial hydrodynamic simulation studies found that gas funnelling events initially decrease the central gas-phase metallicity by diluting the existing relatively high metallicity gas, smoothing the negative radial metallicity gradients common to most star-forming galaxies (Perez et al., 2006; Rupke et al., 2010; Lahen et al., 2018) in agreement with the lowered gas-phase metallicity observed in local interacting and starburst galaxies. Torrey et al. (2012) conducted ensembles of SPH simulations of major merging pairs of star-forming galaxies, finding that the change in stellar metallicity during the resulting starburst depends on the gas fractions of the progenitor galaxies: progenitors with low gas mass fractions tend to decrease stellar metallicity due to strong dilution from inflowing metal-poor gas during the merger, while progenitors with higher gas mass fractions tend to increase in stellar metallicity due to the stronger starburst and greater stellar enrichment. Perez et al.
(2011) found that gas-rich mergers drive a net increase in gas-phase metallicity of comparable magnitude to Torrey et al. (2012), though mainly caused by rapid increases in SFR due to fragmentation of the discs before merger-driven inflow occurs. On the other hand, the more modern simulated major mergers from Zheng et al. (2020) used in this work produce metallicity increases with only a weak trend with gas mass fractions. Orbits that produce stronger starbursts induce stronger metallicity enhancements, and minor mergers require higher gas fractions to achieve the same strength of starburst and therefore metallicity enhancement. Results are sensitive to the AGN feedback prescription used, as this impacts the strength of the starburst. Evidently there is scope for further simulation work on the chemical evolution of galaxies during gas-rich merger induced starbursts, to understand the impact of resolution, code type, AGN and chemical evolution modelling on the final properties of the descendants. The further development of semi-analytic models (e.g. Molero et al., 2023), for the specific case of low redshift elliptical formation, may also prove fruitful.

Gas-rich galaxy mergers are not the only plausible cause of post-starburst galaxies in the local Universe. Ram-pressure stripping in dense galaxy clusters can compress the cold interstellar gas reservoir (Lee et al., 2017), potentially leading to enhanced star formation in affected galaxies (Vulcani et al., 2020; Roberts et al., 2022), followed by rapid quenching (Poggianti et al., 2019; Werle et al., 2022). Although initially identified in clusters (Dressler & Gunn, 1983), it is important to note that PSBs are predominantly located in field environments (e.g. Quintero et al., 2004; Blake et al., 2004; Goto, 2005; Wild et al., 2009; Pawlik et al., 2018). The precise enhancement in the fraction of PSBs in dense clusters is still debated and may depend critically on redshift, stellar mass, and cluster and PSB selection methods (e.g. Poggianti et al., 2009; Vergani et al., 2010; von der Linden et al., 2010; Socolovsky et al., 2018; Paccagnella et al., 2019; Wilkinson et al., 2021). Interestingly, lower stellar mass galaxies that are undergoing ram-pressure stripping are found to have elevated gas-phase metallicities compared to galaxies in both field and cluster environments of the same mass (Franchetto et al., 2020), which might be a result of the increased stellar enrichment from ram-pressure compression without a significant dilution effect from metal-poor gas inflow. This would produce a rise in metallicity after a starburst, at least qualitatively similar to the metallicity increase seen in the majority of our PSB sample.

Figure 7: Pre-burst and post-burst median posterior stellar metallicities of PSB regions in MaNGA galaxies. The right panel is a zoomed-in view of the region bound by red dashed lines in the left panel. PSBs with a less dominating burst light fraction (posterior median \(f_{\rm burst,L}<0.9\)) are shown in magenta dots, otherwise in dark green triangles. The contours correspond to 1\(\sigma\) regions (enclosing the top 39.3% of marginalised posterior probability), highlighting estimation uncertainties and degeneracies. The dashed black diagonal line marks constant stellar metallicity, while the dotted lines mark a 5\(\times\) and 0.2\(\times\) change in metallicity. Most PSB regions are found to increase in stellar metallicity during the starbursts.
To investigate further, we cross-matched our final sample of 45 well-fitted PSBs with the GEMA-VAC cluster catalogue (Argudo-Fernandez et al., 2015, version DR17), finding 11/45 to be members of rich clusters (member galaxies \(>100\)). This is around twice as high as in a control sample (12.9%) of galaxies from MaNGA DR17 matched in total stellar mass and D\({}_{4000}\) stellar index at 1R\({}_{\rm e}\) (MaNGA-Pipe3D, Lacerda et al., 2022; Sanchez et al., 2022; for D\({}_{4000}\), see Bruzual, 1983). However, we do not find any observable difference in the metallicity change and post-burst metallicity distributions of PSBs that are within rich clusters, compared to those that are not. Additionally, the PSBs and controls showed no significant difference in their distribution of local density as defined by the projected density to the fifth nearest neighbour (2-sample KS test \(p=0.77\)). Therefore, the importance of environmental processes is unclear from our present sample.

Figure 8: Stellar mass-metallicity relations of PSB regions both before (upper left) and during the starburst (upper right), and overall mass-weighted (bottom left). PSBs with a less dominating burst light fraction (posterior median \(f_{\rm burst,L}<0.9\)) and a higher burst light fraction (posterior median \(f_{\rm burst,L}\geq 0.9\)) are marked with magenta dots and dark green triangles, respectively. All values are plotted against the estimated stellar mass of the whole galaxy from the NSA catalogue. Stellar mass-metallicity relations from the literature are also plotted for comparison, as indicated in the legends. The dashed black lines mark the 16th and 84th percentiles from Gallazzi et al. (2005). The bottom right panel compares the difference between overall mass-weighted and pre-burst metallicity and the difference between passive and star-forming relations from the literature. The PSB pre-burst metallicities are found to agree with the Peng et al. (2015) star-forming relation, while the overall mass-weighted metallicities agree with the Peng et al. (2015) passive relation, suggesting the PSB phase can explain the wide gap between the two literature relations.

While we find that the majority of PSBs in our sample have undergone significant increases in metallicity, a small number have experienced a metallicity drop. The most straightforward cause of a metallicity drop is strong inflow of metal-poor gas, with the inflow triggering a starburst that either does not produce enough metals to counteract the effects of dilution, or whose metals are preferentially expelled by outflows (Chisholm et al., 2018). We verified that there was no systematic correlation between the change in metallicity and the size of the PSB regions, either in absolute size or relative to \(R_{e}\), as might occur if the different patterns in metallicity evolution were caused by notably different merger types, or different processes entirely. The uncertainty in the simulations means that we cannot rule out mergers as a plausible trigger for either set of PSBs.

### On the implications of our results for quenching pathways

The evolution of stellar metallicity of galaxies has the potential to provide insight into the relative importance of different proposed quenching mechanisms. Peng et al.
(2015) measured the mass-metallicity (MZ) relation of star-forming and passive galaxies from SDSS, finding passive galaxies to be significantly more metal-rich than star-forming galaxies with the same total stellar mass, with the gap widening for lower mass galaxies (see Fig. 8, lower right). The authors conclude that the large MZ gap rejects quenching mechanisms that act on short timescales as a major contributor to quenching. They argue this is because rapid quenching would prevent significant increases in stellar metallicity as galaxies become passive, predicting little to no MZ gap, which is inconsistent with their observations. Instead, they favour slower mechanisms such as the strangulation of gas inflow, which allows quenching galaxies to increase their metallicity from the star-forming MZ to the passive MZ through stellar enrichment, given enough time. Trussler et al. (2020) largely agrees with Peng et al. (2015), but additionally proposes ejection of gas through outflows to play a minor role in quenching. However, Figure 8 shows a good agreement between the mass-weighted metallicity evolution of our sample of PSBs and the gap between the star-forming and passive MZ relations. This indicates that the PSB phase, with a relatively short starburst and the rapid quenching that follows, is sufficient to provide the observed metallicity enhancement as galaxies move from the blue cloud onto the red sequence. Our results suggest long-term processes such as starvation are not the only viable pathways to explain the MZ gap, as has previously been suggested. This result has global implications for galaxy evolution, because both observations (e.g. Wild et al., 2009, 2016, 2020; Whitaker et al., 2012; Belli et al., 2019; Taylor et al., 2023) and simulations (e.g. Rodriguez Montero et al., 2019; Zheng et al., 2022; Walters et al., 2022) have found that PSBs and rapidly-quenched galaxies could contribute significantly to the growth of the red sequence at \(z>0.5\). Studies of the evolving stellar mass function of the red sequence have found it to grow rapidly at least until \(z=1\) (e.g. Ilbert et al., 2013; Muzzin et al., 2013), with growth slowing or stalling by \(z=1\) for \(\log_{10}(M_{*}/M_{\odot})>10.5\) galaxies (Ilbert et al., 2013; Rowlands et al., 2018). Therefore, a large fraction of the present-day red sequence likely quenched prior to \(z=1\), meaning a significant fraction of the local red sequence may have arrived there following a PSB phase. Making the leap that our local PSBs are likely undergoing similar chemical evolution to that experienced by PSBs at \(z>1\), we thus conclude that short-lived starbursts followed by rapid quenching might be a significant contributor to the observed MZ gap in local galaxies.

### On the implications of our results for the chemical evolution of starbursts

Previous detailed theoretical work on the impact of bursty star formation on metallicity, and chemical abundance patterns more generally, has focused on local and relatively weak fluctuations in star formation rate, as might have occurred within regions of the Milky Way. On small scales, such as within the solar neighbourhood or within dwarf galaxies such as the Magellanic clouds, periods of increased efficiency of star formation (i.e. \(\epsilon=\rm SFR/M_{gas}\)) will lead to an increase in metallicity due to gas recycling and stellar enrichment (e.g. Weinberg et al., 2017; Johnson and Weinberg, 2020).
However, on global galaxy-wide scales, evidence for substantial enhancements in star formation efficiency in starbursts is still unclear, with inferred differences potentially driven by uncertainties in which CO-to-H\({}_{2}\) conversion factor to assume in different galactic environments (see Kennicutt and Evans, 2012 for a review, and Tacconi et al., 2018 for a summary of relevant results). We might expect the super-solar metallicity starbursts that we infer occurred in the recent past history of our PSB sample to be visible in analyses of the gas-phase mass-metallicity relation. The SFR dependence of the mass-metallicity relation for star-forming galaxies has been much debated in the past decade. Barrera-Ballesteros et al. (2017) use a sample of spatially resolved MaNGA galaxies to argue for no dependence of the MZ relation on star formation rate, and in particular there is no noticeable increase in metallicity at high sSFR in their data. However, our sample represents \(<1\%\) of the star-forming population, so will not be captured in large numbers in blind surveys. Previous studies have suggested extreme LIRGs or ULIRGs in the local Universe as progenitors to local PSBs, with LIRGs and ULIRGs having similarly low number densities (Hopkins et al., 2008; Cales et al., 2011; French et al., 2015; Pawlik et al., 2018). The metallicity of such extreme starbursts is very difficult to estimate due to dust obscuration. A recent study by Chartab et al. (2022) used mid-IR strong-line metallicity diagnostics to show that gas in local ULIRGs is not metal deficient as previously reported using standard optical line diagnostics. The difference arises due to dust obscuring the more metal-enhanced star-forming regions, and places ULIRGs firmly on the local MZ relation. Further work is clearly needed to verify whether super-solar gas can be identified robustly in extreme starburst regions of local galaxies. We searched for correlations between the stellar metallicity evolution and SFH in our PSB sample, which could further elucidate any relations between starburst properties and chemical evolution. However, the potential for sample selection effects to impact observed relations made it difficult to draw firm conclusions, and we therefore leave this to future work.

### Caveats

There are a number of caveats that are worth keeping in mind with regards to our study. The most important are that we fit only the spatially resolved, pre-selected PSB regions of the galaxies, and that we lack alpha-enhanced stellar population models. We chose to fit only the PSB regions in the galaxies in order to simplify the SFH of the integrated spectrum, improving the accuracy of our results for these regions. By selection, these regions are centrally located, and therefore represent the majority of light and mass in the galaxy, but some are more inclusive of the entire galaxy than others. Systematic correlations between the spatial extent of the PSB regions and a number of fitted galaxy properties are found in our sample. In particular, galaxies with larger PSB regions tend to have lower burst mass fractions but more rapid quenching, while those with smaller PSB regions have an even spread across the whole prior range. Further work is needed to explore the relation of these regions to the wider galaxy, and whether there are correlations between chemical evolution and the PSB spatial extent. While alpha-enhanced SSPs are available for older stellar populations (e.g.
ALF, Conroy and van Dokkum, 2012; Conroy et al., 2018), and have been directly compared to Bagpipes (Carnall et al., 2022), these models are not suitable for young or intermediate age (\(\lesssim 1\) Gyr) stellar populations as found in our galaxies. Our tests on mock PSBs did not investigate the possibility of systematic uncertainties in the stellar population models, relying on the GP noise component to model them out. Further work could be done to explore the ability of the GP noise to model such rest-frame uncertainties, for example via the creation of models using one set of SSPs and fitting with a different set. Given that there are a wide variety of different spectral features that are better fit by the two-step metallicity model in Figure 5, and we do not see any evidence for the alpha elements being worse fit than the non-alpha elements, we do not believe alpha enhancement could be driving any of the results observed here. To consider selection effects introduced by our PSB classification scheme (Section 2), we calculate the theoretical selection fraction of galaxy models within our prior space as a function of parameters of interest. This is done by first randomly drawing \(10^{6}\) mock galaxy models from the assumed SFH and metallicity priors, constructing mock spectra and measuring the spectral features H\(\delta_{\rm A}\) and W(H\(\alpha\)). The fraction of mocks classified as PSB through our classification scheme is taken as the theoretical selection fraction. Although we found slight selection trends in a variety of physical properties (e.g. both older, weaker bursts and younger, slower decay bursts are less likely to be selected as PSBs), they were not consistent with causing any of the metallicity results presented here. As shown in Figure 1, we do not apply any cut on inclination during sample selection and both edge-on and face-on PSBs are included in our sample. To verify the effect of inclination on our results, we extracted the 2D Sersic fit axis ratio (b/a) from the NSA catalogue (Blanton et al., 2011), and found insignificant systematic correlations with our fitted galaxy properties in all cases (\(p>0.05\), Spearman ranked correlation test).

## 7 Summary and conclusions

Through selecting and stacking the post-starburst regions of 50 central PSB galaxies from the MaNGA IFU survey, we fit the resulting high SNR \(>100\) stacked spectra with the Bayesian spectral energy distribution fitting code Bagpipes. Taking inspiration from a suite of binary gas-rich merger simulations that created mock PSBs, we implemented a two-step metallicity evolution model where stars formed before and during the starburst are allowed independent metallicities. We reduced the computational time to fit the high SNR spectra by a factor of 100, by replacing the original Gaussian process kernel used in Bagpipes with a stochastically-driven damped simple harmonic oscillator (SHOterm), implemented through the celerite2 code. After careful verification of our fitting procedure through ensembles of "self-consistent" and simulation-based parameter recovery tests, we applied our model to the stacked spectra of MaNGA PSB regions to obtain 45 well-fitted results, where for the first time, the metallicity evolution of PSB galaxies with rapid SFH changes can be directly measured. Our results lead to the following main conclusions: 1.
A majority (\(31/45,69\%\)) of the PSB regions of galaxies formed significantly more metal-rich stars during the starburst than before (average increase = 0.8 dex with standard deviation = 0.4 dex), while a smaller number of PSB regions formed stars of equal or lower metallicity (Figure 7). This suggests mechanisms that substantially raise stellar metallicity play important roles in the origin of PSBs: the effects of metal enrichment through stellar recycling outweigh those from dilution by gas inflow and metal removal by outflows. 2. This rise in metallicity during the starburst is consistent with simulations of gas rich mergers, agreeing with previous results that mergers are the leading cause of low redshift PSBs. However, we note that there is some disagreement on the impact of mergers on chemical enrichment in simulations, and more work needs to be done to corroborate the results from the Zheng et al. (2020) simulations used here. 3. A good agreement is found between the PSBs' pre-burst metallicity and star-forming mass-metallicity relations from the literature (Figure 8, top left). This is consistent with PSBs being drawn from the underlying population of star-forming disk galaxies as expected. 4. The PSBs' final mass-weighted mass-metallicity relation matches the local passive mass-metallicity relation. This suggests that the stellar metallicity evolution caused by rapid quenching following a starburst is entirely consistent with the observed gap in the stellar mass-metallicity relations between local star-forming and passive galaxies. Our results further validate the idea that rapid quenching following a starburst phase may be an important contributing pathway to the formation of the local quiescent galaxy population. In this study we have focused on galaxies with central PSB features. Further work will be required to understand the importance of these features' spatial extent and how they compare to galaxies with other PSB spatial distributions (e.g. ringed and irregular PSBs, Chen et al., 2019). The measurement of alpha enhancement in PSBs can allow for more precise timing of their starburst and quenching. Although difficult to obtain for recently quenched systems, alpha enhancement might be detectable in PSBs with older starbursts, for instance through the methods of Conroy et al. (2018). Lastly, further simulation work on the chemical evolution of galaxies during starbursts and rapid quenching is required, to understand the effects of AGN, shocks, stellar feedback, mergers/interactions and environments on chemical evolution. ## Acknowledgements We thank Dan Foreman-Mackey, Kartheik Iyer and Joel Leja for their assistance in using celerite2, Dense Basis, and Prospector, respectively. We thank Justus Neumann and Yingjie Peng for providing data. We also thank Justin Otter, Kate Rowlands and Omar Almaini and others in the UDS/PSB collaboration for useful discussions and insightful feedback. We thank Natalia Lahen for feedback on the manuscript. We thank Lath Taj Adlecen for verification of fitting methods. VW and NFB acknowledge support from STFC consolidated grant ST/V000861/1. P.H.J. acknowledges support from the European Research Council via ERC Consolidator Grant KETJU (no. 818930). H-H.L thanks Alfie Russell and Sahyadri Krishna for assistance in language and phrasing. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. 
SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.

_Software:_ Astropy (Astropy Collaboration et al., 2013), Bagpipes (Carnall et al., 2018, 2019a), celerite2 (Foreman-Mackey et al., 2017; Foreman-Mackey, 2018), Dense Basis (Iyer et al., 2019), Marvin (Cherinka et al., 2019), Matplotlib (Hunter, 2007), MultiNest (Feroz & Hobson, 2008), Numpy (Harris et al., 2020), pipes_vis (Leung et al., 2021), Prospector (Johnson et al., 2021), pyMultiNest (Buchner et al., 2014), Scipy (Virtanen et al., 2020), Seaborn (Waskom, 2021)

For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.

## Data Availability

All utilised MaNGA data are publicly available at the SDSS database [https://www.sdss4.org/dr17/](https://www.sdss4.org/dr17/) or through Marvin at [https://dr17.sdss.org/marvin/](https://dr17.sdss.org/marvin/). The stacked spectra and posterior samples of all 50 fitted galaxies are available at [https://doi.org/10.17630/acb496c-1c59-416e-8b73-026799a@c1ca](https://doi.org/10.17630/acb496c-1c59-416e-8b73-026799a@c1ca). Python scripts to recreate the figures in Section 5 are available at [https://github.com/HinLeung622/chemical_evolution_of_PSBs_scripts](https://github.com/HinLeung622/chemical_evolution_of_PSBs_scripts).
2309.09706
Dislocations with corners in an elastic body with applications to fault detection
This paper focuses on an elastic dislocation problem that is motivated by applications in the geophysical and seismological communities. In our model, the displacement satisfies the Lam\'e system in a bounded domain with a mixed homogeneous boundary condition. We also allow the occurrence of discontinuities in both the displacement and traction fields on the fault curve/surface. By the variational approach, we first prove the well-posedness of the direct dislocation problem in a rather general setting with the Lam\'e parameters being real-valued $L^\infty$ functions satisfying the strong convexity condition. Next, by considering the scenario that the Lam\'e parameters are constant and the fault curve/surface possesses certain corner singularities, we establish a local characterisation of the slip vectors at the corner points over the dislocation curve/surface. In our study the dislocation is geometrically rather general and may be open or closed. For both cases, we establish the uniqueness results for the inverse problem of determining the dislocation curve/surface and the slips.
Huaian Diao, Hongyu Liu, Qingle Meng
2023-09-18T12:18:55Z
http://arxiv.org/abs/2309.09706v2
# Dislocations with corners in an elastic body with applications to fault detection

###### Abstract

This paper focuses on an elastic dislocation problem that is motivated by applications in the geophysical and seismological communities. In our model, the displacement satisfies the Lame system in a bounded domain with a mixed homogeneous boundary condition. We also allow the occurrence of discontinuities in both the displacement and traction fields on the fault curve/surface. By the variational approach, we first prove the well-posedness of the direct dislocation problem in a rather general setting with the Lame parameters being real-valued \(L^{\infty}\) functions satisfying the strong convexity condition. Next, by considering the scenario that the Lame parameters are constant and the fault curve/surface possesses certain corner singularities, we establish a local characterisation of the slip vectors at the corner points over the dislocation curve/surface. In our study the dislocation is geometrically rather general and may be open or closed. For both cases, we establish the uniqueness results for the inverse problem of determining the dislocation curve/surface and the slips.

**Keywords:** dislocations, elasticity, corners, slips, well-posedness, inverse problem, uniqueness.

## 1. Introduction

In this study, our focus lies on the phenomenon known as elastic dislocation. An elastic dislocation refers to a surface or a crack within an elastic solid across which there are discontinuities of the elastic displacement fields. It may arise in various practical scenarios, such as a fault plane undergoing slip for a limited duration or the sliding of faces in a crack. The modeling and comprehension of interior elastic dislocations hold significant importance in the geophysical and seismological communities. Specifically, the study can find important applications in monitoring, understanding, and mitigating earthquakes and landslides. For further details and additional references on this subject, we refer to [7, 9, 14, 16, 22, 23, 27] and the related literature cited therein. Though the problem has been extensively and intensively studied in the physical literature, there is only limited theoretical understanding. Recently, Aspri et al. [2] investigated the direct and inverse problems for elastic dislocation by modeling the Earth's crust as an infinite half-space. The authors demonstrated the well-posedness of the direct problem by assuming that the elastic coefficients are Lipschitz continuous and the surface is also Lipschitz, and established the uniqueness of the fault and slip from a single measurement of surface displacement on an open set. Additional assumptions were made that the fault, with at least one corner singularity, must be a graph with respect to a given coordinate system, and the slip must be tangential to the fault. Subsequently, Aspri et al. [1, 3] considered the dislocation problem on bounded domains in 2D and 3D, respectively, where [1] considered dislocation models in anisotropic and inhomogeneous elastic media in 2D and [3] studied the dislocations in a layered isotropic elastic medium in 3D. The elastic dislocations were modeled as open, oriented fault curves/surfaces within an elastostatic system, with discontinuity in the displacement field across such fault curves/surfaces.
The uniqueness of both the fault curves/surfaces and the slip vectors can be obtained by a single passive measurement of the elastostatic field on part of the elastic solid boundary in a general scenario. Compared with [2], the results in [1, 3] do not require additional constraints on the fault and slip, except for a fixed coordinate system and the conditions that the slip field belongs to a suitable space with good extension properties and has full support in the closure of the fault surface/curve. It is pointed out that in anisotropic elastic materials, in order to guarantee that the unique continuation property holds, additional assumptions are needed for the elastic coefficients (cf. [1]). Lastly, Elena et al. [4] also studied the crack problem in a bounded domain. Motivated by the practical and theoretical studies mentioned above, we shall propose and study the elastic dislocation problem in more sophisticated and challenging setups. We consider the elastic dislocation problem in a rather general form, which allows the occurrence of discontinuities in both the displacement and traction fields across the fault curve/surface. Moreover, the dislocation is geometrically rather general. In fact, the fault curve/surface for describing the dislocation can be an open or closed curve/surface in our study. In this paper, we investigate both the direct and inverse dislocation problems. Our mathematical setup allows the presence of discontinuities in both the displacement and traction fields across the fault curve/surface. The direct problem is to determine the elastic displacements of the elastic transmission problem with mixed homogeneous boundary conditions (see Problem (2.6)), assuming that the dislocation curve/surface \(\mathcal{S}\), the elastic stiffness tensor \(\mathbb{C}(\mathbf{x})\), and the slips \(\mathbf{f}\) and \(\mathbf{g}\) over \(\mathcal{S}\) are known. The inverse problem is to determine the dislocation curve/surface \(\mathcal{S}\) and the slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) from the measurement of the displacement field. We focus on a single measurement of displacement to study the inverse dislocation problems, which means that we only need one pair of Cauchy data of the corresponding elastic displacement to determine \(\mathcal{S}\), \(\mathbf{f}\) and \(\mathbf{g}\). In fact, various mature technologies, such as Synthetic Aperture Radar (SAR) and Global Positioning System (GPS) arrays, can be used to obtain surface displacement measurements. For the direct problem of elastic dislocations, it is a common practice to employ a lifting of the jumps to establish a variational framework to prove its well-posedness. This approach has been extensively discussed for dislocations (cf. [28]). In this paper, we mainly consider dislocation problems with corner singularities in a bounded domain. There is by now a considerable body of work on corner singularities of transmission problems (cf. [10, 11, 17, 21]). In addition, suitable weighted spaces have also been used to study transmission problems (cf. [19]). To keep the paper self-contained, we prove, by the variational approach, the well-posedness of Problem (2.6), where there exist jumps in both the displacement and traction fields across the fault curve/surface, in a general scenario. This includes the setting where the Lame parameters are real-valued \(L^{\infty}\) functions satisfying the strong convexity condition.
Moreover, we consider both open and closed fault curves/surfaces. As for the inverse dislocation problems, it is worth noting that there is an alternative approach and additional numerical analysis, particularly for polyhedral domains (cf. [26]). However, in the context of our study, we specifically focus on the scenario where the Lame parameters are constant and the fault curve/surface exhibits specific corner singularities. We first establish a local characterisation of the slip vectors at the corner points over the dislocation curve/surface by analyzing the singularity formation of the elastic field locally around certain abnormal points on the fault surface in a microlocal way. In fact, we utilize the so-called CGO (complex geometrical optics) solutions for the underlying elastic system to achieve the corresponding characterisation of the slip vectors at the corner points, where subtle and delicate analysis is developed in our study. For both cases, namely when the dislocation is open or closed, we establish uniqueness results for the inverse problem of determining the dislocation curve/surface and the slips.

The paper is structured as follows. In Section 2, we introduce the mathematical setup of our study and establish the well-posedness of the dislocation problem. In Section 3, we propose some admissible assumptions about \((\mathcal{S};\mathbf{f},\mathbf{g})\) for the scenario that \(\mathcal{S}\) may be closed or open. For both cases, assuming that the Lame parameters are constants and the fault curve/surface possesses corner singularities, we establish a local characterization of the slip vectors at the corner points over the dislocation curve/surface. Furthermore, we also establish global uniqueness results for the inverse dislocation problem for determining the dislocation curve/surface \(\mathcal{S}\) and the slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) in these two cases with additional geometrical assumptions about the dislocation curve/surface. In Section 4, we derive several local results for the slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) at the corner points along \(\mathcal{S}\). Section 5 is devoted to proving the uniqueness of the inverse problem presented in Section 3.

## 2. Mathematical setup and the direct problem

In this section, we turn our attention to the mathematical setup of the dislocation problem and the well-posedness of the direct problem.

### Mathematical setup

We first introduce a geometric and mathematical setup for our study; see Fig. 1 for a schematic illustration in 2D. Let \(\lambda(\mathbf{x})\) and \(\mu(\mathbf{x})\), \(\mathbf{x}=(x_{j})_{j=1}^{n}\in\Omega\), be real-valued \(L^{\infty}\) functions, which are referred to as the Lame parameters of the elastic solid \(\Omega\). We define \(\mathbb{C}(\mathbf{x})=(C_{ijkl}(\mathbf{x}))_{i,j,k,l=1}^{n}\), \(\mathbf{x}\in\Omega\), as a fourth-rank tensor given by:

\[\mathbb{C}(\mathbf{x}):=\lambda(\mathbf{x})\mathbf{I}\otimes\mathbf{I}+2\mu(\mathbf{x})\mathbb{I},\ \ \text{where}\ \ C_{ijkl}(\mathbf{x})=\lambda(\mathbf{x})\delta_{ij}\delta_{kl}+\mu(\mathbf{x})(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}). \tag{2.1}\]

Here, \(\mathbf{I}\) and \(\mathbb{I}\) respectively represent the identity operators of the second- and fourth-rank tensors, and \(\delta_{ij}\) is the Kronecker delta.
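To make the isotropic stiffness tensor in (2.1) and the contraction defined in (2.3) below concrete, the following short numerical sketch builds \(C_{ijkl}\) and applies it to an arbitrary matrix. It is purely illustrative and not part of the analysis; the Lame values are hypothetical constants chosen to satisfy the strong convexity condition (2.2).

```python
import numpy as np

def stiffness_tensor(lam, mu, n=2):
    """Isotropic stiffness tensor C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)."""
    d = np.eye(n)
    return (lam * np.einsum("ij,kl->ijkl", d, d)
            + mu * (np.einsum("ik,jl->ijkl", d, d) + np.einsum("il,jk->ijkl", d, d)))

# Hypothetical constant Lame parameters; they satisfy mu > 0 and 2*mu + n*lam > 0.
lam, mu, n = 1.0, 2.0, 2
C = stiffness_tensor(lam, mu, n)

# The contraction (C : A)_ij = sum_{k,l} C_ijkl * A_kl applied to an arbitrary matrix A.
A = np.array([[0.3, 0.1],
              [0.4, -0.2]])
Pi = np.einsum("ijkl,kl->ij", C, A)

# For an isotropic tensor this reduces to lam*tr(A)*I + mu*(A + A^T); verify:
assert np.allclose(Pi, lam * np.trace(A) * np.eye(n) + mu * (A + A.T))
print(Pi)
```

The same construction works verbatim for \(n=3\); only the admissible range of \(\lambda\) in (2.2) changes with \(n\).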
We assume that the Lame parameters \(\lambda(\mathbf{x})\) and \(\mu(\mathbf{x})\) satisfy the strong convexity conditions:

\[\mu(\mathbf{x})>0,\quad 2\mu(\mathbf{x})+n\lambda(\mathbf{x})>0,\quad\forall\mathbf{x}\in\Omega. \tag{2.2}\]

These conditions ensure the uniform strong convexity of the elastic stiffness tensor \(\mathbb{C}(\mathbf{x})\). For \(\mathbf{A}=(a_{ij})_{i,j=1}^{n}\), we define the operation ":" as:

\[\mathbb{C}:\mathbf{A}=\big(\Pi_{ij}\big)_{i,j=1}^{n},\quad\text{where}\quad\Pi_{ij}:=\sum_{k,l=1}^{n}C_{ijkl}a_{kl}. \tag{2.3}\]

Let \(\mathbf{u}(\mathbf{x})=(u_{j}(\mathbf{x}))_{j=1}^{n}\), \(\mathbf{x}\in\Omega\), with each \(u_{j}(\mathbf{x})\) being a complex-valued function. We introduce the Lame operator \(\mathcal{L}\) as:

\[\mathcal{L}\mathbf{u}:=\nabla\cdot(\mathbb{C}:\nabla\mathbf{u})=\mu\Delta\mathbf{u}+(\lambda+\mu)\nabla(\nabla\cdot\mathbf{u}), \tag{2.4}\]

where \(\nabla\mathbf{u}:=(\partial_{j}u_{i})_{i,j=1}^{n}\), \(\nabla\cdot\mathbf{u}:=\sum_{j=1}^{n}\partial_{j}u_{j}\), and \(\partial_{j}u_{i}=\partial u_{i}/\partial x_{j}\) (the second equality in (2.4) holds when \(\lambda\) and \(\mu\) are constant). Furthermore, let \(\boldsymbol{\nu}\in\mathbb{S}^{n-1}\) be the unit normal vector, and we define the traction operator as:

\[\mathcal{T}_{\boldsymbol{\nu}}(\mathbf{u})=\boldsymbol{\nu}\cdot(\mathbb{C}:\nabla\mathbf{u}). \tag{2.5}\]

Let \(\mathcal{S}\subset\Omega\) be an oriented Lipschitz curve/surface, which can be open or closed. We define \(\mathcal{S}^{\pm}\) as the two sides of \(\mathcal{S}\), with \(\mathcal{S}^{+}\) representing the side where \(\boldsymbol{\nu}\) points outward. We denote the jump of a function or tensor field \(\mathbf{p}\) across \(\mathcal{S}\) as \([\mathbf{p}]_{\mathcal{S}}:=\mathbf{p}|_{\mathcal{S}}^{+}-\mathbf{p}|_{\mathcal{S}}^{-}\), where \(\mathbf{p}|_{\mathcal{S}}^{\pm}\) represent the non-tangential limits of \(\mathbf{p}\) on \(\mathcal{S}^{\pm}\), respectively. The elastic dislocation problem that we consider allows the occurrence of discontinuities in both the displacement and traction fields, denoted by \(\mathbf{f}\) and \(\mathbf{g}\), respectively.

### Mathematical model

In this paper, our main focus is on the following elastostatic system for \(\mathbf{u}\in H^{1}(\Omega\backslash\overline{\mathcal{S}})^{n}\):

\[\begin{cases}\mathcal{L}\mathbf{u}(\mathbf{x})=\mathbf{0},&\mathbf{x}\in\Omega\backslash\overline{\mathcal{S}},\\ \mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}\big{|}_{\Sigma_{N}}=\mathbf{0},&\mathbf{u}\big{|}_{\Sigma_{D}}=\mathbf{0},\\ \mathbf{[u]}_{\mathcal{S}}=\mathbf{f},&[\mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}]_{\mathcal{S}}=\mathbf{g}.\end{cases} \tag{2.6}\]

Here, \(\partial\Omega=\Sigma_{D}\cup\Sigma_{N}\) represents a Lipschitz partition of \(\partial\Omega\). To investigate both the direct and inverse dislocation problems, we introduce some relevant function spaces for the slips \(\mathbf{f}\) and \(\mathbf{g}\). Note that the function spaces differ depending on whether \(\mathcal{S}\) is closed or open.

**Class 1:** When \(\mathcal{S}\) is closed, we consider slips \(\mathbf{f}\) and \(\mathbf{g}\) that satisfy

\[\mathbf{f}\in H^{\frac{1}{2}}(\mathcal{S})^{n}\quad\text{and}\quad\mathbf{g}\in H^{-\frac{1}{2}}(\mathcal{S})^{n}.\]

**Class 2:** When \(\mathcal{S}\) is open, we assume that \(\mathbf{f}\) and \(\mathbf{g}\) belong to appropriate weighted spaces with a favorable extension property on such a curve/surface (see, for example, [8, 18, 20, 25]).
Following [8, 20, 25], let \(\varrho\in C^{\infty}(\overline{\mathcal{S}})\) denote a weight function which possesses the following properties (a concrete example is given at the end of this subsection):

1. \(\varrho\) has the same order as the distance to the boundary, that is to say,

\[\lim_{\mathbf{x}\to\mathbf{x}_{0}}\frac{\varrho(\mathbf{x})}{\mathrm{d}(\mathbf{x},\partial\mathcal{S})}=d\neq 0,\quad\forall\,\mathbf{x}_{0}\in\partial\mathcal{S},\]

2. \(\varrho(\mathbf{x})\) is positive in \(\mathcal{S}\) and \(\varrho\) vanishes on \(\partial\mathcal{S}\).

We want to note that when \(\mathcal{S}\) is a curve in the 2D case, \(\partial\mathcal{S}\) corresponds to the set of endpoints of \(\mathcal{S}\). Similarly, when \(\mathcal{S}\) is a surface in the 3D case, \(\partial\mathcal{S}\) corresponds to the boundary curve of \(\mathcal{S}\). We then introduce the following space:

\[H^{\frac{1}{2}}_{00}(\mathcal{S})^{n}:=\Big{\{}\mathbf{u}\in H^{\frac{1}{2}}_{0}(\mathcal{S})^{n};\ \varrho^{-\frac{1}{2}}\mathbf{u}\in L^{2}(\mathcal{S})^{n}\Big{\}}\]

associated with the norm

\[\|\mathbf{f}\|_{H^{\frac{1}{2}}_{00}(\mathcal{S})^{n}}:=\|\mathbf{f}\|_{H^{\frac{1}{2}}(\mathcal{S})^{n}}+\|\varrho^{-\frac{1}{2}}\mathbf{f}\|_{L^{2}(\mathcal{S})^{n}}\quad\text{ for }\mathbf{f}\,\in H^{\frac{1}{2}}_{00}(\mathcal{S})^{n}.\]

Here, \(H_{0}^{\frac{1}{2}}(\mathcal{S})^{n}\) is the closure of the space of smooth functions with compact support in \(\mathcal{S}\) with respect to the \(H^{\frac{1}{2}}(\mathcal{S})^{n}\) norm. In addition, \(\mathbf{g}\) belongs to the space \(H_{0}^{-\frac{1}{2}}(\mathcal{S})^{n}\), which is the dual space of \(H_{00}^{\frac{1}{2}}(\mathcal{S})^{n}\).

Assume that \(\mathcal{S}\) is open. Let \(\mathcal{S}\) be extended to a closed Lipschitz curve/surface \(\Gamma=\overline{\mathcal{S}}\cup\Gamma_{0}\) satisfying \(\Gamma\cap\partial\Omega=\emptyset\), where \(\Gamma_{0}\) is a curve or a surface linking with the boundary \(\partial\mathcal{S}\) satisfying \(\Gamma_{0}\cap(\mathcal{S}\backslash\partial\mathcal{S})=\emptyset\). Hence, the Lipschitz domain \(\Omega\) can be partitioned into two connected subdomains \(\Omega_{1}\) and \(\Omega_{1}^{c}=\Omega\backslash\overline{\Omega}_{1}\), where \(\partial\Omega_{1}=\Gamma\) and \(\partial\Omega_{1}^{c}=\Gamma\cup\partial\Omega\). Let \(\mathbf{f}\) be continuously extended to \(\widetilde{\mathbf{f}}\in H^{1/2}(\Gamma)^{n}\) by zero on \(\Gamma\backslash\mathcal{S}\) and \(\mathbf{g}\) be continuously extended to \(\widetilde{\mathbf{g}}\in H^{-\frac{1}{2}}(\Gamma)^{n}\) by zero on \(\Gamma\backslash\mathcal{S}\). That is to say,

\[\widetilde{\mathbf{f}}=\begin{cases}\mathbf{f},&\mathbf{x}\in\mathcal{S},\\ \mathbf{0},&\mathbf{x}\in\Gamma\backslash\overline{\mathcal{S}}\end{cases}\qquad\text{ and }\qquad\widetilde{\mathbf{g}}=\begin{cases}\mathbf{g},&\mathbf{x}\in\mathcal{S},\\ \mathbf{0},&\mathbf{x}\in\Gamma\backslash\overline{\mathcal{S}}.\end{cases} \tag{2.7}\]

In particular, when \(\mathcal{S}\) is closed, \(\Gamma=\mathcal{S}\), \(\widetilde{\mathbf{f}}=\mathbf{f}\) and \(\widetilde{\mathbf{g}}=\mathbf{g}\). Regardless of whether \(\mathcal{S}\) is open or closed, we use

\[\Omega_{1}:=\mathrm{enclose}(\mathcal{S}) \tag{2.8}\]

to denote the aforementioned domain satisfying \(\partial\Omega_{1}=\Gamma\). In particular, when \(\mathcal{S}\) is closed, \(\mathrm{enclose}(\mathcal{S})\) is the domain satisfying \(\partial\left(\mathrm{enclose}(\mathcal{S})\right)=\mathcal{S}\).
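As a concrete illustration of such a weight function (this example is ours and is not taken from [8, 20, 25]), let \(\mathcal{S}\) be the open segment \(\{(t,0)^{\top}:\,|t|<1\}\subset\mathbb{R}^{2}\), so that \(\partial\mathcal{S}=\{(\pm 1,0)^{\top}\}\). Then one may take

\[\varrho(\mathbf{x})=1-x_{1}^{2},\qquad\mathbf{x}=(x_{1},0)^{\top}\in\overline{\mathcal{S}},\]

which is smooth on \(\overline{\mathcal{S}}\), positive in \(\mathcal{S}\) and vanishes on \(\partial\mathcal{S}\); near \(x_{1}=1\) we have \(\varrho(\mathbf{x})=(1-x_{1})(1+x_{1})\approx 2\,\mathrm{d}(\mathbf{x},\partial\mathcal{S})\), so the limit in property (1) equals \(2\neq 0\) (and similarly at \(x_{1}=-1\)). A slip such as \(\mathbf{f}(\mathbf{x})=(1-x_{1}^{2})\,\mathbf{e}_{1}\), with \(\mathbf{e}_{1}=(1,0)^{\top}\), then satisfies \(\varrho^{-\frac{1}{2}}\mathbf{f}\in L^{2}(\mathcal{S})^{2}\) and hence belongs to \(H^{\frac{1}{2}}_{00}(\mathcal{S})^{2}\), whereas a slip that does not vanish at the endpoints of \(\mathcal{S}\) fails the weighted \(L^{2}\) condition and is therefore excluded.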
We are now in a position to show that there exists a unique weak solution \(\mathbf{u}\in H_{\Sigma_{D},\Sigma_{N}}^{1}(\Omega\backslash\overline{\mathcal{S}})^{n}\) to Problem (2.6) corresponding to the boundary data on \(\Sigma_{D}\) and \(\Sigma_{N}\).

**Theorem 2.1**.: _There exists a unique solution \(\mathbf{u}\in H_{\Sigma_{D},\Sigma_{N}}^{1}(\Omega\backslash\overline{\mathcal{S}})^{n}\) to Problem (2.6)._

Proof.: In the following we first prove the well-posedness of Problem (2.6) for the case that \(\mathcal{S}\) is open; when \(\mathcal{S}\) is closed, the corresponding proof can be obtained in a similar way. We shall use the variational technique (cf. [15]) to verify that there exists a unique solution to this problem. From [1, Lemma 3.2] and [3, Remark 3.3], Problem (2.6) can be recast equivalently as the following PDE system for \(\mathbf{u}_{1}\in H^{1}(\Omega_{1})^{n}\) and \(\mathbf{u}_{2}\in H^{1}(\Omega_{1}^{c})^{n}\):

\[\begin{cases}\mathcal{L}\mathbf{u}_{1}(\mathbf{x})=\mathbf{0},\quad\mathbf{x}\in\Omega_{1},\\ \mathcal{L}\mathbf{u}_{2}(\mathbf{x})=\mathbf{0},\quad\mathbf{x}\in\Omega_{1}^{c},\\ \mathcal{T}_{\nu}\mathbf{u}_{2}\big{|}_{\Sigma_{N}}=\mathbf{0},\,\mathbf{u}_{2}\big{|}_{\Sigma_{D}}=\mathbf{0},\\ \mathbf{u}_{2}\big{|}_{\Gamma}-\mathbf{u}_{1}\big{|}_{\Gamma}=\widetilde{\mathbf{f}},\\ \mathcal{T}_{\nu}\mathbf{u}_{2}\big{|}_{\Gamma}-\mathcal{T}_{\nu}\mathbf{u}_{1}\big{|}_{\Gamma}=\widetilde{\mathbf{g}},\end{cases} \tag{2.9}\]

where \(\widetilde{\mathbf{f}}\in H^{1/2}(\Gamma)^{n}\) and \(\widetilde{\mathbf{g}}\in H^{-1/2}(\Gamma)^{n}\) defined by (2.7) are given. Let \(\mathbf{u}_{\widetilde{\mathbf{f}}}\) be the unique solution to the following Dirichlet boundary value problem

\[\mathcal{L}\,\mathbf{u}_{\widetilde{\mathbf{f}}}=\mathbf{0}\quad\text{in}\quad\Omega_{1}^{c},\quad\mathbf{u}_{\widetilde{\mathbf{f}}}=\widetilde{\mathbf{f}}\quad\text{on}\quad\Gamma,\quad\mathbf{u}_{\widetilde{\mathbf{f}}}=\mathbf{0}\quad\text{on}\quad\partial\Omega.\]

Let us next consider an equivalent variational formulation of Problem (2.9): Find \(\mathbf{w}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}\) satisfying

\[\int_{\Omega_{1}}(\mathbb{C}:\nabla\mathbf{w}):\nabla\overline{\boldsymbol{\phi}}\,\mathrm{d}\mathbf{x}+\int_{\Omega_{1}^{c}}(\mathbb{C}:\nabla\mathbf{w}):\nabla\overline{\boldsymbol{\phi}}\,\mathrm{d}\mathbf{x}+\int_{\Gamma}\widetilde{\mathbf{g}}\cdot\overline{\boldsymbol{\phi}}\,\mathrm{d}\sigma \tag{2.10}\]
\[=\int_{\Sigma_{N}}\mathcal{T}_{\boldsymbol{\nu}}(\mathbf{u}_{\widetilde{\mathbf{f}}})\cdot\overline{\boldsymbol{\phi}}\,\mathrm{d}\sigma-\int_{\Omega_{1}^{c}}(\mathbb{C}:\nabla\mathbf{u}_{\widetilde{\mathbf{f}}}):\nabla\overline{\boldsymbol{\phi}}\,\mathrm{d}\mathbf{x},\qquad\qquad\forall\boldsymbol{\phi}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}.\]

With the help of the first Betti identity, it is straightforward to show that \(\mathbf{u}_{1}:=\mathbf{w}\big{|}_{\Omega_{1}}\) and \(\mathbf{u}_{2}:=\mathbf{w}\big{|}_{\Omega_{1}^{c}}+\mathbf{u}_{\widetilde{\mathbf{f}}}\) satisfy Problem (2.9). Conversely, let \((\mathbf{u}_{1},\mathbf{u}_{2})\) be a solution to Problem (2.9) and set \(\mathbf{w}:=\mathbf{u}_{1}\) in \(\Omega_{1}\) and \(\mathbf{w}:=\mathbf{u}_{2}-\mathbf{u}_{\widetilde{\mathbf{f}}}\) in \(\Omega_{1}^{c}\). Multiplying the equations in (2.9) by a test function and using the transmission conditions, we can directly show that \(\mathbf{w}\) belongs to \(H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}\) and fulfils (2.10).
Let \(a(\cdot,\cdot)\) and \(\mathcal{F}(\cdot)\) be defined as follows:

\[a(\mathbf{w},\boldsymbol{\phi}) :=\int_{\Omega_{1}}(\mathbb{C}:\nabla\mathbf{w}):\nabla\overline{\boldsymbol{\phi}}\,\mathrm{d}\mathbf{x}+\int_{\Omega_{1}^{c}}(\mathbb{C}:\nabla\mathbf{w}):\nabla\overline{\boldsymbol{\phi}}\,\mathrm{d}\mathbf{x},\]
\[\mathcal{F}(\boldsymbol{\phi}) :=-\int_{\Gamma}\widetilde{\mathbf{g}}\cdot\overline{\boldsymbol{\phi}}\,\mathrm{d}\sigma+\int_{\Sigma_{N}}\mathcal{T}_{\boldsymbol{\nu}}(\mathbf{u}_{\widetilde{\mathbf{f}}})\cdot\overline{\boldsymbol{\phi}}\,\mathrm{d}\sigma-\int_{\Omega_{1}^{c}}(\mathbb{C}:\nabla\mathbf{u}_{\widetilde{\mathbf{f}}}):\nabla\overline{\boldsymbol{\phi}}\,\mathrm{d}\mathbf{x}.\]

Hence, we can rewrite (2.10) as the problem of finding \(\mathbf{w}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}\) such that

\[a(\mathbf{w},\boldsymbol{\phi})=\mathcal{F}(\boldsymbol{\phi})\qquad\text{for all}\quad\boldsymbol{\phi}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}.\]

It is easy to see that \(\mathcal{F}\) is a continuous linear functional on \(H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}\). Combining Korn's inequality (cf. [18]) with the uniform strong convexity of \(\mathbb{C}\), we deduce that the bilinear form \(a(\cdot,\cdot)\) is strictly coercive and bounded. As a consequence of the Lax-Milgram Lemma, there exists a unique solution \(\mathbf{w}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}\) of (2.10) such that

\[\|\mathbf{w}\|_{H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}} \leq\Big{\|}\mathbf{w}\big{|}_{\Omega_{1}}\Big{\|}_{H^{1}(\Omega_{1})^{n}}+\Big{\|}\mathbf{w}\big{|}_{\Omega_{1}^{c}}\Big{\|}_{H^{1}(\Omega_{1}^{c})^{n}}\leq\|\mathcal{F}\|\leq C\big{(}\|\widetilde{\mathbf{f}}\|_{H^{\frac{1}{2}}(\Gamma)^{n}}+\|\widetilde{\mathbf{g}}\|_{H^{-\frac{1}{2}}(\Gamma)^{n}}\big{)}\]
\[\leq C\big{(}\|\mathbf{f}\|_{H^{\frac{1}{2}}_{00}(\mathcal{S})^{n}}+\|\mathbf{g}\|_{H^{-\frac{1}{2}}_{0}(\mathcal{S})^{n}}\big{)}.\]

The above results imply immediately that there exists a unique solution \(\mathbf{u}\in H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega)^{n}\) to Problem (2.6), and \(\mathbf{u}\) can be estimated by \(\mathbf{f}\) and \(\mathbf{g}\) with respect to the \(H^{\frac{1}{2}}_{00}(\mathcal{S})^{n}\) and \(H^{-\frac{1}{2}}_{0}(\mathcal{S})^{n}\) norms, respectively. The proof is complete.

## 3. The inverse problem and main results

In this section, we are devoted to studying the uniqueness results for the inverse dislocation problem, which consists of identifying the dislocation \(\mathcal{S}\) and the slips \(\mathbf{f}\) and \(\mathbf{g}\) over \(\mathcal{S}\) from observation data on an open set \(\Sigma_{0}\subset\Sigma_{N}\). For our study, we formulate the measurement data as

\[\Lambda_{\mathcal{S};\mathbf{f},\mathbf{g}}=\mathbf{u}\big{|}_{\Sigma_{0}},\]

where \(\mathbf{u}\) is the solution to Problem (2.6). That is, \(\Lambda_{\mathcal{S};\mathbf{f},\mathbf{g}}\) contains the elastic deformation data caused by the dislocation \((\mathcal{S};\mathbf{f},\mathbf{g})\) and observed on \(\Sigma_{0}\subset\Sigma_{N}\). The inverse problem we are concerned with can be formulated as

\[\Lambda_{\mathcal{S};\mathbf{f},\mathbf{g}}(\mathbf{u}\big{|}_{\Sigma_{0},\Sigma_{N}})\to\mathcal{S},\mathbf{f},\mathbf{g}.
\tag{3.1}\]

The uniqueness results can be proved under some assumptions about the geometry of the dislocation \(\mathcal{S}\) and _a priori_ information about the Lame parameters \(\lambda(\mathbf{x})\) and \(\mu(\mathbf{x})\) of the elastic solid \(\Omega\), where \(\lambda(\mathbf{x})\) and \(\mu(\mathbf{x})\) are real constants and satisfy the strong convexity condition (2.2). In order to describe the geometry of \(\mathcal{S}\), we next introduce some notations for the geometric setup; see Fig. 2 for a schematic illustration. Given \(\mathbf{x}_{c}\in\mathbb{R}^{2}\) and constants \(\theta_{m}\), \(\theta_{M}\in(-\pi,\pi)\) such that \(\theta_{M}-\theta_{m}\in(0,\pi)\), we consider the following open sector

\[\mathcal{K}_{\mathbf{x}_{c}}=\Big{\{}\mathbf{x}\in\mathbb{R}^{2}\,\big{|}\,\mathbf{x}=\mathbf{x}_{c}+(r\cos\theta,r\sin\theta)^{\top},\ \theta_{m}<\theta<\theta_{M},\ r>0\Big{\}} \tag{3.2}\]

with boundaries

\[\Gamma^{+}_{\mathbf{x}_{c}}=\Big{\{}\mathbf{x}\in\mathbb{R}^{2}\big{|}\,\mathbf{x}=\mathbf{x}_{c}+(r\cos\theta_{M},r\sin\theta_{M})^{\top},\,r>0\Big{\}}\,,\]
\[\Gamma^{-}_{\mathbf{x}_{c}}=\Big{\{}\mathbf{x}\in\mathbb{R}^{2}\big{|}\,\mathbf{x}=\mathbf{x}_{c}+(r\cos\theta_{m},r\sin\theta_{m})^{\top},\,r>0\Big{\}}\,.\]

The point \(\mathbf{x}_{c}\) is said to be a planar corner point with opening angle \(\theta_{M}-\theta_{m}\) and boundaries \(\Gamma^{\pm}_{\mathbf{x}_{c}}\). Let

\[\mathcal{C}_{\mathbf{x}_{c},h} :=\mathcal{K}_{\mathbf{x}_{c}}\cap B_{h}(\mathbf{x}_{c}),\ \ \ \ \Gamma^{\pm}_{\mathbf{x}_{c},h}:=\Gamma^{\pm}_{\mathbf{x}_{c}}\cap B_{h}(\mathbf{x}_{c}),\]
\[\Lambda^{\mathbf{x}_{c}}_{h} :=\mathcal{K}_{\mathbf{x}_{c}}\cap\partial B_{h}(\mathbf{x}_{c}),\ \ \ \Sigma_{\mathbf{x}_{c}}:=\mathcal{C}_{\mathbf{x}_{c},h}\backslash\mathcal{C}_{\mathbf{x}_{c},h/2}, \tag{3.3}\]

where \(B_{h}(\mathbf{x}_{c})\) denotes an open disk centered at \(\mathbf{x}_{c}\) of radius \(h\in\mathbb{R}_{+}\). For the sake of brevity, we use \(B_{h}\), \(\mathcal{K}\), \(\Gamma^{\pm}\), \(\mathcal{C}_{h}\), \(\Gamma^{\pm}_{h}\), \(\Lambda_{h}\) and \(\Sigma\) to represent the corresponding notations at the origin.

Figure 2. Schematic illustration of a 2D corner/3D edge corner.

### Main uniqueness results

Before giving the uniqueness results in Theorem 3.1-Theorem 3.3, we introduce some admissible conditions about the dislocation for our subsequent study.

**Definition 3.1**.: Let \(\Omega\) be a bounded Lipschitz domain in \(\mathbb{R}^{n}(n=2,3)\). We say that \((\mathcal{S};\mathbf{f},\mathbf{g})\) belongs to the admissible class \(\mathcal{T}\) if the following conditions are fulfilled: 1. In \(\mathbb{R}^{2}\), \(\mathcal{S}\subset\mathbb{R}^{2}\) is an oriented Lipschitz curve. There exists at least one planar corner point \(\mathbf{x}_{c}\) on \(\mathcal{S}\) such that \(\Gamma^{\pm}_{\mathbf{x}_{c},h}\subset\mathcal{S}\), where \(\Gamma^{\pm}_{\mathbf{x}_{c},h}=\partial\mathcal{K}_{\mathbf{x}_{c}}\cap B_{h}(\mathbf{x}_{c})\) and \(\mathcal{C}_{\mathbf{x}_{c},h}=B_{h}(\mathbf{x}_{c})\cap\mathcal{K}_{\mathbf{x}_{c}}=B_{h}(\mathbf{x}_{c})\cap\Omega_{1}\). Here, \(\mathcal{K}_{\mathbf{x}_{c}}\) and \(B_{h}(\mathbf{x}_{c})\) are defined in (3.2) and (3.3), and \(\Omega_{1}=\mathrm{enclose}(\mathcal{S})\) is given in (2.8). 2. In \(\mathbb{R}^{3}\), \(\mathcal{S}\subset\mathbb{R}^{3}\) is an oriented Lipschitz surface.
Suppose that \(\mathcal{S}\) possesses at least one 3D edge corner \(\mathbf{x}_{c}=(\mathbf{x}_{c}^{\prime},x_{3})^{\top}\in\mathbb{R}^{3}\), where \(\mathbf{x}_{c}^{\prime}\in\mathbb{R}^{2}\) is a planar corner point. In other words, for sufficiently small positive numbers \(h\) and \(M\), we have that \(\Gamma^{\pm}_{\mathbf{x}_{c}^{\prime},h}\times(-M,M)\subset\mathcal{S}\) and \(B^{\prime}_{h}(\mathbf{x}_{c}^{\prime})\times(-M,M)\cap\Omega_{1}=\mathcal{C}^{\prime}_{\mathbf{x}_{c},h}\times(-M,M)\), where \(\Gamma^{\pm}_{\mathbf{x}_{c}^{\prime},h}\) are the two edges of a sectorial corner at \(\mathbf{x}_{c}^{\prime}\) and \(B^{\prime}_{h}(\mathbf{x}_{c}^{\prime})\) is an open disk centered at \(\mathbf{x}_{c}^{\prime}\) with radius \(h\), which are defined in (3.3). The opening angle of the sectorial corner at \(\mathbf{x}_{c}^{\prime}\) is referred to as the opening angle of the corresponding 3D edge corner. 3. In \(\mathbb{R}^{2}\), let \(\mathbf{f}_{j}:=\mathbf{f}\big{|}_{\Gamma^{j}_{\mathbf{x}_{c},h}}\) and \(\mathbf{g}_{j}:=\mathbf{g}\big{|}_{\Gamma^{j}_{\mathbf{x}_{c},h}}\) satisfy \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}(\Gamma^{j}_{\mathbf{x}_{c},h})^{2}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}(\Gamma^{j}_{\mathbf{x}_{c},h})^{2}\) with \(\alpha_{j},\beta_{j}\) being in \((0,1)\) and \(j=+,-\), where \(\mathbf{x}_{c}\) and \(\Gamma^{\pm}_{\mathbf{x}_{c},h}\) are just the ones in (1). 4. In \(\mathbb{R}^{3}\), let \(\mathbf{f}_{j}:=\mathbf{f}\big{|}_{\Gamma^{\prime j}_{\mathbf{x}_{c}^{\prime},h}\times(-M,M)}\) and \(\mathbf{g}_{j}=\mathbf{g}\big{|}_{\Gamma^{\prime j}_{\mathbf{x}_{c}^{\prime},h}\times(-M,M)}\) fulfill \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}(\Gamma^{\prime j}_{\mathbf{x}_{c}^{\prime},h}\times(-M,M))^{3}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}(\Gamma^{\prime j}_{\mathbf{x}_{c}^{\prime},h}\times(-M,M))^{3}\) with \(\alpha_{j},\beta_{j}\) being in \((0,1)\) and \(j=+,-\), where \(\mathbf{x}_{c}^{\prime}\) and \(\Gamma^{\prime\pm}_{\mathbf{x}_{c}^{\prime},h}\) are just the ones in (2). Furthermore, \(\mathbf{f}_{j}\) and \(\mathbf{g}_{j}\) are independent of \(x_{3}\). 5. Let \(\mathcal{V}_{\mathcal{S}}\) signify the set of 2D corners/3D edge corners of \(\mathcal{S}\). In \(\mathbb{R}^{3}\), denote \(\mathbf{g}=(\mathbf{g}^{(1,2)},g^{3})\). Either of the following assumptions \(\mathcal{A}_{1}\) or \(\mathcal{A}_{2}\) is satisfied:

Assumption \(\mathcal{A}_{1}\): \(\forall\mathbf{x}_{c}\in\mathcal{V}_{\mathcal{S}}\), \(\mathbf{f}_{-}(\mathbf{x}_{c})\neq\mathbf{f}_{+}(\mathbf{x}_{c})\);

Assumption \(\mathcal{A}_{2}\): \(\forall\mathbf{x}_{c}\in\mathcal{V}_{\mathcal{S}}\), if \(n=2\), \(\mathbf{g}_{+}(\mathbf{x}_{c})\neq W_{\mathbf{x}_{c}}\mathbf{g}_{-}(\mathbf{x}_{c})\); if \(n=3\), \(\mathbf{g}_{+}^{(1,2)}(\mathbf{x}_{c})\neq W_{\mathbf{x}_{c}^{\prime}}\mathbf{g}_{-}^{(1,2)}(\mathbf{x}_{c})\), or \(g_{+}^{3}(\mathbf{x}_{c})\neq 0\) and \(g_{-}^{3}(\mathbf{x}_{c})\neq 0\),

where \(W_{\mathbf{x}_{c}}=\begin{bmatrix}-\cos\theta_{\mathbf{x}_{c}}&-\sin\theta_{\mathbf{x}_{c}}\\ -\sin\theta_{\mathbf{x}_{c}}&\cos\theta_{\mathbf{x}_{c}}\end{bmatrix}\) and \(W_{\mathbf{x}_{c}^{\prime}}=\begin{bmatrix}-\cos\theta_{\mathbf{x}_{c}^{\prime}}&-\sin\theta_{\mathbf{x}_{c}^{\prime}}\\ -\sin\theta_{\mathbf{x}_{c}^{\prime}}&\cos\theta_{\mathbf{x}_{c}^{\prime}}\end{bmatrix}\). Here, \(\theta_{\mathbf{x}_{c}}\) and \(\theta_{\mathbf{x}_{c}^{\prime}}\) denote the opening angles at the 2D corner/3D edge corner point \(\mathbf{x}_{c}\) of \(\mathcal{S}\).
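As a simple illustration of how Assumptions \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) can be checked at a given planar corner, we include the following numerical sketch. It is ours and purely illustrative: the opening angle and slip values are hypothetical, and the check is not part of the admissibility analysis itself.

```python
import numpy as np

def W(theta):
    """Matrix W_{x_c} associated with a 2D corner of opening angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[-c, -s], [-s, c]])

def admissible_at_corner(f_plus, f_minus, g_plus, g_minus, theta, tol=1e-12):
    """Check whether Assumption A_1 or A_2 holds at a planar corner point (n = 2)."""
    a1 = not np.allclose(f_plus, f_minus, atol=tol)             # A_1: f_+(x_c) != f_-(x_c)
    a2 = not np.allclose(g_plus, W(theta) @ g_minus, atol=tol)  # A_2: g_+(x_c) != W g_-(x_c)
    return a1 or a2

# Hypothetical slip values at a corner with opening angle pi/3.
theta = np.pi / 3
f_p, f_m = np.array([1.0, 0.0]), np.array([0.5, 0.2])
g_p, g_m = np.array([0.0, 1.0]), np.array([0.3, -0.1])
print(admissible_at_corner(f_p, f_m, g_p, g_m, theta))  # True

# W(theta) is an orthogonal reflection, so W @ W = I.
assert np.allclose(W(theta) @ W(theta), np.eye(2))
```

The final assertion verifies that \(W_{\mathbf{x}_{c}}\) is an orthogonal reflection, i.e. \(W_{\mathbf{x}_{c}}^{2}=\mathbf{I}\).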
_Remark 3.1_.: The admissible condition that \(\mathcal{S}\) possesses at least one planar corner/3D edge corner in Definition 3.1 can be easily fulfilled in generic physical scenarios. For example, \(\mathcal{S}\) may be a piecewise linear fault curve/surface. In what follows, \((\mathcal{S};\mathbf{f},\mathbf{g})\) is said to be an admissible dislocation with slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) if it fulfils the conditions in Definition 3.1. The uniqueness results in [1, 2] for determining \(\mathcal{S}\) focus only on the case where \(\mathcal{S}\) is open and only the displacement is discontinuous across the fault curve/surface. The methodology developed in [1, 2] cannot deal with the case that \(\mathcal{S}\) is closed. In our study, we can handle both situations and also allow the occurrence of discontinuities in both the displacement and traction fields on the dislocation. In Theorem 3.1, we obtain a local uniqueness result for the inverse dislocation problem (3.1) by using a single displacement measurement on \(\Sigma_{0}\); its proof is postponed to Section 5. **Theorem 3.1**.: _Let \((\mathcal{S}_{1};\mathbf{f}^{1},\mathbf{g}^{1})\) and \((\mathcal{S}_{2};\mathbf{f}^{2},\mathbf{g}^{2})\) belong to \(\mathcal{T}\). Assume that \(\operatorname{supp}(\mathbf{g}^{i})=\operatorname{supp}(\mathbf{f}^{i})=\overline{\mathcal{S}}_{i}\) and \(\mathbf{u}_{i}\) is the unique solution to Problem (2.6) in \(H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega\backslash\overline{\mathcal{S}_{i}})\) with respect to \(\mathbf{g}=\mathbf{g}^{i}\), \(\mathbf{f}=\mathbf{f}^{i}\) and \(\mathcal{S}=\mathcal{S}_{i}\), respectively, for \(i=1,2\). If \(\mathbf{u}_{1}\big{|}_{\Sigma_{0}}=\mathbf{u}_{2}\big{|}_{\Sigma_{0}}\), then \(\mathcal{S}_{1}\Delta\mathcal{S}_{2}:=(\mathcal{S}_{1}\backslash\mathcal{S}_{2})\cup(\mathcal{S}_{2}\backslash\mathcal{S}_{1})\) cannot contain a planar corner/3D edge corner \(\mathbf{x}_{c}\)._ In Theorems 3.2 and 3.3 we derive global uniqueness results for the inverse dislocation problem (3.1) on the determination of the dislocation surface \(\mathcal{S}\) and of \(\mathbf{f}\), \(\mathbf{g}\) from a given single displacement measurement on \(\Sigma_{0}\). The proofs of Theorems 3.2 and 3.3 are postponed to Section 5. We first consider the case that \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are closed. **Theorem 3.2**.: _Let \((\mathcal{S}_{1};\mathbf{f}^{1},\mathbf{g}^{1})\) and \((\mathcal{S}_{2};\mathbf{f}^{2},\mathbf{g}^{2})\) belong to \(\mathcal{T}\), where \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are closed. Assume that \(\Omega_{1}=\operatorname{enclose}(\mathcal{S}_{1})\) and \(\Omega_{2}=\operatorname{enclose}(\mathcal{S}_{2})\) are two convex polygons in \(\mathbb{R}^{2}\) or two convex polyhedra in \(\mathbb{R}^{3}\), where \(\mathcal{S}_{i}=\bigcup_{k=1}^{m_{i}}\Pi_{i,k}\)\((i=1,2)\). Here \(\Pi_{i,k}\) is the \(k\)-th edge or face of the polygon or polyhedron \(\Omega_{i}\). Let \(\mathbf{u}_{i}\) be the unique solution to Problem (2.6) in \(H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega\backslash\overline{\mathcal{S}})\) with respect to \(\mathbf{g}^{i}\) and \(\mathbf{f}^{i}\), respectively, for \(i=1,2\). If \(\left.\mathbf{u}_{1}\right|_{\Sigma_{0}}=\mathbf{u}_{2}\big{|}_{\Sigma_{0}}\), then \(\mathcal{S}_{1}=\mathcal{S}_{2}\), namely \(m_{1}=m_{2}:=m,\ \Pi_{1,k}=\Pi_{2,k}:=\Pi_{k}\)\((k=1,\dots,m)\). Furthermore, assume that \(\mathbf{f}^{i}\) and \(\mathbf{g}^{i}\) are piecewise-constant functions on \(\Pi_{k}\).
Then we have_ \[\left.(\mathbf{f}^{1}-\mathbf{f}^{2})\right|_{\Pi_{k+1}}=\left(\mathbf{f}^{1}-\mathbf{f}^{2}\right)\big{|}_{\Pi_{k}},\ (\mathbf{g}^{1}-\mathbf{g}^{2})\big{|}_{\Pi_{k+1}}=W_{\mathbf{x}_{k}}(\mathbf{g}^{1}-\mathbf{g}^{2})\big{|}_{\Pi_{k}},\,k=1,\cdots,m-1 \tag{3.4}\] _and_ \[\left(\mathbf{f}^{1}-\mathbf{f}^{2}\right)\big{|}_{\Pi_{1}}=\left(\mathbf{f}^{1}-\mathbf{f}^{2}\right)\big{|}_{\Pi_{m}},\quad\left(\mathbf{g}^{1}-\mathbf{g}^{2}\right)\big{|}_{\Pi_{1}}=W_{\mathbf{x}_{m}}(\mathbf{g}^{1}-\mathbf{g}^{2})\big{|}_{\Pi_{m}}, \tag{3.5}\] _where \(W_{\mathbf{x}_{k}}=\begin{bmatrix}-\cos\theta_{\mathbf{x}_{k}}&-\sin\theta_{\mathbf{x}_{k}}\\ -\sin\theta_{\mathbf{x}_{k}}&\cos\theta_{\mathbf{x}_{k}}\end{bmatrix}\) is defined similarly to \(W_{\mathbf{x}_{c}}\) in Definition 3.1 and \(\theta_{\mathbf{x}_{k}}\) is the opening angle at the 2D corner/3D edge corner \(\mathbf{x}_{k}\), \(k=1,2,\cdots,m\)._ In Theorem 3.3, we investigate the unique determination of a piecewise curve or a piecewise surface \(\mathcal{S}\), where \(\mathcal{S}\) is open. Before that, we introduce the corresponding definition. **Definition 3.2**.: Suppose that \(\mathcal{S}\subset\mathbb{R}^{n}\)\((n=2,3)\) is open. Up to a rigid motion, let \(\mathcal{S}\subset\mathbb{R}^{2}\) be the graph of a function \(f(x_{1})\), where \(x_{1}\in[a,b]\). If \([a,b]=\cup_{i=1}^{\ell}[a_{i},a_{i+1}]\) with \(\ell\geq 3\), \(a_{i}<a_{i+1}\), \(a_{1}=a\) and \(a_{\ell}=b\), and \(f\) is a piecewise linear polynomial on each piece \([a_{i},a_{i+1}]\), then \(\mathcal{S}\subset\mathbb{R}^{2}\) is referred to as a piecewise curve. Up to a rigid motion, let \(\mathcal{S}\subset\mathbb{R}^{3}\) be the graph of a function \(f(x_{1},x_{3})\), where \((x_{1},x_{3})\in[a_{1},a_{2}]\times[b_{1},b_{2}]\). If \(f(x_{1},c)\) with \(c\in[b_{1},b_{2}]\) being fixed satisfies \(f(x_{1},c)=g(x_{1})\), where the graph of \(g(x_{1})\) is a piecewise curve as defined above in \(\mathbb{R}^{2}\), then \(\mathcal{S}\subset\mathbb{R}^{3}\) is referred to as a piecewise surface. See Fig. 3 for a schematic illustration. **Theorem 3.3**.: _Assume that \((\mathcal{S}_{1};\mathbf{f}^{1},\mathbf{g}^{1})\) and \((\mathcal{S}_{2};\mathbf{f}^{2},\mathbf{g}^{2})\) belong to \(\mathcal{T}\), where \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are open. Let \(\mathbf{u}_{i}\) be the unique solution to Problem (2.6) in \(H^{1}_{\Sigma_{D},\Sigma_{N}}(\Omega\backslash\overline{\mathcal{S}_{i}})\) with respect to \(\mathbf{g}^{i}\), \(\mathbf{f}^{i}\) and \(\mathcal{S}_{i}\), where \(\mathbf{f}^{i}\in H^{\frac{1}{2}}_{00}(\mathcal{S}_{i})\) and \(\mathbf{g}^{i}\in H^{-\frac{1}{2}}_{0}(\mathcal{S}_{i})\) with \(\operatorname{supp}(\mathbf{g}^{i})=\operatorname{supp}(\mathbf{f}^{i})=\overline{\mathcal{S}}_{i}\), \(i=1,2\). Suppose that the curves/surfaces \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are piecewise curves in \(\mathbb{R}^{2}\) or piecewise surfaces in \(\mathbb{R}^{3}\). If_ \[\mathbf{u}_{1}\big{|}_{\Sigma_{0}}=\mathbf{u}_{2}\big{|}_{\Sigma_{0}}, \tag{3.6}\] _then_ \[\mathcal{S}_{1}=\mathcal{S}_{2},\quad\mathbf{f}^{1}=\mathbf{f}^{2}\,\text{ and }\,\mathbf{g}^{1}=\mathbf{g}^{2}.\] ## 4. Local results of slips \(\mathbf{f}\) and \(\mathbf{g}\) at the corners on \(\mathcal{S}\) In this section, we shall derive several auxiliary propositions which describe the local behaviour of the slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) at the corner points of \(\mathcal{S}\).
These auxiliary results play a key role in establishing our main results in Theorems 3.1, 3.2 and 3.3. To derive these propositions, we shall introduce two kinds of the so-called CGO (complex geometrical optics) solutions satisfying different Lame /acoustic equations. ### Local results for 2D case This subsection is devoted to analyzing the local behaviours of slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) around a planar corner. Next, we introduce the first kind of CGO solution \(\mathbf{u}_{0}\) introduced in [6], which is given by \[\mathbf{u}_{0}(\mathbf{x})=\begin{pmatrix}\exp(-s\sqrt{z})\\ \mathrm{i}\exp(-s\sqrt{z})\end{pmatrix}:=\begin{pmatrix}u_{1}^{0}(\mathbf{x}) \\ u_{2}^{0}(\mathbf{x})\end{pmatrix}\quad\text{in}\quad\Omega,\quad\mathbf{x}=(x_ {1},x_{2})^{\top}, \tag{4.1}\] where \(z=x_{1}+\mathrm{i}\,x_{2}\), \(s\in\mathbb{R}_{+}\) and \(\Omega\cap(\mathbb{R}_{-}\cup\{0\})=\emptyset\). Here the complex square root of \(z\) is defined as \[\sqrt{z}=\sqrt{|z|}\left(\cos\frac{\theta}{2}+\mathrm{i}\sin\frac{\theta}{2} \right),\] where \(-\pi<\theta<\pi\) is the argument of \(z\). Furthermore, it yields that \(\mathcal{L}\,\mathbf{u}_{0}=\mathbf{0}\) in \(\Omega\). Some significant properties and regularity results of the CGO solution given in (4.1) need to be reviewed, which is beneficial for the subsequent analysis. **Lemma 4.1**.: _[_6_, Proposition 3.1]_ _Let \(\mathbf{u}_{0}\) be given as above. Then we have the following properties_ \[\int_{\mathcal{K}}u_{1}^{0}(\mathbf{x})\mathrm{d}\mathbf{x}=6\mathrm{i}(e^{-2 \theta_{M}\mathrm{i}}-e^{-2\theta_{m}\mathrm{i}})s^{-4} \tag{4.2}\] Figure 3. Schematic illustration of a piecewise curve or a piecewise surface. _and_ \[\int_{\mathcal{K}}|u_{j}^{0}(\mathbf{x})||\mathbf{x}|^{\alpha}\mathrm{d}\mathbf{x} \leq\frac{2(\theta_{M}-\theta_{m})\Gamma(2\alpha+4)}{\delta_{\mathcal{K}}^{2 \alpha+4}}s^{-2\alpha-4},\quad j=1,2, \tag{4.3}\] _where \(\mathcal{K}\) is defined in Section 3, \(\alpha,h>0\), \(\delta_{\mathcal{K}}=\min\limits_{\theta_{m}<\theta<\theta_{M}}\cos\frac{ \theta}{2}\) is a positive constant._ The following critical estimate can be obtained by using Laplace transform and the exponential function with negative power. **Lemma 4.2**.: _For any \(\alpha>0\), if \(\omega(\theta)>0\), then we have_ \[\lim\limits_{s\to+\infty}\int_{0}^{h}r^{\alpha}e^{-s\sqrt{\tau}\omega(\theta)} \mathrm{d}\mathbf{r}=\mathcal{O}(s^{-2\alpha-2}).\] We next recall some critical lemmas about the regularity of the CGO solution \(\mathbf{u}_{0}\) defined in (4.1). **Lemma 4.3**.: _[_13_, Lemma 2.3]_ _Let \(\mathcal{C}_{\mathbf{x}_{c},h}\) be defined in (3.3) and \(\mathbf{u}_{0}\) be given in (4.1). Then \(\mathbf{u}_{0}\in H^{1}(\mathcal{C}_{\mathbf{x}_{c},h})^{2}\) and \(\mathcal{L}\,\mathbf{u}_{0}=\mathbf{0}\) in \(\mathcal{C}_{\mathbf{x}_{c},h}\). Furthermore, it holds that_ \[\left\|\mathbf{u}_{0}\right\|_{L^{2}(\mathcal{C}_{\mathbf{x}_{c},h})^{2}}\leq \sqrt{\theta_{M}-\theta_{m}}e^{-s\sqrt{\Theta}}h\] _and_ \[\left\|\left|\mathbf{x}\right|^{\alpha}\mathbf{u}_{0}\right\|_{L^{2}( \mathcal{C}_{\mathbf{x}_{c},h})^{2}}\leq s^{-2(\alpha+1)}\frac{2\sqrt{(\theta _{M}-\theta_{m})\Gamma(4\alpha+4)}}{(2\delta_{\mathcal{K}})^{2\alpha+2}},\] _where \(\Theta\in[0,h]\) and \(\delta_{\mathcal{K}}\) is defined in (4.3)._ **Lemma 4.4**.: _[_12_, Lemma 2.8]_ _Let \(\Gamma_{h}^{\pm}\) and \(u_{1}^{0}(\mathbf{x})\) be respectively defined in (3.3) and (4.1) with \(\mathbf{x}_{c}\) coinciding with the origin. 
We have_ \[\int_{\Gamma_{h}^{+}}u_{1}^{0}(\mathbf{x})\mathrm{d}\sigma=2s^{- 2}\left(\mu(\theta_{M})^{-2}-\mu(\theta_{M})^{-2}e^{-s\sqrt{h}\mu(\theta_{M})}\right.\] \[\left.-\mu(\theta_{M})^{-1}s\sqrt{h}e^{-s\sqrt{h}\mu(\theta_{M})} \right),\] \[\int_{\Gamma_{h}^{-}}u_{1}^{0}(\mathbf{x})\mathrm{d}\sigma=2s^{- 2}\left(\mu(\theta_{m})^{-2}-\mu(\theta_{m})^{-2}e^{-s\sqrt{h}\mu(\theta_{m})}\right.\] \[\left.-\mu(\theta_{m})^{-1}s\sqrt{h}e^{-s\sqrt{h}\mu(\theta_{m})} \right),\] _where \(\mu(\theta):=\cos(\theta/2)+\mathrm{i}\sin(\theta/2)=e^{\mathrm{i}\theta/2}\)._ To prove our main Theorems 3.1, 3.2 and 3.3, the next critical auxiliary proposition is needed. Since \(\mathcal{L}\) is invariant under rigid motion, in what follows, the underlying corner point \(\mathbf{x}_{c}\) coincides with the origin. **Proposition 4.1**.: _Under the same setup about \(\mathcal{C}_{h}\) and \(\Gamma_{h}^{\pm}\) given in (3.3) with \(\mathbf{x}_{c}\) coinciding with the origin. Let \(\mathbf{v}\in H^{1}(\mathcal{C}_{h})^{2}\) and \(\mathbf{w}\in H^{1}(\mathcal{C}_{h})^{2}\) satisfy_ \[\begin{cases}\mathcal{L}\,\mathbf{v}=\mathbf{0},&\mathcal{L}\,\mathbf{w}= \mathbf{0}&\text{in}\quad\mathcal{C}_{h},\\ \mathbf{v}-\mathbf{w}=\mathbf{f}_{+},\,\mathcal{T}_{\boldsymbol{\nu}}\, \mathbf{v}-\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{w}=\mathbf{g}_{+}&\text{on} \quad\Gamma_{h}^{+},\\ \mathbf{v}-\mathbf{w}=\mathbf{f}_{-},\,\mathcal{T}_{\boldsymbol{\nu}}\, \mathbf{v}-\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{w}=\mathbf{g}_{-}&\text{on} \quad\Gamma_{h}^{-}\end{cases}\] _with \(\mathbf{f}_{j}\in H^{\frac{1}{2}}(\Gamma_{h}^{j})^{2}\cap C^{1,\alpha_{j}}(\Gamma_ {h}^{j})^{2}\) and \(\mathbf{g}_{j}\in H^{-\frac{1}{2}}(\Gamma_{h}^{j})^{2}\cap C^{\beta_{j}}(\Gamma_ {h}^{j})^{2}\), where \(j=+,-\) and \(\alpha_{+},\alpha_{-},\beta_{+},\beta_{-}\in(0,1)\). Then we have the following continuities at the vertex point, that is to say,_ \[\mathbf{g}_{+}(\mathbf{0})=W\,\mathbf{g}_{-}(\mathbf{0})\quad\text{and}\quad \mathbf{f}_{+}(\mathbf{0})=\mathbf{f}_{-}(\mathbf{0}), \tag{4.4}\] _where \(W=\begin{bmatrix}-\cos(\theta_{M}-\theta_{m}),&-\sin(\theta_{M}-\theta_{m})\\ -\sin(\theta_{M}-\theta_{m}),&+\cos(\theta_{M}-\theta_{m})\end{bmatrix}.\)_ Proof.: Thanks to the symmetric role of \((\Re\mathbf{v},\Re\mathbf{w})\) and \((\Im\mathbf{v},\Im\mathbf{w})\). We just have to prove that the corresponding results hold for \((\Re\mathbf{v},\Re\mathbf{w})\). By a similar proof process, it can be proved that those results are still valid for \((\Im\mathbf{v},\Im\mathbf{w})\), hence for \((\mathbf{v},\mathbf{w})\). Due to Betti's second formula, we have the following integral identity \[\int_{\Gamma_{h}^{+}}\Re\mathbf{g}_{+}\cdot\mathbf{u}_{0}-\mathcal{ T}_{\boldsymbol{\nu}}\mathbf{u}_{0}\cdot\Re\mathbf{f}_{+}\,\mathrm{d}\sigma+ \int_{\Gamma_{h}^{-}}\Re\mathbf{g}_{-}\cdot\mathbf{u}_{0}-\mathcal{T}_{ \boldsymbol{\nu}}\mathbf{u}_{0}\cdot\Re\mathbf{f}_{-}\,\mathrm{d}\sigma\] \[=\int_{\Lambda_{h}}\mathcal{T}_{\boldsymbol{\nu}}\big{(}\Re \mathbf{v}-\Re\mathbf{w}\big{)}\cdot\mathbf{u}_{0}-\mathcal{T}_{\boldsymbol{ \nu}}\mathbf{u}_{0}\cdot\big{(}\Re\mathbf{v}-\Re\mathbf{w}\big{)}\,\mathrm{d}\sigma. 
\tag{4.5}\] Since \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}(\Gamma_{h}^{j})^{2}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}(\Gamma_{h}^{j})^{2}\) for \(j=+,-\), we have the expansions as follows \[\mathbf{f}_{j}(\mathbf{x}) =\mathbf{f}_{j}(\mathbf{0})+\delta\mathbf{f}_{j}(\mathbf{x}), \quad\big{|}\delta\mathbf{f}_{j}(\mathbf{x})\big{|}\leq A_{j}|\mathbf{x}|^{1+ \alpha_{j}}, \tag{4.6}\] \[\mathbf{g}_{j}(\mathbf{x}) =\mathbf{g}_{j}(\mathbf{0})+\delta\mathbf{g}_{j}(\mathbf{x}), \quad\big{|}\delta\mathbf{g}_{j}(\mathbf{x})\big{|}\leq B_{j}|\mathbf{x}|^{ \beta_{j}}, \tag{4.7}\] where \(A_{j}\) and \(B_{j}\) are positive. From the expression of \(\mathbf{u}_{0}\), it is easy to imply that \[\frac{\partial u_{1}^{0}}{\partial r}=-\frac{s}{2\sqrt{r}}e^{-sr^{1/2}\mu( \theta)+\mathrm{i}\frac{\theta}{2}}\quad\text{and}\quad\frac{\partial u_{1}^{ 0}}{\partial\theta}=-\frac{\mathrm{i}s\sqrt{r}}{2}e^{-sr^{1/2}\mu(\theta)+ \mathrm{i}\frac{\theta}{2}},\] where \(\mu(\cdot)\) is given by Lemma 4.9. Thus, we directly obtain \[\frac{\partial u_{1}^{0}}{\partial x_{1}}=-\frac{s}{2\sqrt{r}}e^{-sr^{1/2}\mu( \theta)-\mathrm{i}\frac{\theta}{2}}\quad\text{and}\quad\frac{\partial u_{1}^{ 0}}{\partial x_{2}}=-\frac{\mathrm{i}s\sqrt{r}}{2}e^{-sr^{1/2}\mu(\theta)- \mathrm{i}\frac{\theta}{2}}.\] Notice that \(u_{2}^{0}(\mathbf{x})=\mathrm{i}u_{1}^{0}(\mathbf{x})\), we get \[\nabla\mathbf{u}_{0}=-\frac{s}{2\sqrt{r}}e^{-s\sqrt{r}\mu(\theta)-\frac{\theta }{2}\mathrm{i}}\begin{bmatrix}1&\mathrm{i}\\ \mathrm{i}&-1\end{bmatrix},\] Therefore, one can prove that \[\int_{\Gamma_{h}^{+}}\mathcal{T}_{\boldsymbol{\nu}_{M}}\mathbf{u}_ {0}\,\mathrm{d}\sigma =\int_{\Gamma_{h}^{+}}-\frac{s}{2r^{\frac{1}{2}}}e^{-s\sqrt{r}\mu( \theta_{M})-\mathrm{i}\frac{\theta_{M}}{2}}\begin{bmatrix}1&\mathrm{i}\\ \mathrm{i}&-1\end{bmatrix}\cdot\begin{bmatrix}-\sin\theta_{M}\\ \cos\theta_{M}\end{bmatrix}\mathrm{d}\sigma\] \[=-\frac{\mu s}{2}e^{\mathrm{i}\theta_{M}/2}\begin{bmatrix}\mathrm{i }\\ -1\end{bmatrix}\int_{0}^{h}r^{-\frac{1}{2}}e^{-s\sqrt{r}\mu(\theta_{M})}\mathrm{d}r\] \[=-\mu se^{\mathrm{i}\theta_{M}/2}\begin{bmatrix}\mathrm{i}\\ -1\end{bmatrix}\int_{0}^{\sqrt{h}}e^{-s\,t\,\mu(\theta_{M})}\mathrm{d}t\] \[=\mu\big{(}e^{-s\,\sqrt{h}\,\mu(\theta_{M})}-1\big{)}\begin{bmatrix} \mathrm{i}\\ -1\end{bmatrix}. \tag{4.8}\] By the similar arguments, we can derive \[\int_{\Gamma_{h}^{-}}\mathcal{T}_{\boldsymbol{\nu}_{m}}\mathbf{u}_{0}\, \mathrm{d}\sigma=\mu\big{(}e^{-s\sqrt{h}\mu(\theta_{m})}-1\big{)}\begin{bmatrix} -\mathrm{i}\\ 1\end{bmatrix}. 
\tag{4.9}\] From Lemma 4.9, we obtain \[\int_{\Gamma_{h}^{+}}\mathbf{u}_{0}\,\mathrm{d}\sigma=2s^{-2}\left(\mu^{-2}(\theta _{M})-\mu^{-2}(\theta_{M})e^{-s\sqrt{h}\mu(\theta_{M})}-\mu^{-1}(\theta_{M})s \sqrt{h}e^{-s\sqrt{h}\mu(\theta_{M})}\right)\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}, \tag{4.10}\] \[\int_{\Gamma_{h}^{-}}\mathbf{u}_{0}\,\mathrm{d}\sigma=2s^{-2}\left(\mu^{-2}( \theta_{m})-\mu^{-2}(\theta_{m})e^{-s\sqrt{h}\mu(\theta_{m})}-\mu^{-1}(\theta_ {m})s\sqrt{h}e^{-s\sqrt{h}\mu(\theta_{m})}\right)\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}, \tag{4.11}\] Substituting (4.8)-(4.11) into (4.5), the following integral identity holds, \[\Re\mathbf{f}_{+}(\mathbf{0})\cdot\mu\Big{(}e^{-s\sqrt{h}\mu( \theta_{M})}-1\Big{)}\begin{bmatrix}\mathrm{i}\\ -1\end{bmatrix}+\Re\mathbf{f}_{-}(\mathbf{0})\cdot\mu\Big{(}e^{-s\sqrt{h}\mu( \theta_{m})}-1\Big{)}\begin{bmatrix}-\mathrm{i}\\ +1\end{bmatrix}\] \[-\Re\mathbf{g}_{+}(\mathbf{0})\cdot\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}2s^{-2}\mu^{-2}(\theta_{M})-\Re\mathbf{g}_{-}(\mathbf{0 })\cdot\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}2s^{-2}\mu^{-2}(\theta_{m})=\sum_{j=1}^{7}R_{j}, \tag{4.12}\] where \[R_{1}=-2s^{-2}\Re\mathbf{g}_{+}(\mathbf{0})\cdot\begin{bmatrix}1 \\ \mathrm{i}\end{bmatrix}\Big{(}\mu^{-1}(\theta_{M})\,s\sqrt{h}\,e^{-s\sqrt{h}\, \mu(\theta_{M})}+\mu^{-2}(\theta_{M})\,e^{-s\sqrt{h}\,\mu(\theta_{M})}\Big{)},\] \[R_{2}=-2s^{-2}\Re\mathbf{g}_{-}(\mathbf{0})\cdot\begin{bmatrix}1 \\ \mathrm{i}\end{bmatrix}\big{(}\mu^{-1}(\theta_{m})\,s\sqrt{h}\,e^{-s\sqrt{h}\, \mu(\theta_{m})}+\mu^{-2}(\theta_{m})\,e^{-s\sqrt{h}\,\mu(\theta_{m})}\big{)},\] \[R_{3}=\int_{\Lambda_{h}}\mathcal{T}_{\boldsymbol{\nu}}\big{(} \Re\mathbf{v}-\Re\mathbf{w}\big{)}\cdot\mathbf{u}_{0}-\mathcal{T}_{ \boldsymbol{\nu}}\mathbf{u}_{0}\cdot\big{(}\Re\mathbf{v}-\Re\mathbf{w}\big{)} \,\mathrm{d}\sigma,\] \[R_{4}=-\int_{\Gamma_{h}^{+}}\Re\delta\mathbf{f}_{+}\cdot \mathcal{T}_{\boldsymbol{\nu}_{M}}\mathbf{u}_{0}\,\mathrm{d}\sigma,\quad\,R_ {5}=-\int_{\Gamma_{h}^{-}}\Re\delta\mathbf{f}_{-}\cdot\mathcal{T}_{\boldsymbol {\nu}_{m}}\mathbf{u}_{0}\,\mathrm{d}\sigma,\] \[R_{6}=\int_{\Gamma_{h}^{+}}\Re\delta\mathbf{g}_{+}\cdot\mathbf{ u}_{0}\,\mathrm{d}\sigma,\qquad\qquad R_{7}=\int_{\Gamma_{h}^{-}}\Re\delta \mathbf{g}_{-}\cdot\mathbf{u}_{0}\,\mathrm{d}\sigma.\] From the expression of \(\mu(\cdot)\) given by Lemma 4.9, it is direct to obtain that \(\mu^{-2}(\theta_{M})\), \(\mu^{-1}(\theta_{M})\), \(\mu^{-2}(\theta_{m})\) and \(\mu^{-1}(\theta_{m})\) are bounded. For sufficient large \(s\), we have \[\big{|}R_{1}\big{|}=\mathcal{O}(s^{-1}e^{-c_{1}s})\quad\text{and}\quad\big{|}R _{2}\big{|}=\mathcal{O}(s^{-1}e^{-c_{2}s}), \tag{4.13}\] where \(c_{1}\) and \(c_{2}\) are positive constants not depending on \(s\). Considering the estimates of \(\delta\mathbf{f}_{+}\) in (4.6), the expression of \(\mathcal{T}_{\boldsymbol{\nu}_{M}}\mathbf{u}_{0}\) in (4.8) and Lemma 4.9, we get \[\big{|}R_{4}\big{|}\leq\,c_{4}\,\int_{0}^{h}r^{\alpha_{+}+1}e^{-s\sqrt{\tau} \cos\frac{\theta_{M}}{2}}\,\mathrm{d}r=\mathcal{O}(s^{-2\alpha_{+}-2}). \tag{4.14}\] Similarly, we get the following estimates \[\big{|}R_{5}\big{|}=\mathcal{O}(s^{-2\alpha_{-}-2}),\quad\big{|}R_{6}\big{|}= \mathcal{O}(\tau^{-2\beta_{+}-2}),\quad\big{|}R_{7}\big{|}=\mathcal{O}(\tau^{- 2\beta_{-}-2}). 
\tag{4.15}\] For the term \(R_{3}\), by virtue of Cauchy-Schwarz inequality, trace theorem and Lemma 4.3, we obtain \[\big{|}R_{3}\big{|} \leq\|\mathbf{u}_{0}\|_{L^{2}(\Lambda_{h})^{2}}\|\mathcal{T}_{ \boldsymbol{\nu}}(\Re\mathbf{v}-\Re\mathbf{w})\|_{L^{2}(\Lambda_{h})^{2}}+\| \Re\mathbf{v}-\Re\mathbf{w}\|_{L^{2}(\Lambda_{h})^{2}}\|\mathcal{T}_{\boldsymbol {\nu}}\mathbf{u}_{0}\|_{L^{2}(\Lambda_{h})^{2}}\] \[\leq\Big{(}\|\mathbf{u}_{0}\|_{L^{2}(\Lambda_{h})^{2}}+\|\mathcal{T} _{\boldsymbol{\nu}}\mathbf{u}_{0}\|_{L^{2}(\Lambda_{h})^{2}}\Big{)}\big{\|} \mathbf{v}-\mathbf{w}\big{\|}_{H^{1}(\mathcal{C}_{h})^{2}}\] \[\leq c_{3}\big{\|}\mathbf{u}_{0}\big{\|}_{H^{1}(\mathcal{C}_{h})^{ 2}}\leq c_{3}e^{-s\sqrt{h}\delta\mathcal{C}}, \tag{4.16}\] where \(c_{3}\) is positive and \(\delta_{\mathcal{K}}\) is defined in (4.3). Together (4.13)-(4.16) with the identity (4.12), as \(s\to+\infty\), one clearly implies that \[\begin{bmatrix}+\mathrm{i}\\ -1\end{bmatrix}\cdot\left(\Re\mathbf{f}_{+}(\mathbf{0})-\Re\mathbf{f}_{-}( \mathbf{0})\right)=0. \tag{4.17}\] Since \(\Re\mathbf{f}_{+}(\mathbf{0})\) and \(\Re\mathbf{f}_{-}(\mathbf{0})\) belong to \(\mathbb{R}\), we derive that \[\Re\mathbf{f}_{+}(\mathbf{0})=\Re\mathbf{f}_{-}(\mathbf{0}). \tag{4.18}\] Substituting (4.18) into (4.12), and then multiplying the new equation by \(s^{2}\), one gets that \[\mu\,s^{2}\Re\mathbf{f}_{+}(\mathbf{0})\cdot\begin{bmatrix}\mathrm{i}\\ -1\end{bmatrix}\left(e^{-s\sqrt{h}\mu(\theta_{M})}-e^{-s\sqrt{h}\mu(\theta_{m} )}\right)-2\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\cdot\left(\frac{\Re\mathbf{g}_{+}(\mathbf{0})}{\mu^{2 }(\theta_{M})}+\frac{\Re\mathbf{g}_{-}(\mathbf{0})}{\mu^{2}(\theta_{m})} \right)=\sum_{j=1}^{7}s^{2}\,R_{j},\] Let \(s\) tend to \(+\infty\), we have \[\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\cdot\left(\Re\mathbf{g}_{+}(\mathbf{0})\mu^{-2}( \theta_{M})+\Re\mathbf{g}_{-}(\mathbf{0})\mu^{-2}(\theta_{m})\right)=0.\] Noting that \(\mu^{-2}(\theta_{M})\neq\mu^{-2}(\theta_{m})\) and \(\frac{\mu^{2}(\theta_{M})}{\mu^{2}(\theta_{m})}=e^{\mathrm{i}(\theta_{M}- \theta_{m})}\), then let \(\Re\mathbf{g}_{+}(\mathbf{0}):=\begin{bmatrix}a_{11}\\ a_{21}\end{bmatrix}\) and \(\Re\mathbf{g}_{-}(\mathbf{0}):=\begin{bmatrix}a_{12}\\ a_{22}\end{bmatrix}\), where \(a_{ij}\in\mathbb{R}\) for \(i,j=1,2\), the above equation can be rewritten as follows \[\begin{cases}a_{11}+a_{12}\cos(\theta_{M}-\theta_{m})-a_{22}\sin(\theta_{M}- \theta_{m})=0,\\ a_{21}+a_{12}\sin(\theta_{M}-\theta_{m})+a_{22}\cos(\theta_{M}-\theta_{m})=0, \end{cases}\] which is \[\Re\mathbf{g}_{+}(\mathbf{0})=W\Re\mathbf{g}_{-}(\mathbf{0}).\] Here, \[W=\begin{bmatrix}-\cos(\theta_{M}-\theta_{m}),&-\sin(\theta_{M}-\theta_{m}) \\ -\sin(\theta_{M}-\theta_{m}),&\cos(\theta_{M}-\theta_{m})\end{bmatrix}\quad \text{and}\quad\det(W)\neq 0.\] The proof is complete. ### Local results for 3D case As discussed in Remark 4.2 in [13], the regularity result on the underlying elastic displacement around a general polyhedral corner in \(\mathbb{R}^{3}\) is challenging. So in this subsection, we shall restrict ourselves to the 3D edge corner. We introduce a dimension reduction operator \(\mathcal{P}\) to study the relevant continuities about slip vectors \(\mathbf{f}\) and \(\mathbf{g}\) at a 3D edge corner. In what follows, we suppose that the dislocation \(\mathcal{S}\) is a Lipschitz surface possessing at least one 3D edge corner \(\mathbf{x}_{c}=(\mathbf{x}_{c}^{\prime},x_{c}^{3})^{\top}\in\mathcal{S}\subset \mathbb{R}^{3}\). 
The next definition shall state a dimension reduction operator, which is beneficial to derive a crucial auxiliary proposition similar to Proposition 4.1 at a 3D edge corner. **Definition 4.1**.: Let \(\mathcal{C}^{\prime}_{\mathbf{x}_{c}^{\prime},h}\) be defined as \(\mathcal{C}_{\mathbf{x}_{c},h}\) in (3.3) with the vertex \(\mathbf{x}_{c}^{\prime}\). For a given function \(\phi\) in the domain \(\mathcal{C}^{\prime}_{\mathbf{x}_{c}^{\prime},h}\times(-M,M)\) with \(M>0\). For any fixed \(x_{c}^{3}\in(-M,M)\), we assume that \(\phi\in C_{0}^{\infty}(x_{c}^{3}-L,x_{c}^{3}+L)\) is a nonnegative function \(\phi\not\equiv\emptyset\). Write \(\mathbf{x}=(\mathbf{x}^{\prime},x_{3})^{\top}\in\mathbb{R}^{3}\). The dimension reduction operator \(\mathcal{P}\) is defined as follows \[\mathcal{P}(\mathbf{h})(\mathbf{x}^{\prime})=\int_{x_{c}^{3}-L}^{x_{c}^{3}+L} \phi(x_{3})\mathbf{h}(\mathbf{x}^{\prime},x_{3})\mathrm{d}x_{3},\] where \(\mathcal{C}^{\prime}_{\mathbf{x}_{c}^{\prime},h}\) is defined in (3.3) with the vertex \(\mathbf{x}_{c}^{\prime}\). Before deriving the main results of this subsection, we review some important properties of such operator. **Lemma 4.5**.: _[_13_, Lemma 3.1]_ _Let \(\mathbf{h}\in H^{m}(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\times(-M,M) )^{3}\), \(m=1,2\). Then_ \[\mathcal{P}(\mathbf{h})(\mathbf{x}^{\prime})\in H^{m}(\mathcal{C}^{\prime}_{ \mathbf{x}^{\prime}_{c},h}))^{3}.\] _Similarly, if \(\mathbf{h}\in C^{\delta}\big{(}\overline{\mathcal{C}^{\prime}_{\mathbf{x}^{ \prime}_{c},h}}\times[-M,M]\big{)}^{3}\) with \(\delta\in(0,1)\), then_ \[\mathcal{P}(\mathbf{h})(\mathbf{x}^{\prime})\in C^{\delta}\big{(}\overline{ \mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}}\big{)}^{3}.\] Noting that the three-dimensional isotropic elastic operator \(\mathcal{L}\) defined in (2.1)-(2.3) can be rewritten as \[\mathcal{L} =\begin{bmatrix}\lambda\Delta+(\lambda+\mu)\partial_{1}^{2}&( \lambda+\mu)\partial_{1}\partial_{2}&(\lambda+\mu)\partial_{1}\partial_{3}\\ (\lambda+\mu)\partial_{1}\partial_{2}&\lambda\Delta+(\lambda+\mu)\partial_{2 }^{2}&(\lambda+\mu)\partial_{2}\partial_{3}\\ (\lambda+\mu)\partial_{1}\partial_{3}&(\lambda+\mu)\partial_{2}\partial_{3}& \lambda\Delta+(\lambda+\mu)\partial_{3}^{2}\end{bmatrix}\] \[=\widetilde{\mathcal{L}}+\begin{bmatrix}\lambda\partial_{3}^{2}& 0&(\lambda+\mu)\partial_{1}\partial_{3}\\ 0&\lambda\partial_{3}^{2}&(\lambda+\mu)\partial_{2}\partial_{3}\\ (\lambda+\mu)\partial_{1}\partial_{3}&(\lambda+\mu)\partial_{2}\partial_{3}& \lambda\partial_{3}^{2}+(\lambda+\mu)\partial_{3}^{2}\end{bmatrix},\] where \[\widetilde{\mathcal{L}}=\begin{bmatrix}\lambda\Delta^{\prime}+(\lambda+\mu) \partial_{1}^{2}&(\lambda+\mu)\partial_{1}\partial_{2}&0\\ (\lambda+\mu)\partial_{1}\partial_{2}&\lambda\Delta^{\prime}+(\lambda+\mu) \partial_{2}^{2}&0\\ 0&0&\lambda\Delta^{\prime}\end{bmatrix}=\begin{bmatrix}\mathcal{L}_{\mathcal{ P}}&0\\ 0&\lambda\Delta^{\prime}\end{bmatrix} \tag{4.19}\] with \(\Delta^{\prime}=\partial_{1}^{2}+\partial_{2}^{2}\) being the Laplace operator with respect to the \(\mathbf{x}^{\prime}\)-variables. Here, the operator \(\mathcal{L}_{\mathcal{P}}\) is the two-dimensional isotropic elastic operator with respect to the \(\mathbf{x}^{\prime}\)-variables. For any fixed \(x_{c}^{3}\in(-M,M)\) and sufficient small \(L>0\) such that \((x_{c}^{3}-L,x_{c}^{3}+L)\subset(-M,M)\). At this moment, we have \(\mathcal{L}=\widetilde{\mathcal{L}}\). 
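To make the dimension reduction operator \(\mathcal{P}\) of Definition 4.1 more concrete, here is a small numerical sketch (an illustrative addition; the bump function and the sample data below are arbitrary choices, not taken from the original text). It evaluates \(\mathcal{P}(\mathbf{h})(\mathbf{x}^{\prime})\) by quadrature and confirms that, for data independent of \(x_{3}\) — as assumed for the slips \(\mathbf{f}\) and \(\mathbf{g}\) in Definition 3.1 — the operator reduces to multiplication by \(\int\phi\,\mathrm{d}x_{3}\).

```python
import numpy as np

def bump(t, c=0.0, L=1.0):
    """A C_0^infinity bump supported on (c-L, c+L): nonnegative and not identically zero."""
    s = (t - c) / L
    out = np.zeros_like(s, dtype=float)
    inside = np.abs(s) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - s[inside]**2))
    return out

def P(h, xprime, c=0.0, L=1.0, n=2001):
    """Dimension reduction P(h)(x') = int_{c-L}^{c+L} phi(x3) h(x', x3) dx3 (trapezoidal rule)."""
    x3 = np.linspace(c - L, c + L, n)
    phi = bump(x3, c, L)
    return np.trapz(phi * h(xprime, x3), x3)

# Sample field independent of x3, mimicking the x3-independent slips f and g.
h_const = lambda xprime, x3: 2.5 * np.ones_like(x3)

xprime = np.array([0.1, 0.2])        # a point in the planar cross-section
x3 = np.linspace(-1.0, 1.0, 2001)
weight = np.trapz(bump(x3), x3)      # int phi dx3

# For x3-independent data, P acts as multiplication by int phi dx3.
assert np.isclose(P(h_const, xprime), 2.5 * weight)
print("P(h)(x') =", P(h_const, xprime), "  2.5 * int(phi) dx3 =", 2.5 * weight)
```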
Since \(\mathcal{L}\) is invariant under rigid motion, in what follows we assume that \(\mathbf{x}^{\prime}_{c}=\mathbf{0}\) in \(\mathbb{R}^{2}.\) Hence let \(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\) and \(\Gamma^{\pm}_{\mathbf{x}^{\prime}_{c},h}\) be defined in (3.3) with \(\mathbf{x}^{\prime}_{c}\) coinciding with the origin in 2D case. By some tedious calculations, we have the following lemma. **Lemma 4.6**.: _Under the setup about \(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\) and a 3D edge corner as above, suppose that \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}\big{(}\Gamma^{\prime j}_{\mathbf{x}^{ \prime}_{c},h}\times(-M,M)\big{)}^{3}\bigcap H^{\frac{5}{2}}\big{(}\Gamma^{ \prime j}_{\mathbf{x}^{\prime}_{c},h}\times(-M,M)\big{)}^{3}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}\big{(}\Gamma^{\prime j}_{\mathbf{x}^{\prime}_{ c},h}\times(-M,M)\big{)}^{3}\bigcap H^{-\frac{1}{2}}\big{(}\Gamma^{\prime j}_{ \mathbf{x}^{\prime}_{c},h}\times(-M,M)\big{)}^{3}\) do not depend on \(x_{3}\) for \(j=+,-\), where \(\alpha_{+}\), \(\alpha_{-}\), \(\beta_{+}\), \(\beta_{-}\in(0,1)\). Denote \(\mathbf{v}=(\mathbf{v}^{(1,2)},\,v_{3})^{\top}\), \(\mathbf{w}=(\mathbf{w}^{(1,2)},\,w_{3})^{\top}\), \(\mathbf{f}_{\pm}=(\mathbf{f}_{\pm}^{(1,2)},\,f_{\pm}^{3})^{\top}\) and \(\mathbf{g}_{\pm}=(\mathbf{g}_{\pm}^{(1,2)},\,g_{\pm}^{3})^{\top}\). Then the transmission eigenvalue problem for \((\mathbf{v},\mathbf{w})\in H^{1}\big{(}\mathcal{C}^{\prime}_{\mathbf{x}^{ \prime}_{c},h}\times(-M,M)\big{)}^{3}\times H^{1}\big{(}\mathcal{C}^{\prime} _{\mathbf{x}^{\prime}_{c},h}\times(-M,M)\big{)}^{3}:\)_ \[\begin{cases}\mathcal{L}\,\mathbf{v}=\mathbf{0},&\mathcal{L}\,\mathbf{w}= \mathbf{0}&\text{in}&\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\times(-M,M),\\ \mathbf{v}-\mathbf{w}=\mathbf{f}_{+},\,\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{v} -\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{w}=\mathbf{g}_{+}&\text{on}&\Gamma^{ \prime+}_{\mathbf{x}^{\prime}_{c},h}\times(-M,M),\\ \mathbf{v}-\mathbf{w}=\mathbf{f}_{-},\,\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{v} -\mathcal{T}_{\boldsymbol{\nu}}\,\mathbf{w}=\mathbf{g}_{-}&\text{on}&\Gamma^{ \prime-}_{\mathbf{x}^{\prime}_{c},h}\times(-M,M)\end{cases}\] can be reduced to be_ \[\begin{cases}\widetilde{\mathcal{L}}\,\mathcal{P}(\mathbf{v})(\mathbf{x}^{\prime} )=\mathcal{G}_{1}(\mathbf{x}^{\prime}),&\widetilde{\mathcal{L}}\,\mathcal{P}( \mathbf{w})(\mathbf{x}^{\prime})=\mathcal{G}_{2}(\mathbf{x}^{\prime}),&\mathbf{ x}^{\prime}\in\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{P}(\mathbf{v})(\mathbf{x}^{\prime})=\mathcal{P}(\mathbf{w})(\mathbf{ x}^{\prime})+\mathcal{P}(\mathbf{f}_{+})(\mathbf{x}^{\prime}),&\mathbf{x}^{\prime}\in \Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{P}(\mathbf{v})(\mathbf{x}^{\prime})=\mathcal{P}(\mathbf{w})(\mathbf{ x}^{\prime})+\mathcal{P}(\mathbf{f}_{-})(\mathbf{x}^{\prime}),&\mathbf{x}^{\prime}\in \Gamma^{\prime-}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{R}_{1}^{+}=\mathcal{R}_{2}^{+}+\mathcal{P}(\mathbf{g}_{+})(\mathbf{x }^{\prime}),&\mathbf{x}^{\prime}\in\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c}, h},\\ \mathcal{R}_{1}^{-}=\mathcal{R}_{2}^{-}+\mathcal{P}(\mathbf{g}_{-})(\mathbf{x }^{\prime}),&\mathbf{x}^{\prime}\in\Gamma^{\prime-}_{\mathbf{x}^{\prime}_{c}, h},\end{cases} \tag{4.20}\] _where_ \[\mathcal{G}_{1}=-\int_{x_{c}^{2}-L}^{x_{c}^{3}+L}\phi^{\prime\prime }(x_{3})\begin{bmatrix}\lambda\mathbf{v}^{(1,2)}(\mathbf{x})\\ (2\lambda+\mu)v_{3}(\mathbf{x})\end{bmatrix}\mathrm{d}x_{3}+(\lambda+\mu)\int_{ 
x_{c}^{2}-L}^{x_{c}^{3}+L}\phi^{\prime}(x_{3})\begin{bmatrix}\nabla v_{3}( \mathbf{x})\\ \partial_{1}v_{1}(\mathbf{x})+\partial_{2}v_{2}(\mathbf{x})\end{bmatrix} \mathrm{d}x_{3},\] \[\mathcal{R}_{1}^{+}=\begin{bmatrix}\mathcal{T}_{\boldsymbol{\nu} _{M}}\mathcal{P}(\mathbf{v}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}v_{1}) \boldsymbol{\nu}_{M}\\ \mu\partial_{\boldsymbol{\nu}_{M}}\mathcal{P}(v_{3})+\mu\begin{bmatrix} \mathcal{P}(\partial_{3}v_{1})\\ \mathcal{P}(\partial_{3}v_{2})\end{bmatrix}\cdot\boldsymbol{\nu}_{M}\end{bmatrix}, \mathcal{R}_{2}^{+}=\begin{bmatrix}\mathcal{T}_{\boldsymbol{\nu}_{M}} \mathcal{P}(\mathbf{w}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}w_{3}) \boldsymbol{\nu}_{M}\\ \mu\partial_{\boldsymbol{\nu}_{M}}\mathcal{P}(w_{3})+\mu\begin{bmatrix} \mathcal{P}(\partial_{3}w_{1})\\ \mathcal{P}(\partial_{3}w_{2})\end{bmatrix}\cdot\boldsymbol{\nu}_{M}\end{bmatrix} \text{on}\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h},\] \[\mathcal{R}_{1}^{-}=\begin{bmatrix}\mathcal{T}_{\boldsymbol{\nu} _{m}}\mathcal{P}(\mathbf{v}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}v_{3}) \boldsymbol{\nu}_{m}\\ \mu\partial_{\boldsymbol{\nu}_{m}}\mathcal{P}(v_{3})+\mu\begin{bmatrix} \mathcal{P}(\partial_{3}v_{1})\\ \mathcal{P}(\partial_{3}v_{2})\end{bmatrix}\cdot\boldsymbol{\nu}_{m}\end{bmatrix},\mathcal{R}_{2}^{-}=\begin{bmatrix}\mathcal{T}_{\boldsymbol{\nu}_{m}} \mathcal{P}(\mathbf{w}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}w_{3}) \boldsymbol{\nu}_{m}\\ \mu\partial_{\boldsymbol{\nu}_{m}}\mathcal{P}(w_{3})+\mu\begin{bmatrix} \mathcal{P}(\partial_{3}w_{1})\\ \mathcal{P}(\partial_{3}w_{2})\end{bmatrix}\cdot\boldsymbol{\nu}_{m}\end{bmatrix} \text{on}\Gamma^{\prime-}_{\mathbf{x}^{\prime}_{c},h}.\] _Here, \(\boldsymbol{\nu}_{M}\) and \(\boldsymbol{\nu}_{m}\) denote the exterior unit normal vector to \(\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h}\) and \(\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h}\), respectively, \(\mathcal{T}_{\boldsymbol{\nu}}\) is the two-dimensional boundary traction operator._ By applying the decomposition of \(\widetilde{\mathcal{L}}\) given in (4.19), it is direct to obtain the following results. Here, we omit the proof. 
**Lemma 4.7**.: _Under the same setup in Lemma 4.6, the transmission system (4.20) is equivalent to the following two PDE systems_ \[\begin{cases}\mathcal{L}_{\mathcal{P}}\,\mathcal{P}(\mathbf{v}^{(1,2)})( \mathbf{x}^{\prime})=\mathcal{G}_{1}^{(1,2)}(\mathbf{x}^{\prime})&\text{in}& \mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{L}_{\mathcal{P}}\,\mathcal{P}(\mathbf{w}^{(1,2)})(\mathbf{x}^{\prime})= \mathcal{G}_{2}^{(1,2)}(\mathbf{x}^{\prime})&\text{in}&\mathcal{C}^{\prime}_{ \mathbf{x}^{\prime}_{c},h},\\ \mathcal{P}(\mathbf{v}^{(1,2)})(\mathbf{x}^{\prime})=\mathcal{P}(\mathbf{w}^{(1,2 )})(\mathbf{x}^{\prime})+\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{x}^{ \prime})&\text{on}&\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{P}(\mathbf{v}^{(1,2)})(\mathbf{x}^{\prime})=\mathcal{P}(\mathbf{w}^{(1,2 )})(\mathbf{x}^{\prime})+\mathcal{P}(\mathbf{f}_{-}^{(1,2)})(\mathbf{x}^{ \prime})&\text{on}&\Gamma^{\prime-}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{R}_{1}^{(1,2)}=\mathcal{R}_{2}^{(1,2)}+\mathcal{P}(\mathbf{g}_{+}^{(1,2 )})(\mathbf{x}^{\prime})&\text{on}&\Gamma^{\prime+}_{\mathbf{x}^{\prime}_{c},h},\\ \mathcal{R}_{1}^{(1,2)}=\mathcal{R}_{2}^{(1,2)}+\mathcal{P}(\mathbf{g}_{-}^{(1,2 )})(\mathbf{x}^{\prime})&\text{on}&\Gamma^{\prime-}_{\mathbf{x}^{\prime}_{c},h} \end{cases} \tag{4.21}\] _and_ \[\begin{cases}\lambda\Delta^{\prime}\,\mathcal{P}(v_{3})(\mathbf{x}^{\prime})= \mathcal{G}_{1}^{(3)}(\mathbf{x}^{\prime})&\text{in}\ \ \ \mathcal{C}_{\mathbf{x}_{c}^{\prime},h}^{\prime},\\ \lambda\Delta^{\prime}\,\mathcal{P}(w_{3})(\mathbf{x}^{\prime})=\mathcal{G}_{2 }^{(3)}(\mathbf{x}^{\prime})&\text{in}\ \ \ \mathcal{C}_{\mathbf{x}_{c}^{\prime},h}^{\prime},\\ \mathcal{P}(v_{3})(\mathbf{x}^{\prime})=\mathcal{P}(w_{3})(\mathbf{x}^{\prime })+\mathcal{P}(f_{+}^{3})(\mathbf{x}^{\prime})&\text{on}\ \ \ \mathcal{I}_{\mathbf{x}_{c}^{\prime},h}^{\prime+},\\ \mathcal{P}(v_{3})(\mathbf{x}^{\prime})=\mathcal{P}(w_{3})(\mathbf{x}^{\prime })+\mathcal{P}(f_{-}^{3})(\mathbf{x}^{\prime})&\text{on}\ \ \ \mathcal{I}_{\mathbf{x}_{c}^{\prime},h}^{\prime-},\\ \partial_{\boldsymbol{\nu}_{M}}\mathcal{P}(v_{3})(\mathbf{x}^{\prime})= \partial_{\boldsymbol{\nu}_{M}}\mathcal{P}(w_{3})(\mathbf{x}^{\prime})+\frac{ 1}{\mu}\mathcal{P}(g_{+}^{3})(\mathbf{x}^{\prime})&\text{on}\ \ \ \mathcal{I}_{\mathbf{x}_{c}^{\prime},h}^{\prime+},\\ \partial_{\boldsymbol{\nu}_{m}}\mathcal{P}(v_{3})(\mathbf{x}^{\prime})= \partial_{\boldsymbol{\nu}_{m}}\mathcal{P}(w_{3})(\mathbf{x}^{\prime})+\frac{ 1}{\mu}\mathcal{P}(g_{-}^{3})(\mathbf{x}^{\prime})&\text{on}\ \ \ \mathcal{I}_{\mathbf{x}_{c}^{\prime},h}^{\prime-},\end{cases} \tag{4.22}\] _where_ \[\mathcal{G}_{1}^{(1,2)}=-\lambda\int_{x_{c}^{3}-L}^{x_{c}^{3}+L} \phi^{\prime\prime}(x_{3})\mathbf{v}^{(1,2)}(\mathbf{x})\mathrm{d}x_{3}+( \lambda+\mu)\int_{x_{c}^{3}-L}^{x_{c}^{3}+L}\phi^{\prime}(x_{3})\nabla v_{3}( \mathbf{x})\mathrm{d}x_{3},\] \[\mathcal{G}_{2}^{(1,2)}=-\lambda\int_{x_{c}^{3}-L}^{x_{c}^{3}+L} \phi^{\prime\prime}(x_{3})\mathbf{w}^{(1,2)}(\mathbf{x})\mathrm{d}x_{3}+( \lambda+\mu)\int_{x_{c}^{3}-L}^{x_{c}^{3}+L}\phi^{\prime}(x_{3})\nabla w_{3}( \mathbf{x})\mathrm{d}x_{3},\] \[\mathcal{G}_{1}^{(3)}=-(2\lambda+\mu)\int_{x_{c}^{3}-L}^{x_{c}^{3 }+L}\phi^{\prime\prime}(x_{3})v_{3}(\mathbf{x})\mathrm{d}x_{3}+(\lambda+\mu) \int_{x_{c}^{3}-L}^{x_{c}^{3}+L}\phi^{\prime}(x_{3})(\partial_{1}v_{1}+ \partial_{2}v_{2})\mathrm{d}x_{3},\] \[\mathcal{G}_{2}^{(3)}=-(2\lambda+\mu)\int_{x_{c}^{3}-L}^{x_{c}^{3 }+L}\phi^{\prime\prime}(x_{3})w_{3}(\mathbf{x})\mathrm{d}x_{3}+(\lambda+\mu) 
\int_{x_{c}^{3}-L}^{x_{c}^{3}+L}\phi^{\prime}(x_{3})(\partial_{1}w_{1}+ \partial_{2}w_{2})\mathrm{d}x_{3},\] \[\mathcal{R}_{1}^{+(1,2)}=\mathcal{T}_{\boldsymbol{\nu}_{M}} \mathcal{P}(\mathbf{v}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}v_{3})\, \boldsymbol{\nu}_{M},\ \mathcal{R}_{2}^{+(1,2)}=\mathcal{T}_{\boldsymbol{\nu}_{M}} \mathcal{P}(\mathbf{w}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}w_{3})\quad \text{on}\ \Gamma_{\mathbf{x}_{c}^{\prime},h}^{\prime+},\] \[\mathcal{R}_{1}^{-(1,2)}=\mathcal{T}_{\boldsymbol{\nu}_{m}} \mathcal{P}(\mathbf{v}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}v_{3})\, \boldsymbol{\nu}_{m},\ \mathcal{R}_{2}^{-(1,2)}=\mathcal{T}_{\boldsymbol{\nu}_{m}} \mathcal{P}(\mathbf{w}^{(1,2)})+\lambda\mathcal{P}(\partial_{3}w_{3})\quad \text{on}\ \Gamma_{\mathbf{x}_{c}^{\prime},h}^{\prime-}.\] To obtain the continuity of \(\mathcal{P}(f_{+}^{3})\) and \(\mathcal{P}(f_{-}^{3})\), \(\mathcal{P}(g_{+}^{3})\) and \(\mathcal{P}(g_{-}^{3})\) at a 3D edge corner in the conventional or rotational sense. We also use the CGO solution \(u_{0}\) introduced in [5]. To be more precise, such CGO solution possesses a similar form and properties like Lemma 4.1-Lemma 4.9 (cf. [12]). For the reader's sake, we include these properties here. **Lemma 4.8**.: _[_5_, Lemma 2.2]_ _Let \(\mathbf{x}^{\prime}=(x_{1},x_{2})^{\top}=r(\cos\theta,\sin\theta)^{\top}\in \mathbb{R}^{2}\) and \(s\in\mathbb{R}_{+}\),_ \[u_{0}(s\mathbf{x}^{\prime}):=\exp(-\sqrt{sr}\mu(\theta)), \tag{4.23}\] _where \(\mu(\cdot)\) is defined in Lemma 4.9. Then \(s\longmapsto u_{0}(s\mathbf{x}^{\prime})\) decays exponentially in \(\mathbb{R}_{+}\) and_ \[\Delta^{\prime}u_{0}=0\quad\text{in}\quad\mathbb{R}^{2}\backslash\mathbb{R}_{0,-}^{2} \tag{4.24}\] _where \(\mathbb{R}_{0,-}^{2}:=\{\mathbf{x}^{\prime}\in\mathbb{R}^{2}|\mathbf{x}^{\prime}=( x_{1},x_{2});x_{1}\leq 0,x_{2}=0\}\). Moreover,_ \[\int_{\mathcal{K}_{\mathcal{P}}}u_{0}(s\mathbf{x}^{\prime})\mathrm{d}\mathbf{x}^{ \prime}=6\mathrm{i}(e^{-2\theta_{M}\mathrm{i}}-e^{-2\theta_{m}\mathrm{i}})s^{-2} \tag{4.25}\] _and for \(\alpha,\,s>0\) and \(h>0\)_ \[\int_{\mathcal{K}_{\mathcal{P}}}|u_{0}(s\mathbf{x}^{\prime})||\mathbf{x}^{\prime }|^{\alpha}\mathrm{d}\mathbf{x}^{\prime}\leq\frac{2(\theta_{M}-\theta_{m})\Gamma(2 \alpha+4)}{\delta_{\mathcal{K}_{\mathcal{P}}}^{2\alpha+4}}s^{-\alpha-2},\] \[\int_{\mathcal{K}_{\mathcal{P}}\backslash B_{h}}|u_{0}(s\mathbf{x}^{\prime})| \mathrm{d}\mathbf{x}^{\prime}\leq\frac{6(\theta_{M}-\theta_{m})}{\delta_{ \mathcal{K}_{\mathcal{P}}}^{4}}s^{-2}e^{-\frac{\sqrt{\pi h}}{2}\delta_{\mathcal{ K}_{\mathcal{P}}}}, \tag{4.26}\] _where \(\mathcal{K}_{\mathcal{P}}\) is defined like \(\mathcal{K}\) in Section 3 and \(\delta_{\mathcal{K}_{\mathcal{P}}}=\min\limits_{\theta_{m}<\theta<\theta_{M}} \cos\frac{\theta}{2}\) is a positive constant._ **Lemma 4.9**.: _[_12_, Lemma 2.4]_ _For any \(\alpha>0\), if \(\omega(\theta)>0\), then we have_ \[\lim\limits_{s\to+\infty}\int_{0}^{h}r^{\alpha}e^{-\sqrt{\pi}\omega(\theta)} \mathrm{d}\mathbf{r}=\mathcal{O}(s^{-\alpha-1}).\] **Lemma 4.10**.: _[_12_, Lemma 2.3]_ _Let \(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\) be defined as in (3.3) and \(u_{0}\) be given in (4.23). Then \(u_{0}\in H^{1}(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h})^{2}\) and \(\Delta^{\prime}u_{0}=0\) in \(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}\). 
Furthermore, it holds that_ \[\left\|u_{0}\right\|_{L^{2}(\mathcal{C}^{\prime}_{\mathbf{x}^{\prime}_{c},h}) ^{2}}\leq\frac{\sqrt{\theta_{M}-\theta_{m}}e^{-2\sqrt{s\Theta}\delta_{\mathcal{ K}_{\mathcal{P}}}}h^{2}}{2}\] _and_ \[\left\||\mathbf{x}^{\prime}|^{\alpha}u_{0}\right\|_{L^{2}(\mathcal{C}^{\prime }_{\mathbf{x}^{\prime}_{c},h})^{2}}^{2}\leq s^{-2(\alpha+1)}\frac{2(\theta_{M} -\theta_{m})\Gamma(4\alpha+4)}{(4\delta_{\mathcal{K}_{\mathcal{P}}})^{2\alpha +2}},\] _where \(\Theta\in[0,h]\) and \(\delta_{\mathcal{K}_{\mathcal{P}}}\) is given in Lemma 4.8._ We proceed to derive a key proposition to establish one main geometric result, which is a three-dimensional result similar to Proposition 4.1. **Proposition 4.2**.: _Consider the same setup in Lemma 4.6 with \(\mathbf{x}_{c}\) coinciding with the origin. Assume that \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M) \big{)}^{3}\bigcap H^{\frac{1}{2}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M) \big{)}^{3}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M)\big{)} ^{3}\bigcap H^{-\frac{1}{2}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M)\big{)} ^{3}\) are independent of \(x_{3}\), where \(\alpha_{j},\beta_{j}\) are in \((0,1)\) and \(j=+,-\). Let \(\mathbf{v}\in H^{1}\left(\mathcal{C}^{\prime}_{h}\times(-M,M)\right)^{3}\) and \(\mathbf{w}\in H^{1}\left(\mathcal{C}^{\prime}_{h}\times(-M,M)\right)^{3}\) satisfy the PDE system (4.20). Let \(\mathbf{g}_{j}(\mathbf{0})=(\mathbf{g}_{j}^{(1,2)}(\mathbf{0})^{\top},g_{j}^{ 3}(\mathbf{0}))^{\top}\) for \(j=+,-\). Then we have_ \[\mathbf{f}_{+}(\mathbf{0})=\mathbf{f}_{-}(\mathbf{0}),\quad\mathbf{g}_{+}^{(1,2)}(\mathbf{0})=W\mathbf{g}_{-}^{(1,2)}(\mathbf{0})\quad\text{and}\quad g_{+} ^{3}(\mathbf{0})=g_{-}^{3}(\mathbf{0})=0, \tag{4.27}\] _where \(W=\begin{bmatrix}-\cos(\theta_{M}-\theta_{m}),&-\sin(\theta_{M}-\theta_{m}) \\ -\sin(\theta_{M}-\theta_{m}),&+\cos(\theta_{M}-\theta_{m})\end{bmatrix}\), \(\theta_{M}\) and \(\theta_{m}\) are the arguments corresponding to the boundaries \(\Gamma^{\prime+}_{h}\) and \(\Gamma^{\prime-}_{h}\) respectively._ Proof.: Similar to the proof of Proposition 4.1, we only consider the corresponding proofs for \((\Re\mathcal{P}(\mathbf{v}),\Re\mathcal{P}(\mathbf{w}))\). We follow similar arguments in the proof of Proposition 4.1 with some necessary modifications. We divide the proof into two parts. **Part I.** We first shall prove that \[\Re\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{0})=\Re\mathcal{P}(\mathbf{f}_ {-}^{(1,2)})(\mathbf{0})\quad\text{and}\quad\Re\mathcal{P}(\mathbf{g}_{+}^{(1,2)})(\mathbf{0})=W\Re\mathcal{P}(\mathbf{g}_{-}^{(1,2)})(\mathbf{0}). \tag{4.28}\] In this part, we consider the PDE system (4.21). Noting that \(\mathbf{f}_{j}\in C^{1,\alpha_{j}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M) \big{)}^{3}\) and \(\mathbf{g}_{j}\in C^{\beta_{j}}\big{(}\Gamma^{\prime j}_{h}\times(-M,M)\big{)} ^{3}\) for \(j=+,-\). 
Since \(\mathbf{f}_{j}\) and \(\mathbf{g}_{j}\) do not depend on \(x_{3}\), we have the following expansions \[\mathcal{P}(\mathbf{f}_{j})(\mathbf{x}^{\prime}) =\mathcal{P}(\mathbf{f}_{j})(\mathbf{0})+\delta\mathcal{P}(\mathbf{ f}_{j})(\mathbf{x}^{\prime}),\quad\left|\delta\mathcal{P}(\mathbf{f}_{j})( \mathbf{x}^{\prime})\right|\leq A_{j}|\mathbf{x}^{\prime}|^{1+\alpha_{j}},\] \[\mathcal{P}(\mathbf{g}_{j})(\mathbf{x}^{\prime}) =\mathcal{P}(\mathbf{g})(\mathbf{0})+\delta\mathcal{P}(\mathbf{g})( \mathbf{x}^{\prime}),\quad\left|\delta\mathcal{P}(\mathbf{g})(\mathbf{x}^{\prime}) \right|\leq B_{j}|\mathbf{x}^{\prime}|^{\beta_{j}}. \tag{4.29}\] By a series of similar derivations in Proposition 4.1, we deduce the following integral identity \[\mu\,\Re\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{0})\cdot\begin{bmatrix} \mathrm{i}\\ -1\end{bmatrix}\left(e^{-s\sqrt{h}\,\mu(\theta_{M})}-1\right)+\mu\,\Re\mathcal{P }(\mathbf{f}_{-}^{(1,2)})(\mathbf{0})\cdot\begin{bmatrix}\mathrm{-i}\\ +1\end{bmatrix}\left(e^{-s\sqrt{h}\,\mu(\theta_{m})}-1\right)\] \[-2s^{-2}\Re\mathcal{P}(\mathbf{g}_{+}^{(1,2)})(\mathbf{0})\cdot \begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\mu^{-2}(\theta_{M})-2s^{-2}\Re\mathcal{P}(\mathbf{g}_ {-}^{(1,2)})(\mathbf{0})\cdot\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\mu^{-2}(\theta_{m})=\sum_{j=1}^{8}Q_{j}, \tag{4.30}\] where \(\mu(\cdot)\) is given by Lemma 4.9. These \(\Re\mathbf{f}_{j}(\mathbf{0})\), \(\Re\mathbf{g}_{j}(\mathbf{0})\), \(\Re\delta\mathbf{f}_{j}\), \(\Re\delta\mathbf{g}_{j}\), \(\Re\mathbf{v}\) and \(\Re\mathbf{w}\) in \(R_{1}\)-\(R_{7}\) given by (4.12) are replacing by \(\Re\mathcal{P}(\mathbf{f}_{j}^{(1,2)})(\mathbf{0})\), \(\Re\mathcal{P}(\mathbf{g}_{j}^{(1,2)})(\mathbf{0})\), \(\delta\Re\mathcal{P}(\mathbf{f}_{j}^{(1,2)})\), \(\delta\Re\mathcal{P}(\mathbf{g}_{j}^{(1,2)})\), \(\Re\mathcal{P}(\mathbf{g}_{j}^{(1,2)})\), \(\Re\mathcal{P}(\mathbf{v}^{(1,2)})\) and \(\Re\mathcal{P}(\mathbf{w}^{(1,2)})\) for \(j=+,-\). In addition, \[Q_{8}=-\int_{\mathcal{C}_{h}^{\prime}}\left(\Re\mathcal{G}_{1}^{(1,2)}( \mathbf{x}^{\prime})-\Re\mathcal{G}_{2}^{(1,2)}(\mathbf{x}^{\prime})\right) \cdot\mathbf{u}_{0}\,\mathrm{d}\mathbf{x}^{\prime}.\] Similar to Proposition 4.1, we have the following estimates \[\begin{array}{ll}\left|Q_{1}\right|=\mathcal{O}(s^{-1}e^{-c_{1}^{\prime}s}),&\left|Q_{2}\right|=\mathcal{O}(s^{-1}e^{-c_{2}^{\prime}s}),&\left|Q_{3} \right|=\mathcal{O}(e^{-c_{3}^{\prime}s}),&\left|Q_{4}\right|=\mathcal{O}(s^{ -2\alpha_{+}-2}),\\ \left|Q_{5}\right|=\mathcal{O}(s^{-2\alpha_{-}-2}),&\left|Q_{6}\right|= \mathcal{O}(s^{-2\beta_{+}-2}),&\left|Q_{7}\right|=\mathcal{O}(s^{-2\beta_{- }-2}),\end{array}\] where these constants \(c_{1}^{\prime}\), \(c_{2}^{\prime}\) and \(c_{3}^{\prime}\) do not depend on \(s\), \(\alpha_{+},\alpha_{-}\), \(\beta_{+}\), \(\beta_{-}\in(0,1)\). From the expressions of \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) defined in (4.20), denote \(\Re\mathcal{G}_{1}^{(1,2)}-\Re\mathcal{G}_{2}^{(1,2)}:=\mathbf{h}_{1}+ \mathbf{h}_{2}\). Here, \(\mathbf{h}_{1}=-\int_{-L}^{+L}\phi^{\prime\prime}(x_{3})\begin{bmatrix} \lambda(\Re v_{1}-\Re w_{1})\\ \lambda(\Re v_{2}-\Re w_{2})\end{bmatrix}\,\mathrm{d}x_{3}\) and \(\mathbf{h}_{2}=(\lambda+\mu)\int_{-L}^{+L}\phi^{\prime}(x_{3})\begin{bmatrix} \partial_{1}(\Re v_{3}-\Re w_{3})\\ \partial_{2}(\Re v_{3}-\Re w_{3})\end{bmatrix}\,\mathrm{d}x_{3}\). 
By the regularities of \(\mathbf{v}\) and \(\mathbf{w}\) in \(\mathcal{C}_{h}^{\prime}\times(-M,M)\), we directly obtain that \(\Re\mathcal{G}_{1}^{(1,2)}\in H^{1}(\mathcal{C}_{h}^{\prime})^{2}\) and \(\Re\mathcal{G}_{2}^{(1,2)}\in L^{2}(\mathcal{C}_{h}^{\prime})^{2}\). By using the Cauchy-Schwarz inequality and the first equation in Lemma (4.3), we prove \[\left|\int_{\mathcal{C}_{h}^{\prime}}(\Re\mathcal{G}_{1}^{(1,2)}- \Re\mathcal{G}_{2}^{(1,2)})\cdot\mathbf{u}_{0}\,\mathrm{d}x_{3}\right|=\left| \int_{\mathcal{C}_{h}^{\prime}}(\mathbf{h}_{1}+\mathbf{h}_{2})\cdot\mathbf{u} _{0}\,\mathrm{d}x_{3}\right|\] \[\quad\leq\|\mathbf{h}_{1}\|_{L^{2}(\mathcal{C}_{h}^{\prime})^{2}} \|\mathbf{u}_{0}\|_{L^{2}(\mathcal{C}_{h}^{\prime})^{2}}+\|\mathbf{h}_{2}\|_{L ^{2}(\mathcal{C}_{h}^{\prime})^{2}}\|\mathbf{u}_{0}\|_{L^{2}(\mathcal{C}_{h}^{ \prime})^{2}}\] \[\quad\leq C^{\prime}e^{-c_{3}^{\prime}s},\] where \(C^{\prime}>0\) and \(c_{3}^{\prime}>0\) do not depend on \(s\). From (4.30), we have \[\begin{bmatrix}-\mathrm{i}\\ 1\end{bmatrix}\cdot\left(\Re\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{0})- \Re\mathcal{P}(\mathbf{f}_{-}^{(1,2)})(\mathbf{0})\right)=0\quad\text{as}\quad s \rightarrow+\infty.\] Noticing that \(\Re\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{0})\) and \(\Re\mathcal{P}(\mathbf{f}_{-}^{(1,2)})(\mathbf{0})\) are real, we can prove \[\Re\mathcal{P}(\mathbf{f}_{+}^{(1,2)})(\mathbf{0})=\Re\mathcal{P}(\mathbf{f}_{ -}^{(1,2)})(\mathbf{0}).\] Substituting the above equation into (4.30), then multiplying the new identity by \(s^{2}\), we get \[\mu\,s^{2}\Re\mathcal{P}(\mathbf{f}_{+}(\mathbf{0}))\cdot\begin{bmatrix} \mathrm{i}\\ -1\end{bmatrix}\left(e^{-s\sqrt{h}\mu(\theta_{M})}-e^{-s\sqrt{h}\mu(\theta_{m}) }\right)-2\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\cdot\left(\frac{\Re\mathcal{P}(\mathbf{g}_{+}(\mathbf{0}))}{ \mu^{2}(\theta_{M})}+\frac{\Re\mathcal{P}(\mathbf{g}_{-}(\mathbf{0}))}{\mu^{2} (\theta_{m})}\right)=\sum_{j=1}^{8}s^{2}\,Q_{j}.\] We note that the first term on the left hand of the last equation is bounded by \(s^{2}e^{-c_{7}^{\prime}s}\) with \(c_{7}^{\prime}=\sqrt{h}\min\{\cos\frac{\theta_{M}}{2},\cos\frac{\theta_{m}}{2}\}>0\). As \(s\) tends to \(+\infty\), we obtain \[\begin{bmatrix}1\\ \mathrm{i}\end{bmatrix}\cdot\left(\frac{\Re\mathcal{P}(\mathbf{g}_{+}(\mathbf{0})) }{\mu^{2}(\theta_{M})}+\frac{\Re\mathcal{P}(\mathbf{g}_{-}(\mathbf{0}))}{\mu^{2} (\theta_{m})}\right)=0.\] By the same method of proving the first equation in (4.4), we can prove that the second equation in (4.28) holds. **Part II.** We next shall prove that \[\Re{\mathcal{P}}(f_{+}^{3})({\bf 0})=\Re{\mathcal{P}}(f_{-}^{3})({\bf 0})\quad \text{and}\quad\Re{\mathcal{P}}(g_{+}^{3})({\bf 0})=\Re{\mathcal{P}}(g_{-}^{3})({\bf 0 })=0. \tag{4.31}\] In this part, we consider the PDE system (4.22). 
Let us deduce some similar operations above by using the CGO solution \(u_{0}\) given in (4.23), we set up an integral identity as follows \[\frac{1}{\lambda}\int_{{\mathcal{C}}_{h}^{\prime}}\big{(}\Re{ \mathcal{G}}_{1}^{(3)}({\bf x}^{\prime})-\Re{\mathcal{G}}_{2}^{(3)}({\bf x}^{ \prime})\big{)}\,u_{0}{\rm d}{\bf x}^{\prime}=\int_{\Lambda_{h}^{\prime}} \partial_{\boldsymbol{\nu}}{\mathcal{P}}(v_{3}-w_{3})\,u_{0}-\partial_{ \boldsymbol{\nu}}u_{0}\,{\mathcal{P}}(v_{3}-w_{3}){\rm d}\sigma\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\int_{\Gamma_{h}^{\prime-} }\boldsymbol{\nu}_{m}\cdot\big{(}\Re{\mathcal{P}}(f_{-}^{(1,2)})({\bf x}^{ \prime})+\frac{1}{\mu}\Re{\mathcal{P}}(g_{-}^{(3)})\big{)}\,u_{0}-\Re{ \mathcal{P}}(f_{-}^{(3)})\,\partial_{\boldsymbol{\nu}_{m}}u_{0}{\rm d}\sigma.\] Due to the expansions of \({\bf f}_{\pm}\) and \({\bf g}_{\pm}\) in (4.29), the above integral identity can be reduced into \[\lambda\,\mu\,{\rm i}\,\Big{(}\Re{\mathcal{P}}(f_{+}^{3})(0)-\Re{\mathcal{P} }(f_{-}^{3})(0)\Big{)}-2\lambda\,s^{-1}\bigg{(}\frac{\Re{\mathcal{P}}(g_{+}^{3 })(0)}{\mu^{2}(\theta_{M})}+\frac{\Re{\mathcal{P}}(g_{-}^{3})(0)}{\mu^{2}( \theta_{m})}\bigg{)}=\sum_{j=1}^{9}M_{j},\] where \[M_{1} =2\lambda\,s^{-1}\Re{\mathcal{P}}(g_{+}^{3})({\bf 0})\Big{(}\mu^{- 1}(\theta_{M})\,\sqrt{sh}\,e^{-\sqrt{sh}\,\mu(\theta_{M})}+\mu^{-2}(\theta_{M })\,e^{-\sqrt{sh}\,\mu(\theta_{M})}\Big{)},\] \[M_{2} =2\lambda\,s^{-1}\Re{\mathcal{P}}(g_{-}^{3})({\bf 0})\Big{(}\mu^{- 1}(\theta_{m})\,\sqrt{sh}\,e^{-\sqrt{sh}\,\mu(\theta_{m})}+\mu^{-2}(\theta_{m })\,e^{-\sqrt{sh}\,\mu(\theta_{m})}\Big{)},\] \[M_{3} =-\lambda\mu\,\int_{\Lambda_{h}^{\prime}}\partial_{\boldsymbol{ \nu}}u_{0}\,\,\Re{\mathcal{P}}(v_{3}-w_{3})({\bf x}^{\prime})-\partial_{ \boldsymbol{\nu}}u_{0}\,\,\Re{\mathcal{P}}(v_{3}-w_{3})({\bf x}^{\prime})\,{ \rm d}\sigma,\] \[M_{4} =\lambda\,\mu\,{\rm i}\,\bigg{(}\frac{\Re{\mathcal{P}}(f_{-}^{3 })(0)}{e^{\sqrt{sh}\,\mu(\theta_{m})}}-\frac{\Re{\mathcal{P}}(f_{+}^{(3)})(0)}{ e^{\sqrt{sh}\,\mu(\theta_{M})}}\bigg{)},\,\,M_{5}=\mu\int_{{\mathcal{C}}_{h}^{ \prime}}(t_{1}+t_{2})\,u_{0}{\rm d}{\bf x}^{\prime},\] \[M_{6} =\lambda\mu\,\int_{\Gamma_{h}^{\prime+}}\delta\Re{\mathcal{P}}(f _{+}^{3})({\bf x}^{\prime})\,\,\partial_{\boldsymbol{\nu}_{M}}u_{0}\,{\rm d} \sigma,\qquad M_{7}=\lambda\mu\,\int_{\Gamma_{h}^{\prime-}}\delta\Re{ \mathcal{P}}(f_{-}^{3})({\bf x}^{\prime})\,\,\partial_{\boldsymbol{\nu}_{m}}u_ {0}\,{\rm d}\sigma,\] \[M_{8} =-\lambda\int_{\Gamma_{h}^{\prime+}}\delta\Re{\mathcal{P}}(g_{+ }^{3})({\bf x}^{\prime})\,\,u_{0}\,{\rm d}\sigma,\qquad\qquad M_{9}=-\lambda \int_{\Gamma_{h}^{\prime-}}\delta\Re{\mathcal{P}}(g_{-}^{3})({\bf x}^{\prime}) \,\,u_{0}\,{\rm d}\sigma.\] Here, \[t_{1} =-(2\lambda+\mu)\Re\int_{-L}^{L}\phi^{\prime\prime}({\bf x}^{ \prime})(v_{3}-w_{3}){\rm d}x_{3},\] \[t_{2} =(\lambda+\mu)\Re\int_{-L}^{L}\phi^{\prime}({\bf x}^{\prime}) \big{(}\partial_{1}(v_{1}-w_{1})+\partial_{2}(v_{2}-w_{2})\big{)}{\rm d}x_{3}.\] Using those estimates list in Lemma 4.8-Lemma 4.10, we have \[\big{|}M_{1}\big{|} ={\mathcal{O}}(e^{-q_{1}\sqrt{s}}),\quad\big{|}M_{2}\big{|}={ \mathcal{O}}(e^{-q_{2}\sqrt{s}}),\quad\big{|}M_{3}\big{|}={\mathcal{O}}(s^{-1}e ^{-1/2hs}),\,\big{|}M_{4}\big{|}={\mathcal{O}}(s^{-q_{4}\sqrt{s}}),\] \[\big{|}M_{6}\big{|} ={\mathcal{O}}(s^{-\alpha_{+}-1}),\quad\big{|}M_{7}\big{|}={ \mathcal{O}}(e^{-\alpha_{-}-1}),\quad\big{|}M_{8}\big{|}={\mathcal{O}}(e^{- \beta_{+}-1}),\qquad\big{|}M_{9}\big{|}={\mathcal{O}}(e^{-\beta_{--}-1}),\] where these above constants do not depend on \(s\). 
Using the similar technique of estimating of \(Q_{8}\), we get \[\big{|}M_{5}\big{|}=\mathcal{O}(e^{-q_{5}s}),\quad\text{where}\quad q_{5}>0.\] Let \(s\to+\infty\), the first equation in (4.31) holds clearly, that is to say, \[\Re\mathcal{P}(f_{+}^{3})(\mathbf{0})=\Re\mathcal{P}(f_{-}^{3})(\mathbf{0}).\] Substituting the above equation into (4.30) and multiplying the new identity by \(s^{2}\), then letting \(s\to+\infty\), one can obtain that \[\frac{\Re\mathcal{P}(g_{+}^{(3)})(0)}{\mu^{2}(\theta_{M})}+\frac{\Re\mathcal{ P}(g_{-}^{(3)})(0)}{\mu^{2}(\theta_{m})}=0.\] It's worth noting that \(\frac{\mu^{2}(\theta_{M})}{\mu^{2}(\theta_{m})}=e^{\mathrm{i}(\theta_{M}- \theta_{m})}\) and \(\theta_{M}-\theta_{m}\in(0,\pi)\). Hence, \[\Re\mathcal{P}(g_{+}^{3})(\mathbf{0})=\Re\mathcal{P}(g_{-}^{3})(\mathbf{0})=0.\] Thanks to the symmetric role of \((\Re\mathbf{v},\Re\mathbf{w})\) and \((\Im\mathbf{v},\Im\mathbf{w})\), the two equations (4.28) and (4.31) directly lead to the results list in (4.27). ## 5. The proofs of Theorem 3.1-Theorem 3.3 Proof of Theorem 3.1.: We prove this theorem by contradiction. Assume that there exists a planar corner/3D edge corner \(\mathbf{x}_{c}\) on \(\mathcal{S}_{1}\Delta\mathcal{S}_{2}\). Without loss of generality, we assume that \(\mathbf{x}_{c}\) coincides with the origin \(\mathbf{0}\), \(\mathbf{0}\in\mathcal{S}_{\mathbf{1}}\) and \(\mathbf{0}\notin\mathcal{S}_{\mathbf{2}}\). Denote \(\mathbf{w}=\mathbf{u}_{1}-\mathbf{u}_{2}\). **Case 1**: \(n=2\). We note that \(\Gamma_{h}^{\pm}=\mathcal{S}_{1}\cap B_{h}\), \(\mathcal{C}_{h}=\Omega_{1}\cap B_{h}\) and \(\mathcal{C}_{h}\cap\Omega_{2}\neq\emptyset\) for sufficient small \(h\in\mathbb{R}_{+}\), where \(\Omega_{1}\) and \(\Omega_{2}\) are defined in a similar way as (2.8). Since \(\mathbf{u}_{1}\big{|}_{\Sigma_{0}}=\mathbf{u}_{2}\big{|}_{\Sigma_{0}}\) and \(\Sigma_{0}\subset\Sigma_{N}\), we know that \[\mathbf{w}=\mathbf{u}_{1}-\mathbf{u}_{2}=\mathcal{T}_{\boldsymbol{\nu}} \mathbf{u}_{1}-\mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}_{2}=\mathbf{0}\quad \text{on}\quad\Sigma_{0}.\] Let \(\mathbf{w}^{-}\) and \(\mathbf{w}^{+}\) represent \(\mathbf{w}\big{|}_{\Omega_{1}}\) and \(\mathbf{w}\big{|}_{\Omega\setminus\overline{\Omega}_{1}}\), respectively. With the help of the unique continuation principle properly and the fact that \(\mathbf{u}_{2}\) is real analytic in \(B_{h}\), it is clear to obtain \[\mathcal{L}\,\mathbf{w}^{-}=\mathbf{0}\text{ in }\mathcal{C}_{h},\ \mathbf{w}^{-} \big{|}_{\Gamma_{h}^{j}}=\mathbf{u}_{1}\big{|}_{\Gamma_{h}^{j}}-\mathbf{u}_{2 }\big{|}_{\Gamma_{h}^{j}}=-\mathbf{f}_{j}^{1},\ \mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}_{1}\big{|}_{\Gamma_{h}^{j}}- \mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}_{2}\big{|}_{\Gamma_{h}^{j}}=- \mathbf{g}_{j}^{1},\ j=+,-. \tag{5.1}\] From Proposition 4.1, we directly imply that \(\mathbf{f}_{+}^{1}(\mathbf{0})=\mathbf{f}_{-}^{1}(\mathbf{0})\) and \(\mathbf{g}_{+}^{1}(\mathbf{0})=W\mathbf{g}_{-}^{1}(\mathbf{0})\), where \(W\) is given in Proposition 4.1. This must contradict to the admissibility condition (5) in Definition 3.1. **Case 2**: \(n=3\). It is noted that \(\mathbf{0}=(\mathbf{0}^{\prime},0)^{\top}\in\Gamma_{h}^{\prime\pm}\times(-M,+ M)\subset\mathcal{S}_{1}\) is a 3D edge corner point and \(\mathcal{C}_{h}^{\prime}\times(-M,+M)\subset\Omega_{1}\) and \(\mathcal{C}_{h}^{\prime}\times(-M,+M)\cap\Omega_{2}=\emptyset\), where \(\Gamma_{h}^{\prime\pm}\) and \(\mathcal{C}_{h}^{\prime}\) are defined in (3.3) and \(\Omega_{1}\), \(\Omega_{2}\) are given by (2.8). 
Similar to the previous case, we get \[\begin{cases}\mathcal{L}\,\mathbf{w}^{-}=\mathbf{0},&\text{in}\quad\mathcal{C}_{h}^{\prime}\times(-M,M),\\ \mathbf{w}=-\mathbf{f}_{+}^{1}&\text{on}\quad\Gamma_{h}^{\prime+}\times(-M,M),\\ \mathbf{w}=-\mathbf{f}_{-}^{1}&\text{on}\quad\Gamma_{h}^{\prime-}\times(-M,M),\\ \mathcal{T}_{\boldsymbol{\nu}_{M}}\mathbf{w}=-\mathbf{g}_{+}^{1}&\text{on}\quad\Gamma_{h}^{\prime+}\times(-M,M),\\ \mathcal{T}_{\boldsymbol{\nu}_{m}}\mathbf{w}=-\mathbf{g}_{-}^{1}&\text{on}\quad\Gamma_{h}^{\prime-}\times(-M,M).\end{cases}\] The relations \(\mathbf{f}_{+}^{1}(\mathbf{0})=\mathbf{f}_{-}^{1}(\mathbf{0})\), \(\mathbf{g}_{+}^{1,(1,2)}(\mathbf{0})=W\mathbf{g}_{-}^{1,(1,2)}(\mathbf{0})\) and \(g_{+}^{1,3}(\mathbf{0})=g_{-}^{1,3}(\mathbf{0})=0\) then follow from Proposition 4.2, where \(W\) is given in Definition 3.1. This contradicts the admissibility condition (5) in Definition 3.1. The proof is complete. Proof of Theorem 3.2.: Firstly, we shall prove \(\mathcal{S}_{1}=\mathcal{S}_{2}\) by contradiction. Assume that \(\Omega_{1}\neq\Omega_{2}\). Since \(\Omega_{1}\) and \(\Omega_{2}\) are both convex polygons or polyhedra, there must exist a corner \(\mathbf{x}_{c}\) belonging to \(\Omega_{1}\Delta\Omega_{2}\), which contradicts Theorem 3.1; thus we have \(\Omega_{1}=\Omega_{2}\). This directly leads to \(\mathcal{S}_{1}=\mathcal{S}_{2}\). In what follows, we shall first prove (3.4) and (3.5) for the 2D case. Denote \(\hat{\Omega}:=\Omega_{1}=\Omega_{2}\) and \(\mathcal{S}:=\mathcal{S}_{1}=\mathcal{S}_{2}\). Let \(\mathbf{w}=\mathbf{u}_{1}-\mathbf{u}_{2}\). Since \(\mathbf{u}_{1}=\mathbf{u}_{2}\) on \(\Sigma_{0}\subset\Sigma_{N}\), as a direct consequence, we have \(\mathbf{w}=0\) and \(\mathcal{T}_{\boldsymbol{\nu}}\mathbf{w}=0\) on \(\Sigma_{0}\). By virtue of the unique continuation principle again, one obtains \[\mathbf{w}^{+}\big{|}_{\Gamma_{h}^{\pm}}=\mathcal{T}_{\boldsymbol{\nu}}\mathbf{w}^{+}\big{|}_{\Gamma_{h}^{\pm}}=\mathbf{0},\] where \(\mathbf{w}^{+}\) represents \(\mathbf{w}\big{|}_{\Omega\setminus\overline{\hat{\Omega}}}\). Since \((\mathcal{S};\mathbf{f}^{1},\mathbf{g}^{1})\) and \((\mathcal{S};\mathbf{f}^{2},\mathbf{g}^{2})\) are admissible, we get \[\mathcal{L}\,\mathbf{w}^{-}=\mathbf{0}\text{ in }\mathcal{C}_{h},\quad\mathbf{w}^{-}\big{|}_{\Gamma_{h}^{j}}=\mathbf{f}_{j}^{2}-\mathbf{f}_{j}^{1}\text{ and }\mathcal{T}_{\boldsymbol{\nu}}\mathbf{w}^{-}\big{|}_{\Gamma_{h}^{j}}=\mathbf{g}_{j}^{2}-\mathbf{g}_{j}^{1}\quad\text{for}\quad j=+,-, \tag{5.2}\] where \(\mathbf{w}^{-}\) signifies \(\mathbf{w}\big{|}_{\hat{\Omega}}\). From Proposition 4.1 and (5.2), we obtain the following local uniqueness \[\mathbf{f}_{+}^{2}(\mathbf{0})-\mathbf{f}_{+}^{1}(\mathbf{0})=\mathbf{f}_{-}^{2}(\mathbf{0})-\mathbf{f}_{-}^{1}(\mathbf{0})\quad\text{and}\quad\mathbf{g}_{+}^{1}(\mathbf{0})-\mathbf{g}_{+}^{2}(\mathbf{0})=W\Big{(}\mathbf{g}_{-}^{1}(\mathbf{0})-\mathbf{g}_{-}^{2}(\mathbf{0})\Big{)}.\] Furthermore, since \(\mathbf{f}_{i}\) and \(\mathbf{g}_{i}\) (\(i=1,2\)) are piecewise-constant functions, we immediately obtain (3.4) and (3.5). By a similar method, we can prove (3.4) and (3.5) in the 3D case. Proof of Theorem 3.3.: The argument for proving this theorem is similar to the one used in the proof of Theorem 3.2, with only some necessary modifications.
Let \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) be two different piecewise curves in \(\mathbb{R}^{2}\) or piecewise surfaces in \(\mathbb{R}^{3}\). From Definition 3.2, it directly follows that \(\mathcal{S}_{1}\Delta\mathcal{S}_{2}\) contains a planar or 3D edge corner. Under the condition (3.6), adopting an argument similar to the one used in proving Theorem 3.2, we can show that \(\mathcal{S}_{1}=\mathcal{S}_{2}\). Set \(\mathbf{w}=\mathbf{u}_{1}-\mathbf{u}_{2}\). Since \(\mathbf{u}_{1}=\mathbf{u}_{2}\) on \(\Sigma_{0}\subset\Sigma_{N}\), we have \(\mathbf{w}=\mathcal{T}_{\boldsymbol{\nu}}\mathbf{w}=\mathbf{0}\) on \(\Sigma_{0}\). By using the unique continuation property again, we conclude that \(\mathbf{w}=\mathbf{0}\) in \(\Omega\setminus\overline{\mathcal{S}}\). Hence, it directly follows that \[\mathbf{0}=[\mathbf{w}]_{\mathcal{S}_{1}}=[\mathbf{w}]_{\mathcal{S}_{2}}\, \Rightarrow\,[\mathbf{u}_{1}]_{\mathcal{S}_{1}}=[\mathbf{u}_{2}]_{\mathcal{S} _{2}}\,\text{and}\,[\mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}_{1}]_{\mathcal{S} _{1}}=[\mathcal{T}_{\boldsymbol{\nu}}\mathbf{u}_{2}]_{\mathcal{S}_{2}}\, \Rightarrow\,\mathbf{f}_{1}=\mathbf{f}_{2}\,\text{and}\,\mathbf{g}_{1}= \mathbf{g}_{2}.\] The proof is complete. ## Acknowledgment The work of H. Diao is supported by National Natural Science Foundation of China (No. 12371422) and the Fundamental Research Funds for the Central Universities, JLU (No. 93Z172023Z01). The work of H. Liu was supported by the Hong Kong RGC General Research Funds (projects 11311122, 11300821 and 12301420), NSF/RGC Joint Research Fund (project N_CityU101/21) and the ANR/RGC Joint Research Fund (project A_CityU203/19). The work of Q. Meng is supported by the Hong Kong RGC Postdoctoral Fellowship (No. 9061028).
2309.06435
Early time dynamics far from equilibrium via holography
We investigate the early time dynamics of heavy ion collisions by studying the time evolution of the energy-momentum tensor as well as energy-momentum correlations within a uniformly thermalizing holographic QGP. From these quantities, we suggest a far-from-equilibrium definition of shear viscosity, which is a crucial property of QCD matter as it significantly determines the generation of elliptic flow already at early times. During an exemplary initial heating phase of the holographic QGP the shear viscosity to entropy density ratio decreases to 60%, followed by an overshoot to 110% of the near-equilibrium value, $\eta/s=1/(4\pi)$. Implications for the QCD QGP are discussed. Subsequently, we consider a holographic QGP which is Bjorken-expanding. Its energy-momentum tensor components have a known hydrodynamic attractor to which all time evolutions collapse independent of the initial conditions. Based on this, we propose a definition for a far-from-equilibrium speed of sound, and analytically compute its hydrodynamic attractor. Subjecting this Bjorken-expanding plasma to an external magnetic field and an axial chemical potential, we study the chiral magnetic effect far from equilibrium.
Matthias Kaminski, Casey Cartwright, Marco Knipfer, Michael F. Wondrak, Björn Schenke, Marcus Bleicher
2023-09-12T17:56:02Z
http://arxiv.org/abs/2309.06435v1
# Early time dynamics far from equilibrium via holography ###### Abstract: We investigate the early time dynamics of heavy ion collisions by studying the time evolution of the energy-momentum tensor as well as energy-momentum correlations within a uniformly thermalizing holographic QGP. From these quantities, we suggest a far-from-equilibrium definition of shear viscosity, which is a crucial property of QCD matter as it significantly determines the generation of elliptic flow already at early times. During an exemplary initial heating phase of the holographic QGP the shear viscosity to entropy density ratio decreases to 60%, followed by an overshoot to 110% of the near-equilibrium value, \(\eta/s=1/(4\pi)\). Implications for the QCD QGP are discussed. Subsequently, we consider a holographic QGP which is Bjorken-expanding. Its energy-momentum tensor components have a known hydrodynamic attractor to which all time evolutions collapse independent of the initial conditions. Based on this, we propose a definition for a far-from-equilibrium speed of sound, and analytically compute its hydrodynamic attractor. Subjecting this Bjorken-expanding plasma to an external magnetic field and an axial chemical potential, we study the chiral magnetic effect far from equilibrium. Introduction One important practical and theoretical question is why relativistic hydrodynamics describes heavy-ion collision data far beyond its regime of applicability. In particular, hydrodynamics appears to be a valid description far away from local and global equilibrium, in the presence of large gradients, at very early times during the evolution of quark-gluon-plasma (QGP) after collisions of heavy ions or even heavy-light (Pb+p) and light-light (p+p) collisions [1]. In part, these points were confirmed in holographic plasma [2] in which numerical computation of all observables is possible at all times. Here, we report on the continued holographic exploration of the far-from-equilibrium regime of \(\mathcal{N}=4\) Super-Yang-Mills (SYM) theory. We use the holographic correspondence to compute three time-dependent quantities: the shear transport, the speed of sound, and the chiral magnetic current. ## 2 \(\eta/s\) far from equilibrium We intend to explore the early times after a heavy-ion collision during which the system is far from equilibrium. Near equilibrium, a Kubo formula relates the retarded momentum space shear correlator \(\tilde{G}_{R}^{xy,xy}=\langle T^{xy}T^{xy}\rangle\) at vanishing spatial momentum to the shear viscosity: \(\eta=-\lim_{\omega\to 0}\frac{1}{\omega}\mathrm{Im}\,\tilde{G}_{R}^{xy,xy}( \omega,\mathbf{k}=\mathbf{0})\). Here, we holographically compute \(\tilde{G}_{R}^{xy,xy}\) far from equilibrium and define a _far-from-equilibrium shear viscosity_ [3] \[\eta(t_{avg})=-\lim_{\omega\to 0}\frac{1}{\omega}\mathrm{Im}\,\tilde{G}_{R}^{ xy,xy}(t_{avg},\omega,\mathbf{k}=\mathbf{0})\,, \tag{1}\] where \(t_{avg}\) is the time with which the state changes as discussed below. Thermalization of a plasma corresponds to horizon formation in the gravity dual [4], Fig. 1 (left). A far-from-equilibrium plasma state heating up over a time \(\Delta t\) is modeled [3] by the AdS\({}_{4}\) Vaidya metric \[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=\frac{1}{z^{2}}(-f(t,z)dt^{2}-2dtdz+dx^{2}+ dy^{2})\,,\quad f(t,z)=1-2G_{N}M(t)z^{3}\,, \tag{2}\] with the time coordinate \(t\), the radial AdS-coordinate \(z\), having the boundary at \(z=0\) and the horizon at \(z=1\), and Newton's gravitational constant \(G_{N}\). 
Note that the black hole mass \(M(t)=m+m_{s}(1+\tanh(t/\Delta t))/2\) is a function of the time \(t\). The background metric (2) is perturbed by a metric shear perturbation, \(h_{xy}(t,z)\), which is required to solve a linearized Einstein equation. Solutions correspond to the expectation value of the energy-momentum tensor of the plasma and its source \(h_{\mu\nu}^{(0)}\) according to \(h_{\mu\nu}\sim h_{\mu\nu}^{(0)}+\langle T_{\mu\nu}\rangle\,z^{4}+\dots\). In order to obtain the retarded shear correlator \(G_{R}^{xy,xy}\), linear response theory allows us to utilize a delta source: \(h_{xy}^{(0)}=\delta(\tau-t_{p})\). This yields the two-point function in terms of a one-point function at time \(t\) in the presence of a delta-source at time \(t_{p}\): \(\langle T^{xy}\rangle_{\delta(t_{p})}=\int d\tau\,G_{R}^{xy,xy}(\tau,t) \delta(\tau-t_{p})\propto G_{R}^{xy,xy}(t_{p},t)\). Assuming no dependence on spatial boundary coordinates \(x\) or \(y\), a Wigner transform now yields the representation in terms of the relative frequency \(\omega\): \(G_{R}^{xy,xy}(t_{p},t)\to G_{R}^{xy,xy}(t_{avg},t_{\mathrm{rel}})\sim\tilde{G} _{R}^{xy,xy}(t_{avg},\omega)\;e^{-i\omega t_{\mathrm{rel}}}\), where the average time is \(t_{avg}=(t_{p}+t)/2\) and the relative time is \(t_{rel}=t_{p}-t\). Near an equilibrium state, the ratio of shear viscosity to entropy density is \(\eta/s=1/(4\pi)\) in \(\mathcal{N}=4\) SYM theory [5] (black line in Fig. 1, right). In Fig. 1 (right), the shear viscosity (1) is shown for an example plasma heat-up starting at \(T_{c}=155\) MeV, ending at \(T_{final}=310\) MeV, rising over \(\Delta t=0.3\) fm (RHIC energies). For this example, the shear transport ratio first drops below \(60\%\), then rises above \(110\%\) of \(1/(4\pi)\).2 How typical is this behavior when changing \(\Delta t\) and \(T_{final}\)? Fig. 2 (left) shows that over a wide range of values a significant decrease below \(1/(4\pi)\) is generic. The increase above \(1/(4\pi)\) only exists for small enough \(T_{final}<6.5T_{c}\). Fig. 2 (right) shows a stark contrast between the holographic _far-from-equilibrium_ results (\(\eta/s<1/(4\pi)\)), and the _near-equilibrium_ lattice QCD and _near-equilibrium_ FRG results (suggesting \(\eta/s>1/(4\pi)\)). This may indicate that the Bayesian study [9] underestimated the elliptic flow generated at early times. Footnote 2: It is important to recall that \(\eta/s=1/(4\pi)\) is _not_ a universal lower bound [6]. ## 3 Speed of sound far from equilibrium Consider a Bjorken-expanding \(\mathcal{N}=4\) SYM plasma. At early times, thermodynamic quantities are not strictly well-defined as the plasma is far from equilibrium and has a large pressure anisotropy. Here, we propose working definitions far from equilibrium. We use the temperature definition \(T=(\epsilon/\sigma_{SB})^{1/4}\), which is sometimes called _pseudo temperature_ [1]. We holographically compute the speed of sound far from equilibrium according to the proposed definition [10] \[c_{\perp}^{2}=-\frac{\partial\langle T_{x_{1}}^{x_{1}}\rangle}{\partial\langle T _{0}^{0}\rangle}\,,\quad c_{||}^{2}=-\frac{\partial\langle T_{\xi}^{\xi} \rangle}{\partial\langle T_{0}^{0}\rangle}\,, \tag{3}\] with the pseudorapidity \(\xi=\frac{1}{2}\ln[(t+x_{3})/(t-x_{3})]\), the spatial coordinates \(x_{1},x_{2},x_{3}\) and the proper time \(\tau=\sqrt{t^{2}-x_{3}^{2}}\), with the Stefan-Boltzmann constant \(\sigma_{SB}\) [10]. Similar to the previous section, a time-dependent metric provides the thermalizing plasma state. 
Figure 1: Left: Thermalization in quantum field theories corresponds to horizon formation in their gravity dual description. Right: Example for a time-evolution of the far-from-equilibrium shear (1) normalized to the entropy measure \(s(t_{avg})\propto\frac{\partial S^{\rm on-shell}}{\partial T}\), based on the identification of \(S^{\rm on-shell}\) with the generating functional.

However, now this plasma is expanding in the longitudinal \(x_{3}\)-direction, while isotropic and uniform in the transverse \((x_{1},x_{2})\)-plane. This complication now only allows numerical solutions for the background metric describing the time-dependent state, using [2]. It can be analytically shown [10] that the pressure anisotropy attractor [11] implies an attractor for the time-dependent speed of sound \[\mathcal{C}_{||}^{2}=\frac{1}{3}-\frac{2}{9}\left(\mathcal{A}_{0}(w)+\frac{w} {4}\frac{\partial\mathcal{A}_{0}(w)}{\partial w}\right)\,, \tag{4}\] with \(w=\tau T\) and the pressure anisotropy attractor \(\mathcal{A}_{0}(w)=(2530w-276)/(3975w^{2}-570w+120)\) [11]. This sound attractor (solid black line) is shown in Fig. 3 along with the numerically computed speed of sound in Bjorken-expanding holographic plasma, starting from various distinct initial conditions (solid colorful lines) and the hydrodynamic expectations (dashed lines). Hydrodynamic expectations coincide with the sound attractor already at very early times (\(\tau T\approx 0.5\)), indicating again a fast hydrodynamization. All initial states evolve towards the sound attractor very quickly, around \(\tau T<1\). The perpendicular speed of sound has an analogous attractor [10]. ## 4 Chiral magnetic effect far from equilibrium In the Bjorken-expanding holographic plasma described in the previous section, we introduce a chemical potential \(\mu\) and magnetic field \(B\) which both depend on time due to the Bjorken-expansion. In this setting, we compute [12] (highlighted in [13]) the time-dependent chiral magnetic current \(\langle J_{V}^{1}\rangle\) generated due to the chiral magnetic effect (CME). At distinct energies, this current first increases rapidly and then decreases more slowly, see Fig. 4. Although Fig. 4 suggests the CME to be weaker at higher energies, the accumulated charge which would be measured in the detectors indicates the opposite to be true when various parameter combinations are considered [12]. Figure 2: Left: Dependence of \(\eta/s\) on the instantaneous temperature, \(T(t_{avg})\) defined from the Hawking temperature of the black hole at each time. Shaded areas indicate the values arising from a sweep over a range of heat-up times for a fixed peak temperature. Right: These same holographic results (SYM, area enclosed by red solid curve) compared to \(1/(4\pi)\) (SYM, blue line). Theoretical QCD results are computed near equilibrium by functional renormalization group (FRG, dashed) [7] and lattice QCD (lQCD, circles) [8]. ## 5 Discussion We have computed time-dependent shear viscosity, speeds of sound, and the chiral magnetic current in holographic plasmas far from equilibrium. A small value of \(\eta/s\) at early times implies large generation of elliptic flow at early times, challenging current assumptions. In order to check the far-from-equilibrium speed of sound definition (3), the speed of sound waves is to be calculated directly from the fluctuations around the Bjorken-expanding holographic plasma, using techniques from [3]. 
For a conclusive CME current estimate, a dynamical magnetic field interacting with the charged plasma, and a dynamically created axial imbalance need to be included. In summary, hydrodynamics performs well when its definitions are pushed beyond their limits. This may suggest that an effective field theory of fluid dynamics far from equilibrium is awaiting its construction. This work was supported by an Excellence Fellowship from Radboud University (M.F.W.), the U.S. Department of Energy grant DE-SC0012447 (C.C., M.K., M.K.), and DOE Contract No. DE-SC0012704 (B.P.S.). Figure 4: The charge current generated along the magnetic field due to the CME at distinct energies (\(\propto T^{4}\)). Figure 3: Attractor for the longitudinal speed of sound (black solid line) towards which all initial conditions (solid lines, distinct colors) evolve. Dashed: 0th (black), 1st (red), 2nd (blue) order hydrodynamic expectation.
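As a purely illustrative cross-check of the attractor formula (4), the longitudinal speed of sound can be evaluated directly from the quoted pressure-anisotropy attractor \(\mathcal{A}_{0}(w)\). The short sketch below simply implements Eq. (4) in Python; the grid of \(w\)-values is our own choice and is not taken from the paper.

```python
import numpy as np

def anisotropy_attractor(w):
    """Pressure-anisotropy attractor A_0(w) quoted below Eq. (4) [11]."""
    num = 2530.0 * w - 276.0
    den = 3975.0 * w**2 - 570.0 * w + 120.0
    return num / den

def anisotropy_attractor_prime(w):
    """dA_0/dw via the quotient rule."""
    num  = 2530.0 * w - 276.0
    den  = 3975.0 * w**2 - 570.0 * w + 120.0
    dnum = 2530.0
    dden = 7950.0 * w - 570.0
    return (dnum * den - num * dden) / den**2

def c_parallel_squared(w):
    """Longitudinal speed-of-sound attractor, Eq. (4)."""
    return 1.0 / 3.0 - (2.0 / 9.0) * (anisotropy_attractor(w)
                                      + 0.25 * w * anisotropy_attractor_prime(w))

w = np.linspace(0.2, 5.0, 200)       # w = tau * T
cs2 = c_parallel_squared(w)
print(cs2[0], cs2[-1])               # approaches the conformal value 1/3 at large w
```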
2305.20041
Simulation and Retargeting of Complex Multi-Character Interactions
We present a method for reproducing complex multi-character interactions for physically simulated humanoid characters using deep reinforcement learning. Our method learns control policies for characters that imitate not only individual motions, but also the interactions between characters, while maintaining balance and matching the complexity of reference data. Our approach uses a novel reward formulation based on an interaction graph that measures distances between pairs of interaction landmarks. This reward encourages control policies to efficiently imitate the character's motion while preserving the spatial relationships of the interactions in the reference motion. We evaluate our method on a variety of activities, from simple interactions such as a high-five greeting to more complex interactions such as gymnastic exercises, Salsa dancing, and box carrying and throwing. This approach can be used to ``clean-up'' existing motion capture data to produce physically plausible interactions or to retarget motion to new characters with different sizes, kinematics or morphologies while maintaining the interactions in the original data.
Yunbo Zhang, Deepak Gopinath, Yuting Ye, Jessica Hodgins, Greg Turk, Jungdam Won
2023-05-31T17:13:24Z
http://arxiv.org/abs/2305.20041v1
# Simulation and Retargeting of Complex Multi-Character Interactions ###### Abstract. We present a method for reproducing complex multi-character interactions for physically simulated humanoid characters using deep reinforcement learning. Our method learns control policies for characters that imitate not only individual motions, but also the interactions between characters, while maintaining balance and matching the complexity of reference data. Our approach uses a novel reward formulation based on an **interaction graph** that measures distances between pairs of interaction landmarks. This reward encourages control policies to efficiently imitate the character's motion while preserving the spatial relationships of the interactions in the reference motion. We evaluate our method on a variety of activities, from simple interactions such as a high-five greeting to more complex interactions such as gymnastic exercises, Salsa dancing, and box carrying and throwing. This approach can be used to "clean-up" existing motion capture data to produce physically plausible interactions or to retarget motion to new characters with different sizes, kinematics or morphologies while maintaining the interactions in the original data. Character Animation, Interactions, Physics Simulation, Physics-based Characters, Reinforcement Learning
Multiple physically simulated characters have been studied far less than single characters, in part because it is very challenging to learn controllers for multiple characters interacting with each other. As with a single character, balance must be maintained, but the interaction constraints also have to be solved simultaneously. Although some breakthrough results were demonstrated in recent studies (Haworth et al., 2020; Liu et al., 2022; Won et al., 2021), the complexity of the demonstrated interactions is still far from what people routinely perform in daily life. We demonstrate a novel learning-based method that provides a physics-based retargeting of complex interactions for multiple characters. More specifically, given reference motions that capture interactions between people, we learn control policies (a.k.a. controllers) of simulated characters via deep reinforcement learning that imitate not only the motion of the individuals but also the interactions between them. Our learned policies can produce plausible and semantically equivalent interactions when the sizes and kinematics of the characters are varied significantly. If the sizes of the simulated characters match those in the original motion capture data, the resulting motion is almost indistinguishable from the reference data and any errors from the capture process are eliminated by ensuring that the interactions are now physically plausible. To solve the challenges in learning multi-character interactions, we develop new rewards based on an **interaction graph** (IG) which measures distances between pairs of specified locations on the characters, and in particular reflects between-character distances. Rewards based on the IG enable control policies to efficiently deploy complex interactions for physically simulated characters while preserving the semantics of the interactions (i.e. spatial relationship) included in the reference data. In our formulation, manual annotation of the interaction in each motion is not necessary except for choosing a set of general interaction landmarks that work for a variety of scenarios. To show the effectiveness of our method, we record motions that include multi-person interactions at varying levels of difficulty, and test our method with motions that are composed of simple interactions such as a high-five or other greetings, as well as complex interactions such as gymnastic exercises, Salsa dancing, and box moving/throwing. We demonstrate the generality of our system by reproducing interactions not only for simulated characters with different body dimensions than the motion capture subjects, but also for a robot with a different kinematic structure. Finally, we run comparison and ablation studies that justify each choice in the system design. ## 2. Related Work We first review papers synthesizing multi-character interaction for kinematic approaches, which inspired our dynamic formulation. 
We then review recent progress in control of physically simulated characters via deep reinforcement learning. ### Multi-character Interactions for Kinematic Characters Most approaches for creating or editing multi-character interactions among kinematic characters are data-driven methods, which means that appropriate motion capture data should be obtained in advance. A popular line of work is based on optimization, where the basic idea is to optimize individual character motions with spatio-temporal constraints (Kwon et al., 2008; Liu et al., 2006), game theory (Shum et al., 2007, 2008; Shum et al., 2012; Wampler et al., 2010) so that the optimized motions have newly synthesized interactions. These methods are suitable for synthesizing motions having sparse interactions, however, the optimization quickly becomes intractable as the complexity of the interactions increases, so it is not suitable for synthesizing dense and complex interactions. Another approach is patch-based methods, where a patch includes a short interaction of multiple characters (Hyun et al., 2013; Lee et al., 2006; Shum et al., 2008; Won et al., 2014; Yersin et al., 2009). For this work, motion capture data where multiple actors are recorded simultaneously is required. New motions can be synthesized by connecting boundaries of multiple patches, thus creating multiple interactions that were not performed together in the original data. Methods for adapting existing interactions to new environments and characters have also been studied (Al-Asqhar et al., 2013; Ho et al., 2014, 2010; Jin et al., 2018; Kim et al., 2021, 2014, 2009). The key idea is to define an interaction descriptor that encodes the spatial and temporal relationship, then to edit the motions while minimizing the semantic difference between the original motion and the edited motions where the difference is measured by the descriptor. This idea has also been used to synthesize hands interacting with objects (Zhang et al., 2021). Our state representation and reward function for deep reinforcement learning are inspired by one of these descriptor-based approaches (Ho et al., 2010), where they construct an interaction graph by connecting edges among pre-specified markers on the body surface. By utilizing deep reinforcement learning and a novel formulation to measure interaction graph similarities, our method can be applied to dynamic characters having different body shapes instead of generating kinematic interaction motions as was done in (Ho et al., 2010). ### Physically Simulated Characters and Interactions In many cases, what we refer to as _interaction_ between different characters means physical interaction where physical forces occur between the characters at contacts. By incorporating physics simulation into the motion of the characters, those physical interactions can be synthesized in a plausible manner. Multi-character interactions have been created by solving a quadratic programming problem where the equations of motion for the entire dynamical system are used as either hard or soft constraints (Mordatch et al., 2012; Otani and Bouyarmane, 2017; Vaillant et al., 2017). Although cooperative multi-character interactions could be synthesized by these methods without using reference data, the generated motions are typically slow and less-dynamic due to the quasi-static assumption in their optimization formulation, and they require frame-level specification of all interactions in advance. 
Combining deep reinforcement learning (DRL) and motion capture data has allowed several breakthroughs in learning imitation controllers (Bergamin et al., 2019; Chentanez et al., 2018; Fussell et al., 2021; Park et al., 2019; Peng et al., 2018, 2021; Won et al., 2020), learning reusable motor skills (Merel et al., 2019; Peng et al., 2019, 2022; Won et al., 2022; Yao et al., 2022), and motion tracking (Winkler et al., 2022; Ye et al., 2022). Although there also have been some studies synthesizing dynamic interactions with objects or other characters (Haworth et al., 2020; Liu et al., 2022; Merel et al., 2020; Won et al., 2021), the complexity of the demonstrated interactions is still not comparable to what people routinely perform in daily life. In addition, each of these works developed a task-specific reward function to enforce interactions between multiple entities. In this paper, we aim to synthesize various types of spatially and temporally dense interactions for full-body humanoid characters that are physically simulated. This problem is especially challenging because the motor skills must be sophisticated enough to perform those complex interactions while remaining robust enough to maintain balance. ## 3. Method Our goal is to build controllers that enable physically simulated characters to perform complex physical interactions with each other. For each behavior, we take a reference motion capture clip representing the desired multi-character interaction and produce controllers that enable the simulated characters to mimic those interactions. Our goal is to generate character interactions that are _semantically_ similar to those present in the reference motions. To achieve this, we use multi-agent deep reinforcement learning where the states and rewards are designed based on spatial descriptors inspired by (Ho et al., 2010). Different from (Ho et al., 2010), where only kinematic motions are generated, our method can be applied to dynamic characters having dramatically different body shapes from the captured actors. ### Environment Our characters are modeled as articulated rigid body objects by following (Won et al., 2020). Each character has 22 links and 22 joints, where each joint has three degrees of freedom and is actuated by stable-PD servos (Tan et al., 2011) given target joint angles. We used an open-source framework (Won et al., 2020) to implement and simulate our characters. ### Problem Formulation We formulate the problem as a multi-agent Markov Decision Process (MDP). Considering \(k\) controllable agents, we define the tuple \(\{S,O_{1}\cdots O_{k},A_{1}\cdots A_{k},R_{1}\cdots R_{k},T,\rho\}\), where \(S\) is the entire state of our environment, and \(O_{i}\) and \(A_{i}\) are the observation and action of the \(i\)-th agent, respectively. The reward function \(R_{i}:O_{i}\times A_{i}\rightarrow\mathbb{R}\) evaluates the quality of the current state and action of the \(i\)-th agent, the environment is updated by the transition function \(T:S\times A_{1}\times\cdots\times A_{k}\to S\) given a set of actions performed by all the agents, and \(\rho:S\rightarrow[0,1]\) is the probability distribution of the initial states. We aim to learn a set of optimal control policies \(\{\pi_{i}|i=1\cdots k\}\) that maximizes the average expected return \(\mathbb{E}\left[\sum_{t=0}^{T}\gamma^{t}r_{i,t}\right]\) for each agent, where \(\gamma\in(0,1)\) is the discount factor that keeps the sum finite. 
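For readers less familiar with this actuation scheme, the sketch below shows how target joint angles are turned into joint torques. It uses an ordinary explicit PD servo purely for illustration; the characters in the paper use the stable-PD formulation of (Tan et al., 2011), and the gains and dimensions below are placeholder values rather than the ones used in the paper.

```python
import numpy as np

def pd_torque(q, qdot, q_target, kp=300.0, kd=30.0):
    """Explicit PD servo driving joint angles q toward q_target.

    Stable PD (Tan et al., 2011) evaluates the same feedback law implicitly,
    which remains robust at large gains; this explicit version only conveys
    the basic idea of torque generation from target angles.
    """
    return kp * (q_target - q) - kd * qdot

# Toy usage for a character with 22 joints x 3 DoF.  The policy's action is a
# pose offset dq around a reference pose q_ref (cf. the action definition in
# the Observation and Action Spaces subsection), and q_ref + dq is the PD target.
n_dof = 22 * 3
q, qdot = np.zeros(n_dof), np.zeros(n_dof)
q_ref = np.zeros(n_dof)                      # placeholder reference pose
dq = 0.05 * np.random.randn(n_dof)           # placeholder policy output
tau = pd_torque(q, qdot, q_ref + dq)
```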
### Interaction Graph To better describe the semantics of the interaction happening between agents (or between an agent and an object) during the motion, we define the notion of an Interaction Graph (IG), a graph-based spatial descriptor where the information on interactions is stored in its vertices and edges. This idea is inspired by (Ho et al., 2010). To construct an interaction graph, we first place a collection of markers on salient locations on each character (see Figure 2). Fifteen markers are placed in total for each character, where three markers are on each limb in the vicinity of joint locations, one on the pelvis, one on the torso, and one on the head. These markers will be considered as the nodes of the graph, each of which is associated with a 6-dimensional vector \(n_{i}=(p_{i},v_{i})\in\mathbb{R}^{6}\), where \(p_{i}\in\mathbb{R}^{3}\) is the position of the vertex and \(v_{i}\in\mathbb{R}^{3}\) is the velocity of the vertex. For example, a total of 30 vertices will be used for interactions associated with two characters (see Figure 2). On every time step, we perform a Delaunay tetrahedralization over all the vertices based on the spatial distances between pairs of markers to get a compact collection of edges connecting the vertices. Each edge is assigned a feature vector \(e_{ij}=(p_{ij},v_{ij})\in\mathbb{R}^{6}\) that encodes the relative relationship between the two vertices, where \(p_{ij}=p_{j}-p_{i}\in\mathbb{R}^{3}\) and \(v_{ij}=v_{j}-v_{i}\in\mathbb{R}^{3}\) are the positional and velocity components of the edge features. The example interaction graph in Figure 2 includes both edges connecting nodes on a single character and edges connecting nodes on different characters. The edges within the character help maintain the motion quality of an individual character, while the edges between the characters act as guides for maintaining the relative position of the body parts of the two characters. Details are discussed later in section 3.4. There is a major difference between how we compare two spatial descriptors in the interaction graph and how they are compared in the Interaction Mesh (IM) in (Ho et al., 2010). We perform edge-level (i.e. distance) computation whereas IM computes volumetric deformation on a tetrahedron. We further augment the state of an edge with velocities as they are crucial for a physics simulation. Given the input reference motion clips, we build and store such an IG to capture the spatial relationship across the agents and objects at each time-step. ### Reward Design We choose to measure the interaction similarity in two ways: an _edge-weighting function_ that highlights the importance of interaction regions in the graph and an _edge-similarity function_ that measures the similarity between two IGs with the same connectivity. For the following similarity measurement, we make use of two interaction graphs \(G^{sim}\) and \(G^{ref}\) with the same connectivity, one from the simulated environment, the other from the reference motion clips. The connectivity of both graphs is the same as computed on the reference motions using the above-mentioned method. The interaction graph we defined is a set of spatial descriptors that encode the relative formation among the vertices in the graph. #### 3.4.1. Edge Weighting Function We are guided by the intuition that instances where two body parts are close or in contact with each other are particularly important for multi-character interactions. 
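Before turning to the weighting function itself, the following minimal sketch makes the graph construction just described concrete: markers are nodes with position/velocity features, a Delaunay tetrahedralization supplies the edge set, and each edge stores a relative feature vector. The marker data here are random placeholders, and SciPy's Delaunay routine merely stands in for whatever tetrahedralization code the authors actually used.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_interaction_graph(positions, velocities):
    """Build IG edges from marker positions/velocities, both (N, 3) arrays.

    Returns a dict mapping each undirected edge (i, j), i < j, to its
    feature e_ij = (p_j - p_i, v_j - v_i), following the description above.
    """
    tets = Delaunay(positions).simplices          # (n_tet, 4) vertex indices
    edges = set()
    for tet in tets:                              # every vertex pair inside a tetrahedron
        for a in range(4):
            for b in range(a + 1, 4):
                i, j = sorted((tet[a], tet[b]))
                edges.add((i, j))
    return {(i, j): np.concatenate([positions[j] - positions[i],
                                    velocities[j] - velocities[i]])
            for i, j in edges}

# Toy example: 15 markers per character, two characters -> 30 nodes.
rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(30, 3))
vel = rng.normal(0.0, 0.1, size=(30, 3))
ig = build_interaction_graph(pos, vel)            # 6-dim feature per edge
```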
We define a function that dynamically assigns different weights for each edge according to its relative importance to the others. More specifically, for an edge connecting vertices \(i\) and \(j\), the weight for the edge \(w_{ij}\) is defined as: \[w_{ij}=0.5*\frac{\exp\left(-k_{w}\|p_{ij}^{sim}\|\right)}{\sum_{ij}\exp\left(-k_{w} \|p_{ij}^{sim}\|\right)}+0.5*\frac{\exp\left(-k_{w}\|p_{ij}^{ref}\|\right)}{ \sum_{ij}\exp\left(-k_{w}\|p_{ij}^{ref}\|\right)}, \tag{1}\] where \(k_{w}\) controls how sensitive the weighting function is with respect to the distance of the edges. The first term gives more attention to an edge if the two nodes in the simulation are close to each other, while the second term makes sure an edge in the reference motion gets more attention when its nodes stay close. In practice, we found the second term alone is enough for most of our experiments, and we only use the first term for a few examples where it improves the performance. Normalizing the weights allows our reward function to adapt to various interaction scenarios. For example, when two characters are far away from each other, the edges connecting those two characters do not contribute much to the reward while the edges connecting vertices within individual characters become important. On the other hand, when the two characters are close to each other, some of the connections between their body parts will have large weights. This adjustment based on proximity allows the body parts that are not associated with the close interactions to remain close to the original motion. #### 3.4.2. Edge Similarity Function Given two interaction graphs \(G\) and \(G^{\prime}\), we design a distance function measuring their differences. Our distance function measures position and velocity similarity of the two different formations by comparing all corresponding edges in the two graphs. #### 3.4.3. Positional Graph Similarity To compare the positional graph similarity between two graphs, we separately consider the similarity of graph edges connecting each individual character \(E_{self}\) (self-connections) and between characters \(E_{cross}\) (cross-connections). The discrepancy of each edge is computed as follows: \[err_{self,ij}=\|\frac{p_{ij}^{sim}-p_{T,ij}^{sim}}{\|p_{T,ij}^{sim}\|}-\frac{ p_{ij}^{ref}-p_{T,ij}^{ref}}{\|p_{T,ij}^{ref}\|}\| \tag{2}\] where \(p_{T,ij}^{sim}\) and \(p_{T,ij}^{ref}\) are edges computed from the first frame of the motion sequence for both simulation and reference motions. In all the reference motion sequences, the motion capture actors are instructed to start in a T-pose. In other words, we first compute the deviation of an edge from its corresponding T-pose edge, which is then normalized by the length of the T-pose edge. Finally, we compute the difference between the two deviations, one for the simulated characters and the other for the reference motion clips. Note that this similarity measurement is not sensitive to specific body sizes and proportions due to the normalization of the deviations. This formulation is also similar to measuring the Laplacian coordinate difference of all graph nodes between simulation and reference in that they both try to maintain the similarity of the local structure between two graphs, but our formulation gives a direct measure of the edge similarity that captures the interaction. It is challenging to define a reference edge length for the cross-connections because the variance can be extremely high. 
For example, imagine that the two characters are standing 10m apart versus 0.1m. Instead, we directly penalize the difference in the edge length and direction so that the same cross-connection similarity can be applied to various motions: \[err_{cross,ij}=0.5*\frac{\|p_{ij}^{sim}-p_{ij}^{ref}\|}{\|p_{ij}^{sim}\|}+0. 5*\frac{\|p_{ij}^{sim}-p_{ij}^{ref}\|}{\|p_{ij}^{ref}\|} \tag{3}\] where we normalize the difference by the lengths in the simulation and the reference clips, respectively, then average them so that the similarity becomes symmetric. This symmetry also enables the error to be used for characters having different body shapes. The total error for positional graph similarity is then the sum of the two error terms from all edges: \[err_{pos\_graph}=\sum_{ij\in E_{cross}}w_{ij}err_{cross,ij}+\sum_{ij\in E_{ self}}w_{ij}err_{self,ij} \tag{4}\] #### 3.4.4. Velocity Graph Similarity To measure the velocity discrepancy between graphs, we simply measure the difference of the velocity of all edges in simulation and reference as: \[err_{vel\_graph}=\sum_{ij\in E_{cross}\cup E_{self}}w_{ij}\|v_{ij}^{sim}-v_{ ij}^{ref}\| \tag{5}\] In contrast to the positional similarities, we observed that the velocities of the graph vertices do not vary much when modifying the body size/proportion of the simulated characters. Thus we do not perform any velocity normalization. We also do not separate the velocity similarities by edge type because we did not find any benefit in doing so. #### 3.4.5. Final Reward Design We define our reward function based on the errors computed from the interaction graphs. In addition, we add two more error terms measuring the tracking of the root joint and center-of-mass, which are frequently used in learning imitation controllers for physically simulated characters. As a result, our reward function is composed of four terms \[r=r_{pos\_graph}\cdot r_{vel\_graph}\cdot r_{root}\cdot r_{com} \tag{6}\] Figure 2. Interaction Graph of the reference characters. Higher opacity on an edge indicates a higher weight for the edge when computing the reward.
Note that we ignore the height components of the linear positions and velocities so that the relevant errors are not directly affected by the absolute size of the character. In contrast to (Ho et al., 2010), where tetrahedron volumes are used to measure similarities of meshes, our edges-based reward is more sensitive to point-to-point physical interactions. In addition, it is not trivial to design an adaptive weight function in a volume-based setting, which ensures the motion quality of the individual characters is preserved, even when characters are far apart, making our reward a good substitute for motion imitation. ### Observation and Action Spaces The observation space of our environment is inspired by the design from prior work (Won et al., 2020, 2021) where the observation of an agent \(o_{i}=(o_{sim},o_{ref})\) consists of the states of the simulated characters and objects, which are computed from the simulation and the reference motion clips. For the simulated observation space \(o_{sim}=(o_{sim,self},o_{sim,other},o_{sim,object})\), we include the position, orientation, linear and angular velocity for each link of the characters and the objects. To make sure the state is invariant to the global position and orientation of the agent, all values are transformed to the facing frame of the controlled character. The facing frame of the character is computed by projecting the global transformation of the character root to the ground. The reference observation \(o_{ref}=(o_{ref}^{0},o_{ref}^{0.05},o_{ref}^{0.15})\) contains the reference information 0, 0.05, and 0.15 seconds in the future. For each future reference observation frame \(o_{ref}^{*}=(o_{ref,self}^{*},o_{ref,other}^{*},o_{ref,object}^{*})\), we include the position, orientation, linear and angular velocity for each link of the characters and the objects in the facing frame of the reference character. Our action \(a\) is the change of pose \(\Delta q\) from the pose \(q_{ref}\) given the reference frame at each time-step. A new reference pose \(q_{ref}+\Delta q\) (i.e. a set of joint angles) is given to the stable PD servos attached to our simulated character and then joint torques are computed accordingly. ## 4. Results In this section, we show that our formulation can be applied to a variety of motions with multiple characters and objects. By dynamically adjusting the weights, our method focus on the adjustments to the motion on the physical interactions. This approach results in higher quality motion than existing work in scenarios with complex interactions. Further, our formulation is able to preserve interaction when the body size, kinematics, and skeleton of the characters differ from the reference motion sequences. ### Experiment Setup The structure of our policy follows a encoder-decoder style as presented in (Won et al., 2021), where the encoder is a fully connected neural network with two hidden layers with 256 and 128 units respectively. The encoder takes the full observation and projects it onto a 32 dimensional latent vector \(z\). The decoder is another fully connected network with two hidden layers with 256 units, and it takes as input the concatenated vector \(z_{decoder}=(o_{sim,self},z)\) and outputs the action of the policy. To speed up the learning for all of the experiments below, we pre-train an imitation policy of a single character on sequences that can be performed without a partner (e.g. high five, greetings, and push ups). 
When training an interaction-graph based policy, we reuse the pre-trained decoder and allow its weights to be updated during the training. The decoder is reusable because the latent dimensions are unchanged. The encoder trained simultaneously with the pre-trained decoder is not reusable due to differences in input dimensions. This design makes it easier for the policy to maintain balance at the initial phase of learning, and therefore results in faster training. The training time of a policy varies based on the difficulty of the sequence. For easier sequences, it takes about 300 million to 500 million samples to train one policy. For harder sequences, it could take more than 2 billion samples to train a policy. All experiments are run using 640 CPUs and take from 3 days to 9 days to train a policy based on the sequence difficulty. ### Human-Human Interaction In the human-human interaction scenarios, we aim to show that our method is capable of reproducing imitation policies with motion quality similar to existing works such as (Fussell et al., 2021; Peng et al., 2018; Won et al., 2020) while better preserving the interaction. We show a variety of scenarios ranging from sparsely interacting motions to continuously interacting motions between the two human characters. #### 4.2.1. Light Interaction Figure 2(a) and 2(b) show light physical interactions. In _Rapper-Style Greetings_, the two characters touch their hands, elbows, shoulders, and legs in sequence to greet each other, an action which has been shown in many hip-hop music videos (Sinestesia3000 2012). In _Jumpover_, one character jumps over the other character. In these scenarios, physical interactions are of short duration with small physical forces, and the interactions are well-preserved semantically when the interacting body parts are close enough with the right timing. #### 4.2.2. Heavy Interaction Figure 2(a), 2(b), and 2(c) show physical interactions where significant forces occur between the two characters. The _Lift-Pushup_ example includes interactions where one character needs to lift the other character's legs while that character is performing a push-up exercise. In the first salsa dancing motion (_Salsa Grasping_), two characters' hands are grasped together to form a loop for one character to go under. In another salsa dancing motion (_Salsa Support_), one character needs to support the other while they lean backward. This type of interaction is more challenging than the light interactions because the two simulated characters need to perform highly coordinated motions with force exchange. For example, the character performing a push-up would not be able to imitate the reference motions successfully unless their legs are grasped by the other character. Furthermore, these heavy interactions make maintaining balance difficult because significant forces are applied between the two characters. Our method was able to imitate these challenging motions successfully as shown in Figure (a)a. Because our characters do not have fingers, we mimic the grasp by adding weld constraints in the physics simulation. More specifically, we examine the reference motion and label a sequence of grasping windows to identify when grasping should be present on specified body pairs at a certain moment. During simulation, when a character's hand is close to a body part it should grasp at that moment, weld constraints between the two body parts are made temporarily. 
The constraint is removed when the grasping window is over, representing the character releasing their hands. Other than hand grasping, we did not use any weld constraints; all the complex physical interactions emerged during the learning process. These results show that our formulation allows the control policies to be aware of the relative formation among various body parts regardless of the interaction duration, and to preserve those formations in the simulation. ### Human-Object Interaction We further demonstrate that our formulation can also handle human-object interactions where the objects are passively simulated. Figure (a)a and (b)b show the two motions: one includes interactions where two persons are throwing and catching a small box repeatedly; the other includes interactions where two persons are lifting and moving a large box. For both motion sequences, we place an extra marker on every vertex of the box (i.e. 8 markers in total) when constructing the interaction graph. For the edges connecting the characters to the box, we use the reward formulation in Equation 3 to measure the discrepancy between simulation and reference. In addition, we choose to remove all edges connecting the markers on the box because their relative distances will stay constant throughout the motion. The resulting graph is shown in Figure 12. The control policies learned with those additional markers successfully reproduce both hand-object interaction scenarios, which shows the generality of our method. ### Retargeting to different body sizes Our graph-based formulation is robust to changes in body conditions because we compute features in a normalized manner. As a result, the interactions in the same reference motions can be applied to simulated characters that have completely different body dimensions from the actors in the reference motions. Figure (c)c, (d)d demonstrate motions that include light interaction. In both sequences, we scale all limbs of the yellow and blue characters by a factor of 1.3 and 0.5, respectively, so the yellow character is almost 2 times taller than the blue character. The scaled characters are trained using our framework to track the reference motion. In the _Rapper-style Greeting_ motion, for example, we see the taller character deliberately bend down to reach their hand, elbow, and shoulder to the shorter character when the interaction happens, and straighten back up after they finish the interaction. Similarly, the taller character lowers their waist when the shorter character jumps over their back in the _Jumpover_ motion. Learning how to transfer forces via physical interactions is crucial to imitating motions including heavy interaction as in Figure (d)d, (e)e, and (f)f. For the _Lift-Pushup_ motion (Figure (d)d), we give a 0.5 scaling factor to all limbs of the blue character; for the _Salsa Grasping_ motion (Figure (e)e), we scale the yellow character's limbs by 0.5; and for the _Salsa Support_ motion ((f)f), we scale the yellow character's limbs by 0.8. For this type of motion, our method allows the scaled characters to adjust their motions to preserve the interactions rather than simply mimicking the original reference motions, and therefore the semantics of the interaction are transferred successfully to the scaled characters. For example, the taller characters in the _Lift-Pushup_ motions learned to bend down and reach the target grasping region to form the grasp. Finally, we also scale the characters and objects for human-object interaction scenarios. 
Figure 6 shows the control policies learned successfully for the small box throwing-catching and the large box lifting-moving. For both human-object interaction motions, we scale the yellow character's limbs by 0.7. ### Non-human Characters Our method can also transfer interactions in the reference motions to characters with different kinematic configurations. For example, if we use a robot with fewer DoFs than the reference character, our method can still make the robot create the interactions existing in the reference motions. As shown in Figure 4, we replace one of the characters with a Baxter robot composed of two industrial manipulators. Because the robot has a fixed base, we place the robot at the location where the greeting motion is conducted and place a total of eight markers on the upper body of the robot on its head, torso, upper arms, lower arms, and end-effectors to match those of the human character. For the human character, we keep the same 15 markers as described earlier on the human body. We then use a total of 23 markers to construct the interaction graph for the training. During the training, we use two separate reward functions for the character and robot. The character receives the same reward terms as described above, while the robot only receives a reward from \(r_{pos\_graph}\) and \(r_{vel\_graph}\) because it is not mobile. In addition, we found that including the first term in Equation 1 was helpful for the robot because it was immobile. This term highlights the edge error when the robot's body parts stay close but the reference character's body is far away. Because the kinematic structure of the robot is completely different from that of the actor in the reference character/motion, we ask the policy to directly output the absolute target joint angles \(q\) instead of learning the deviation (i.e. \(\Delta q\)) from the reference \(q_{ref}\) for both the human character and the robot. Our framework can successfully generate animations of the Baxter robot performing greetings with a human character (Figure 4(a)), and performing a high-five with another Baxter robot (Figure 4(b)). These examples demonstrate the potential of our method as an option to retarget human motion onto robots and create compelling human-robot interactions. ### Comparison We conduct comparison and ablation studies to show the effectiveness of our graph-based formulation in reproducing complex interactions for physically simulated characters. #### 4.6.1. Joint-based Reward To highlight the necessity of formulating the interaction-graph-based reward, we compare our method with the commonly used joint-based reward formulation for motion imitation. For the sequences that use a joint-based reward, we apply a similar formulation as described in (Peng et al., 2018; Won et al., 2020). That formulation asks the policy to minimize the positional and angular differences of the joints and links between the simulation and the reference motion. In this formulation, no reward term exists to evaluate the quality of interactions between multiple characters or between characters and objects. As a result, when the simulated character has a different body configuration from the reference motion, the characters will only learn to mimic the poses in the reference motion instead of learning to adapt to the other characters (or object) to correctly perform the interaction. Figure 7 shows a comparison for the greeting motions. 
The control policies trained using a joint-based reward fail to cause the taller character to bend down to meet the shorter character. Similar behaviors are observed in the other motions for the control policies trained using the joint-based reward only. We further contrast the performance for the dense interaction example between the interaction graph and joint-based rewards. Figure 8 shows such a comparison on the _Lift-Pushup_ sequence with a scaled character. When using the interaction graph reward, the taller character actively bends forward to reach its hands to the shorter character's lower leg to form the grasping constraints and lift the shorter character. When using a joint-based reward, on the other hand, there is no reward based on the relative poses between the two characters, so the taller character cannot grasp the shorter character's leg and the interaction semantics are not preserved. Furthermore, we show that a joint-based reward also produces lower quality motions when re-targeting motions for human-object interactions. Figure 10 shows a comparison for a small box throw and catch motion trained with the interaction graph reward and the joint-based reward. The two characters are able to perform the throw and catch motion sequence with the joint-based reward because of the presence of the additional object observation and reward as described above. However, it fails to preserve the interaction semantics because the shorter character should catch the box by holding onto two opposite faces of the box instead of supporting the box on its bottom. #### 4.6.2. Edge Weighting Function We do an ablation on the edge weighting function (Equation 1) to understand how this helps the training selectively pay attention to more important edges and ignore irrelevant edges. Our experiments demonstrate that this design can help in generating more natural-looking motions. In Figure 11, we compare the resulting policy trained with (left) and without (right) the weighting function for the greeting motion. When the edge weighting function is present, the taller character learns to bend its waist to reduce its height when greeting the shorter character. However, when all the edges have the same weight during training, the taller character instead learns to walk and complete all the greetings with the legs bent at an unnatural angle. This unnatural behavior is created because the policy tries to get a low error on every edge of the graph regardless of the distances of the nodes. ## 5. Discussion We demonstrated a method of simulating and retargeting complex multi-character interactions by using deep reinforcement learning, where novel character-agnostic states and rewards are developed based on an _Interaction Graph_. Our formulation is applicable to a variety of interactions among people ranging from sparse interactions (e.g. greeting, jumpover) to complex ones (e.g. exercise motion, Salsa dancing) regardless of whether the body size, kinematics or skeleton of the simulated characters are the same as those of the actors who recorded the reference motions. While we demonstrate many successful examples, there are some limitations to our method. First, there are some limitations of our reward function design. Because the action space of our policy is not directly associated with the reward function, our training usually requires more samples to converge compared to a joint-based reward function. 
In addition, due to the lack of supervision on the joint angles, the motion generated from our policy could contain artifacts on joints that have little impact on the interaction. For example, sometimes the character may tilt the head or the waist at an unnatural angle because this deviation from the reference will not affect the positions of the interaction graph's nodes, and therefore it does not decrease the reward. Adding more markers would be an immediate remedy but this would also increase computational cost. Another limitation is that our controllers are imitation controllers which cannot perform interactions that do not exist in the reference motions. Further, the controllers only work for the specific body configuration they were trained on, so one policy cannot easily be generalized to work on a character with a different body configuration. We also observe that the variability of our results is limited by the dissimilarity of the character and the difficulty of the task. Characters with extreme scaling or drastically different skeletons could fail to imitate the interactions due to their physical limits. For example, in challenging interaction scenarios such as box throwing, our method fails when replacing one human character with a robot. We envision several future directions to reduce the limitations. For better variability, we can build a stronger motion prior that contains a larger variety of motion types. Further training on top of the motion prior could be more sample efficient, and allow the policy to explore the motion space to find a valid solution when the character shape undergoes extreme changes. To improve the generalization of our method, a better observation representation would be helpful. Currently we are using the commonly used joint-based
2310.04427
Generative AI in the Construction Industry: Opportunities & Challenges
In the last decade, despite rapid advancements in artificial intelligence (AI) transforming many industry practices, construction largely lags in adoption. Recently, the emergence and rapid adoption of advanced large language models (LLM) like OpenAI's GPT, Google's PaLM, and Meta's Llama have shown great potential and sparked considerable global interest. However, the current surge lacks a study investigating the opportunities and challenges of implementing Generative AI (GenAI) in the construction sector, creating a critical knowledge gap for researchers and practitioners. This underlines the necessity to explore the prospects and complexities of GenAI integration. Bridging this gap is fundamental to optimizing GenAI's early-stage adoption within the construction sector. Given GenAI's unprecedented capabilities to generate human-like content based on learning from existing content, we reflect on two guiding questions: What will the future bring for GenAI in the construction industry? What are the potential opportunities and challenges in implementing GenAI in the construction industry? This study delves into reflected perception in literature, analyzes the industry perception using programming-based word cloud and frequency analysis, and integrates authors' opinions to answer these questions. This paper recommends a conceptual GenAI implementation framework, provides practical recommendations, summarizes future research questions, and builds foundational literature to foster subsequent research expansion in GenAI within the construction and its allied architecture & engineering domains.
Prashnna Ghimire, Kyungki Kim, Manoj Acharya
2023-09-19T18:20:49Z
http://arxiv.org/abs/2310.04427v1
# Generative AI in the Construction Industry: Opportunities & Challenges ###### Abstract In the last decade, despite rapid advancements in artificial intelligence (AI) transforming many industry practices, construction largely lags in adoption. Recently, the emergence and rapid adoption of advanced large language models (LLM) like OpenAI's GPT, Google's PaLM, and Meta's Llama have shown great potential and sparked considerable global interest. However, the current surge lacks a study investigating the opportunities and challenges of implementing Generative AI (GenAI) in the construction sector, creating a critical knowledge gap for researchers and practitioners. This underlines the necessity to explore the prospects and complexities of GenAI integration. Bridging this gap is fundamental to optimizing GenAI's early-stage adoption within the construction sector. Given GenAI's unprecedented capabilities to generate human-like content based on learning from existing content, we reflect on two guiding questions: What will the future bring for GenAI in the construction industry? What are the potential opportunities and challenges in implementing GenAI in the construction industry? This study delves into reflected perception in literature, analyzes the industry perception using programming-based word cloud and frequency analysis, and integrates authors' opinions to answer these questions. This paper recommends a conceptual GenAI implementation framework, provides practical recommendations, summarizes future research questions, and builds foundational literature to foster subsequent research expansion in GenAI within the construction and its allied architecture & engineering domains. G 1 Ph.D. Student, Durham School of Architectural Engineering & Construction, University of Nebraska-Lincoln, USA; [email protected] 1 Assistant Professor, Durham School of Architectural Engineering & Construction, University of Nebraska-Lincoln, USA; [email protected] 2 AI Scientist, SRI International, USA; [email protected] 2 Footnote 2: email: [email protected]; **Keywords:** Generative AI; Construction; AEC; OpenAI; GPT; PaLM; Llama; LLM; Fine Tuning ## 1 Introduction In the last four decades, the field of machine learning (ML), particularly the deep learning subdomain reliant on artificial neural networks, has undergone substantial maturation, causing immense transformations across many industrial landscapes [1]. It has emerged as a powerful asset, automating procedures within the construction sector, an industry that trails behind others in both efficiency and output. However, embracing this paradigm shift faces impediments due to gradual headway in overseeing data quality and the absence of directives for integrating domain expertise with data-centric evaluation. These challenges crystallize into three critical concerns: the disparity between a feature-rich space and limited samples, the balance between model precision and applicability, and the reconciliation of machine learning outcomes with field-specific insights [1, 2]. Here are three simple examples of these challenges: (1) A construction company has a large amount of data on the features of construction projects, but only data on a limited number of projects. 
This disparity between the feature-rich space and the limited samples makes it difficult to train a machine learning model that can precisely predict the cost of construction projects, (2) An owner organization is trying to implement a machine learning model to predict the completion time of a construction project based on data it has access to, such as project value, delivery method, complexity, and materials quantity in previous projects. However, the company wants to make sure that the model is applicable to a wide range of projects, so it does not want to make the model too precise. A more precise model will be able to make more accurate predictions about the completion time of a project, but it may not be applicable to a wide range of projects. A less precise model will be more applicable to a wider range of projects, but it may not be as accurate, (3) A safety manager is using a machine learning model to predict the likelihood of a fall accident on a construction site; the model has access to data on the weather, the type of construction, and the safety practices used on previous projects and predicts that there is a 10% chance of a fall accident on the current project. However, the developed model may not be able to account for all of the factors, such as human errors and unforeseen conditions, that can contribute to an accident. Therefore, traditional machine learning algorithms are somewhat constrained in their capabilities by these limitations [3]. The rapid growth of artificial intelligence (AI), a discipline that involves developing computer systems capable of human-like cognition and actions, has enabled the advancement of sophisticated large language models (LLMs), such as GPT, PaLM, and Llama. GenAI, a subset of deep learning, leverages neural networks, and can process both labeled and unlabeled data using supervised, unsupervised, and semi-supervised methods to synthesize novel content like text, images, and audio [4, 5]. A GenAI model is trained on existing data, constructing statistical representations to predict content. When provided prompts, generative systems output new synthesized content learned from underlying patterns. Architecturally, transformer models enable GenAI, containing encoders to process inputs and decoders to translate them into contextually relevant outputs [5]. There are four major types of GenAI models: text-to-text, text-to-image, text-to-video/3D, and text-to-task. Text-to-text models, trained to learn mappings between text pairs, accept natural language input and generate text output [6]. Text-to-image models, a recent development, are trained on image datasets paired with text captions. These models take text prompts as input and generate corresponding images as output, often using diffusion techniques [7]. Text-to-video models synthesize videos from text prompts, accepting inputs ranging from single sentences to full scripts, and outputting corresponding video representations [8]. Similarly, text-to-3D models create 3D objects that match a user's textual description. Text-to-task models are trained to execute particular tasks based on textual prompts. These models can perform diverse actions including responding to questions, conducting searches, making predictions, and carrying out requested behaviors [9]. LLMs are a type of generative AI. As large pre-trained models designed for adaptability, foundation models like GPT constitute AI architectures that are trained on vast data quantities.
This enables fine-tuning to a wide range of tasks including question answering (Q&A), sentiment analysis, information extraction, image captioning, object recognition, instruction following, and more. [10] Over the past few decades, in the construction, researchers have published articles on implementing AI and its subdomains to address industry-specific challenges. These studies demonstrate AI and machine learning applications across the construction management spectrum, including safety management [11, 12, 13, 14, 15], cost predictions [16, 17, 18, 19, 20], schedule optimization [1, 21, 22], progress monitoring [23, 24, 25, 26, 27], quality control [28, 29], supply chain management[30, 31, 32, 33], logistics management[34, 35], project risks management [36, 37, 38, 39, 40, 41], disputes resolution [42, 43], waste management [44, 45, 46], sustainability assessments[47, 48, 49, 50, 51], visualization [52, 53], and overall construction process improvements [1, 54, 55, 56, 57]. Also, there have been studies highlighting the integration of AI with Building Information Modeling (BIM) to enhance information extraction, streamline workflows, and optimize construction management efficiency [58, 59, 60, 61, 62]. Furthermore, some research studies also emphasized the impact of robotics and AI integration in construction such as improvements in construction quality, safety, project acceleration, and the mitigation of labor shortages [63, 64, 65, 66]. However, there is a noticeable gap in research on GenAI's applications, future opportunities, and adoption barriers specific to the construction industry. This gap is likely due to the recent and rapid emergence of GenAI as a novel technology for this field, resulting in a delay in research and implementation when compared to other industries that have already begun to explore and capitalize on the benefits of GenAI adoption [67, 68, 2, 69, 70, 4]. As the construction industry continues to deal with its unique challenges, there exists a vital need to bridge this research gap, uncover the untapped opportunities offered by GenAI, and address the barriers obstructing its adoption within the construction sector. With this background, in this study we seek to answer the two major research questions: (1) What are the current opinions and evidence about the opportunities & potential applications, and overall challenges related to GenAI technologies implementation in the context of construction?, and (2) What are the most important research questions to investigate in future related to GenAI technologies in the context of construction? The remainder of this paper is arranged as follows: Section 2 summarizes our methodology. Section 3 describes various GenAI model structures and presents related work in construction. Section 4 synthesizes opinions and evidence on opportunities, summarizes potential application areas, and visualizes conceptual implementation framework, and Section 5 examines key challenges, from technical limitations to industry challenges. The recommendations for implementation, and critical research questions to prioritize investigating GenAI's unknowns in construction will be discussed in Section 6. Finally, Section 7 concludes by spotlighting this study's significant findings. ## 2 Methodology To achieve our research goals, we followed a research framework as mentioned in Figure 1. 
Given the limited literature on generative AI in construction, we conducted a non-systematic review using keywords like "Generative AI AND Construction", "Generative AI", and "Large Language Models AND Construction" in Scopus and Google Scholar. We then used the snowball method, identifying key articles and mining their references and citations to find more relevant studies. In addition, to get the most up-to-date insights, we collected construction industry professionals' perceptions of generative AI from posts on LinkedIn over the three months leading up to August 20, 2023. Using three keyword combinations - "Generative AI in construction", "#generativai #construction", and "#generativeai #aec" - we identified 32 relevant opinions comprising a total of 63,778 words. Our analysis incorporated various formats including posts, comments, polls, and articles. Articles accounted for 48% of the data, comments 34%, posts 16%, and polls 6%. To analyze this data, we utilized programming-based text mining techniques including word cloud analysis to highlight the most frequent terms, sentiment analysis to categorize opinions as positive, negative, or neutral, and frequency analysis to summarize key themes throughout the corpus. With a literature review and industry perspectives, this paper outlines potential GenAI applications in construction. A conceptual implementation framework is then proposed for the identified applications, along with key implementation challenges.

Figure 1: Research Framework

Furthermore, we integrated the perspectives of the authors in this study. As experts in allied disciplines related to emerging technologies such as generative AI in the built environment, the authors contribute more than a decade of combined experience in areas including AI in construction, automation in construction, and generative AI specifically.

## 3 Various GenAI Model Structures and Related Work in Construction

In recent years, researchers have increasingly focused on modifying the learning algorithms of generative AI (GenAI) models to fit specific domains and tackle industry-specific problems. The choice of which generative AI model to use depends on the specific task at hand. Based on their generative mechanism, there are five major types of GenAI models [2, 71, 72]. Generative Adversarial Networks (GAN) are often used for image generation because they can create realistic images. Variational AutoEncoders (VAE) are commonly used for text generation, as they can produce clear, grammatically correct samples by learning the original distribution of the training data. Autoregressive models are best at text generation similar to their training data, since they generate text token-by-token while conditioning on previous tokens. Diffusion models can create smooth and natural image samples by starting with noise and reversing a diffusion process. Flow-based models learn transformations between data and latent representations, enabling diverse and creative image generation. In the following subsections, we will investigate the background of each model, explain their operational mechanisms including model architecture, underline any limitations, examine their relevance within the construction domain, if such use cases exist, and summarize the characteristics, advantages, and disadvantages of all models.

### Generative Adversarial Network

First introduced by Goodfellow et al. in 2014, GANs are a type of deep learning model comprised of two neural networks: a generator and a discriminator [73].
The generator is tasked with creating new synthetic data, while the discriminator attempts to differentiate between real and generated data. As shown in Figure 2 (a) [72], GANs are trained through an adversarial process, where the generator produces fake samples that are fed along with real samples into the discriminator. The discriminator then predicts which samples are real or fake, and loss gradients are calculated using a loss function to update both models. During training, the generator tries to fool the discriminator by improving its ability to generate realistic data [71, 74]. The format of the real and synthetic data samples can vary, as long as the neural network architectures are adapted accordingly. GANs have proven adept at generating images, video, and text that are remarkably close to actual data distributions. Their adversarial training process allows for modeling complex, multi-modal data. However, GAN training can be unstable, and finding the optimal balance between the generator and discriminator is challenging [75]. GANs have shown promise for a variety of applications in the construction industry. Researchers have demonstrated that GANs can generate plausible technical drawings, including floorplans, mechanical/electrical/plumbing diagrams, sectional views, and colored plans [72]. The adversarial training process allows GAN models to synthesize images that closely match the style and content of real architectural drawings across multiple domains. In another study, GANs have been applied to generate photorealistic renderings of building facades [76]. By learning from datasets of real facade images, GANs can produce synthetic views that are useful for tasks like style classification and image restoration.

### Variational AutoEncoders

Variational Autoencoders (VAEs) are a class of generative models specifically designed to acquire a data representation in a lower-dimensional latent space. This latent space provides a compressed yet essential feature representation of the original data [81]. Kingma and Welling introduced VAEs in 2013, establishing them as a pivotal model in the field [82]. VAEs consist of two intertwined and independently parameterized components: the encoder, responsible for recognition, and the decoder, focused on generation. These components work in tandem to support each other's operations [83]. The model comprising an encoder network \(Q_{\phi}(Z|X)\) and a decoder network \(P_{\theta}(X|Z)\) is illustrated in Figure 2 (b). VAEs are proficient in approximate inference and can be effectively trained using gradient descent methods. The encoder network, characterized by parameters \(\phi\), efficiently compresses data into the lower-dimensional latent space, mapping input data X to a continuous latent variable Z. Conversely, the decoder network, parameterized by \(\theta\), utilizes this latent variable to generate data, performing the reverse mapping from Z to reconstructed data. Both the encoder and decoder employ deep neural networks for their construction, with parameters \(\phi\) and \(\theta\), respectively [77].

Figure 2: GenAI Models

VAEs are trained using variational inference, enabling the acquisition of a probabilistic distribution over the latent space. This learned distribution empowers VAEs to generate new data samples that closely resemble the training data. VAEs exhibit versatility and find applications in several domains, including data compression, image synthesis, text generation, and discovery.
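As a concrete illustration of the encoder-decoder structure described above, a minimal VAE sketch in PyTorch might look as follows (layer sizes and the reconstruction loss are illustrative choices for data scaled to [0, 1], not tied to any construction dataset):

```python
# A minimal VAE sketch: encoder Q_phi(Z|X), decoder P_theta(X|Z), and the
# standard reconstruction + KL loss. Layer sizes are illustrative.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=128, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # mean of Q_phi(Z|X)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of Q_phi(Z|X)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```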
Because VAE imposes assumptions about the latent space, they are less flexible than other generative models in capturing complex real-world data distributions and data sequences [84], [85]. Like other industries, construction struggles with limited access to large datasets, a major obstacle for implementing deep learning models. While several studies have investigated big data challenges, solutions remain needed to compile requisite construction data. A recent study by Delgado & Oyedele [86] highlighted the approaches to addressing limited data including data augmentation through distortions and variants of original data, synthetic data generation with methods like VAE, and transfer learning. And, the study explored using VAE to expand financial datasets for construction projects, as financial data lacks the transformation invariance present in images, making AutoEncoders a promising technique. The results showed that VAE provided more robust outputs and better represented the non-linear correlations between the variables in the financial datasets. Another study by Balmer et al. [87] presented the use of VAEs for the conceptual design of pedestrian bridges from synthetically generated data, eliminating manual and time-consuming traditional design processes. Variational AutoEncoders show promise for generating new design and construction data to address limited datasets, and facilitating advanced deep learning applications. VAEs can be used to generate new data that is similar to existing data for defect detection, extract features from sensor data for predictive maintenance, model uncertainty in construction projects for risk assessment, and generate new designs for buildings or infrastructure. VAEs can learn from data at different levels of abstraction, depending on the specific task being performed. ### Autoregressive models An autoregressive model is a type of generative model that predicts the next token in a sequence, given the previous tokens. This means that the model is trained on a sequence of data, and it learns to predict the next token in the sequence based on the previous tokens[88]. One common architecture for an autoregressive model is a recurrent neural network (RNN) as shown in Figure 2 (c). The output at time 't' in an autoregressive model relies not only on the input 'x\({}_{i}\)' but also on prior inputs 'x' from preceding time steps. Nevertheless, in contrast to an RNN, the preceding 'x's are not conveyed through a concealed state; rather, they are directly supplied to the model as additional inputs [78]. Autoregressive generative models leverage the chain rule of probability to decompose the joint distribution of a sequence into conditional distributions over tokens based on their context [84], [89]. While autoregressive models are powerful density estimators, their sequential sampling is slow for high-dimensional data and requires a fixed ordering to decompose the data, which is not always straightforward [84]. A study by Elfahham [90] found the prediction of the construction cost index using the autoregressive time series method was most accurate compared to neural network and linear regression approaches. The autoregressive technique's specialized modeling of temporal dependencies allowed it to outperform. Autoregressive models have the potential to enable advanced analytics in construction by modeling temporal dependencies in historical data. Applications include forecasting construction costs, risk identification, schedule optimization, and automating tasks. 
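To illustrate the autoregressive idea in a construction setting, the sketch below fits an AR(p) model by least squares to a synthetic cost-index series and forecasts a few steps ahead; the data and lag order are illustrative, not those of the cited study:

```python
# Minimal AR(p) sketch: fit y_t = c + a1*y_{t-1} + ... + ap*y_{t-p} by least
# squares and forecast future values of a (synthetic) construction cost index.
import numpy as np

def fit_ar(series, p=2):
    # Lagged design matrix: column k holds y_{t-1-k} for t = p .. n-1.
    X = np.column_stack([series[p - k - 1:len(series) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])   # intercept column
    coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coef

def forecast(series, coef, steps=3):
    p = len(coef) - 1
    hist = list(series)
    for _ in range(steps):
        lags = hist[-p:][::-1]                  # y_{t-1}, ..., y_{t-p}
        hist.append(coef[0] + np.dot(coef[1:], lags))
    return hist[-steps:]

cost_index = np.array([100, 102, 105, 107, 111, 114, 118, 121, 126, 130], float)
coef = fit_ar(cost_index, p=2)
print(forecast(cost_index, coef, steps=3))
```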
These models capture relationships over time to predict future outcomes and empower data-driven decision-making.

### Diffusion Models

Diffusion models, a type of GenAI, produce high-quality synthetic images and videos by learning to reverse an artificial diffusion process. This process involves gradually adding Gaussian noise to training data over multiple time steps, following a predefined schedule that gradually masks the original data [7], as shown in Figure 2 (d) [79]. During training, the model learns to take a noisy sample from an intermediate point within this noise schedule and subsequently predict a less noisy version of the data from the previous time step. By repeatedly applying this de-noising prediction across many time steps, the model can start from pure noise and reverse the diffusion back to a realistic generated image [91]. Though sampling is relatively slow due to the multiple required predictions, diffusion models can generate sharp and coherent outputs, especially for image generation. Their ability to condition the sampling makes them versatile and broadly applicable across computer vision tasks. Popular GenAI models like DALL-E2 and Imagen are based on the diffusion model concept [7]. Some studies underline the major limitations of the diffusion models such as poor time efficiency during inference requiring many evaluation steps, and high computational expense for the iterative de-noising [92], [93].

### Flow-based Models

Flow-based models represent a category of GenAI models that generate synthetic outputs by framing the data generation process as a continuous normalizing flow. They work by taking noise vectors and repeatedly transforming them through a series of bijective functions, each designed to bring the distributions closer to the target data distribution. Unlike other generative models, the flow model only uses a reversible encoder to complete the model's construction, which makes the design more delicate [2] as shown in Figure 2 (e) [90]. Through these transformations, flow models can convert noise inputs into realistic generated samples. The origin of flow-based generative models dates back to the work of Dinh et al. in 2014 [94]. These models offer various advantages, including precise latent-variable inference, accurate log-likelihood evaluation, and efficiency in both inference and synthesis processes [95]. These models were further refined and extended by Dinh et al. in 2016 [96]. The flow-based models have some challenges in terms of training complexity due to the need for inverting networks and computing determinants, which creates a primary drawback. Table 1 provides a summary of GenAI model types, their characteristics, advantages, and disadvantages. It helps in understanding and selecting the suitable generative model for specific applications.

\begin{table}
\begin{tabular}{c c c c}
**GenAI Model Type** & **Characteristics** & **Advantages** & **Disadvantages** \\ \hline
Generative Adversarial Network (GAN) & Two neural networks, a generator and a discriminator, compete with each other to generate data. & Generates high-quality data that is indistinguishable from real data. & Unstable to train; difficult to find the right balance between the generator and discriminator. \\
Variational AutoEncoder (VAE) & Encodes data into a latent space and then decodes it back into the original space. & Generates data that is similar to the training data. & Less flexible than GANs; lacks the ability to tackle sequential data. \\
\end{tabular}
\end{table} Table 1: Summary of GenAI Models

## 4 Opportunities of GenAI in Construction

### Current GenAI Applications and Developments in Construction

Recent studies using LLMs to solve construction-related problems demonstrate the long-term opportunities of GenAI in the industry. In 2023, Zheng and Fischer developed a BIM-GPT integrated framework [97] to retrieve, summarize, and answer questions from the BIM database, overcoming the challenges due to the extensive engineering required to automate complex information extraction from rich BIM models. By prompting the LLM appropriately, BIM-GPT shows how advanced integration can extract value from construction data assets. In the early days, such a pioneering idea laid the groundwork for GenAI in the AEC domain. A recent work by Prieto et al. in 2023 [98] shows the potential for large language models to automate repetitive, time-intensive construction tasks. Their study tested using ChatGPT to generate coherent schedules that logically sequence activities and meet scope requirements. Hasan et al. proposed a novel method for classifying injury narratives to identify risks and hazards in construction by fine-tuning bidirectional encoder representations from transformers (BERT) sentence-pair models [99]. The BERT-based approach was also utilized for the automatic detection of contractual risk clauses within construction specifications [100]. A study indicated that limited language generation applications in construction despite extensive documentation such as drawings, reports, and contract documents, cannot feed intelligent systems, though they contain critical references for decisions. Generative AI-like technologies such as ChatGPT and BARD can enable automated synthesis of construction documents and question answering, overcoming analog barriers to unlock the value in this data [101]. In construction automation, the major challenge in maximizing robotic systems is creating efficient sequence planning for construction tasks. Current methods, including mathematics, and machine learning, have limitations in adapting to dynamic construction settings. To address this, a recent study introduced RoboGPT, leveraging ChatGPT's advanced reasoning for automated sequence planning in robot-based construction assembly [102]. The recent CREATE AI Act authorizing the National Artificial Intelligence Research Resource (NAIRR) indicates growing government interest in expanding AI development. By providing open access to key AI resources, NAIRR aims to catalyze innovation across sectors while also serving as a testbed for trustworthy AI practices. Though in the early stages, this initiative represents an important step toward equitable AI advancement by connecting public infrastructure to circulate capabilities more widely through academia and industry [103]. Given the rapid development and deployment of LLMs in recent years, comparing LLMs is useful for tracking progress in this fast-moving field and understanding tradeoffs between model scale, and accessibility to provide an at-a-glance overview for researchers and practitioners. The training parameter size indicates the scale and potential capability of LLMs, giving users insight into model strength, and infrastructure requirements. Bigger models with more parameters tend to be more powerful, generally costlier and need more computational resources.
The LLMs include both open-source and closed-source approaches, each with distinct implications for access, innovation, and collective development. On one hand, open-source large language models promote transparency by providing public access to critical model assets like source code, training data, and model parameters. With freely available implementation details, open source fosters collaboration as developers and researchers can contribute to enhancing and customizing the models to align with specific needs. However, hosting and maintaining accessible open-source models incur infrastructure costs. In contrast, closed-source LLMs are proprietary models restricted to license-holder organizations. Without access to the underlying code, the specific details of the architecture, and training data, the algorithms of closed-source LLMs may not be known to the public. While commercial closed-source models may ensure consistent uptime through dedicated cloud resources, their lack of public transparency limits external innovation opportunities. At the same time, closed-source models carry the advantage of preserving training data privacy. Table 2 summarizes the top ten LLMs currently available, and offers insights for developers and researchers to evaluate both open-source and closed-source options against capability and release year when selecting a model aligned with their priorities and constraints.

\begin{table}
\begin{tabular}{l l l l l l}
 & **LLM** & **Developed by** & **Training Parameter Size (Billion)** & **Release Year** & **Access** \\ \hline
**1** & GPT-4 & OpenAI & 1000+ & 2023 & Closed \\
**2** & PaLM & Google AI & 540 & 2022 & Open \\
**3** & MT-NLG & Nvidia & 530 & 2021 & Closed \\
**4** & Llama 2 & Meta AI & 500 & 2023 & Open \\
**5** & Gopher & DeepMind & 280 & 2021 & Open \\
**6** & GPT-3.5 & OpenAI & 175 & 2022 & Closed \\
**7** & GPT-3 & OpenAI & 175 & 2020 & Closed \\
**8** & OPT & Meta AI & 175 & 2022 & Open \\
**9** & LaMDA & Google AI & 137 & 2022 & Open \\
**10** & GPT-NeoX & Microsoft & 100 & 2023 & Closed \\
\end{tabular}
\end{table} Table 2: Current Ten Largest LLMs [94], [95], [96], [97], [98]-[102]

### What Opportunities are Perceived by Construction Industry Practitioners?

To gain insights into construction industry professionals' perspectives on GenAI, various text analytics techniques were applied. A word cloud uncovered frequent key terms, sentiment analysis indicated overall sentiment, and an opportunities list synthesized potential application areas. This comprehensive text data analysis provides a picture of discussion topics, attitudes, and outlooks regarding the potential of integrating GenAI into the construction industry. A word cloud visualization of the LinkedIn data provides an overview of frequently mentioned terms related to generative AI in construction (Figure 3). A word cloud provides a visual representation of textual data, serving as an impactful tool for text analysis [113, 114]. We preprocessed the data by cleaning and tokenization to improve quality. Text cleaning involved formatting adjustments to improve computational readability. Tokenization segmented the text into discrete, meaningful units by isolating individual words and phrases. We then utilized the Natural Language Toolkit (NLTK) in Python to remove generic stop words and distill the corpus down to substantive terms [115, 116].
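A minimal sketch of this cleaning, tokenization, stop-word removal, and word-cloud step is shown below (the input file name and parameter choices are illustrative; the actual corpus is the LinkedIn data described above):

```python
# Minimal sketch of the preprocessing, frequency analysis, and word-cloud
# steps with NLTK and the wordcloud package (file name is a placeholder).
import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from wordcloud import WordCloud

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

raw_posts = open("linkedin_genai_posts.txt", encoding="utf-8").read()

# Text cleaning: lowercase and keep alphabetic characters only.
cleaned = re.sub(r"[^a-z\s]", " ", raw_posts.lower())

# Tokenization and stop-word removal.
stop_words = set(stopwords.words("english"))
tokens = [t for t in word_tokenize(cleaned) if t not in stop_words and len(t) > 2]

# Frequency analysis and word-cloud generation.
print(nltk.FreqDist(tokens).most_common(20))
WordCloud(width=800, height=400).generate(" ".join(tokens)).to_file("wordcloud.png")
```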
This shaped a refined dataset with reduced noise, ready for analysis. The results summarize a diverse range of terms that capture the overarching themes and trends within the dataset. The most dominant word is "ai", highlighting the increased attention on artificial intelligence technologies broadly. Notably, "generative" appears with high frequency, demonstrating awareness of this specific AI subdomain. Other common terms like "design", "data", "project", and "technology" indicate a focus on potential applications in construction processes. "ChatGPT" arises fairly often as well, suggesting this popular demo has significantly shaped industry impressions of generative AI capabilities and potential applications in construction. Numerous terms point to opportunities like "productivity", "designs", "tools", and "processes". Meanwhile, words such as "help", "need", "could", and "future" convey a sense of anticipation and speculation around GenAI's developing impacts. Taken together, the word cloud provides a snapshot of how construction professionals are engaging with the emergent GenAI phenomenon, highlighting key opportunities while also indicating uncertainty about optimal applications and next steps. Furthermore, it is important to uncover the underlying sentiments conveyed in the text. Sentiment analysis, also called opinion mining, involves using computational methods to determine the opinions, attitudes, and emotions expressed toward a subject [114, 117, 118]. Sentiment analysis classifies opinions at three levels: document level categorizes the sentiment of entire documents; sentence-level determines the sentiment of each sentence; and aspect-level examines deeper to categorize sentiment towards specific entity aspects [119]. In our study, we utilized the TextBlob library to quantify sentiment polarity scores, ranging from -1 to 1, revealing positive, negative, or neutral sentiment. Through preprocessing, tokenization, and model-driven analysis, we categorized each text segment. Our sentiment analysis yielded a notable distribution: predominantly positive opinions, with small and roughly equal shares of negative and neutral sentiment. This outcome highlights the overwhelmingly positive sentiment inherent within the analyzed corpus about GenAI in construction. Visualization using a bar chart showed proportions of positive, negative, and neutral sentiments as shown in Figure 4.

Figure 3: Word Cloud Analysis of Industry Practitioners' Opinions

Based on the analysis of people's perspectives, this study synthesizes the key themes regarding the potential opportunities of Generative AI in construction as mentioned in Table 3. First, we identified the main points and common ideas expressed across multiple perspectives in the body of the text through careful reading and analysis. Second, we synthesized these main points into a few key, overarching themes that capture the essence of the perspectives. There is consensus around Generative AI's promise to drive greater efficiency, innovation, and data-driven decision-making across the construction lifecycle. However, viewpoints diverge regarding the scale and scope of GenAI's applications, as well as the need to thoughtfully manage its integration to maximize benefits versus risks.
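A minimal sketch of the TextBlob-based polarity scoring is given below (the example opinions and the thresholds used to map polarity scores to categories are illustrative assumptions, not the study's exact settings):

```python
# Minimal sketch of sentiment classification with TextBlob; thresholds and
# example opinions are illustrative.
from textblob import TextBlob

def classify(text: str) -> str:
    polarity = TextBlob(text).sentiment.polarity  # ranges from -1 to 1
    if polarity > 0.05:
        return "positive"
    if polarity < -0.05:
        return "negative"
    return "neutral"

opinions = ["Generative AI will boost productivity on site.",
            "I worry about data privacy with these tools."]
labels = [classify(o) for o in opinions]
print({lab: labels.count(lab) for lab in set(labels)})
```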
Figure 4: Sentiment Analysis of Industry Practitioners’ Opinions \begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline \multicolumn{1}{|c|}{Perspectives: Main Points} & Key Theme \\ \hline Applying GenAI for construction documents management & \\ \hline Enterprise search & \\ \hline Data management ultimately offers time-saving benefits and increased productivity when effectively leveraged & Construction Documents and Data Management \\ \hline For example, Integrating GenAI in scheduling to identify the most effective schedule path to follow. & \\ \hline Can help improve conversations and collaboration between project stakeholders such as contractors, designers, and owners. & Question Answering (QnA): \\ \hline Stakeholder demands for faster, affordable, and sustainable builds create opportunities for GenAI and automation to address construction’s unique challenges such as repetitive tasks and unsafe work environments. & \\ \hline AI-generated designs and plans reduce manual work, enhancing data systems for faster payments, fewer errors, and better decisions. & AI-Generated Designs \\ \hline \end{tabular} \end{table} Table 3: Overarching Themes on Opportunities Generative AI increases predictive capabilities, leveraging historical data for accurate project forecasting, forecasting of trends, risk assessment, and opportunity identification. Incorporating GenAI streamlines the synthesis of project data and provides avenues for automating intricate information management, such as contract-related data, thereby enhancing decision-making during the initial phases of construction. AI and modern innovations in construction address labor shortages, cost escalation, and environmental concerns, positioning the industry for a transformative future. Integrate materials assessment AI tools to support informed materials selection for improved sustainability, maximizing de-carbonization. The development of GenAI, like ChatGPT, enhances human capabilities rather than replacing jobs. ### Potential Applications of GenAI in Construction Generative AI shows huge potential to transform information workflows in architecture, engineering, and construction. Advanced LLMs can parse volumes of unstructured data to extract insights with new levels of ease and speed. For instance, by analyzing building codes, generative models can identify relevant requirements and produce summarized, project-specific reports for architects. This automates laborious manual reviews. Similarly, contractors can input design specifications into AI systems to automatically compile cost and schedule estimates by associating 3D models with external databases. Many simple properties like material name, soil type, concrete strength, roof slope, furniture suppliers, last changed by, as well as complex analytical queries become accessible to stakeholders through AI's natural language capabilities. Whether generating code requirements from regulations, connecting designs to cost data, or retrieving wind load assumptions, GenAI allows seamless information flow between physical and virtual manifestations of the built environment. The power of language models lies in their ability to comprehend, reason about, and generate knowledge. As explained through these use cases, GenAI can improve project understanding and decision-making by unlocking information trapped in unstructured data. The GenAI holds vast potential to increase productivity and collaboration in the AEC industry. 
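As an illustration of the natural-language querying described above, the following sketch sends project data and a question to a hosted LLM (shown with the legacy openai 0.x Python client; the model choice, prompt, and property values are hypothetical):

```python
# Illustrative sketch of querying design/project data in natural language
# through an LLM. All project values below are made up for the example.
import openai

openai.api_key = "YOUR_API_KEY"

context = ("Roof slope: 4:12; Concrete strength: 5000 psi; "
           "Soil type: dense sand; Wind load assumption: 115 mph (ASCE 7).")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Answer using only the project data provided."},
        {"role": "user",
         "content": f"Project data: {context}\nWhat wind load assumption was used?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```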
In this section, based on lessons learned from literature, peoples' perspectives, and building lifecycle tasks identified[120, 121, 122, 123], we provide the potential application examples across the project lifecycle, detailing beneficiaries and appropriate GenAI model types for each as shown in Table 4. Clearly defining the output modality generated by each AI system, whether text, image, 3D, video, or task, simplifies technical requirements for implementation. Readers can identify suitable architectures by mapping desired functionality to output types. In addition, clustering potential applications by common model families also enables knowledge transfer across use cases and highlights productive pairings of activities with generative techniques. In addition, the popular model examples of each type at the end of the table expedites the process of model selection, allowing researchers and practitioners to make quicker decisions customized to their specific application requirements and objectives. \begin{table} \begin{tabular}{|l c c c|} \hline \hline **Phase** & **Potential GenAI Application** & **Main beneficiary** & **Model type based on the output** \\ \hline Feasibility & * To generate a feasibility report & Stakeholders & text-to-text \\ & * To generate a project initiation document (PID) & Owner & text-to-text \\ & * Interactive Q&A chatbot to refine PID details & Owner & text-to-text \\ & * To create visual representations of data such as site & & \\ & conditions, traffic patterns, zoning laws, etc. & & \\ & * To predict project milestones and success criteria for & & \\ & different phases of the project & Stakeholders & text-to-text \\ & * To create contracts and agreements & Stakeholders & text-to-text \\ \hline Design & * To generate multiple conceptual designs based on the & & \\ & program requirements and communicate with the architect & Architect & text-to-task \\ & * Animated 3D visualization of organization chart and & & \\ & responsibilities & Stakeholders & text-to-3D \\ & * To automatically generate detailed cost estimation report & Owner & text-to-text \\ & * To associate cost/time data with building design & Contractor & text-to-text \\ & * To extract structural design requirements & Engineer & text-to-text \\ & * To extract MEP design requirements & Engineers & text-to-text \\ & * To generate a permit application draft & Architect & text-to-text \\ & * To generate a risk analysis report & Stakeholders & text-to-text \\ & * To develop a design communication Q&A chatbot & Architect & text-to-text \\ & * To compare the design against the building code & & \\ & requirements & Architect & text-to-task \\ & * To perform complex design checking (routing analysis, & & \\ & etc.) 
& Architect & text-to-task \\ & * To select the most suitable contractors based on project- & & \\ & specific criteria, performance histories, and contractual & & \\ & considerations & Owner & text-to-text \\ \hline Procurement & * To visualize the material delivery schedule & Logistics team & text-to-3D \\ & * To generate a request for a quotation & Procurement team & text-to-text \\ & * Identification of optimal supplier based on variables & Project manager & text-to-text \\ & * Streamline subcontractor bidding and selection & Contractor & text-to-text \\ & * Automated inventory management & Procurement team & text-to-text \\ \hline Construction & * To extract project information from construction & & \\ & documents such as dimensions, materials used, & & \\ & responsible person, point of contact, etc. & & \\ & * To generate new documents. Examples- proposals, & & \\ & reports, etc. & & \\ & * To classify \& cluster documents based on project types, & \\ & internal departments, sub-contractors, project phases, & & \\ & document types, materials, supply chain, etc. & Contractor & text-to-text \\ & * Generating code to automate tasks. & Contractor & text-to-text \\ & * Translating documents into different languages. & Contractor & text-to-text \\ \hline \hline \end{tabular} \end{table} Table 4: Potential Applications of GenAI in Different Phases of Building Lifecycle * To optimize cost estimation workflow * To help progress tracking and identify safety concerns with drone integration * To provide customized alerts and notifications on changes * To help quality control such as comparing completed tasks to project specifications to identify defects and deviations * To generate an optimal schedule path * Searching specific information on the data lake, shared folder, project-specific information, etc. * To generate targeted safety training materials * To generate targeted trade training materials & To generate targeted trade training materials & To create a knowledge management system using a Q&A Maintenance & To create a work order from logs & Generative design of replacement parts & To generate maintenance schedule & predictive & To generate maintenance & Facility manager & To generate an energy consumption report & Chabot to assist occupants [MISSING_PAGE_POST] & may include privacy constraints and noise reduction to enhance the model's performance. The resulting fine-tuned model has knowledge customized for the construction domain. Finally, the adapted model is deployed through careful prompt engineering to query its capabilities. Users provide prompts and obtain answers or visualizations based on the fine-tuned model's specialized intelligence. This conceptual framework for fine-tuning LLMs bridges the gap between pre-trained models and enterprise-specific applications, promoting adaptability in a wide range of domains. ## 5 Challenges of GenAI Implementation in Construction Generative AI adoption across industries is rapidly growing, driven by the immediate integration of new technologies like ChatGPT intensifying competitive pressures on organizations while this novelty presents new risks[69]. Like other industries, the integration of GenAI in construction is associated with complex challenges. Therefore, it is important to understand these challenges before applying the proposed conceptual framework. 
These challenges comprise various areas, including domain knowledge, the potential for hallucinations in AI-generated outputs, the crucial aspect of accuracy in AI predictions, the generalizability of AI models to new situations, the need for frequent model updates and interpretability, the cost implications of deploying generative AI, and the ethical considerations around data privacy, bias, and accountability as shown in Figure 6. Furthermore, the construction sector faces specific regulatory hurdles related to the responsible use of GenAI, prompting the need for AI skill development and training, liability determination, copyright and intellectual property concerns, and certification protocols. Addressing these multidimensional challenges requires a proactive and collaborative effort involving industry experts, policymakers, and AI researchers to ensure the safe and effective implementation of GenAI in construction practices. Figure 5: A Conceptual GenAI Implementation Framework ### Domain knowledge The construction industry poses unique difficulties in applying GenAI due to its vast domain knowledge requirements. Capturing the industry's complicated technical engineering expertise across structural, mechanical, electrical, plumbing, and project management disciplines remains challenging. Construction also relies heavily on physical situational awareness and spatial reasoning when manipulating materials and navigating dynamic job site capabilities stretching the limits of AI [36]. Consequently, construction's vast knowledge context hinders GenAI's ability to extract meaningful structure-activity relationships from industry data. However, promising avenues exist to address these knowledge gaps. For instance, large language models like GPT require fine-tuning and contextual input tailored to the construction domain in order to efficiently generate industry-specific insights [124]. Hybrid reasoning techniques combining top-down ontological, symbolic knowledge with bottom-up neural networks can be beneficial. Therefore, advancing construction-focused GenAI requires incorporating domain knowledge more seamlessly into model architecture and training. This domain knowledge infusion remains an open research area for unlocking GenAI that can meet construction's complex and ever-changing demands. ### Hallucinations Generative artificial intelligence systems face challenges with hallucination, generating convincing but false outputs due to limited knowledge [70]. These hallucinations often result from factors such as inadequate or noisy training data, a lack of contextual understanding, or imposed constraints. GenAI systems are particularly notorious for producing aesthetically pleasing yet inaccurate predictions, often with an unwarranted high level of confidence. For instance, in the context of a GenAI scheduling system, hallucinations could lead to the generation of inaccurate timelines for critical paths. In construction-focused AI, which lacks the capability to perceive and validate real-world complexities directly, there is a risk of generating hallucinatory outputs that are apart from reality. To mitigate these potentially unsafe hallucinations, several strategies can be employed. These include the use of high-quality training data, a strong grounding in engineering and construction knowledge, simulated testing to validate predictions, continuous monitoring of uncertainty, and the introduction of human oversight throughout the AI's decision-making processes. 
Figure 6: Challenges of GenAI in Construction ### Accuracy Ensuring accuracy is a major challenge for GenAI, as inappropriate outputs can lead to big failures. Large language models like GPT-3 show these limits, relying on minimal training data from unverified sources [125]. Lack of fundamental construction engineering knowledge, such models obtain only superficial statistical associations rather than causal basics, risking construction decisions through misguided outputs. However, techniques exist to enhance output validity. Construction-specific fine-tuning with validated datasets can align models to the complexities of the built environment. Uncertainty indicators can flag doubtful predictions needing additional verification. Simulated testing enables early correction of inaccuracies before real-world implementation [126]. Further, prompted self-improvement may allow models to iteratively refine their outputs [127]. Overall, connecting robust datasets, uncertainty metrics, simulated validation, and self-correction procedures can introduce proper engineering causality over statistics, improving construction GenAI's accuracy. Advancing fundamental reasoning capabilities remains critical for developing generative intelligent systems that meet the construction industry's need for reliable automation and decision-making. ### Generalizability Generalizability refers to the ability of a generative AI model to extend its learning beyond the specific datasets and distributions it was trained on. A GenAI system utilizing historical data may encounter issues with poor generalization, where the knowledge derived from training data in the in-sample period does not effectively apply to new, out-of-sample data in testing. Even if a model fits the training data well, its poor generalization is unusable for addressing real-world decision-making challenges [128]. For example, a model pre-trained on fixed historical data may fail to account for unexpected changes like weather delays, labor availability, or design changes. Models trained on a limited dataset, unfamiliar inputs, and lack of a casual understanding mechanism in the model are the major challenges that contribute to the generalizability problem. Collecting diverse training data and testing models on novel inputs helps the construction GenAI better generalize [129]. Leveraging simulation, causal reasoning, and common-sense checks also improves generalization by teaching strong process knowledge. And, continual learning enables adaptation to new data over time. Together these solutions improve generalization. ### Model Updates and Interpretability Model updating is a key challenge for deploying generative AI in construction. Training data can quickly become outdated as materials, methods, and regulations frequently change. Without recent data, models will miss new innovations and provide unreliable guidance. For example, an AI chatbot trained before the pandemic may overlook the impacts of supply chain disruptions and labor shortages. Regularly retraining models on new data is essential, but costly and complex at scale. Potential solutions include modular model architectures to simplify updating, simulations to generate fresh synthetic training data, and lightweight model adaptation techniques like transfer learning. However, balancing model accuracy and update will remain an obstacle. User oversight and paired human-AI collaboration are recommended when utilizing construction generative AI. 
In addition, another limitation of deep generative models is their black-box nature - the internal workings are not transparent or easily interpretable. This is problematic for critical construction applications where explainability is important [130], [131]. The opaque processes by which generative AI systems produce outputs create uncertainties around reliability and trustworthiness. Users cannot validate which parts of the model's knowledge base are being leveraged. Therefore, more research is needed to develop interpretable model architectures and training techniques, making the decision-making logic clear. Progress in the construction of explainable AI will be key to wider adoption by explaining the reasoning behind outputs and establishing confidence in the technology. ### Cost Training and operating generative AI models require significant costs, presenting challenges for widespread construction industry adoption. The training phase alone demands massive computing resources and time to produce capable generative capacity. Ongoing operating expenses also accumulate from the energy required to run large models and web-serving infrastructure [2]. For example, monthly subscription fees to access ChatGPT currently start at $20 with traffic limitations. In addition, utilizing GPT models to develop conversational apps produces additional usage costs billed per generated token [124]. Initial application development leveraging these models is expensive upfront too. The considerable resource demands and ongoing costs act as barriers, especially for smaller construction companies with limited budgets [132]. Further optimizations to reduce the computing power, energy, and data needs of generative models would support feasibility. More cost-effective scaling solutions tailored for construction use cases could also expand access. Overcoming these cost challenges requires a well-balanced approach, considering the long-term benefits of GenAI integration against the upfront investments needed to tie together its capabilities effectively. ### Ethical Challenges The adoption of generative AI models also raises ethical issues around data privacy, bias, and accountability that the construction industry must proactively address. These data-intensive models can utilize sensitive project information and personal details lacking proper consent, presenting risks of confidentiality breaches and intellectual property violations. Researchers and the industry should implement data privacy safeguards and anonymization measures. For example, OpenAI's ChatGPT explicitly acknowledges its potential to generate inaccurate information about individuals, locations, or facts, underlining the need for researchers to be aware of this limitation and ethical challenges when incorporating ChatGPT in scientific works. This includes essential considerations regarding data privacy, confidentiality, and informed consent [133]. The handling of sensitive data by ChatGPT introduces vulnerabilities that may be exploited for unauthorized access or misuse, thereby posing substantial privacy and security risks [69]. Also, the adoption of LLMs raises concerns about creating potential biases[134]. The utilization of confidential construction data like cost, schedule, safety records, contract documents, and BIM model information may potentially trespass upon intellectual property rights and give rise to ethical and legal difficulties. 
Therefore, establishing clear accountability for errors or accidents caused by AI-generated outputs remains a complex issue requiring careful consideration in order to develop ethically responsible frameworks for implementing generative AI within the construction industry.

### Construction Regulatory Challenges

In the construction sector, the integration of GenAI poses several complex regulatory challenges. Successful implementation requires AI understanding, skillsets, and training so that industry experts can properly utilize these models. One of the major skills required is proficiency in "prompt engineering," optimizing prompts to maximize model efficacy [124], [135]. However, overreliance on automation risks a reduction of human expertise and creates the potential for errors in cases of AI malfunction or erroneous information provision [136]. As generative models become capable of autonomously producing comprehensive deliverables, for example a detailed site safety plan, a serious concern emerges regarding accountability in the event of a failure. Determining liability when something goes wrong becomes a complex matter. Who bears responsibility in the event of a failure: the developer of the AI system, the construction company implementing it, or the safety manager who approved the final AI-generated plans? Additionally, the independent origination of new content by AI raises questions about copyrights and intellectual property; the ownership of AI-generated content requires a clear legislative definition. To maintain expertise and safety standards, construction companies could introduce certification protocols for AI training and deployment. Moreover, close cooperation between industry experts, policymakers, and AI researchers is essential to navigate these regulatory challenges.

### What Challenges are Perceived by Construction Industry Practitioners?

The challenges obstructing GenAI adoption in construction involve both technological and human factors. A recent LinkedIn poll of 48 AEC professionals investigated the frequency of generative AI usage in their work, finding that 40% have never tried it, 33% use it sometimes, 19% use it often, and 8% use it all the time [137]. This reveals that most AEC professionals are still in the early stages of generative AI adoption, though a segment has integrated these tools into their regular workflows. Another poll of 16 AEC professionals examined whether their organizations have policies regarding the use of commercial GenAI tools, finding that 63% do not, 31% do, and 6% are unsure [137]. This indicates that most companies currently lack formal guidelines on GenAI usage, presenting an opportunity to implement policies and controls given the rise of technologies like ChatGPT. The analysis of practitioner perspectives surfaces key themes around security, governance, awareness, and adaptation, as listed below. Construction companies must proactively address these multifaceted challenges to unlock GenAI's potential. This requires strategic approaches customized to the construction industry's distinct needs amid rapid innovation. A thoughtful, industry-centered path can help overcome obstacles and realize GenAI's potential.

* **Proactive Approach Needed**: The implementation of GenAI in construction requires a proactive approach to security and governance. Addressing these challenges is vital to unlock the potential for improved productivity and creativity during the industry's technological transformation.
* **Strategic Adoption**: The adoption of GenAI within construction companies requires a strategic approach to manage security, risks, and governance effectively. Practical procedures allow responsible and ethical utilization while maintaining standards of security, safety, and compliance. Guidance from construction technology experts can support the setup of a successful generative AI program.
* **Implementation Challenges**: GenAI systems can support a comprehensive analysis of trade-offs in construction projects, including physical, financial, and sustainability aspects. However, addressing implementation challenges, such as raising awareness and understanding, is essential to drive broader adoption and establish convincing business cases for technology investments.
* **Limited Awareness**: The construction industry faces difficulties in building an effective business case for investments in software, hardware, training, and infrastructure due to limited awareness. Challenges in accessing and sharing big data further hinder the effectiveness of GenAI models. Moreover, regulatory and legal complexities, particularly concerning intellectual property rights, add compliance concerns when deploying GenAI for visualizations or renderings.
* **Expectation of Mature Technologies**: The construction market expects mature technologies ready for immediate use, focusing on solutions tailored to the industry's distinctive challenges. However, this expectation motivates a deeper exploration of automation and AI in construction and highlights the need for specialized solutions.
* **Risk Mitigation and Ethical Governance**: To effectively implement GenAI in the construction industry, it is important to apply comprehensive risk mitigation strategies. These include measures such as data encryption, strict access controls, and secure data storage practices. Furthermore, to safeguard AI-generated outcomes, addressing intellectual property concerns through well-defined guidelines and contractual agreements is essential.
* **Novelty Challenge**: Another challenge in applying GenAI lies in its novelty. For example, many traditional schedulers are familiar with long-standing tools and may hesitate to embrace newer, more advanced solutions.

## 6 Recommendations and Future Directions

In Section 4.3, we explained various potential applications that serve as a foundation for future research directions. We have structured this section into two subsections: 1) recommendations, covering short-term and long-term adoption strategies, and 2) future research directions, listing major future research questions. These subsections indicate directions for studies aimed at facilitating the effective integration of GenAI within the industry.

### Recommendations

We recommend the following short-term and long-term strategies for adopting GenAI in construction:

* **Fine Tuning LLMs**: The recommended initial approach for integrating GenAI into the construction industry involves fine-tuning available, powerful pre-trained language models with construction-specific data (an illustrative fine-tuning sketch is included at the end of this section). Construction companies have the opportunity to curate datasets comprising resources such as design documents, building codes, contractual documents, technical documents, and BIM data. This data helps inform the selected LLM about the specialized vocabulary and contextual nuances of the construction domain.
Starting with modest datasets and focusing on well-defined tasks can simplify prompt engineering and help adapt GenAI systems to construction needs.
* **Human Oversight**: While capable of automating tasks, GenAI systems still require human oversight to validate quality and accuracy. Model outputs should be reviewed, and feedback should be provided to improve performance. Human-in-the-loop approaches that combine AI generation with human judgment can therefore leverage the strengths of both.
* **Evaluating Business Impact**: It is recommended to assess the business impact of GenAI using experiments that measure key performance indicators. Pilot studies could evaluate a model's influence on metrics such as productivity, cost, time, and risk. Continued measurement as the model integrates more data provides insight into return on investment and helps quantify the benefits of GenAI investment for the organization.
* **Developing Custom LLMs**: In the long run, collaborative efforts between the AEC industry and researchers can focus on designing specialized language model architectures for construction-related tasks. This involves compiling extensive datasets from the AEC domain. A fundamental step is to establish a secure central data repository, with contributions from construction companies and consultants. Training models on this data, with the support of AI researchers, will embed domain expertise and foster innovation.

### Future Research Directions

We present the following major future research questions for adopting GenAI in construction:

* How can we develop GenAI models that can accurately extract detailed project information from a variety of construction documents and BIM models? This could help improve productivity.
* What techniques can enable GenAI models to automatically generate feasible building designs based on requirements? Generative design could yield time and cost savings.
* How can we build AI assistants that can hold natural conversations with human stakeholders to refine project details, requirements, and reports across the different phases of the building lifecycle? Conversational AI could help project stakeholders.
* What GenAI techniques can enable the automated generation of 3D visualizations, videos, and images from text descriptions? This could support better communication.
* How can we develop AI systems to accurately evaluate construction progress, safety, and quality using visual data? Computer vision integration could be key to achieving this.
* What GenAI techniques can optimize construction scheduling, logistics, and cost estimating? This could help in construction project management.
* How can we build AI assistants that can understand BIM model information, extract that information, and update BIM models based on prompts? This could help accelerate the BIM execution process for general contractors.
* How can we integrate robotics with natural language AI to enable easy human-robot interactions? This could enhance the usability and accessibility of robotic systems, leading to improved collaboration.
* What machine learning techniques can support accurate automatic code generation for construction tasks and changes in scope? This could help track changes and troubleshoot issues.
* How can we build GenAI models that learn continuously from construction data to improve predictions and decision-making over time? This could contribute to the overall success of an organization and to future project forecasting.
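Returning to the fine-tuning recommendation above, the sketch below outlines one way a construction company might adapt an open pre-trained causal language model to its own document corpus using the Hugging Face `transformers` and `datasets` libraries. The base model name, corpus path, output directory, and hyperparameters are illustrative assumptions for demonstration, not values prescribed by this study.

```python
# Minimal sketch: fine-tuning an open pre-trained causal LM on construction documents.
# Assumptions: plain-text exports of specs, codes, and contracts live under
# ./construction_corpus/; "gpt2" stands in for any suitable open base model.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # illustrative choice of base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the curated construction corpus as a plain-text dataset.
dataset = load_dataset("text", data_files={"train": "construction_corpus/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Causal-LM objective: the collator derives labels from the input tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="construction-llm",   # hypothetical output directory
    num_train_epochs=3,
    per_device_train_batch_size=4,
    learning_rate=5e-5,
    save_strategy="epoch",
)

Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()
```

In practice, a domain evaluation set (for example, held-out RFIs or code-compliance questions) and parameter-efficient methods such as LoRA could replace the full fine-tune sketched here, in line with the human-oversight and business-impact recommendations above.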
## 7 Conclusion

This study makes important contributions by investigating the evolving opportunities and challenges of implementing Generative AI in the construction industry. Through a detailed literature review, we have identified the limitations of traditional AI methods and examined recent use cases of GenAI models. We have also investigated industry practitioners' insights, using sentiment analysis and theme-based interpretation, into the perceived application potential and barriers to adopting GenAI in the construction sector. Synthesizing these findings, we identified potential applications and proposed a conceptual framework to guide researchers and practitioners in implementing GenAI in construction. The mapping of different GenAI model types to various construction tasks suggested potential future applications of text-to-text, text-to-image, text-to-3D/video, and text-to-task models across the project feasibility, design, procurement, construction, and operation phases. However, our study also highlights significant GenAI implementation challenges around domain knowledge, hallucinations, model accuracy, generalizability, interpretability, cost, ethics, and regulation that must be addressed before executing the proposed framework. The recommendations provided in this study are expected to help construction stakeholders with strategies for initiating GenAI adoption and planning for long-term application while mitigating risks. The future research questions identified can direct the construction research community toward the practical applications of GenAI capabilities. Moreover, this study provides a strong literature foundation for understanding the potential and challenges of GenAI in this industry. Further validation studies implementing the proposed framework and developing real construction applications would be a natural extension of this research.

## Funding

This research received no external funding.

## Conflicts of Interest

The authors declare no conflict of interest.
2302.14425
Circumplanetary disk ices II. Composition
The subsurface oceans of icy satellites are among the most compelling among the potentially habitable environments in our Solar System. The question of whether a liquid subsurface layer can be maintained over geological timescales depends on its chemical composition. The composition of icy satellites is linked to that of the circumplanetary disk (CPD) in which they form. The CPD accretes material from the surrounding circumstellar disk in the vicinity of the planet, however, the degree of chemical inheritance is unclear. We aim to investigate the composition of ices in chemically reset or inherited circumplanetary disks to inform interior modeling and the interpretation of in situ measurements of icy solar system satellites, with an emphasis on the Galilean moon system. We used a radiation-thermochemical code to produce circumplanetary disk models and extract the ice composition from time-dependent chemistry, incorporating gas-phase and grain-surface reactions. The initial sublimation of ices during accretion may result in a CO2-rich ice composition. Sublimated ammonia ice is destroyed by background radiation while drifting towards the CPD midplane. Liberated nitrogen becomes locked in N2 due to efficient self-shielding, leaving ices depleted of ammonia. A significant ammonia ice component remains only when ices are inherited from the circumstellar disk. The observed composition of the Galilean moons is consistent with the sublimation of ices during accretion onto the CPD. In this scenario, the Galilean moon ices are nitrogen-poor and CO2 on Callisto is endogenous and primordial. The ice composition is significantly altered after an initial reset of accreted circumstellar ice. The chemical history of the Galilean moons stands in contrast to the Saturnian system, where the composition of the moons corresponds more closely with the directly inherited circumstellar disk material.
Nickolas Oberg, Stephanie Cazaux, Inga Kamp, Tara-Marie Bründl, Wing-Fai Thi, Carmen Immerzeel
2023-02-28T09:06:58Z
http://arxiv.org/abs/2302.14425v1
# Circumplanetary disk ices ###### Abstract Context:The subsurface oceans of icy satellites are among the most compelling among the potentially habitable environments in our Solar System. The question of whether a liquid subsurface layer can be maintained over geological timescales depends on its chemical composition. The composition of icy satellites is linked to that of the circumplanetary disk (CPD) in which they form. The CPD accretes material from the surrounding circumstellar disk in the vicinity of the planet, however, the degree of chemical inheritance is unclear. Aims:We aim to investigate the composition of ices in chemically reset or inherited circumplanetary disks to inform interior modeling and the interpretation of in situ measurements of icy solar system satellites, with an emphasis on the Galilean moon system. Methods:We used the radiation-thermochemical code ProDiMo to produce circumplanetary disk models and then extract the ice composition from time-dependent chemistry, incorporating gas-phase and grain-surface reactions. Results:The initial sublimation of ices during accretion may result in a CO\({}_{2}\)-rich ice composition due to efficient OH formation at high gas densities. In the case of a Jovian CPD, the sublimation of accreted ices results in a CO\({}_{2}\) iceline between the present-day orbits of Ganymede and Callisto. Submillimeter ammonia ice is destroyed by background radiation while drifting towards the CPD midplane. Liberated nitrogen becomes locked in N\({}_{2}\) due to efficient self-shielding, leaving ices depleted of ammonia. A significant ammonia ice component remains only when ices are inherited from the circumstellar disk. Conclusions:The observed composition of the Galilean moons is consistent with the sublimation of ices during accretion onto the CPD. In this scenario, the Galilean moon ices are nitrogen-poor and CO\({}_{2}\) on Callisto is endogenous and primordial. The ice composition is significantly altered after an initial reset of accreted circumstellar ice. The chemical history of the Galilean moons stands in contrast to the Saturnian system, where the composition of the moons corresponds more closely with the directly inherited circumstellar disk material. Conclusions: ## 1 Introduction The search for habitable worlds beyond the solar system has historically focused on planets in the so-called "habitable zone," where surface conditions theoretically support the presence of liquid water (Hart, 1979). In the Solar System, however, icy satellites and minor bodies outside of the classical habitable zone are the most common type of worlds that are known to host oceans of liquid water (Hussmann et al., 2006; Nimmo & Papalardo, 2016). Evidence strongly supports the presence of a sub-surface ocean on the Galilean satellites Europa and Ganymede, as well as (to a lesser extent) Callisto (Carr et al., 1998; Khurana et al., 1998; Kivelson et al., 2002; Sohl et al., 2002; Saur et al., 2015). The resonant configuration of the satellites prevents a damping of the orbital eccentricities, producing levels of tidal heating capable of sustaining subsurface oceans over geological timescales (Peale & Lee, 2002; Hussmann & Spohn, 2004; Showman et al., 1997). Whether or not a given level of tidal heating produces subsurface melt depends in part on the composition of the satellite ices. The proposed abundant impurities include NH\({}_{3}\), CH\({}_{4}\), CO, and CO\({}_{2}\), along with salts MgSO\({}_{4}\) and NaCl (Kargel, 1992; Mousis & Alibert, 2006). 
The liquids temperature of co-deposited ice mixtures can be depressed by the presence of NH\({}_{3}\)(Choukroun & Grasset, 2010; Sohl et al., 2010) or methanol (CH\({}_{3}\)OH) (Deschamps et al., 2010; Dougherty et al., 2018), as well as salts to a lesser extent. Hence, the composition of the volatile reservoir from which icy satellites form is of direct relevance to the presence of a subsurface ocean, their geothermal and physical evolution (Hammond et al., 2018), the interpretation of in-situ geophysical measurements (Vance et al., 2018), and the eventual atmospheric composition by outgassing or impact dissociation (Sekine et al., 2014; Glein, 2015). In particular, ammonia is important to the interior state and evolution of icy bodies. The presence of NH\({}_{3}\) in the form of dihydrate can drive differentiation of rock and ice (Desch et al., 2009). Ammonia in a pure H\({}_{2}\)O-NH\({}_{4}\) eutectic system produces a freezing point depression of \(\sim\)100 K (Kargel, 1992; Grasset et al., 2000; Leliwa-Kopystynski et al., 2002). Ammonia can also reduce the density of melt with implications for buoyancy and cryovolcanism (Croft et al., 1988), while increasing viscosity and reducing the efficiency of convection (Grasset et al., 2000). Ammonia has been detected in the plumes of Enceladus (Waite et al., 2009) but not on the surface of the Galilean moons. Tentative evidence for a subsurface ocean on Callisto would be bolstered by the presence of an ammonia component of 1-5% (Kirk & Stevenson, 1987; Showman & Malhotra, 1999; Spohn & Schubert, 2003). In the "gas-starved" circumplanetary disk (CPD) paradigm, moon formation occurs in a relatively low-mass, cool disk that must accumulate solids to form giant moons over time (Canup & Ward, 2002, 2006; Batygin & Morbidelli, 2020). Infalling material from the surrounding circumstellar disk may be shock-heated in the process of accretion onto the CPD (Szulagyi, 2017; Szulagyi & Mordasini, 2017; Aoyama et al., 2018), with increasing shock temperature for increasing planetary mass. If the shock heating chemically resets infalling gas or ices, new ice formation must occur within the CPD to produce the icy satellites we see today. The resulting composition of the satellite ices may then depart substantially from those in the planetary feeding zone. Prior works modeling equilibrium condensation chemistry in a Jovian CPD suggest that in the event of an initial vaporization of ices, the "mostly inefficient" gas-phase reactions lead to ratios of CO\({}_{2}\):CO:CH\({}_{4}\) and N\({}_{2}\):NH\({}_{3}\) that are not substantially different from those in the feeding zone of Jupiter (Mousis & Alibert, 2006; Mousis et al., 2006). However, it has long been recognized that grain-surface chemistry plays a critical role in the formation of many common molecules under interstellar conditions (Hasegawa et al., 1992; van Dishoeck & Blake, 1998; Cazaux & Tielens, 2002; Caselli et al., 2004; Garrod et al., 2006; Ruaud et al., 2015; Wakelam et al., 2017). The use of a more comprehensive modeling approach including grain-surface and photochemistry to revisit the formation of ices in CPDs is thus motivated. We aim to investigate the composition of ices that form in a chemically reset CPD with viscous timescale \(10^{3}-10^{4}\) yr, where infalling ices are sublimated and gas is atomized by shock-heating. 
These results will be contrasted with a partial reset in which only ices are sublimated during accretion and with a full chemical inheritance scenario in which the composition of the circumstellar disk gas and ice is preserved. We intend to link observations of solar system icy satellites with modern chemical disk models to lay the foundation for our understanding of how icy moons are built up from material in the CPD.

## 2 Methods

We used the radiation-thermochemical disk modeling code ProDiMo1 to model gas and dust chemistry and physics in disks (Woitke et al., 2009, 2016; Kamp et al., 2010, 2017; Thi et al., 2011, 2020). The gas-grain chemistry is solved self-consistently with the 2D radiative transfer and heating and cooling balance using a rate equation-based approach. Most reaction rates are selected from the UMIST2012 database (McElroy et al., 2013) and three-body collider reactions are adopted from the UMIST2006 rate file (Woodall et al., 2007), as they were not included in the 2012 release. In the following sections we review the implementation of the grain surface chemistry (Sect. 2.1), extensions to our standard chemical network (Sect. 2.1.1), and properties of the CPD model (Sect. 2.2).

Footnote 1: [https://prodimo.iwf.oeaw.ac.at/](https://prodimo.iwf.oeaw.ac.at/)

### Grain surface chemistry

ProDiMo includes a rate-equation based, statistical two-phase approach to gas and dust grain surface chemistry that is largely based on the work of Hasegawa et al. (1992). Gas-phase atoms and molecules can become weakly adsorbed to grain surface physisorption sites. Physisorbed species diffuse in a random-walk process, "hopping" from one physisorption site to another (Barlow & Silk, 1976). Diffusion occurs thermally if there is sufficient energy to overcome a diffusion barrier or can otherwise occur by tunneling. The surface diffusion rate is the sum of the thermal, tunneling, and cosmic-ray induced diffusion rates. The rate of thermal diffusion of species \(i\) is:

\[R_{i}^{\rm diff,th}=\nu_{0,i}\,Q_{i}^{\rm diff}(a_{i}^{\rm diff},E_{i}^{\rm diff})\,e^{-E_{i}^{\rm diff}/k_{\rm B}T_{\rm d}}, \tag{1}\]

where \(\nu_{0,i}\) is the characteristic vibrational frequency of species \(i\), \(Q_{i}^{\rm diff}\) is the tunneling probability through a diffusion barrier of width \(a_{i}^{\rm diff}\) and height \(E_{i}^{\rm diff}\), and \(T_{\rm d}\) is the dust temperature. The rate of a grain-surface reaction between two physisorbed species is given by the probability of a reaction per encounter multiplied by the encounter rate between the two species diffusing across the surface. The encounter rate between two adsorbed species \(i\) and \(j\) hopping across the surface is then:

\[k_{ij}=\kappa_{ij}(R_{i}^{\rm diff}+R_{j}^{\rm diff})/n_{d}\ \ {\rm cm}^{3}{\rm s}^{-1}, \tag{7}\]

where \(\kappa_{ij}\) is the reaction probability, \(R_{i}^{\rm diff}\) and \(R_{j}^{\rm diff}\) are the diffusion rates (s\({}^{-1}\)) for species \(i\) and \(j\), and \(n_{d}\) is the dust grain number density (cm\({}^{-3}\)). The reaction probability, \(\kappa_{ij}\), takes into account the competition between association of the species and diffusion (Garrod & Pauly 2011; Bonfanti & Martinazzo 2016; Ruaud et al. 2016):

\[\kappa_{ij}=\frac{Q_{\rm Bell}(a^{\prime}_{ij},E_{i}^{\rm ext})}{Q_{\rm Bell}(a^{\prime}_{ij},E_{i}^{\rm ext})+P_{i}^{\rm diff}+P_{j}^{\rm diff}}, \tag{8}\]

where \(a^{\prime}_{ij}\) is the reactive barrier width, \(E_{i}^{\rm ext}\) is the activation energy of the reaction barrier, and \(P_{i}^{\rm diff}=R_{i}^{\rm diff}/\nu_{0,i}\).
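To make the rate expressions above concrete, the short sketch below evaluates the thermal hopping rate of Eq. (1) and the per-grain encounter rate of Eq. (7) for two physisorbed species. The vibrational frequency, the diffusion-to-adsorption energy ratio, the CO adsorption energy, and the grain number density are assumptions chosen for demonstration (only the 600 K value for physisorbed H comes from Table 1), the tunneling factor \(Q\) is neglected, and the dust temperature is set to the 50 K CPD minimum used in this work; none of these are the exact parameters of the CPD models.

```python
# Illustrative evaluation of Eq. (1) and Eq. (7); all numbers are assumptions
# chosen for demonstration, not the values adopted in the CPD models.
import math

nu0 = 1.0e12        # characteristic vibrational frequency [s^-1] (assumed)
T_dust = 50.0       # dust temperature [K], the CPD minimum quoted in this work
n_dust = 1.0e-2     # dust grain number density [cm^-3] (assumed)

# Diffusion barriers taken as an assumed fixed fraction of the adsorption energy.
f_diff = 0.5
E_ads = {"H#": 600.0, "CO#": 1150.0}   # [K]; H# from Table 1, CO# assumed

def hop_rate(E_ads_K, T_K):
    """Thermal hopping rate of Eq. (1), neglecting the tunneling factor Q."""
    return nu0 * math.exp(-f_diff * E_ads_K / T_K)

R_H = hop_rate(E_ads["H#"], T_dust)    # ~2.5e9 s^-1
R_CO = hop_rate(E_ads["CO#"], T_dust)  # ~1.0e7 s^-1

# Eq. (7): encounter rate between the two adsorbed species, with the reaction
# probability kappa_ij set to 1 (i.e., a barrierless surface reaction).
kappa = 1.0
k_encounter = kappa * (R_H + R_CO) / n_dust   # [cm^3 s^-1]

print(f"R_diff(H#)  = {R_H:.2e} s^-1")
print(f"R_diff(CO#) = {R_CO:.2e} s^-1")
print(f"k_ij        = {k_encounter:.2e} cm^3 s^-1")
```

Under these assumptions, physisorbed H hops roughly two orders of magnitude faster than the heavier CO at the same dust temperature, which illustrates why hydrogenation reactions tend to dominate grain-surface networks of this kind.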
We assume the semi-equilibrium theory, in which the rate of reactions between physisorbed and gas-phase species (Eley-Rideal) is equal to the probability of the gas atom colliding with the physisorbed species multiplied by the probability of the gas-phase species having sufficient energy to overcome the reaction barrier. Impinging gas-phase species are assumed to have an energy relative to the surface species of \(1/2\,k_{\rm B}T_{g}+E_{i}^{b}\), where \(T_{g}\) is the gas temperature and \(E_{i}^{b}\) is the binding energy. Photon and cosmic-ray induced dissociation and desorption of grain-surface species are also included. Adsorption and desorption processes are described fully in Thi et al. (2020).

#### 2.1.1 Extending chemistry beyond the standard network

We developed an extended chemical network based on the "DIANA standard large" network described in Kamp et al. (2017), which contains 235 species (including 63 ices) and 12 elements + polycyclic aromatic hydrocarbons (PAHs) and is optimized for gas-phase chemistry + direct adsorption and desorption from grains. The use of grain-surface reactions necessitates the inclusion of several additional species to the DIANA standard chemical network to capture the relevant chemistry occurring at the disk midplane. These seven additional gas-phase species and six additional ices are listed in Table 1. Physisorbed atomic hydrogen and H\({}_{2}\) are included for their critical role in many grain-surface reactions. Hydrogenated PAH and O\({}_{2}\)H are included for their relevance to the chemical reset scenario in which atomic H is initially very abundant. The rationale for their inclusion is discussed in the following sections. In addition, HOCO and HCOOH are more directly involved in the formation of relevant ices and their roles are discussed in Sect. 3.1.3.

#### 2.1.1.1 Hydrogenated polycyclic aromatic hydrocarbons (PAH-H)

In ProDiMo, the formation rate of H\({}_{2}\) can be calculated in multiple ways. The standard approach is that H\({}_{2}\) formation proceeds via a pseudo-reaction at a rate calculated according to the analytical approach of Cazaux & Tielens (2002), which presupposes that surface-chemisorbed H atoms play a dominant role at high temperatures (\(\geq\)100 K). However, the formation of H\({}_{2}\) is calculated explicitly when grain-surface reactions are included in the reaction network (Thi et al. 2020). It was noted in the accompanying work that H\({}_{2}\) formation occurs in parallel with H\({}_{2}\)O ice deposition on grains at the midplane when the CPD is chemically reset (Oberg et al. 2022, hereafter Paper I). The formation of H\({}_{2}\)O ice after the reset is rapid and a median-sized grain is coated in several (\(\gg\)3) monolayers of water ice prior to the complete conversion of H to H\({}_{2}\). This poses a problem as the formation of H\({}_{2}\) via surface-chemisorbed H is considered implausible when the number of water ice monolayers exceeds a certain number (\(\sim\) 3) (Wakelam et al. 2017). We assume that the diffusion timescale of the atomic hydrogen in a water ice matrix and the subsequent difficulty of H\({}_{2}\) escape from the grain silicate surface preclude this formation pathway in our scenario. An alternative path to form H\({}_{2}\) is via hydrogenated polycyclic aromatic hydrocarbons (PAH-H) (Bauschlicher 1998). Experimental and theoretical works have demonstrated that H\({}_{2}\) can form via Eley-Rideal abstractions on neutral PAHs (Bauschlicher 1998; Rauls & Hornekaer 2008; Mennella et al. 2012; Thrower et al.
2012) and cationic PAHs (Hirama et al. 2004; Cazaux et al. 2016; Boschman et al. 2012). We include in the chemical network the singly hydrogenated species PAH-H, PAH-H\({}^{+}\), and the physisorbed ice form PAH-H\(\#\)(Thrower et al. 2009) to enable this formation path. As a first step towards H\({}_{2}\) formation, the neutral or ionized PAH is hydrogenated with a small (324 K) activation barrier. The H\({}_{2}\) formation at the CPD midplane then proceeds primarily via \[\rm{PAH\mbox{-}H+H\to H_{2}+PAH}, \tag{9}\] and to a lesser extent (\(\sim 1-10\%\) of the total H\({}_{2}\) formation rate depending on location in the CPD), directly via the gas-phase neutral-neutral reactions: \[\rm{H+HCO\to H_{2}+CO}, \tag{10}\] \[\rm{H+HNO\to H_{2}+NO}. \tag{11}\] While we do include several grain-surface reactions to form H\({}_{2}\) (e.g., H\(\#\) + HCO\(\#\) \(\to\) CO\(\#\) + H\({}_{2}\#\), O\(\#\) + H\({}_{2}\)CO\(\#\) \(\to\) CO\({}_{2}\#\) \begin{table} \begin{tabular}{l l} \hline \hline gas-phase species \\ \hline O\({}_{2}\)H & \\ HOCO & \\ HCOOH\({}^{+}\) & \\ HCOOH\({}^{+}_{2}\) & \\ PAH-H & \\ PAH-H\({}^{+}\) & \\ \hline ices & E\({}_{\rm ads}\) [K] \\ \hline H\# & 600\({}^{1}\) \\ H\({}_{2}\)\# & 430\({}^{2}\) \\ O\({}_{2}\)H\# & 3650\({}^{2}\) \\ HOCO\# & 2000\({}^{3}\) \\ HCOOH\# & 5000\({}^{4}\) \\ PAH-H\# & 5600\({}^{5}\) \\ \hline \end{tabular} Adsorption energies are adopted from \({}^{1}\)Cazaux & Tielens (2002), \({}^{2}\)Garrod & Herbst (2006), \({}^{3}\)Ruaud et al. (2015), \({}^{4}\)�berg et al. (2009) \({}^{5}\)Thrower et al. (2009) \end{table} Table 1: Non-standard species included in the chemical network. + H\({}_{2}\)#); in practice, these occur at a negligible rate due to the 50 K minimum dust temperature in the CPD. The resulting efficiency of H\({}_{2}\) formation is lower than the analytic rate of Cazaux & Tielens (2002) in part due to the low ambient temperatures (\(<\) 200 K, which in combination with the activation barrier impede the process) at the optically thick midplane. The correspondingly longer time over which atomic hydrogen is present has direct consequences for the efficiency of water formation. Gas-phase H\({}_{2}\)O can then form via the hydrogenation of OH for an extended period of time (discussed further in Paper I). #### 2.1.1.2 O\({}_{2}\)H The hydroperoxyl radical O\({}_{2}\)H is a very reactive oxygen species that we have found to play a role in the formation of methanol in the inner region of chemically reset CPDs. We include the gas and ice form of O\({}_{2}\)H in the extended chemical network, with an adsorption energy of 3650 K (Garrod & Herbst, 2006). The oxygen-bearing gas-phase species abundances are sensitive to the presence of O\({}_{2}\)H at high densities. Three-body collider reactions with free atomic hydrogen and O\({}_{2}\) produce O\({}_{2}\)H. This reaction has been extensively studied both theoretically (Horowitz, 1985; Sellevag et al., 2008; Morii et al., 2009) and experimentally (Kurylo, 1972; Davidson et al., 1996; Hahn et al., 2004; Mertens et al., 2009) at high and low temperatures. 
With the inclusion of O\({}_{2}\)H in the extended network the gas-phase O\({}_{2}\) reservoir at the midplane (nominally present at an abundance \(\sim\)10\({}^{-4.4}\) relative to hydrogen in the standard network) is depleted and converted via OH into H\({}_{2}\)O through the following reactions \[{\rm O}_{2}+{\rm H}+{\rm M}\rightarrow{\rm O}_{2}{\rm H}+{\rm M}, \tag{12}\] or \[{\rm O}_{2}+{\rm HCO}\rightarrow{\rm O}_{2}{\rm H}+{\rm CO}, \tag{13}\] followed by \[{\rm O}_{2}{\rm H}+{\rm H}\rightarrow{\rm OH}+{\rm OH}, \tag{14}\] \[{\rm OH}+{\rm H}\rightarrow{\rm H}_{2}{\rm O}+{\rm photon}, \tag{15}\] These reactions compete for the free H that is required to form methanol via \[{\rm H}_{2}{\rm CO}+{\rm H}\rightarrow{\rm CH}_{3}{\rm O}, \tag{16}\] \[{\rm CH}_{3}{\rm O}+{\rm H}\rightarrow{\rm CH}_{3}{\rm OH}, \tag{17}\] and thus suppress its formation. The inclusion of O\({}_{2}\)H in the chemical network reduces the abundance of methanol ice interior to the NH\({}_{3}\) iceline relative to the results of the standard chemical network by 90-99%. However, this has a negligible impact on the total disk-integrated methanol abundance. ### Circumplanetary disk model We adopted the properties of the PadoDiMo circumplanetary disk model developed in Paper I. The CPD is a 'gas-starved," actively fed accretion disk (Canup & Ward, 2002) that is heated primarily by viscous dissipation at the midplane (D'Alessio et al., 1998; Frank et al., 2002). The parameters of the reference CPD model are listed in Table 2. The physical, radiative, and thermal properties of the CPDs are demonstrated in Paper I, namely, Figures 3 and 4. The disk structure in terms of radial and vertical dimension, density profile, dust-to-gas ratio, and temperature, are assumed to exist in a steady-state and are kept fixed. Following the gas-starved disk paradigm, the CPD does not instantaneously contain the solid mass required to form the Galilean satellites. The total refractory dust mass is 1.7\(\times\)10\({}^{-5}\) M\({}_{\oplus}\) and exists in the form of small grains (0.05-3000 um). The dust grain size distribution is described by a smooth power-law \(n\propto a^{-3.5}\). Such a disk is optically thick out to approximately \(\sim\)1/3rd of the planetary Hill radius \(R_{\rm H}\), which is coincident with the theoretical limit for stable orbits (Quillen & Trilling, 1998; Ayliffe & Bate, 2009; Martin & Lubow, 2011). For the ice-to-rock ratio of the solids in the CPD to be consistent with the ice-to-rock ratio of the outer Galilean satellites, it was found in Paper I that the dust-to-gas ratio of the CPD may be depleted relative to the canonical 10\({}^{-2}\) by a factor \(\gtrsim 10-20\). This depletion in dust corresponds with the rapid inwards drift and loss of grains larger than \(\sim\)150 um, which was found to occur naturally for a disk with a mass of 10\({}^{-7}\) M\({}_{\odot}\) and accretion rate \(\dot{M}=10^{-11}\) M\({}_{\odot}\)yr\({}^{-1}\) (Paper I). Alternatively, pressure-bump trapping at the gap edge can directly prevent larger grains from accreting onto the CPD (Rice et al., 2006; Morbidelli & Nesvorny, 2012; Zhu et al., 2012; Bitsch et al., 2018). 
For assumptions regarding the efficiency of settling, surface density power-law, and maximum grain size, it was found in Paper I that the global dust-to-gas ratio should not exceed 10\({}^{-3.3}\) for a CPD with a mass of 10\({}^{-7}\) M\({}_{\odot}\) and accretion rate of \(\dot{M}=10^{-11}\) M\({}_{\odot}\)yr\({}^{-1}\) to satisfy the constraints on Jovian icy moon bulk composition. The properties of the disk models are further justified and detailed in Paper I, where the authors explored a small grid of plausible CPD parameters. In this work, our analysis is focused on the case of the 10\({}^{-7}\) M\({}_{\odot}\) CPD with accretion rate \(\dot{M}=10^{-11}\)M\({}_{\odot}\)yr\({}^{-1}\), as it was most closely able to reproduce the radial ice-to-rock ratio of the Galilean satellites while being consistent with the circumstellar-lar disk gas component expected lifetime. We contrast our results with those of the "high" viscosity, hotter CPD with \(\dot{M}=10^{-10}\)M\({}_{\odot}\)yr\({}^{-1}\) and correspondingly shorter viscous timescale 10\({}^{3}\) yr given uncertainties in the magnitude of the disk viscosity. #### 2.2.1 Initial chemical conditions Three different initial chemical conditions are considered. The first case is a full chemical "reset," in which ices are initially sublimated and the circumstellar gas is reverted to a purely atomic, ionized state. A chemical reset may occur if, for instance, the circumstellar material is shock-heated during accretion onto the CPD, if the gas and dust are irradiated while crossing the optically thin gap, or if material only flows into the gap from the upper optically thin surface layers of the circumstellar disk. The CPD model is initialized in this fully reset state after which it is allowed to chemically evolve over its viscous timescale \(t_{\rm visc}\). The viscous timescale of the disk is defined as the time over which the majority of the gas mass is lost \(t_{\rm visc}={\rm M}_{\rm pol}/\dot{M}\), where \(\dot{M}\) is the mass accretion rate. We assume that gas is lost to viscous radial flow either to decretion beyond the disk edge or accretion onto the planet. As the disk mass is assumed to be constant, the net inflow-outflow rate of matter is necessarily zero. Our reference CPD model has a viscous timescale of 10\({}^{4}\) yr with a corresponding midplane heating rate equivalent to an \(\alpha\)-viscosity of 10\({}^{-3.6}\). We contrast these results with a "partial reset" in which only the ices are placed back in the gas-phase. This is similar to the work of Mousis & Alibert (2006) wherein the authors consider a case in which infalling ices are initially sublimated in a warm disk which subsequently cools, although we consider a disk with a static temperature structure. Finally we consider an "inheritance" case in which the chemical composition at the circumstellar disk outer edge is used as the initial state. The abundance of the most common species for these three initial conditions can be found in Appendix B. The circumstellar disk model and the sampling of the inheritance chemistry are described in the accompanying Paper I. It is also necessary to consider the consequences of the gas and dust being shocked at several scale heights above the CPD midplane (Takasao et al. 2021) prior to the gas turbulently diffusing downwards into the optically thick region. 
The ambient conditions at \(\sim\)5 pressure scale heights (\(A_{\rm V}=0.01\)) differ significantly from those at the midplane (\(A_{\rm V}\)=21) given the magnitude of the external stellar irradiation. To take into account this gradual change in ambient conditions, we incorporated an additional step necessary to prevent the sublimated ices immediately re-adsorbing to grains. We adapted the model to follow a single parcel of gas and dust that is initialized above the midplane and then settles towards the midplane at the centrifugal radius (\(\sim 0.03\) R\({}_{\rm H}\)) (Machida et al. 2008). This process is labeled as step 2 in Fig.1. In this step, we evolved the chemistry in a 0D grid-cell for a fraction of the diffusion timescale. The resulting composition of the gas and ice was extracted and used to populate a new grid-cell, in which the background conditions are updated to correspond to the downwards motion of the gas parcel. The extracted relative species abundances were simply applied to the new cell and absolute abundances were rescaled to correspond to the new grid-cell density. This process was repeated iteratively as ambient conditions (optical depth, density, and gas and dust temperature) change. As a simplification owing to significant uncertainties in the origin, magnitude, and spatial distribution of turbulence within the CPD, we simply assumed that the parcel travels at a constant rate until it reaches the midplane. The timescale of this process is \(\sim 10\) yr (Paper I), although this value is still highly uncertain. Accordingly, we also considered diffusion timescales of 1, 10, and 100 yr. The final composition of the parcel at the midplane was then used to populate the CPD midplane for the final step (step 3 in Fig.1), whereby chemical evolution proceeds up until the viscous timescale. ### Likelihood of chemical reset and magnitude of shock-heating Icy grains passing through an optically thin gap at 5 au around a Sun-like star can retain their icy mantles if swept up by the planet within \(\sim 10-100\) orbital timescales (Paper I). If a (partial) chemical reset occurs, it must instead be due to either accreted material originating from a higher altitude in the circumstellar disk where ices are unstable or, otherwise, shock-heating on the CPD surface. We can estimate the shock velocity of infalling matter where it strikes the CPD and consider which of our initial chemical conditions corresponds most appropriately to the formation of the Galilean moon system. Angular momentum of infalling circumstellar gas and dust causes it to accrete onto the CPD near the so-called centrifugal radius, \(r_{\rm cf}\) (Hayashi et al. 1985; Machida et al. 2008). The infall velocity at \(r_{\rm cf}\) must be \(\gtrsim 8-10\) km s\({}^{-1}\) for dust grain icy mantles to be lost due to sputtering and thermal desorption (Woitke et al. 1993; Aota et al. 2015; Tielens 2021). We approximated the infall velocity as a function of planetocentric radius by considering orbits with apoapsis of a single circumstellar disk pressure scale height at the position of Jupiter (\(z=0.5\) au) (Paper I), with orbital eccentricities corresponding to passage through the planet equatorial plane at some distance \(r\). The resulting infall velocities, \(v_{\rm infall}\), can be seen in Fig. 2 for planets of Saturnian, Jovian, and super-Jovian (10 M\({}_{\rm J}\)) mass. 
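As a rough consistency check on the infall velocities shown in Fig. 2, the sketch below estimates \(v_{\rm infall}\) at the Jovian centrifugal radius from simple energy conservation, assuming free fall in the planet's potential from rest at one circumstellar pressure scale height (\(z=0.5\) au). This simplification ignores the eccentric-orbit geometry of the full calculation, and the adopted \(r_{\rm cf}\approx 0.03\,R_{\rm H}\) is the Paper I value quoted above, so the result is only indicative.

```python
# Order-of-magnitude estimate of the infall speed onto a Jovian CPD at the
# centrifugal radius, assuming free fall from rest at one circumstellar disk
# pressure scale height (a simplification of the eccentric-orbit treatment).
import math

G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30         # solar mass [kg]
M_jup = 1.898e27         # Jovian mass [kg]
au = 1.496e11            # astronomical unit [m]

a_p = 5.2 * au                                   # Jupiter's orbital radius
R_hill = a_p * (M_jup / (3.0 * M_sun))**(1/3)    # Hill radius, ~0.35 au
r_cf = 0.03 * R_hill                             # centrifugal radius (Paper I value)
r_start = 0.5 * au                               # one pressure scale height (z = 0.5 au)

# Energy conservation: v^2 = 2 G M_p (1/r - 1/r_start), starting from rest.
v_infall = math.sqrt(2 * G * M_jup * (1 / r_cf - 1 / r_start))

print(f"Hill radius    = {R_hill / au:.2f} au")
print(f"r_cf           = {r_cf / au:.3f} au")
print(f"v_infall(r_cf) = {v_infall / 1e3:.1f} km/s")
```

Under these assumptions the estimate comes out at roughly 12-13 km s\({}^{-1}\), above the \(\sim\)8-10 km s\({}^{-1}\) threshold for icy-mantle loss, which is consistent with the "partial reset" interpretation for Jupiter discussed below.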
The infall velocity at \(r_{\rm cf}\) is independent of \begin{table} \begin{tabular}{l l l} \hline \hline Parameter & Symbol & Value \\ \hline Planetary mass & \(M_{\rm p}\) & 1.0 M\({}_{\rm J}\) \\ Planetary luminosity & \(L_{\rm p}\) & \(10^{-5}\) L\({}_{\odot}\) \\ Effective temperature & \(T_{\rm eff,p}\) & 1000 K \\ UV luminosity & \(L_{\rm UV,p}\) & 0.01 L\({}_{\rm p}\)\({}^{*}\) \\ Interstellar UV field & \(\chi\) & \(3\times 10^{3}\) \\ Background temperature & \(T_{\rm back}\) & 50 K \\ \hline Disk mass & \(M_{\rm cpd}\) & \(10^{-7}\) M\({}_{\odot}\) \\ Disk inner radius & \(R_{\rm in,cpd}\) & 0.0015 au \\ Exponential decay radius & \(R_{\rm in,cpd}\) & 0.11 au \\ Disk outer radius & \(R_{\rm out,cpd}\) & 0.34 au \\ Column density power in. & \(\epsilon\) & 1.0 \\ \hline Accretion rate & \(\dot{M}\) & \(10^{-11}\)-\(10^{-10}\) M\({}_{\odot}\) yr\({}^{-1}\) \\ Viscosity & \(\alpha\) & \(10^{-36}\)-\(10^{-2.7}\) \\ \hline Minimum dust size & \(a_{\rm min}\) & 0.05 \(\rm\upmu m\) \\ Maximum dust size & \(a_{\rm max}\) & 3000 \(\rm\upmu m\) \\ Dust-to-gas ratio & \(d/g\) & \(10^{-3.3}\) \\ Flaring index & \(\beta\) & 1.15 \\ Reference scale height & \(H_{\rm 0.1au}\) & 0.01 au \\ \hline \end{tabular} * Planetary UV luminosity is expressed in multiples of the planetary luminosity, L\({}_{\rm p}\). \end{table} Table 2: Parameters of the reference CPD model. Figure 1: Schematic illustration of the modeling process. In step 1, the chemistry in a circumstellar disk model is evolved for 4 Myr. This chemistry is extracted from the gap outer wall region and used as a starting point prior to accretion. To consider various possible accretion scenarios, the composition of the infalling material is either reset to atomic (full reset), the ices are sublimated (partial reset) or the chemistry remains unaltered (inherit). In step 2, the chemistry of a parcel of gas and ice is evolved for 10 yr as it travels towards the CPD midplane. In step 3, the chemistry is evolved at the CPD midplane for the viscous timescale of the disk. the planetary mass, but it is instead a function of the planetary semimajor axis (for a circular orbit). The shock velocity at the centrifugal radius of Jupiter is in the regime of icy mantle loss to sputtering (Draine and McKee, 1993; Tielens, 2005). Hence, if the majority of grains accrete near \(r_{\rm cf}\), Jupiter's CPD may be best represented by the "partial reset" scenario. Conversely, no ice sublimation is expected to occur due to shock-heating in the case of Saturn. A full chemical reset is more likely to occur for a super-Jupiter at a stellocentric distance of 2-3 au from a solar-mass star. ### Chemical network diagrams Throughout this work, we make use of algorithmically generated chemical network diagrams to describe relations between atomic and molecular species, their relative abundances, formation rates, and the types of reactions that are involved. The diagrams are generated with an implementation of the PyVis software package (itself based on the VisJS library Perrone et al. (2020)). A description of how these diagrams are generated and interpreted can be found in Appendix A. ## 3 Results and discussion Prior to reaching the midplane, the accreted gas and dust diffuses downwards from the optically thin surface layer of the CPD at the centrifugal radius, \(r_{\rm c}\). We iteratively evolved the disk chemistry as the background conditions change during the descent. 
The relevant properties of the vertical slice through the circumplanetary disk during the descent to the midplane at \(r_{\rm c}\) can be found in Fig. 3 panel (a). The post-shock evolution of the water ice abundance during the descent to the midplane can be found in panels (b), (c), and (d) of Fig. 3 for the reset, partial reset, and inheritance cases, respectively. Solid lines trace the evolution of ice impurity abundances as the gas parcel moves downwards from five scale heights (left) to the midplane (right). Dashed lines trace the abundances in the case of a hotter, higher viscosity CPD with \(t_{\rm visc}=10^{3}\) yr. The initial pre-shock abundances of the impurities are indicated by the colored circles on the ordinate. In the case of the reference (low-viscosity) fully or partially reset CPD, significant quantities of water ice have already formed prior to reaching the midplane. In the fully reset case, the ice is predominantly water with \(<0.1\%\) impurities in the form of CH\({}_{3}\)OH and HCOOH ice. In the partial reset case, a significant (25%) CO\({}_{2}\) component has formed. In the inheritance case, ices are able to survive the brief exposure to the optically thin upper layers of the disk and the CPD accret Figure 3: (a) Properties of a vertical slice in the circumplanetary disk at the centrifugal radius, from five pressure scale heights (left) to the midplane (right). Evolution of the abundance of H\({}_{2}\)O, NH\({}_{3}\) and CO\({}_{2}\) ice as the parcel of gas and dust sinks to the midplane after accretion in the reset case: (b) in the partial reset case (c) and the inheritance case (d). Solid lines trace the relevant properties and abundances for the low viscosity (\(t_{\rm visc}=10^{4}\) yr) case and the dashed line describes the high viscosity (\(t_{\rm visc}=10^{3}\) yr) case. Figure 2: Velocity of material falling onto a CPD \(v_{\rm infall}\) at radius \(r\) for planets of Saturnian (dotted line), Jovian (dashed lines), and super-Jovian (10 M\({}_{\rm J}\)) (solid line) mass. The centrifugal radii \(r_{\rm cf}\) of Jupiter and Saturn are indicated by the blue triangle and orange star, respectively. The radial position of the Galilean satellites is indicated by the four black circles and the radial position of Titan is indicated by the white circle. The planetary orbital radius, \(a_{\rm p}\), that corresponds to the infall velocity at the centrifugal radius, \(v_{\rm infall}(r_{\rm cf})\), is indicated on the right vertical axis. The shaded colored regions indicate different chemical consequences of shock-heating corresponding to a given \(v_{\rm infall}\). The units on the abscissa are in multiples of the Jovian Hill radius. All calculations correspond to a solar mass star. ice composition. In the high-viscosity (\(\alpha=10^{-3}\)) CPD model, ices are not thermally stable at the centrifugal radius midplane. This can be seen in Fig. 3, where ice abundances decline immediately prior to reaching the midplane. Consequently the initial post-shock conditions of a "partial reset" and "inheritance" converge to a similar ice-free molecular gas composition by the time the gas parcel reaches the midplane. After the step involving the accreted gas and dust being followed as it travels towards the midplane (step 4 in Fig. 1), the resulting chemical abundances are used to specify the initial conditions for the rest of the CPD as it evolves on the viscous timescale (step 5 in Fig. 1). 
After \(10^{3}\)-\(10^{4}\) yr of further evolution, we extracted the radial ice composition at the midplane from six distinct CPD models describing three initial chemical conditions (reset, partial reset, and inheritance) and two disk \(\alpha\)-viscosities (corresponding to viscous timescales of \(10^{3}\) and \(10^{4}\) yr). An overview of the radial midplane ice composition, the disk-integrated total molecular ice composition, and the disk-integrated total elemental ice budget of the low-viscosity CPDs can be found in Fig. 4. For reference, the ice-to-rock ratio of the solids at the CPD midplane is also included in Fig.4 (left column) as a solid white line. The settling of large grains to the midplane strongly reduces the local ice-to-rock ratio. Realistically, accreting moons may be able to capture solids drifting at higher altitudes above the midplane within their gravitational sphere of influence. Hence, we included also the ice-to-rock ratio of solids integrated up to an altitude equal to the Hill radius of a Ganymede-mass object (dashed white line). The radial abundance profiles of NH\({}_{3}\), HCOOH, CO\({}_{2}\), and CH\({}_{3}\)OH ices can be found in Fig. 5. Ices at the partially or fully reset CPD midplane are found to contain significant impurities in the form of CO\({}_{2}\) and HCOOH, as well as, to a lesser extent, CH\({}_{3}\)OH. The chemically inherited CPD additionally contains HCN and hydrocarbon ices which were already present at the time of accretion. Trace amounts of OCN, SO, SO\({}_{2}\), NH, H\({}_{2}\), OH, and HNO ices can also be found, but each at \(<0.1-0.5\)% of the total ice mass. Although several of these ices have negligible absolute abundances, the fraction of their key element which has frozen out can be substantial. In particular, sulfur has frozen completely out of the gas-phase outside of the centrifugal radius in all cases. The element fraction in ice can be found in Appendix C. In the following subsections, we discuss the formation and abundance of the impurities NH\({}_{3}\), CO\({}_{2}\), HCOOH, and CH\({}_{3}\)OH. ### Partial reset (initial sublimation of ices) It is likely that in the case of the Jupiter system, the shock velocity of matter accreting at the centrifugal radius did not lead to the full dissociation of molecules. A less extreme C-type shock-heating could simply cause icy grain mantles to desorb by sputtering, for instance. Accordingly, we focus our analysis and discussion on this case, whereby all ices are put back into their respective gas-phase counterpart. #### 3.1.1 Ammonia (NH\({}_{3}\)) Immediately after accretion onto the CPD and the sublimation of ices, hydrogen is predominantly found in the form of H\({}_{2}\), oxygen in H\({}_{2}\)O, and nitrogen in NH\({}_{3}\) and HCN at a ratio 1:0.63. After ten years of drifting towards the midplane the gas is still H\({}_{2}\)-dominated, but nitrogen is found primarily in N\({}_{2}\). After being initially sublimated a minor fraction of the NH\({}_{3}\) immediately re-adsorbs to the grains (see Fig. 3 (c)), but it is not stable against photodissociation given the background UV field intensity (\(\chi_{\rm{RT}}>1000\)). Above 2-3 scale heights, the NH\({}_{3}\) ice is photodissociated on the grain surface to, for instance, NH\({}_{2}\)# and H# or back into the gas phase as NH. Once the majority of nitrogen is locked into N\({}_{2}\) via NH+NH, it is stable against photodissociation due to self-shielding (Li et al., 2013), preventing the accumulation of NH\({}_{3}\). 
The photodissociation timescale of N\({}_{2}\) is much larger than the disk viscous timescale. Near the midplane, NH\({}_{3}\) ice forms by direct adsorption from the gas phase onto dust grains. The gas-phase NH\({}_{3}\) originates primarily via a sequence of three body collider reactions: \[{\rm H_{2}+N+M\to NH_{2}+M,} \tag{18}\] \[{\rm NH_{2}+H+M\to NH_{3}+M.} \tag{19}\] Here, M = H, H\({}_{2}\), or He. The importance of this pathway is illustrated clearly by the green arrows in the chemical network diagram Fig. 6. These collider reactions are very efficient at the typical CPD midplane densities (n\({}_{\rm{H}}\sim 10^{12}\) cm\({}^{-3}\)). However, the absence of abundant atomic nitrogen prevents the collider pathway from producing significant quantities of NH\({}_{3}\). Then, N\({}_{2}\) is destroyed predominantly by reactions with He ions at a relatively low rate, as He+ is produced only be cosmic-ray ionization. The collider pathway to form NH\({}_{3}\) thus does not result in significant accumulation of NH\({}_{3}\) ice. By the time the gas parcel has reached the midplane, NH\({}_{3}\) ice is present only as a trace species. The collider-pathway begins with the formation of NH\({}_{2}\) (Eq. 18) which is also relevant to the water formation pathway involving NH\({}_{2}\) + O \(\to\) NH + OH (Kamp et al., 2017). While the pre-exponential factor \(10^{-26}\) cm\({}^{6}\) s\({}^{-1}\) is derived from the work of Avramenko & Krasnen'kov (1966), we have chosen to adopt a significantly lower rate more typical of collider reactions (\(10^{-30}\) cm\({}^{6}\) s\({}^{-1}\)), which still produces enough NH\({}_{2}\) for this path to be the dominant NH\({}_{3}\) formation route in the inner disk. It has been noted that this particular reaction is critical to accurately reproduce observed OH and H\({}_{2}\)O gas-phase abundances, but that modern reevaluation of its rate and temperature dependence are needed (Kamp et al., 2017). For the second collider reaction in this path (Eq. 19), we adopted the rate coefficients of Gordon et al. (1971), listing a pre-exponential factor \(6.07\times 10^{-30}\) cm\({}^{6}\) s\({}^{-1}\). Other more recent experimental results assuming the reaction to be in the three body pressure regime give values in the range 2.3\(\times 10^{-30}\) - 1.42\(\times 10^{-29}\) for various third bodies (Altinay & Macdonald, 2012, 2015), hence, we consider this a reasonable value. In the outer disk, NH\({}_{3}\) gas is efficiently photodissociated. The NH\({}_{3}\) ice is instead formed primarily by barrier-less successive hydrogenation of atomic nitrogen on icy grain surfaces (Charnley et al., 2001; Fedoseev et al., 2015) which has been experimentally demonstrated to occur (Hiraoka et al., 1995; Hidaka et al., 2011) via the Langmuir-Hinshelwood mechanism. The formation pathway is then \[{\rm NH\#+H\#\to NH_{2}\#,} \tag{20}\] \[{\rm NH_{2}\#+H\#\to NH_{3}\#.} \tag{21}\] Both in the inner and outer disk NH\({}_{3}\) ice does not consitute more than \(10^{-3}\) of the total ice by molar fraction. #### 3.1.2 Carbon dioxide (CO\({}_{2}\)) While CO\({}_{2}\) ice is initially only a trace species in the accreted circumstellar disk material, it becomes abundant in the CPD prior to the accreted material reaching the midplane. The chemical network diagram of the predominant CO\({}_{2}\) ice formation paths during this stage can be found in Fig. 7. 
This figure illustrates how the production of OH by collider reactions (green arrows) is critical to the efficient formation of CO\({}_{2}\) ice. In the time that accreted gas and ice reside in the optically thin surface layers of the CPD, it initially liberates significant quantities of atomic oxygen from gas-phase H\({}_{2}\)O, which is hydrogenated via three-body collider reactions. The OH then reacts with abundant gas-phase CO to produce 98% of the CO\({}_{2}\), which then freezes out onto grains. In particular three-body collider reactions account for nearly all (\(>99\%\)) OH formation which is critical for the CO+OH gas-phase reaction. It can also be seen in Fig. 7 that the grain-surface formation of CO\({}_{2}\) ice plays only a minor role prior to the gas parcel reaching the midplane. After the gas and dust parcel reaches the midplane, the chemistry is evolved for an additional 10\({}^{5}\)-10\({}^{4}\) yr for the high- and low-viscosity cases, respectively. The resulting composition at \(t_{\rm visc}\) is similar to that of the full reset case, with the exception that the inner CPD (near the present day orbit of Callisto) also retains a significant CO\({}_{2}\) ice component. This can be seen in Fig. 5 (a) and Fig. 5 (d). CO\({}_{2}\) ice formation continues in the outer CPD at the midplane in the absence of abundant atomic O, as OH is produced instead on grain-surfaces by the photodissocia Figure 4: Overview of the chemical composition at the CPD midplane for the “full reset” case (_top row_), for the “partial reset” case (_middle row_), and for the “full inheritance” case (_bottom row_). _Left column_: Radial mass fraction of ices at the CPD midplane (filled-colored regions) where f\({}_{\rm ice}\)\(>\) 0.01. The white lines indicate the radial ice-to-rock ratio of solids at the midplane (solid line) and integrated up to an altitude above the midplane equal to the Hill radius of Ganymede (dashed line). The estimated ice-to-rock ratio of the Galilean satellites is included (circles with error bars). _Center column_: Radially integrated midplane ice composition out to \(R_{\rm H}/3\) (outer ring) and within the orbit of Callisto a\({}_{\rm HV}\) (inner circle). _Right column_: Total disk-integrated elemental composition of the ices are shown in the same two radial zones. tion of H2O#. This is described in the following section and can be seen in Fig. 8. #### 3.1.3 Formic acid (HCOOH) HOCO (hydrocarboxyl1 radical) and HCOOH (formic acid) are of relevance in the cold, high-density midplane where CO\({}_{2}\) ice can form; thus, these were included in our extended chemical network. Formic acid is the simplest carboxylic acid and has been identified in star-forming regions (Schutte et al., 1999; Ikeda et al., 2001) both in gaseous and solid states, as well as in protoplanetary disks (Favre et al., 2018) and in comets (Crovisier et al., 2004). Its abundance typically varies with 1-10% of water ice (Bisschop et al., 2007). The chemical network diagram of HCOOH formation in the outer CPD can be found in Fig. 8. It is clear that grain surface reactions play a completely dominant role in this process. In the outer CPD, we find that although it is not stable as an ice, the gas-phase CO freezes out and temporarily occupies a physisorption site on the grain surface. Prior to desorbing the CO# reacts on the grain surface OH# to form CO\({}_{2}\)# and H# (Oba et al., 2010; Liu and Sander, 2015), for which we have adopted the effective barrier of 150 K (Fulle et al., 1996; Ruaud et al., 2016). 
\[\mathrm{CO}+\mathrm{dust}\rightarrow\mathrm{CO}\#, \tag{22}\]
\[\mathrm{CO}\#+\mathrm{OH}\#\rightarrow\mathrm{CO}_{2}\#+\mathrm{H}\#. \tag{23}\]
Alternatively, as an intermediate step of the OH# + CO# reaction the van der Waals complex HOCO# is formed, which can be hydrogenated to form HCOOH#.
\[\mathrm{CO}\#+\mathrm{OH}\#\rightarrow\mathrm{HOCO}\#, \tag{24}\]
\[\mathrm{HOCO}\#+\mathrm{H}\#\rightarrow\mathrm{HCOOH}\#. \tag{25}\]
The HOCO# formation route can explain the presence of HCOOH# in cold, dense clouds (Ioppolo et al., 2011; Qasim et al., 2019). The resulting radial abundance of HCOOH# in the reference CPD can be seen in Fig. 5 (c). In the partial reset case, HCOOH ice can locally constitute a significant fraction of the ices in the reference CPD (\(\sim\)10 mol%). We found significant abundances (\(\sim 10\%\) relative to H\({}_{2}\)O ice) of HCOOH ice in the outer region of the CPD. This is comparable to the upper end of inferred abundances (\(\sim 1-10\%\) level relative to H\({}_{2}\)O ice) observed toward young stellar objects (Schutte et al., 1999; Keane et al., 2001; Knez et al., 2005; Boogert et al., 2015). The relatively large abundance of HCOOH ice in the outer CPD relative to its observationally derived abundance in astrophysical ice mixtures in the ISM is noteworthy. However, this was not entirely unexpected. The minimum CPD temperature, set by equilibrium with the background radiation field, ensures that a large region in the outer CPD exhibits a narrow range of temperatures from 50-55 K. Given that the majority of the disk surface area is in this zone, the total disk composition is weighted heavily towards these specific conditions. However, background temperatures as low as 30 K or as high as 70 K do not produce abundant alternative impurities, while the outer CPD remains dominated by CO\({}_{2}\) and HCOOH ice.

Figure 5: Radial abundance of selected non-H\({}_{2}\)O ices as a fraction of the total ice abundance for the low-viscosity case with \(t_{\mathrm{visc}}=10^{4}\) yr (solid lines) and high-viscosity case with \(t_{\mathrm{visc}}=10^{3}\) yr (dashed lines). The positions of the Galilean satellites are indicated by the empty circles. A light gray horizontal line indicates a concentration of 1%.

Figure 6: Chemical network diagram illustrating the formation of NH\({}_{3}\) ice in the CPD after a partial reset, immediately prior to the accreted gas reaching the midplane. The pathway from N\({}_{2}\) to NH3# is highlighted. Percentages for reaction A\(\rightarrow\)B indicate the fraction of reactions forming B which involve species A. A label “steady-state” indicates that the net rate is zero.

Additionally, the stability of the HCOOH ice in our model is subject to several uncertainties. The only grain-surface reaction in our network that is able to destroy HCOOH# is the photo-induced dissociation to HCO# and OH#. Alternatively, it can be placed directly back into the gas phase by thermal, cosmic-ray, or UV-photon induced desorption. We did not include grain-surface hydrogenation of the HCOOH ice. Bisschop et al. (2007) found that hydrogen bombardment of a pure multilayer HCOOH ice does not result in the production of detectable reaction products, concluding that the hydrogenation of HCOOH does not play a role in the formation of more complex species and that only minor amounts desorb. In contrast, Chaabouni et al. 
(2020) found that H-bombardment of a \(<1-3\) monolayer coating of HCOOH ice at 10-100 K results in efficient production of CO\({}_{2}\) and H\({}_{2}\)O molecules, as well as CH\({}_{3}\)OH and H\({}_{2}\)CO. The authors suggest that this disagreement stems from the inefficiency of H atom diffusion through the pure HCOOH multilayer used in the experimental setup of Bisschop et al. (2007). Alternatively, the sub-monolayer conditions present in the setup of Chaabouni et al. (2020) potentially cause the substrate itself to become hydrogenated, increasing the sticking coefficient for H atoms and promoting surface reactions. Where HCOOH ice is found in our CPD, it has been co-deposited with H\({}_{2}\)O ice and CO\({}_{2}\) ice (with molar ratio H\({}_{2}\)O:CO\({}_{2}\):HCOOH 100:80:80), with an equivalent thickness of several hundred monolayers. Hence we consider it plausible that the majority of the HCOOH embedded within the ice matrix would not be efficiently hydrogenated. Overall, HCOOH ice has not been detected on the surface of any Galilean moon. Experimental results indicate that HCOOH ice has a relatively short \(8\times 10^{7}\) yr half-life against irradiation by galactic cosmic rays, being dissociated into CO or CO\({}_{2}\) (Bergantini et al., 2013). Any HCOOH accreted onto the surface of, for instance, Callisto would therefore likely be absent in the present era, having reduced to \(<1\%\) of its initial concentration within only 0.56 Gyr. There is a paucity of research investigating the role of HCOOH in subsurface melts; however, we know that under hydrothermal conditions, water can act as a homogeneous catalyst for the decarboxylation pathway of HCOOH decomposition in liquids (Ruelle et al., 1986), where it decomposes to become the astrobiologically relevant CO\({}_{2}\) and H\({}_{2}\) molecules (Yu & Savage, 1998).

### Full reset (initially atomic gas)

In the full reset case, the gas in the CPD is initially fully atomic and ionized and no ices are present. This state represents, for instance, the accretion of a high-mass planet (\(M>1M_{\rm J}\)), with correspondingly higher infall shock-velocity at the CPD surface, or an accretion of material originating from a greater scale height in the circumstellar disk than we have considered. In the fully reset case, the abundant free atomic hydrogen enables highly efficient combustion chemistry to produce a water-dominated ice composition, as found in Paper I. This efficient water formation locks away the majority of atomic oxygen early on; after 5 yr, atomic oxygen is \(10^{5}\) times less abundant than in the partial reset case. Accordingly, the OH formation rate via O+H is lower and so significantly less OH is available to form CO\({}_{2}\) via CO+OH, while the CO abundances are very similar between the two cases (\(10^{-3.86}\) vs. \(10^{-3.88}\) relative to H\({}_{2}\)). Again, ammonia ice is not able to form in abundance as the initially atomic nitrogen is predominantly locked in N\({}_{2}\) within a single year via N + NO \(\rightarrow\) N\({}_{2}\) + O or N + NH \(\rightarrow\) N\({}_{2}\) + H.

Figure 7: Chemical reaction network illustrating the formation of CO\({}_{2}\) ice after a partial reset in which ices accreting onto the CPD are initially sublimated and placed into the gas-phase.

Figure 8: Chemical network diagram centered on the formation of HCOOH ice in the outer regions of the CPD at the midplane.

The radial composition of the ices after \(10^{4}\) yr is similar to the partial 
reset case, although CO\({}_{2}\) ice is found in abundance only in the outer disk beyond \(\sim 2\times\) the semi-major axis of Callisto. In contrast to the partial reset case, the inner disk region is dominated by water ice with a minor (\(<1\%\)) methanol (CH\({}_{3}\)OH) component. Methanol is an important primordial solar system volatile and may act as an anti-freeze in subsurface oceans (Deschamps et al. 2010; Dougherty et al. 2018). It has been found to be abundant in solid form near protostars (Dartois et al. 1999; Boogert et al. 2015), in comets (Bockelee-Morvan et al. 1991; Mumma et al. 1993; Bockelee-Morvan & Biver 2017; Biver & Bockelee-Morvan 2019), and in the gas-phase in planet-forming disks (Walsh et al. 2016; Booth et al. 2021), where it may be formed via grain-surface reactions involving hydrogenation of CO# (Hiroska et al. 1994; Watanabe & Kouchi 2002). At typical pressures in our reference CPD the freeze-out temperature of methanol is greater than that of NH\({}_{3}\) and CO\({}_{2}\) (Mousis et al. 2009; Johnson et al. 2012). Thus, if the CO\({}_{2}\) ice observed on Callisto's surface was formed primordially in the CPD, we could expect that temperatures in the CPD could have allowed for stable methanol ice to be present as well. Indeed, we find that in the inner disk this occurs for \(t_{\rm visc}>10^{3}\) yr, where methanol ice is present at the 1% level at temperatures above 65 K with a peak abundance at 95-100 K. At these densities, it originates almost exclusively from reactions in the gas-phase via sequential hydrogenation of CO in two- and three-body reactions. Approximately 70% is formed via: \[{\rm CO}+{\rm H}+{\rm M}\rightarrow{\rm HCO}+{\rm M}, \tag{26}\] \[{\rm HCO}+{\rm H}+{\rm M}\rightarrow{\rm H}_{2}{\rm CO}+{\rm M},\] (27) \[{\rm H}_{2}{\rm CO}+{\rm H}\rightarrow{\rm CH}_{3}{\rm O},\] (28) \[{\rm CH}_{3}{\rm O}+{\rm H}\rightarrow{\rm CH}_{3}{\rm OH}, \tag{29}\] and the remainder by \[{\rm H}_{2}{\rm CO}+{\rm H}\rightarrow{\rm CH}_{2}{\rm OH}, \tag{30}\] \[{\rm CH}_{2}{\rm OH}+{\rm H}\rightarrow{\rm CH}_{3}{\rm OH}. \tag{31}\] For the reaction H\({}_{2}\)CO\(\rightarrow\)CH\({}_{3}\)OH, we have adopted the rate coefficients from Huynh & Violi (2008) with a barrier of 865 K. In the absence of this reaction, we find that methanol is produced in similar quantities via the CH\({}_{2}\)OH pathway. The rate of formation is thus highly contingent on the availability of free atomic hydrogen in the gas-phase. The absence of abundant atomic hydrogen prevents the accumulation of methanol in the partial reset or inheritance cases. An additional "bottleneck" in the reaction network is H\({}_{2}\)CO. This can be seen in Fig. 9. H\({}_{2}\)CO is formed almost exclusively (\(>99\%\)) via gas-phase three-body collider reactions. In the ISM, methanol ice abundances can significantly exceed that which we find in the CPD. The grain-surface hydrogenation of H\({}_{2}\)CO to form CH\({}_{3}\)O (Eq. 28) has been observed at low temperatures experimentally (Hidaka et al. 2004; Fuchs et al. 2009; Chuang et al. 2016), suggesting that successive hydrogenations of CO can explain the observed abundance of interstellar methanol at low temperatures (\(<15\) K). Above this temperature the desorption of H atoms and lower sticking efficiency of H due to the absence of H\({}_{2}\) causes a considerable drop in this reaction rate. While these reactions are included in our chemical network, the gas temperature in the CPD does not fall below 50 K; thus, we find this path to be inefficient. 
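To make the temperature sensitivity described above more concrete, the short sketch below (an illustrative addition, not part of the published analysis) evaluates the simple Boltzmann barrier factor \(\exp(-E_{a}/T)\) for the 865 K barrier adopted for the H\({}_{2}\)CO + H step, at temperatures spanning the relevant range of the CPD midplane. The absolute reaction rate also depends on the pre-exponential factor, the collider density, and the atomic-hydrogen abundance, none of which are modeled here.

```python
import math

# Illustrative only: Boltzmann barrier factor exp(-E_a/T) for the 865 K
# barrier adopted in the text for the H2CO + H hydrogenation step, at
# temperatures representative of the CPD midplane.
E_A = 865.0  # effective barrier in K (Huynh & Violi 2008, as adopted above)

for temperature in (50.0, 65.0, 100.0):  # K: outer-CPD floor, methanol onset, peak zone
    factor = math.exp(-E_A / temperature)
    print(f"T = {temperature:5.1f} K  ->  exp(-E_a/T) = {factor:.2e}")

# Between 50 K and 100 K this factor rises by nearly four orders of magnitude,
# consistent with methanol ice appearing only where T > ~65 K.
```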
### Full inheritance case

In the event of a full chemical inheritance from the circumstellar disk gap edge, the ice accreting onto the CPD consists predominantly of water, with ratios H\({}_{2}\)O:NH\({}_{3}\):HCN of 100:15:10 and a significant \(\sim 10\%\) component of hydrocarbon ices (e.g. C\({}_{2}\)H\({}_{2}\), C\({}_{3}\)H\({}_{2}\)). This result is generally consistent with modeling of the outer regions of the circumstellar disk where NH\({}_{3}\)/H\({}_{2}\)O = 0.14, with as much as 80% of the nitrogen locked into NH\({}_{3}\) and to a lesser extent HCN (Dodson-Robinson et al. 2009). The final composition of the ices in the inheritance scenario is highly contingent on their initial composition. Given the difficulties in correctly capturing the relevant physical conditions at the outer gap edge and the uncertainty from which altitude the gas originates, we consider it more informative to discuss how the ices are altered post-accretion, rather than focusing on their final composition. Some minor processing of the ices occurs once they are incorporated into the CPD. The more volatile HCN and hydrocarbon ices are lost in the inner region of the disk, where only NH\({}_{3}\)# and a minor component of hydrocarbon ices remain as impurities. In the outer region of the CPD, some conversion of H\({}_{2}\)O and HCN to HCOOH occurs, and to a minor extent CO\({}_{2}\). At temperatures below 70 K, HCOOH is co-deposited with the NH\({}_{3}\) ice. In the presence of the proton acceptor NH\({}_{3}\), HCOOH will convert to the formate anion HCOO\({}^{-}\) and NH\({}_{4}^{+}\) (Hudson & Moore 1999; Schutte et al. 1999; Galvez et al. 2010); however, formate is not included in our chemical network. Likewise, the salt formation reaction is not included in the network. We consider what impact the inclusion of this process could have on our final derived abundances. While the activation barrier of the reaction is negligible, the barrier against diffusion prevents it from occurring at 50-70 K (Schutte et al. 1999). However, some of the HCOOH will react immediately upon deposition due to the acid and base being in direct contact at adjacent binding sites. 10% of the HCOOH ice is observed to react promptly at 10 K in H\({}_{2}\)O:NH\({}_{3}\):HCOOH mixtures with equal parts NH\({}_{3}\) and HCOOH (Schutte et al. 1999). Hence we might expect that as much as \(\sim 20\%\) of the HCOOH present in the outer disk could be converted upon adsorption to HCOO\({}^{-}\)NH\({}_{4}^{+}\).

Figure 9: Chemical network diagram centered on the formation of CH\({}_{3}\)OH at the CPD midplane in the reset case.

### Differing diffusion timescales

Owing to the uncertainty in the diffusion timescale on which the gas parcel drifts towards the CPD midplane, we also considered cases with 10\(\times\) shorter and 10\(\times\) longer \(t_{\rm diff}\). For \(t_{\rm diff}=100\) yr, all three initial conditions converge towards a similar final composition which is CO\({}_{2}\)-dominated (\(>95\%\) by weight) across the entire disk. This is clearly inconsistent with observations of the Galilean moons. The shorter \(t_{\rm diff}=1\) yr leaves the chemistry less affected by the time spent at the disk surface. In the partial reset case, a minor fraction (3%) of the accreted circumstellar NH\({}_{3}\) ice survives and can still be found at the CPD midplane after \(10^{4}\) yr. In the full reset case, the CH\({}_{3}\)OH component in the inner disk region becomes more substantial, increasing to a peak of 4% of the total ice mass. 
This additional CH\({}_{3}\)OH forms because more of the initially atomic hydrogen survives until ices become stable against photodissociation, and is available to hydrogenate H\({}_{2}\)CO and CH\({}_{3}\)O.

## 4 Implications

### Absence of ammonia as an indicator of chemical reset

We have found that a partial or complete chemical reset of the CPD tends to suppress NH\({}_{3}\) formation as efficient N\({}_{2}\) self-shielding locks up nitrogen in N\({}_{2}\). Even if a substantial component (\(\sim 20-30\%\)) of NH\({}_{3}\) ice were present in Jupiter's feeding zone, a partial or complete reset would prevent its accumulation in the building blocks of the moons. Without a substantial NH\({}_{3}\) component, the liquidus temperature of the Galilean subsurface oceans may not differ substantially from that of pure water ice. Europa appears to be the only Galilean moon where tectonic or cryovolcanic processes have recently exchanged material between the surface and subsurface, where it could provide clues to the composition of an ocean (Kargel et al. 2000; Zolotov & Shock 2001). NH\({}_{3}\) brought to the surface in the form of an NH\({}_{3}\)-H\({}_{2}\)O matrix could be lost on geologically brief timescales to external radiation (Moore et al. 2007; Bergantini et al. 2014). The longevity of surface ammonia might be extended if it appeared in a more stable form such as a hydrate or salt (Cook et al. 2018), but no positive detection has thus far been made (Clark et al. 2014). The non-detection of ammonium compounds on Europa's surface is compatible with a lack of ammonia in a subsurface ocean, although it is certainly not conclusive evidence of its absence. In contrast to the Galilean system, several lines of evidence indicate the presence of NH\({}_{3}\) ice during the accretion of the Saturnian moons. The inferred interior composition of the Saturnian moon Enceladus appears to resemble more closely well-mixed outer solar system material and is generally consistent with a composition inherited from solar nebula (cometary) material (Waite et al. 2009). Enceladus contains a liquid water ocean (Thomas et al. 2016) from which interior material is ejected through plumes (Spahn et al. 2006; Porco et al. 2006; Waite et al. 2006). The presence of NH\({}_{3}\) in the plumes of Enceladus has been established by measurements from several instruments onboard the Cassini spacecraft (Waite et al. 2009) at \(>0.1\%\) relative to H\({}_{2}\)O, besides CO\({}_{2}\), CH\({}_{4}\) and H\({}_{2}\) (Magee & Waite 2017). Likewise, NH\({}_{3}\) ice is considered to be a likely source of Titan's nitrogen (McKay et al. 1988; Sekine et al. 2011; Mandt et al. 2014). We suggest that the CPDs of sufficiently massive planets lose accreted NH\({}_{3}\) ice to mild accretion shocks and subsequent chemical evolution, and that the absence of NH\({}_{3}\) ice may indicate a (partial) chemical reset has occurred. As NH\({}_{3}\) represents one of the most potent and potentially abundant anti-freezes, subsurface ocean occurrence rates and longevity may then be relatively enhanced in the icy moons that accompany lower-mass giant planets which inherit circumstellar material.

### Carbon Dioxide at the origin of Ganymede and Callisto

Several lines of evidence suggest the surface of Callisto is among the most primordial of the regular satellites, potentially providing a direct link to the formation environment of the Galilean moons (Moore et al. 2004; Zahnle et al. 2003). 
CO\({}_{2}\) ice has been detected on the surface of both Ganymede and Callisto (Carlson et al. 1996; McCord et al. 1997), but only on Callisto does it appear to originate from below the surface (Hibbitts et al. 2002; Hibbitts et al. 2003), where it is apparently exhumed by impact cratering. In contrast, CO\({}_{2}\) on the surface of Ganymede appears to be of exogenous or radiolytic origin (Hibbitts et al. 2003). Hence, if we consider Callisto's reservoir of CO\({}_{2}\) ice to be primordial, we can ask which of our assumptions are consistent with its presence. In the partial reset case, which we considered to be a priori the most likely initial condition of accreted material, CO\({}_{2}\) ice is present in significant quantities at the present-day position of Callisto but less so near Ganymede. Superficially this appears to be consistent with the proposed distinct origins of Ganymede and Callisto's CO\({}_{2}\). However, the local ice mass fraction of CO\({}_{2}\) in the CPD is high (\(\geq\)60%). This appears to be in conflict with the inferred surface abundance of CO\({}_{2}\) ice on Callisto, where it constitutes no more than 0.01-0.16% of the host material mass (Hibbitts et al. 2002). It is however unclear whether the observationally inferred surface abundance of CO\({}_{2}\) on Callisto is truly representative of the subsurface composition. Pure CO\({}_{2}\) ice is not stable at the surface of the Galilean moons and CO\({}_{2}\) may instead be present in the form of clathrates (Chaban et al. 2007). Hence, an initially large CO\({}_{2}\) component exposed to the surface could have been lost to sublimation and dissociation. A substantial subsurface CO\({}_{2}\) reservoir is nevertheless implied, given the continuous replenishment of Callisto's CO\({}_{2}\) exosphere (Carlson 1999). In contrast to the partially reset case, we find CO\({}_{2}\) ice at a concentration of \(\sim 0.2\%\) near Callisto's location in the fully reset CPD. While this appears to be more representative of what is known of the Galilean moon surface composition, the primordial CO\({}_{2}\) concentration of Callisto's building blocks cannot simply be derived from the present state of the surface. Our findings are consistent with a primordial origin for Callisto's CO\({}_{2}\), and point to the possibility that Ganymede and Callisto's icy building blocks had distinct chemical compositions. While it has been suggested that Ganymede may have formed with a primordial CO\({}_{2}\) component which was lost during an episodic period of surface melting, our results suggest icy grains in its vicinity were CO\({}_{2}\)-poor. A CPD midplane temperature profile which is dominated by viscous heating and in which the water iceline falls between Europa and Ganymede naturally produces a CO\({}_{2}\) iceline between Ganymede and Callisto.

## 5 Summary and Conclusions

If CPD ice composition is (partially or fully) reset, NH\({}_{3}\) ice formation is inefficient due to N\({}_{2}\) self-shielding. The resulting \(\ll\)1% concentration of NH\({}_{3}\) ice is unlikely to significantly alter the thermophysical/chemical properties of subsurface melt. The most significant impurities are the carbon-bearing CO\({}_{2}\) and HCOOH ices, which each make up at most \(\sim 10\)% of the molar ice fraction. If the growth of the Galilean moons occurred near their present-day positions, they are largely free of impurities, being composed of 98% water ice, \(\sim 2\)% CH\({}_{3}\)OH, and trace amounts of CO\({}_{2}\). 
If instead the CPD ice composition is inherited from the circumstellar nebula, NH\({}_{3}\) ice can survive conditions at the CPD midplane and becomes the most abundant impurity. Observations indicating the presence of NH\({}_{3}\) in the Saturnian satellite system but not in the Galilean one are consistent with a reset-inheritance dichotomy. NH\({}_{3}\) in the planetary feeding zone of Jupiter, if present, may have been destroyed during accretion onto the CPD and then could not form again in time. Our key findings are summarized as follows:

1. The ice composition of the Galilean moons corresponds to a partial or full chemical reset, as opposed to the ices of the Saturnian moons, which may have been more directly inherited from the circumstellar disk.
2. A partial reset prevents efficient formation of ammonia ice. The building blocks of the Galilean moons (and of exomoons forming in similar CPDs) would be nitrogen-poor (NH\({}_{3}\) ice abundances with respect to the H\({}_{2}\)O ice of \(\sim 0.1\)%).
3. Our results are consistent with a primordial origin for CO\({}_{2}\) ice on Callisto and an ice composition that is chemically distinct from Ganymede.

The composition of the building blocks that form moons around giant planets is determined by the conditions of accretion onto the planet's CPD, which in turn is influenced by the mass and orbital properties of the planet. The compositional reset-inheritance dichotomy of CPD ices ties together the properties of the planet and the long-term geophysical evolution and composition of icy satellite interior oceans.

###### Acknowledgements.

The research of N.O. and I.K. is supported by grants from the Netherlands Organization for Scientific Research (NWO, grant number 614.001.552) and the Netherlands Research School for Astronomy (NOVA). This research has made use of NASA's Astrophysics Data System Bibliographic Services. This research has also extensively used Numpy (Harris et al., 2020), Matplotlib (Hunter 2007), Scipy (Virtanen et al.), and Prodromorph ([https://gjilab.astro.rug.nl/prodromorph](https://gjilab.astro.rug.nl/prodromorph)). N.O. would like to thank S. Coulemans for her suggestion that greatly improved the visualizations in this work, as well as J. Tjoa and S. van Merlo for helpful discussions and support.
2309.10531
Proposal for an Organic Web, The missing link between the Web and the Semantic Web, Part 1
A huge amount of information is produced in digital form. The Semantic Web stems from the realisation that dealing efficiently with this production requires getting better at interlinking digital informational resources together. Its focus is on linking data. Linking data isn't enough. We need to provide infrastructural support for linking all sorts of informational resources including resources whose understanding and fine interlinking requires domain-specific human expertise. At times when many problems scale to planetary dimensions, it is essential to scale coordination of information processing and information production, without giving up on expertise and depth of analysis, nor forcing languages and formalisms onto thinkers, decision-makers and innovators that are only suitable to some forms of intelligence. This article makes a proposal in this direction and in line with the idea of interlinking championed by the Semantic Web.
Mathilde Noual
2023-09-19T11:17:32Z
http://arxiv.org/abs/2309.10531v1
# Proposal for an Organic Web ###### Abstract A huge amount of information is produced in digital form. The Semantic Web stems from the realisation that dealing efficiently with this production requires getting better at interlinking digital informational resources together [4]. Its focus is on linking _data_. Linking data isn't enough. Not all information produced is intended to be processed _as data_ per se. Most of the digital content produced today is unstructured (informal) text whose progressive semantics are only intended to be intelligible to humans. The documents containing the information can themselves be interlinked as if they were data. But links between granular documents then only convey a shallow pre-defined semantics that ignores the rich progressive semantics expressed inside the documents. Dealing with traditional documents as if they were data, is bound to make suboptimal use of their contents, and arguably remains of limited utility. We need to provide infrastructural support for linking all sorts of informational resources including resources whose understanding and fine interlinking requires domain-specific human expertise. At times when many problems scale to planetary dimensions, it is essential to scale coordination of information processing and information production, without giving up on expertise and depth of analysis, nor forcing languages and formalisms onto thinkers, decision-makers and innovators that are only suitable to some forms of intelligence. I make a proposal in this direction and in line with the idea of interlinking championed by the Semantic Web. This proposal is the result of a compilation of ideas contributed by the scientific and entrepreneur communities over several years of discussions. **Keywords:** Digital information system/network, Continual improvement, Global redundancy management, Collective documentation, Collective intelligence, Slow-first collaboration, Datamodel, Scientific research infrastructure, Knowledge management/engineering, Local-first software, Digital sobriety, Crowdsourced analytical reasoning, ## 1 Introduction ### Motivation Most of the digital content produced today is "unstructured", meaning its semantics is not understood without the involvement of a human mind. Natural language processing techniques only extract a tiny proportion of the semantics of all unstructured content produced by humans (and now machines). Huge quantities of unstructured digital content are a problem. More unstructured content means more work for humans. The risks are: (i) that content be made suboptimal use of, by both humans and machines, and (ii) that content stand in the way of humans communicating and working efficiently with each other. A primary motivation of my proposal is to address this problem and deal with unstructured digital content _early_ in its life cycle. My aim is to help preempt the production of digital content with low capacity to positively impact on and empower human enterprises and with high capacity to stand in their way. This requires means to gauge the value of new pieces of information against the body of pre-existing information which Vannevar Bush famously referred to as "_the record_" [10], as I do here too. It isn't enough to evaluate pieces of information individually, we also need to take a step back and watch over humanity's global informational commons. The actual digitalisation of knowledge and of knowledge work is an opportunity to materialise a well delineated version of the record. 
But amassing quantities of digital content is not enough to make the record manageable. Pieces of information in the record must be related with one another _meaningfully_ (cf Suppl. Mat. B.2). It isn't enough to have all theorems about, say, Boolean Automata Networks digitally recorded. To ensure that we make good use of all those theorems, and that we don't add repetitions of knowledge that they already capture, we must also know, and document, how those theorems relate to each other: _which ones are generalisations of which other ones, which ones are used in the proofs of others, which ones are equivalent to each other, which ones contradict each other..._ I contend that the record should honour the highest level of expertise of the humans who contribute knowledge to it. More than just document the existence and nature of epistemic relations between theorems and other pieces of information, we should also endeavour to _highlight_ the finest known details of those relations - e.g. _how does theorem \(t_{1}\) generalise theorem \(t_{2}\), that is, what features of Boolean Automata Networks does the generalisation relation operate on: the topology of the Boolean Automata Networks? their dynamics? In what way is formalism \(f_{1}\) equivalent to formalism \(f_{2}\): mathematically? semantically? philosophically? What question do two results provide contradicting answers to?..._ My proposal relies on design biases. I document them in yellow boxes in the present article. The supplementary material B accompanying this article, presents further fundamental philosophical biases underlying my proposal. Design Bias #1: **Smart Networking** I wish to support an informational network whose _structure_ reflects as best as possible the depth of human expertise across the diversity of informational domains that get documented in the network. Design Bias 1 requires that expertise no longer be documented only _inside_ documents at the endpoints of the informational network's links. The links themselves should reflect expert insight. This disagrees with the documentation of links by/for outsiders. My proposal devises a solution to manage the record both locally and globally. It aims (1) to favour the systematic stripping down of new digital content to the bare, useful minimum, defined in terms of what information the digital record already contains, and (2) to promote care and detail in linking new digital content to humanity's body of digital content. ### Information As suggested above, the focus of this proposal is on _unstructured_ information meant to be consumed by humans rather than machines. The proposal is nonetheless extended to deal marginally with the case of structured data. This extension will be presented in a subsequent article. Until then, our focus is more specifically on textual information, for instance: pieces of science, of philosophical arguments, of geopolitical discussions, polemics, recipes, tutorials, stories... The solution proposed in this paper may also to some extent accommodate poems and videos, but I leave this less argumentative kind of content aside for the moment to concentrate on textual information that invites discussion, nuance, questioning, updating, reformulating... A future extension of the solution to at least pictures (e.g. photos of whiteboards) will be important in my opinion. 
The solution should ultimately accommodate the diversity of successful informational work practices and not impose cumbersome documentation efforts on thinkers, especially when less interfering documentation alternatives can be put into place (see Design Bias 2). It remains that the extension of our solution to non-textual media can be taken care of later once I have mastered the solution for textual information. Design Bias #2: **Experts at work know best / Support what already works** I wish to support humans in their practice of informational work. Many information workers, i.e., thinkers (e.g. scientist researchers) are already equipped to doing worthwhile informational work and are doing it well. Different thinkers operate within different epistemic cultures, and their chains of thought follow different series of landmarks. It is essential for "informational welfare" (progress and quality of information) that existing informational know-how, cultures and habits be respected. I wish to dedicate a solution to supporting what works well already1. And in particular I wish to propose a solution that preserves the focus of experts on what they are successfully contributing. Footnote 1: I believe it is far more important and safe to support what already functions well than to attempt relieving pain points of informational workers. Indeed, solutions brought to informational workers are bound to emphasise certain of their activities over others. Pain points of informational workers that are _at the core_ of informational work and that outsiders don’t understand (e.g. difficulty in proving a conjecture) are best dealt with by the informational workers themselves. All the other pain points relate to activities that are _marginal_ to the informational work (e.g. self-marketing activities). Arguably, mitigating those pain points risks facilitating and thereby emphasising those marginal activities at the expense of the core informational work. Dynamics and balance of experts’ activities should be protected from generic technologies built by outsiders. Design Bias 2 means the solution aimed at here, is _not_ primarily a solution in support of communication. The Supplementary Material B accompanying the present article emphasises a distinction between supporting communication and supporting information (cf Suppl. Mat. B.19). The present proposal emerged from a context of transdisciplinary scientific research involving formal sciences and life sciences. Its motivating intention isn't so much to support the production of formal documents that help communicate ideas. It rather is to support the stage of informational work at which ideas are shaped with words, sometimes upstream from the work of formal documentation. The emphasis is _less_ on supporting use and reuse of informational resources (contrary to the Web Annotation Data Model[53]), than it is on supporting _the continual improvement, renewal and obsolescence of informational resources_. I expect enhanced use and reuse to naturally follow from smart networking (as well as from architectural choices presented in Suppl. Mat. A). Design Bias #3: **Continual Improvement and Updating of Information** I consider information as a process, something to do, until it no longer is relevant. A piece of information is neither definite nor free-standing. Unlike data is not a given and unlike knowledge it is not established. 
It is _at its best_ when it is subjected to continual improvement: when it is getting nuanced, detailed, challenged, (re)contextualised, updated... to follow the world it describes as it changes. Eventually a piece of information becomes obsolete. I wish to support information _at its best_ and facilitate the obsolescence of unprocessable information. Notably, Design Bias 3 disagrees with ridding the record of low quality information. Low quality information is typically information that can be improved. See Suppl. Mat. B.16. Following Design Bias 3 and its emphasis on the _dynamism_ of information, the solution I propose to build is also not optimized for automated reasoning and inference, which require _settling_ on some knowledge base. The Supplementary Material B further details assumed characteristics of the notion of information, providing epistemic foundation to this proposed solution. ### Epistemic Glue A base assumption of the proposed solution is that different pieces of information may be produced by different humans. The solution is to support _collective_ documentation. Appreciating the way in which individual pieces of information relate to each other is paramount to this project of organising the record and managing redundancy in it. Consider two individual pieces of information, \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). A legitimate question is: How are \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) related? Possible answers are: * _They are not related at all._ * _They contradict each other._ * _They are about the same topic_ \(\mathcal{T}\)_._ * _They use the same term_ \(\mathcal{T}\)_._ * _They refer to the same object or concept_ \(x\)_._ * _They denote the same object or concept_ \(x\)_._ * _They imply the same consequences._ * _They answer the same question._ * _They appear in the same book._ There is a great diversity of ways in which two independent pieces of information might relate to each other. Obviously not all relations are possible between any two pieces of information. And some relations are harder to document than others - e.g. it is harder to say that \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) refer to the same concept than to say that they use the same term. Note also that \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) may be related in more than one way, and that a relation between \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) is itself a piece of information. I use the term "glue" to refer to information that relates pieces of information together. Because the record is a collective document, glue is necessary for it to have structure. Without glue, the record would merely be a collection of independent resources, possibly organised into some sub-collections and categories defined by an arbitrary (central) entity. An important desirable property of glue is that it be _generic_ so the structure it gives to the record be _domain-agnostic_. For instance, saying that \(\mathcal{I}_{1}\) answers the question \(\mathcal{I}_{2}\) is a generic, domain-agnostic way of gluing \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) together. Saying that \(\mathcal{I}_{1}\Longrightarrow\mathcal{I}_{2}\) (\(\mathcal{I}_{1}\) mathematically implies \(\mathcal{I}_{2}\) as in \(\neg\mathcal{I}_{1}\vee\mathcal{I}_{2}\) holds) is not. Not all relations provide the same "epistemic depth" of glue. 
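Before discussing how such relations differ in depth, it may help to see what glue looks like as data. The toy sketch below is only an illustration, not the MMM data model defined in Section 2: it represents pieces of information and the glue between them as records of the same kind, so that a piece of glue can itself be questioned, nuanced or challenged. All names and texts in it are illustrative.

```python
from dataclasses import dataclass, field
import itertools

_ids = itertools.count(1)

@dataclass
class Piece:
    """A piece of information; glue (a relation between pieces) is itself a Piece."""
    text: str
    about: tuple[int, ...] = ()   # identifiers of the pieces this piece relates or refers to
    kind: str = "statement"       # e.g. "statement", "question", "glue" -- purely illustrative
    id: int = field(default_factory=lambda: next(_ids))

record: dict[int, Piece] = {}

def add(piece: Piece) -> Piece:
    record[piece.id] = piece
    return piece

i1 = add(Piece("Theorem T1 holds for all Boolean Automata Networks."))
i2 = add(Piece("Theorem T2 holds for acyclic Boolean Automata Networks."))
# The glue between i1 and i2 is itself a piece of information...
g = add(Piece("T1 generalises T2: it relaxes the acyclicity assumption.",
              about=(i1.id, i2.id), kind="glue"))
# ...so it can in turn be questioned, nuanced or challenged.
q = add(Piece("Does the generalisation also operate on the networks' dynamics?",
              about=(g.id,), kind="question"))

for p in record.values():
    print(f"[{p.id}] {p.kind:9s} about={p.about} {p.text}")
```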
For instance, understanding that \(\mathcal{I}_{1}\) implies \(\mathcal{I}_{2}\) is epistemically deeper than understanding that \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) appear in the same book. Generally, let \(\mathcal{R}\) and \(\mathcal{R}^{\prime}\) be two relations (like the ones listed above) both between pieces of information \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). Informally, we say that \(\mathcal{R}\) is _epistemically deeper_ than \(\mathcal{R}^{\prime}\) if understanding \(\mathcal{R}\)_leads to_ more understanding of \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) than does understanding \(\mathcal{R}^{\prime}\), or if \(\mathcal{R}\)_comes from_ more understanding of \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). The glue we're interested here must be _smart_ (cf Design Bias 1). It must be generic without being shallow. We want to materialise a _smartly_ networked version of the digital record. So we are interested in emphasising relations that are epistemically deep. However, the digital record is a _collective_ work. A diversity of epistemic cultures, approaches, formalisms and languages need to be accommodated. Our solution must support the provision of glue in a diversity of formalisms, languages _etc_. For experts to contribute glue, glue should not be tedious to contribute. Following Design Bias 2, a legal scientist who understands that \(\mathcal{I}_{1}\) legally implies \(\mathcal{I}_{2}\) should _not_ have to understand the mathematical notion of implication (nor any other notion of implication for that matter) in order to document the legal relation between \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). Similarly, when documenting a mathematical relation between two theorems \(T_{1}\) and \(T_{2}\), a mathematician who knows how theorem \(T_{1}\) generalises theorem \(T_{2}\) should _not_ be required to take the approach of a formal ontology modeller3. An expertise in \(\mathcal{R}\), \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) (all three are pieces of information) should be enough to document \(\mathcal{R}\) between \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). No understanding of the repercussions of \(\mathcal{R}\) beyond \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) should be required. In short, from the point of view of experts at work, documenting content as part of our smart networking solution should not be significantly different from documenting content today with current digital technologies that don't support smart networking (e.g. text editors). There are many ways of "gluing" informational resources together (cf bullet points above). A popular way involves ontological and taxonomic commitments (cf Suppl. Mat. B.15). For instance, two independent informational resources A and B (e.g. two statements, or two articles) may be related based on the fact that they both assume that a certain object 'Bob' exists and they both agree on what Bob really refers to, what kind of object it is, e.g. a 'person' which is a type of'mammal' with a 'phone number', equal to 01-23-45-67-89. If resources A and B agree on the existence and meaning of 'Bob', there is deep epistemic glue between them in the form of shared rigorous semantics. Checking that the 'Bob' of resource A is exactly the same as the 'Bob' of resource B can be very demanding. Often, correspondences across ontologies and taxonomies, between concepts used to define an object (e.g. person, mammal, phone number) are not trivial to establish [44]. 
I propose to avoid this difficulty altogether and _not_ restrict the glue we consider here in the way formal ontologies and taxonomies restrict it to semantic relations. A founding hypothesis of my proposal is that good information feeds on a diversity of epistemic glue and that the glue itself must be challengeable like any other piece of information is. Thus, my proposal neither assumes, imposes nor even aims at semantic homogeneity. I propose to allow for ontological and taxonomic commitments to be documented explicitly, questioned and discussed like any other piece of information. Even without a guarantee that resources A and B rigorously agree on the existence and meaning of 'Bob', A and B may still be glued together by the _explicit (motivated) assumption_ that they do agree, or simply by a question asking whether they agree4. An obvious consequence is that the digital version of the record that I propose to structure with glue, isn't expected to have global semantic consistency. It can't serve as a functional representation of the world5. The aim here is rather to support "safe passage" between different documented perspectives on the world. Footnote 4: This suggests an alternative to having to build trust in the content of the record. Rather than try to make the information trustworthy, we encourage anyone who has a doubt to express that doubt so that it can be addressed. Of course, for this, efficient redundancy management is key. Footnote 5: This is desirable. A model doesn’t scale well. A collectively built, all-encompassing model would have limited use for the huge majority of humans who continually need to change perspectives, deal with uncertainty, ambiguity _etc._ When gluing a new piece of information \(\mathcal{I}\) to existing pieces of information \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots\mathcal{I}_{n}\) already in the record, \(\mathcal{I}\) should gain in precision because of the epistemically relevant context provided by the glue to \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots\mathcal{I}_{n}\). Conversely, \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots\mathcal{I}_{n}\) should also gain from being completed, nuanced, circumscribed _etc_ by the glue to \(\mathcal{I}\). Otherwise, the relations between \(\mathcal{I}\) and \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots\mathcal{I}_{n}\) should highlight the inappropriateness of \(\mathcal{I}\). Glue generally gives information on information6. More glue should make the record more manageable because it should increase the ease with which we can epistemically relate any two pieces of information in it and thus jointly deal with them. It should help take into account pieces of information that come from heterogeneous sources. Individual pieces of information that make sense individually shouldn't make less sense when glued to other pieces of the record. They shouldn't degrade the record either. On the contrary, more information should be beneficial to the record, or it should be easy to see that it isn't. Footnote 6: _Not_ meta-information – of Suppl. Mat. B.3 ### There is a missing link between the Web and the Semantic Web. Traditional human-centric information networks - e.g. the Web, the network of inter-cited academic publications - are not smart epistemically structured networks. They are _infrastructural_ networks taking care of the logistics of humans' informational activities. Information in these networks is mostly _inside_ the documents at the endpoints of the network links. The links between documents are epistemically shallow. 
The Web is missing a smart epistemic backbone. The Semantic Web proposes to materialise a semantic one, in the aim of opening the wealth of Web content up to automatic processing. It defines standards like RDF (the Resource Description Framework) [14, 36] that support strong semantic gluing of informational resources. The gluing operates at a finer granularity than the Web's cross-document hyperlinking. The shift in granularity is consequential: it unlocks possibilities of collective documentation and enhanced resource sharing (data can be reused in multiple RDF triples by multiple authors)7. Footnote 7: RDF is a standard data model that expresses information by way of ’semantic triples”. A semantic triple consists in a subject, a predicate and an object which altogether express a statement. Each of the three components of a triple are individually addressable. Atomic pieces of information can thus be reused in multiple triples by multiple authors. “Bob” can be the subject and object of multiple statements. ”enjoys” can be the predicate of multiple statements, linking together multiple resources acting as subjects and objects. The Semantic Web only concerns a minor proportion of the information that is of interest to humans. Its standards are designed to make information machine-understandable in order to support automatic reasoning over it. Most information that is of interest to humans is expressed by humans, for humans8. It doesn't need to be made machine-understandable and is therefore usually not worth being formalised into SW standards. When a researcher proves a new theorem \(T\) there rarely is a pressing incentive, if any, to make a machine understand \(T\). There is usually one for the researcher's human peers to understand \(T\). The sort of information that the Semantic Web is primarily designed to deal with is _metadata_ (cf Suppl. Mat. B.3). Metadata is essential for transferring rudimentary semantics over to the machines [24]. But unlike the expression and proof of a theorem \(T\), metadata describing \(T\) (e.g. the authorship, the creation date of \(T\)) is not reflective of the depth and nuance of understanding that domain experts have of \(T\). So we can't make a smart network out of metadata. Also, the SW standards are designed to constructively collect information as part of coherent ontological models. But our interest here is supporting informational progress (cf Design Bias 3), which can happen in a variety of ways, including through the deconstruction of established models. Human produced information expresses doubts, questions, hypotheses _etc_ and sometimes challenges the best models actually in service. A web looser than the Semantic Web is needed to interlink and structure this information. Footnote 8: The common denominator for humans is natural language, not any specific formal language. Even when humans (e.g. theoretical computer scientists at work) endeavour to produce highly formalised information, they spend most of their work time navigating _between_ levels of formalisation, rather than thinking committedly in one particular formalism. As most humans don’t speak in rigorous RDF triples, populating the Semantic Web with human produced information would require intermediary entities savvy of SW standards. The translation of a piece of information into SW standards is rarely worth the cost and effort. Arguably, there also is a danger in handing the translation of expert information over to non-specialists. 
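For readers unfamiliar with the triple model sketched in the footnote above, the minimal example below shows a few RDF statements sharing the resource "Bob". It uses the Python rdflib library, which is one common toolkit among others and is not otherwise assumed by this proposal; the namespace and property names are placeholders.

```python
from rdflib import RDF, Graph, Literal, Namespace

# Minimal illustration of RDF's subject-predicate-object triples (see the
# footnote above); rdflib is used here only as a convenient example toolkit.
EX = Namespace("http://example.org/")   # throw-away namespace for the example

g = Graph()
bob = EX.Bob                            # "Bob" is an individually addressable resource...

# ...so the same resource can be reused in several statements, possibly
# contributed by different authors:
g.add((bob, RDF.type, EX.Person))
g.add((bob, EX.enjoys, EX.Gardening))
g.add((bob, EX.phoneNumber, Literal("01-23-45-67-89")))

print(g.serialize(format="turtle"))
```

The MMM, by contrast, is not meant to require this kind of formalisation from contributors; the example only makes concrete what fine-grained semantic gluing looks like in the Semantic Web standards discussed here.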
Profound domain-specific knowledge is best documented first-hand by the domain experts. I propose to materialise **an intermediary web** - namely, the **"the MMM"** - geared towards supporting human reasoning and its organised documentation9. MMM stands for **Mutual Mutable Medium**, meaning _collective dynamic document_ or _record_10. Footnote 9: The semantic Web vision is to support automated reasoning by providing machines with collectively built knowledge in the right formalisation. Arguably, a reasonable intermediary step to organising information for machines is to do it for humans (cf Suppl. Mat. B.12). Footnote 10: “Mutual” in MMM replaces “Worldwide” in WWW. The idea is to go from a paradigm where information is meant to be distributed to everyone, to a paradigm where information emerges from small scale invested relationships. The intermediary MMM web is to **co-exist with the original Web**, possibly even interface with it. Theoretically, anything expressible in text on the Web can be expressed on the MMM with no obligation for translation or reformulation. In practice, software interfaces may exist (for instance Web browser extensions comparable to hypothes.is [46, 57], see also Fig. 23 in Suppl. Mat. A.4) to copy or to reference information from the Web onto the MMM where information benefits from epistemic interlinking. The intermediary MMM web is also to **relay the Semantic Web**. Without serving the same purpose as the Semantic Web nor conveying the same kind of content, the MMM is to offer some compatibility with the standards of the Semantic Web. \begin{table} \begin{tabular}{|l|c|c|} \cline{2-3} \multicolumn{1}{c|}{} & **The Web** & **The Semantic Web** \\ \hline Is designed primarily for... & humans & machines \\ \hline Allows anyone to contribute without technical skills & & ✗ \\ \hline Doesn’t limit expressivity, accepts any textual information expressed & & ✗ \\ in any formalism or language, at any level of formalisation, including & & \\ healthily challenging and nuancing information & & \\ \hline Supports epistemic glue in between individual resources & ✗ & ✗ \\ & hyperlinks only & _semantic_ glue \\ \hline Is designed to network any granularity of informational resources & ✗ & ✗ \\ & mostly documents & \\ \hline Natively allows to annotate any granularity of information & ✗ & ✗ \\ & & cf RDFS comments \\ \hline Natively allows to interlink annotations like any other informational & ✗ & ✗ \\ resource & & \\ \hline \end{tabular} \end{table} Table 1: There is room for an intermediary web whose content, like the Webs, is primarily for humans to consume, and whose structure, like the Semantic Webs, is “smart” (cf Design Bias 1) and “epistemically deep” (cf Section 1.3). A mapping between the MMM and the RDF data models will be provided in a follow-up article. Once mapped into the MMM, data looses some of its ability to be efficiently processed by machines. Automatically populating the MMM with Semantic Web resources may nonetheless have some advantages. It may favour the reuse of resources because it brings the resources to humans, exposing them to human scrutiny and annotations, and making humans systematically aware of the existing terminologies and taxonomies that are epistemically relevant to their work. It may help mitigate the amount of redundant and diverging new terminology. Conversely, interfacing the Semantic Web with the MMM can enable more systematic generation and update of formal ontologies. 
The MMM data model can be leveraged in the capacity of "proto-taxonomical" structure for smart systematic documenting and informing of ontology design decisions [38, 48]. ### Requirements To materialise a version of the digital record that has the desirable properties discussed in Section 1, I propose to organise information as per a pre-defined data model. The data model is called the **MMM data model** or **MMM format**. It is formally introduced in the next section. Here let us first list some basic requirements for the MMM format. N.B.: This proposal is _not_ a solution for organising already archived information**. It aims instead at supporting humans as they work on updating and renewing information. In other terms, the solution is to resemble an upgrade on the concept of whiteboard more than it is to resemble an upgrade on the concept of library. 1. **Inclusivity and informalism-friendliness**: It must be possible and easy for a human user to contribute anything expressible as text in natural language. We shouldn't have to bother users with unwelcome formalisation exercises. It must be possible to contribute content to the smart network, without knowledge of any formal language. 2. **Epistemic glue**: It must be easy for users to document links between their contributions and pre-existing ones. It must be possible for them to make explicit the epistemic relationship between them. 3. **Recursive annotation**: It must be possible to question, nuance, challenge, detail any contribution. Generally, it must be possible to comment on/annotate any contribution, including annotations themselves, as well as links between contributions and links between annotations. 4. **Minimal metadata**: The amount of metadata associated to each contribution must be kept minimal, and for strictly administrative purposes (cf Suppl. Mat. B.3). Metadata should not be needed to assess the quality of contributions. 5. **Reformulation**: It must be possible to contribute to the record by adding a reformulation of an existing contribution (this proposal does _not_ aim at finding a unique canonical expression of each piece of information). 6. **Intelligible collective documentation**: It must be possible for independent users to contribute to the record without consulting each other, even when they document closely related information. The record should not decrease in quality. It must be possible for the set of theirs contributions to constitute an intelligible collective document whose smart structure allows navigating meaningfully from contribution to contribution. 7. **Contribution types**: It must be easy to distinguish between different types of contributions. In particular it must be easy to distinguish contributions that are questions from contributions that are not (cf Suppl. Mat. B.17). Generally: 1. The semantics of contribution types must be intuitive. It must be easy for contributors to know which type to assign to their contributions. 2. The number of different contribution types must be kept small. To assign a type to their contributions, users must not need to learn a long list of types. 3. The set of different contribution types must be stable. Types must be pre-defined11. Users must not need to keep up to speed with evolving definitions. Footnote 11: This means that we need to have a good set of types from the start to limit the need for future evolutions. I claim to have a basis for this starting set (cf Section 2). 
Methodical and diverse testing of this set needs to be performed beyond what has already been accomplished before writing this proposal. 4. The semantics of contribution types must be generic (domain-independent). A contribution's type must convey the basic epistemic role or purpose of the contribution (e.g. is the contribution a question, a statement or something else?). The genericness of types must maintain bridges across domains of expertise. 5. The semantics of contribution types must be loose. There must be room for interpretation and specification of the meaning of a type. It must be possible for users to make slightly different uses of the same type12. Users from different domains must still use the same contribution type in _relatable_ ways13 (cf requirement R7d). It must thus be possible for contributors to _narrow down_ the generic epistemic purpose conveyed by a type. For instance, having assigned the type "question" to a contribution, it must be possible for the contributor to specify that the contribution is a _rhetorical_ question or that it is a _confirmation seeking_ question. Footnote 12: Just like how biologists and mathematicians slightly diverge on what kind of information goes in an article introduction. But they can still read each others’ articles without being thrown off by what they find in the introduction. 6. It must be easy for contributors to assign a _default_ type to their contributions when they don't want to bother choosing a more epistemically meaningful type. It must be possible for contributors to use our solution without leveraging its structuring opportunities. 7. It must be easy to highlight the ontological commitments underlying a contribution. For instance the question "_What are genes made of_" makes the tacit ontological assumption that genes exist (cf Suppl. Mat. B.18). It must be easy to make a contribution whose purpose is to highlight this about this question. Because of requirement R1 and also because of Design Bias 2 ("Experts at work know best"), the MMM data model must be very flexible by design. This means that there may often be several ways to document the same piece of information in MMM format. As a consequence, the definition of the MMM format needs to be relayed by a collection of _best practices for users._ Some are mentioned below. Best practices may be promoted through the design of MMM editors and other tools for contributing content in MMM format (cf Suppl. Mat. A.4). The primary version of the MMM data model introduced in Section 2 is expected to need some minor tweaking. It is essential that it remain small and simple. The MMM format must strike a balance between freedom and constraint in the exercise of manual documentation (cf Suppl. Mat. B.12). Possible necessary future adaptations of the MMM format should be wary of maintaining that balance. The definition of the MMM data model is exclusively motivated by _practical_ reasons. There is no epistemological theory behind its definition. Any modification to the MMM data model must be done circumspectly to address practical needs only (rather than to be exhaustive and demonstrate a certain form of coherence).

## 2 Definition of the MMM format

In the sequel, I use the symbol \(\mathbb{S}\) to denote the set of strings of characters possibly including markdown symbols, and I use the symbol \(\mathbb{D}\) to denote the set of dates.

#### 2.0.1 Landscape

A MMM network, a.k.a. "**landscape**", consists of objects called "**landmarks**". 
Exactly one of those objects is a special landmark called the "**pit**", denoted by \(\bot\). All other landmarks in a landscape **N** are **contributions**\(c\in\textbf{C}\) belonging to the set \(\textbf{C}\subset\textbf{N}=\textbf{C}\cup\{\bot\}\) of contributions. Contributions have attributes that I list in next paragraphs SS2.2.1 - SS2.2.4. ### Main Contribution Attributes Contributions convey information through their three main attributes, namely, their label, their type, and their tags. #### 2.1.1 Labels Contributions \(c\in\mathbf{C}\) have **labels**. For now, we consider labels are taken from the set \(\mathbb{S}\) of character strings of arbitrary length. Labels can be empty. And some types of contributions (namely bidirectional edges) can have multiple labels. Labels satisfy requirement R1. There is no limit on the length of a contribution label. An entire book could be copied into the label of a contribution. _Best practices for users_: Keep labels short. Prefer to decompose a long text into multiple contribution labels. #### 2.1.2 Types A contribution has an abstract type and a concrete type (a.k.a. a type and a subtype). There are five different sets of **abstract contribution types**, namely _(i)_ the set \(\mathbf{V}\) of vertex types (cf SS2.3.2 below), _(ii)_ the set \(\mathbf{P}\) of pen types (cf SS2.3.7), and the set \(\mathbf{E}\) of edge types (cf SS2.3.3) which comprises _(iii)_ the set \(\mathbf{E_{A}}\) of adirectional edge types (cf SS2.3.4), _(iv)_ the set \(\mathbf{E_{0}}\) of unirectional edge types (cf SS2.3.5), and _(v)_ the set \(\mathbf{E_{B}}\) of bidirectional edge types (cf SS2.3.6). The set of abstract types is denoted \(\mathbf{T^{AB}}\). It is equal to \(\mathbf{T^{AB}}=\mathbf{V}\cup\mathbf{P}\cup\mathbf{E}=\mathbf{V}\cup\mathbf{ P}\cup\mathbf{E}_{\mathbf{A}}\cup\mathbf{E_{0}}\cup\mathbf{E_{B}}\). The abstract type of a contribution is specified by a **concrete type**. Examples of concrete types of vertex contributions are the question and narrative subtypes. Examples of concrete types of edges are the pertains and equates subtypes. The different concrete types are formally introduced in subsequent paragraphs of this section: SS2.3.2 - SS2.3.7. Types generally satisfy requirement R7 about contribution types. Abstract types are however merely infrastructural while concrete types convey structuring epistemic information in agreement with requirements R7d - R7f (satisfaction of requirement R7a is to be further supported by UJ application code). Importantly, despite concrete types being epistemically structuring, plenty of room is left for most of them (especially concrete edge types) to be interpreted with some flexibility (cf the interpretations.graphml file [42]). The set of concrete types is denoted \(\mathbf{T^{CO}}\). It is equal to \(\mathbf{T^{CO}}=\mathbf{T_{V}}\cup\mathbf{T_{P}}\cup\mathbf{T_{E}}=\mathbf{T_ {V}}\cup\mathbf{T_{P}}\cup\mathbf{T_{A}}\cup\mathbf{T_{U}}\cup\mathbf{T_{B}}\). #### 2.1.3 Tags Contributions \(c\in\mathbf{C}\) are associated a possibly empty **set of tags**. The tag set associated to a contribution is often empty. By convention for now we expect that a tag is a string that starts with the character '\(\oplus\)'. We denote by \(\mathbb{S}_{\oplus}\) the set of character strings in which tags are taken from. Like labels, tags are for enriching concrete types. 
Tags are used instead of or in addition to labels when the full semantics of a contribution is specified somewhere other than in the contribution's label or concrete type, e.g. when it is specified in an external resource (cf Fig. 1). Tags are typically URIs or standardised keywords, e.g. the @yes and @no tags typically specify the meaning of an answers edge incoming a closed question contribution (cf Fig. 2). Pervasive tags may eventually inform an update of the set of predefined concrete MMM contribution types - provided they represent fundamental epistemic concepts that are not already represented in the set of predefined concrete types. It is, however, essential to keep the set of concrete types (listed in the sequel) small to satisfy requirement R7.

### Metadata Attributes

The remaining five MMM landmark attributes that we introduce next constitute "**metadata**". For the sake of simplicity, except for the identifier, we will ignore metadata attributes in the definition and in the examples of MMM contributions given later on. Metadata attributes are kept minimal in agreement with requirement R4.

Figure 1: A relatesTo edge (cf §2.3.5) labelled "is a" and tagged with the URI of the RDF schema definition of "RDF type" [36].

Figure 2: A closed question and two statement answers, each linked to the question by an appropriately tagged edge of type answers.

#### 2.2.1 Identifiers

All landmarks \(x\in\mathbf{N}\) in a landscape \(\mathbf{N}=\mathbf{C}\cup\{\bot\}\) have a unique **identifier** taken in a certain set of identifiers denoted here by \(\mathbf{I}\). Those could be standard (namespace-based) UUIDs (universally unique identifiers). A hash function applied to just the label and type of a contribution (see §2.1.1 and §2.1.2 above) may facilitate the identification of duplicates (cf §3.1.9). Involving a user identifier in the hash may also be required to support the digital signature of contributions. Future research will explore how to define identifiers of MMM contributions so as to facilitate search in the MMM space. Whatever the landscape \(\mathbf{N}\), the identifier of the pit is always the same because there is only one pit. The pit is the first collective landmark of the intermediary MMM web. Let us assume for now, in the examples below, that \(\mathbf{I}=\mathbb{N}\) and that the identifier of the pit is \(0\).

#### 2.2.2 Authorship

Contributions are associated with a possibly empty, grow-only **set of authorships** in the power set \(\mathcal{P}(\mathbf{A})\) of the set \(\mathbf{A}\) of authorships. Authorships are taken in the set \(\mathbf{A}=\mathcal{P}(\mathbb{S})\times\mathbb{D}\). An _authorship_ \(a\in\mathbf{A}\) is a pair comprised of a list of author names and a date. For instance the following is an authorship: \[\bigl(\underbrace{(\text{``Jane Smith''},\text{``Ivy Li''},\text{``Amari Doe''})}_{\text{team of authors}},\underbrace{13/08/2023}_{\text{timestamp}}\bigr)\in\mathbf{A}.\] Contributions can be assigned several authorships, and usually they must be assigned at least one. The following example of an authorship set contains three authorships (their timestamps are omitted): \[\bigl\{\underbrace{((\text{``Anne Martin''}),\ldots)}_{\text{one authorship}},\;\underbrace{((\text{``Jane Smith''},\text{``Ivy Li''},\text{``Amari Doe''}),\ldots)}_{\text{another authorship}},\;\underbrace{((\text{``Al B.''}),\ldots)}_{\text{another}}\bigr\}\in\mathcal{P}(\mathbf{A}).\]

_Best practices for users:_ Authorships are primarily to recognise the humans who have _recorded_ a contribution.
When a contribution is a quote from the work of a different human, use additional contributions (cf SS12) to make the source explicit. The late Alan Turing, for instance, should never appear as part of an authorship of an MMM contribution. Recognition of intellectual activity is mentioned in SS3.3.1 below. The pit landmark has no authorship set attribute. #### 2.2.3 Status As will be detailed in the sequel, MMM landscapes are to be collectively built and distributed. Individual MMM contributions are stored locally on users' machines. They may also be shared. A contribution \(c\in\mathbf{C}\) has a **status**. By default, the status of a contribution is private and local. If \(c\) is private, a possibility is that \(c\) is privately shared with one or several groups of users who have access right to \(c\). A private contribution that is shared with groups \(g_{1},\ldots,g_{n}\) of users, has its status attribute set to \(\text{sharedWith:}g_{1},\ldots,g_{n};R\) where \(R\) is the reference to a sharing contract, cf SS3.3.4. If \(c\) is private and shared with no-one, it has default status local. A contribution can also be public, meaning that _any_ user has access right to it. Public contributions have their status attribute set to public. When a contribution's status attribute is set to public, its label, type and tag attributes become immutable. Let \(s\) and \(s^{\prime}\) be two contribution statuses. We define an order \(\preceq\) on contribution statuses. We write \(s\preceq s^{\prime}\) when either one of the following conditions is satisfied: * \(s=\)local, or * \(s^{\prime}=\)public, or * \(s=\)sharedWith:\(g_{1},\ldots,g_{n};R\), \(s^{\prime}=\)sharedWith:\(g_{1}^{\prime},\ldots,g_{m}^{\prime};R^{\prime},\bigcup g_{i}\subseteq \bigcup g_{i}^{\prime}\) and \(R^{\prime}\) is no more constraining than \(R\). To downgrade (resp. upgrade) status \(s\) is to replace \(s\) with status \(s^{\prime}\) where \(s^{\prime}\neq s\) abd \(s^{\prime}\preceq s\) (resp. \(s\preceq s^{\prime}\)). Downgrading a contribution status is forbidden. A contribution's status can only be upgraded. The pit landmark's status is public. #### 2.2.4 Marks Contributions may be marked by any number of marks. Marks can be _ad hoc_ custom marks, e.g.: archived, hidden, dim, highlighted, folder, unpublished. There also are predefined marks, e.g.: new (meaning unread), obsolete (cf SS3.1.7), syncWith (cf SS3.3.6), subscribedTo (cf SS3.3.5), rewarded (cf SS3.3.1). Marks may have parameters. For instance, the syncWith mark is parametrised by the list of devices \(d_{1},\ldots,d_{n}\) that the contribution is meant to be copied to. Marks are mostly for internal house-keeping, in contrast to the status which universally characterises a contribution. The meaning of a mark and existence is usually specific to one user or to a team of users. For instance, one user or team may choose to hide contribution \(c\) marked as hidden, while other users may neither mark nor hide \(c\), and others may mark it as hidden but not hide it. Also in contrast to the status, marks are usually locally revocable by the user. #### 2.2.5 Timestamps A contribution \(c\in\mathbf{C}\) is associated a timestamp corresponding to the date at which a user first encounters \(c\). If the user is the creator of \(c\), the timestamp is the date of creation of \(c\). Otherwise, it is the date at which the user receives \(c\) from another user. 
Defined this way, timestamp attributes allow ordering contributions into a timeline of granular events, namely contribution appearances (creation or reception) which are the events we propose to emphasise. The definition of the timestamp attribute of MMM contributions will need to be refined to support finer timelines also accounting for modifications of the attributes of existing contributions. ### Landmarks We have defined the main attributes of MMM landmarks. Now we specify the different kinds of landmarks, and especially the different kinds of contributions. #### 2.3.1 Contributions For the sake of simplicity in the following sections we ignore the metadata of contributions - i.e., the authorship set, status, mark set and timestamp attributes. A contribution is an object from \(\mathbf{C}\subset\mathbf{I}\times\mathbb{S}\times\mathcal{P}(\mathbb{S}_{ \oplus})\times\mathbf{T}^{\textsc{st}}\) comprised of an identifier in \(\mathbf{I}\), a label in \(\mathbb{S}\), a tag set in \(\mathcal{P}(\mathbb{S}_{\oplus})\) and an abstract type in \(\mathbf{T}^{\textsc{st}}\). Abstract types define the following sets of contributions: * The set \(\mathbf{C}_{\mathbf{V}}\subset\mathbf{I}\times\mathbb{S}\times\mathcal{P}( \mathbb{S}_{\oplus})\times\mathbf{V}\) of "vertex contributions" a.k.a. "vertices" a.k.a. "nodes" * The set \(\mathbf{C}_{\mathbf{P}}\subset\mathbf{I}\times\mathbb{S}\times\mathcal{P}( \mathbb{S}_{\oplus})\times\mathbf{P}\) of "pen contributions" a.k.a. "pens" * The set \(\mathbf{C}_{\mathbf{E}}\subset\mathbf{I}\times\mathbb{S}\times\mathcal{P}( \mathbb{S}_{\oplus})\times\mathbf{E}\) of "edge contributions" a.k.a. "edges" a.k.a. "links" which is comprised of: Visual design choices in the illustrations of this article are arbitrary. This proposal does not cover visualisation of the MMM formatted content. Different user interfaces can later offer different visualisations to accommodate different preferences. #### 2.3.2 Vertices A vertex contribution \(c\in\mathbf{C}_{\mathbf{V}}\subset\mathbf{I}\times\mathbb{S}\times\mathcal{P}( \mathbb{S}_{\oplus})\times\mathbf{V}\) is composed of an identifier, **a non-empty label**, a tag set in \(\mathcal{P}(\mathbb{S}_{\oplus})\) and an abstract type in \(\mathbf{V}=\mathbf{T}_{\mathbf{V}}\subset\mathbf{T}^{\texttt{AB}}\) equal to the concrete type. Vertices have one of five possible (abstract/concrete) types: Figure 3: Summary diagram of the MMM format definitions. \[\mathbf{T_{V}=V=\{question,narrative,existence,action,data\}}.\] The types of vertices that are the most central to our system are question vertices, narrative vertices and existence vertices. Contrary to the other two types, redundancy management is to be severe on those central three types. We won't mind if there are several data vertices labelled "42". However, we will mind if there are several question vertices labelled "What colour is the sky?". Vertex labels _cannot_ be empty. Below are examples of contributions that are vertices. I remind that visual choices made in the illustrations in this article are arbitrary. MMM documented information doesn't even need to be graphically represented. The MMM-JSON format for instance is enough to capture it. 
* (1, "What colour is the sky?", \(\mathfrak{O},\text{question})\in\mathbf{C_{V}}\) * (2, "The sky is blue.", \(\mathfrak{O},\text{narrative})\in\mathbf{C_{V}}\) * (3, "Sky", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (4, "To be blue", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (5, "Blue", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (6, "the color of a cloudless daytime sky", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (7, "Turquoise", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (8, "bleu", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (9, "White", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (10, "Colour", \(\mathfrak{O},\text{existence})\in\mathbf{C_{V}}\) * (11, "true", \(\{\mathfrak{O}boolean\},\text{data})\in\mathbf{C_{V}}\) * (12, "12", \(\mathfrak{O}\text{int},\text{data})\in\mathbf{C_{V}}\) * (13, "Boil water.", \(\mathfrak{O},\text{action})\in\mathbf{C_{V}}\) Contributions with identifiers 5 to 9 above could be drawn from a glossary file listing/defining colours. Those contributions could be tagged (@colours) instead of \(\mathfrak{O}\). Contribution 3 could be tagged \(\{\mathfrak{O}myWeatherVocab,\mathfrak{O}some-standard-bird-ontology\}\). _Best practices for users:_ Use question vertices for labels that end with a question mark. Use narrative as the default contribution type when you don't want to bother determining what other type best suits your new contribution. But ideally, use narrative vertices for labels that end with a period, for instance, a statement, a series of statements, a theorem, a story.... Use existence vertices for labels that typically don't end with any punctuation mark, naming or formulating something that matters to you, a concept or a property (e.g. "Sky", "Blue", "To be blue"). Use action vertices for labels that describe some action to perform (e.g. "Boil water"). Use data vertices for labels that express the value of a data point (e.g. "42", "true", "13/08/1991"). The default contribution type narrative satisfies requirement R7f while the vertex type existence satisfies requirement R7g. Before I introduce the other types of MMM contributions, I recall that in agreement with Design Bias 2 and with requirement R7f, the MMM format provides a default information container (namely the narrative node). A busy/lazy user (using a lazy UI) doesn't need to know about, nor use anything else in order to document their notes in MMM format. They could possibly consider that one narrative corresponds one traditional document. I propose however to incentivise and facilitate the use of other MMM contributions. #### 2.3.3 Edges There are three categories of edges: adirectional, unidirectional and bidirectional. Contrary to that of vertices, the abstract type of edges contains more data than just the concrete type. For one, the abstract type specifies the endpoints of the edge. Bidirectional edges are special. A bidirectional edge resembles two unidirectional edges in opposite direction. Each direction may have its own label and its own tag set. Contrary to node labels, edge labels can be empty. Edge endpoints can be any kind of landmark: vertices, edges, pens and even the pit. This allows to satisfy requirement R3, while generally MMM edges satisfy requirements R2 and R6. An edge can't be one of its own two endpoints. An edge can however have another edge as one of its endpoints. We may have to limit the depth of the recursion14. 
Footnote 14: Say an ordinary edge between two node contributions is of depth 0. An edge of depth \(n+1\) is an edge that has one endpoint that is an edge of depth \(n\) and whose other endpoint is of depth no greater than \(n\). MMM landscapes might be easier to store and to navigate if we forbid edges of depth greater than a given maximum. I expect that very deep recursion is dispensable.

_Best practices for users:_ Make extensive use of directional edges. Prefer contributing information in edge labels rather than in vertex labels (cf §3.1.2).

Concrete edge types play an important role in this proposal. They allow epistemic glue to be _roughly_ sorted according to its intended purpose. Together with complementary edge information (conveyed by tags and labels), concrete edge types allow aggregating information (cf §3.2.4).

Figure 4: Different concrete types of MMM edges.

As mentioned above (cf §1.5), the concrete types of MMM edges (e.g. equates, instantiates, pertains) have deliberately loose semantics. This proposal offers a starting set of edge types that is intended to remain small. Future modifications of this starting set may be needed to ensure the use of the different edge types is balanced. Again let us insist on the flexibility of interpretation of concrete types of edges. It plays a central part in the intended universality of the MMM data structure. Concrete types are meant to convey a form of common epistemic denominator like the concrete type question: most people can have some form of agreement on what a question is, if only they agree that it is not a statement. The same holds for concrete types of edges, but the room for variations in interpretations of each concrete edge type is wider. For instance, the equates edge introduced below can be interpreted as conveying a relation of synonymy between two linguistic terms. It can just as well be used to link two mathematically equivalent theorems expressed in different formalisms. Future work will have to circumscribe the extent of the variations in interpretations that is tolerated.

#### 2.3.4 Adirectional Edges

There is only one concrete type of adirectional edge, namely the relate edge. The abstract type15 of an adirectional edge is a triple in \(\mathbf{E_{A}}=\mathbf{I}\times\mathbf{I}\times\mathbf{T_{A}}\). The first two components of an adirectional edge type are the identifiers of its endpoints. The order of these two components does not matter.

Footnote 15: The term "type" in "abstract type" is not ideal. For two edges to have the same abstract type, they need to share the same endpoints and the same concrete type. Arguably, they need to almost be the same edge.

Here are some examples of contributions that are adirectional edges (see Fig. 5).

* \((14,"",\emptyset,5,3,\text{relate})\in\mathbf{C}_{\mathbf{E_{A}}}\) is a labelless edge between vertices with IDs 3 and 5 introduced as examples in §2.3.2.
* \((15,"",\emptyset,5,4,\text{relate})\in\mathbf{C}_{\mathbf{E_{A}}}\) is another labelless edge, between vertices with IDs 4 and 5.
* \((16,\text{``similar''},\emptyset,4,5,\text{relate})\in\mathbf{C}_{\mathbf{E_{A}}}\) is an edge labelled "similar" between the same two vertices as edge \(15\).

As mentioned already, edges can have other landmarks than vertices as endpoints:

* \((17,"",\emptyset,15,5,\text{relate})\in\mathbf{C}_{\mathbf{E_{A}}}\) is an edge between the edge with ID \(15\) and the vertex with ID \(5\).

#### 2.3.5 Unidirectional edges

The abstract type of a unidirectional edge is a triple from \(\mathbf{E_{0}}=\mathbf{I}\times\mathbf{I}\times\mathbf{T_{U}}\). The first (resp. second) component is the identifier of the edge's start point (resp. endpoint).
The last component is the edge's concrete type taken in: \[\mathbf{T}_{U}\supseteq\{\text{answers},\text{questions},\text{pertains},\text{instantiates},\text{nuances},\text{supports},\text{pennedIn},\text{precedes},\text{relatesTo}\}.\]

Figure 5: Adirectional edges of type relate (represented by black lines here). All other edge types should be preferred to this default edge type. Edges can have edges as endpoints. Their labels may or may not be empty. Above, all edge labels are empty except one, which is set to "similar".

The relatesTo edge is the default unidirectional edge that is to be used (sparingly) as a default edge similar to the adirectional relate edge.

The pertains edge is a ubiquitous type of contribution that we might consider renaming "details". In agreement with requirement R7e, a pertains edge from contribution \(a\) to contribution \(b\) can mean a number of things such as: \(a\) belongs to \(b\), \(a\) characterises \(b\), \(b\) involves \(a\), \(a\) is a detail of \(b\), \(b\) concerns \(a\), \(b\) refers to \(a\), or \(b\) is about \(a\)... Work is actually being carried out to explore and circumscribe the relevant semantics of this edge type. I expect this edge type might need to be divided into two edge types, e.g. pertains and characterises. Tying pertains links between pieces of information and the topics/concepts they are about plays an important role in our solution (cf §A.2.3).

The pennedIn edge is the only MMM contribution that comes with a usage constraint. The endpoint of a pennedIn edge can only be a pen contribution (cf §2.3.7 below). The start point can be any type of contribution. In contrast to pertains edges (and other edge types), which convey semantically structural information, pennedIn edges are rather for the meta coordination of MMM contributors. We further discuss this special type of edge in §2.4.3 below.

Here are some examples of contributions that are unidirectional edges (see Fig. 4):

* \((18,"",\emptyset,2,1,\text{answers})\in\mathbf{C_{E_{0}}}\) offers narrative vertex \(2\) as an answer to question vertex \(1\) (cf §2.3.2).
* \((19,"",\emptyset,5,1,\text{answers})\in\mathbf{C_{E_{0}}}\) offers existence vertex \(5\) as an answer to question vertex \(1\).
* \((20,\text{``is a''},\{\text{@rdf:type}\},5,10,\text{instantiates})\in\mathbf{C_{E_{0}}}\).
* \((21,\text{``definition''},\emptyset,6,5,\text{pertains})\in\mathbf{C_{E_{0}}}\).

#### 2.3.6 Bidirectional edges

Formally, the abstract type of a bidirectional edge is a seven-component tuple from \(\mathbf{E_{B}}=\mathbf{I}\times\mathbf{I}\times\mathbf{T}_{B}\times\mathbb{S}\times\mathbb{S}\times\mathcal{P}(\mathbb{S}_{\oplus})\times\mathcal{P}(\mathbb{S}_{\oplus})\). The first two components are the identifiers of the edge's start point and endpoint. The third component is the edge's concrete type taken in: \[\mathbf{T}_{B}\supseteq\{\text{equates},\text{differsFrom}\}.\] The fourth (resp. fifth) component is the label of the edge specific to the direction start point \(\rightarrow\) endpoint (resp. endpoint \(\rightarrow\) start point). The sixth (resp. seventh) component is the tag set specific to the direction start point \(\rightarrow\) endpoint (resp. endpoint \(\rightarrow\) start point). One, two or all three of the labels of a bidirectional edge can be empty. Here are some examples of contributions that are bidirectional edges (see Fig.
4): * \((22,\underbrace{\text{language translation}"}_{\text{main label}}, \emptyset,5,8,\text{equates, }\underbrace{\text{"EN}\rightarrow\text{FR"}}_{\text{label for dir 1}},\underbrace{\text{"FR}\rightarrow\text{EN"}}_{\text{label for dir 2}},\emptyset,\emptyset)\in\mathbf{C_{E_{0}}}\) * \((23,"",\emptyset,5,7,\text{differsFrom, }"\text{add a bit of green}","\text{remove some green}",\emptyset,\emptyset)\in\mathbf{C_{E_{0}}}\) Bidirectional equates edges are important as they allow connecting similar contributions that are not necessarily perfect duplicates. They support requirement R5. #### 2.3.7 Pens The abstract type of a pen is a couple taken from the set \(\mathbf{P}=\mathcal{P}(\mathbf{I})\times\mathbf{T_{P}}\). The first component is a set of identifiers. The second component is the pen's concrete type from: \[\mathbf{T_{P}}\supseteq\{\text{definition,reasons,conditions, glossary, experimentalProtocol,measure,pointer,document,default}\}.\] Pens are similar to edges in a hypergraph. Let \(p=(i,l,x,S,t)\in\mathbf{C_{P}}\) be an arbitrary pen where \(S\in\mathcal{P}(\mathbf{I})\) is a set of landmark identifiers. Abusing language, for any landmark identifier \(j\in S\), we say that the pen \(p\)_contains_ the landmark \(j\). A landmark can be contained in multiple pens, i.e., pens can overlap. Here are some examples of pen contributions: Pens can contain any type of landmarks. Pen 24 above for instance contains two vertices and an edge. Pens can also contain other pens. #### 2.3.8 The Pit The pit denoted \(\bot\) is a very special kind of landmark in the landscape, the only landmark that is not a user contributed contribution. There only is one pit for all users. All users see the pit identified with the same identifier. The pit represents Figure 6: A definition pen used to specify that node 6 not only characterises the concept ”Blue” of node 5, it defines it. Not everyone has to agree with this definition. Someone else could include node 5 in a different definition pen. Figure 7: A default pen for colour names, containing previously defined contributions. absurdity. It plays an essential role in our quality management system. See SS3.1.6. We have defined the different kinds of landmarks and their attributes. This concludes our definition of the MMM format. A JSON schema formalisation of the MMM format, namely JSON-MMM exists, cf [41]. ### Areas _etc_ Now we look into sets of contributions. We already have seen in SS2.0.1 that a set of contributions including the pit constitutes a landscape. #### 2.4.1 Landscapes, Areas, and Territories A MMM landscape \(\mathbf{N}\) is a set of landmarks, necessarily containing the pit landmark \(\bot\). **Areas** are a generalisation of landscapes. An area is any set of landmarks not necessarily containing \(\bot\). **The MMM** - i.e. the structured digital version of the record that I propose to materialise (cf SS1.1) a.k.a. the intermediary epistemic web (cf SS1.4) - is the reunion of all landscapes. It is denoted \(\mathbf{N^{\star}}\). Some areas of \(\mathbf{N^{\star}}\) are private. Others are public. The reunion of all public landmarks is called **the public MMM** and denoted \(\mathbf{N^{\star}_{p}}\). \(\mathbf{N^{\star}}\) and \(\mathbf{N^{\star}_{p}}\) are _collective_ landscapes: their landmarks are contributed by different users and can be stored in a distributed manner. Other smaller landscapes may also be collective in that sense. A special kind of landscape is a **territory**. A territory is associated with a human user. 
It comprises \(\bot\) and the set of contributions that the user is acquainted with. Typically a territory is mostly stored locally, on the user's device(s), although parts of it may be stored remotely. The part that is stored locally is called the **local territory**. Users need not keep local copies of all landmarks they are acquainted with as they may have little interest in some landmarks they have visited. Also, they may trust other users to keep copies of the landmarks they are interested in and not store them themselves (cf Suppl. Mat. A). #### 2.4.2 Paths A MMM **path** is a series of MMM landmarks \(l_{1},l_{2},\ldots,l_{n}\) such that for any even integer \(i<n\), the contribution \(l_{i}\) is an edge between landmark \(l_{i-1}\) and landmark \(l_{i+1}\). By default, edges in an MMM path don't need to all have the same direction. If they do, then we say the path is directed. Note that because edges can have edges as endpoints, a MMM path can have several successive edges. #### 2.4.3 Mutable Pens and Contributions The set \(S\) of contents of a pen \(p\) is immutable (except for the local obsolescence mechanism, cf SS3.1.7). We can nonetheless define _mutable_ pens. Contrary to normal, immutable pens defined in SS2.3.7, **mutable pens** aren't atomic contributions. They are _sets_ of contributions containing exactly one (possibly empty) pen \(p\) and other contributions linked to \(p\) by a pennedIn edge. The contents of the mutable pen are the contributions that are linked to the pen \(p\) by a pennedIn edge, and if any, the contents of \(p\). Contents can be added to a mutable pen. Using the obsoleing mechanism described in SS3.1.7, contents can also be removed. Mutable pens are typically for delineating a mutable collaborative area of the landscape that plays the role of an editable document. Multiple users can include contents in a mutable pen. Application code can offer the possibility to the user to remain "working inside" the mutable pen/document, meaning that all contributions created by the user during the work session are automatically linked with a pennedIn edge to the pen. By default, contribution labels and types are immutable. Pens together with the obsoleting mechanism described in SS3.1.7 can also be used to define **mutable contributions**. The MMM format isn't designed for real-time sharing of fine-grained information (see SS3.3.2). MMM contributions are meant to be final. The MMM is mostly an add-only space equipped with a slow obsoleting mechanism described in SS3.1.7. ## 3 Landscape Based Activities Section 2 above defined the MMM format. Next, Section 3 here discusses how to use the elements of this format. Three kinds of landscape based activities are presented, namely, editing the landscape, exploring the landscape, and sharing landmarks. The supplementary material A describes the technological infrastructure and tooling to assist with MMM landscape based activities. An essential part of the proposition is the plan to support different user interfaces (UIs) in order to accommodate different epistemic approaches and interests towards information. Figure 8: A mutable pen. UI application code can visually represent atomic and mutable pens in a similar way although they are different objects. I recall that this proposal is to deal with unstructured, typically disputable, information. Shopping lists are thus not typical content of interest to this proposal, but they make for simple illustrations. Figure 9: A mutable contribution. 
Contributions marked with \(\mathsf{X}\) are obsoleted contributions. They are not immediately deleted but eventually will be (cf §3.1.7).

Application code is required to handle marks and ensure the desired user experience, in agreement with the sharing mechanisms presented in §3.3.

### Landscape Editing Activities

#### 3.1.1 Contributing

Contributing means recording one or several new contributions on a landscape. The most basic contributing actions are: posting a question using a question node, posting a vocabulary term using an existence node, posting a statement or posting anything arbitrary using a narrative node. Other basic contributions are to post an edge - e.g. a default relate edge or an equates edge - between two existing nodes. Most contributing actions involve multiple nodes and/or edges (cf §3.1.2).

_Best practices for users:_ In a context of collective documentation on a shared landscape like \(\mathbf{N}_{p}^{\star}\), the act of contributing should happen mainly through acts of "annotation" (see below §3.1.2). Contributions that are not annotations will typically contribute an isolated contribution or group of contributions.

#### 3.1.2 Annotating/Improving

Design Bias 4: **Information as improving events**. Information is regarded here as _events_ that cause the record to improve.

Note that following Design Bias 3, improving the record does not necessarily mean adding high quality information to it. It can mean exposing low quality information to nuance and detail. To support Design Bias 4, I propose to encourage the provision of new information in the form of _annotations_ to pre-existing information (see also Suppl. Mat. B.16 in relation to improving the quality of information on the record). A MMM annotation is a set of MMM contributions that involves at least one edge incident on a pre-existing contribution in \(\mathbf{N}^{\star}\). To annotate contribution \(c\in\mathbf{C}\) means to add at least one edge to the landscape between \(c\) and another contribution. Annotations complete, nuance, challenge, support, detail, update _etc_ information that is already in the record \(\mathbf{N}^{\star}\). Here are some annotation patterns that I expect to be useful:

1. Question a contribution, using the questions type of edge.
2. Answer a question, using the answers type of edge.
3. - 10. Further patterns, shown only in the accompanying figures (e.g. nuance, challenge, support, detail or update a contribution).
11. Red-flag a contribution (see details given in §3.1.6).
12. Reference a contribution.

An author documented in the authorship set of a contribution \(c\) isn't necessarily the original author of the text documented in \(c\)'s label. Alice may document in the MMM that contribution \(c\) is supported by reference \(r\). The author of \(r\) might not agree with Alice on that. In one of the cases pictured above, the pertains edge links reference \(r\) to a quote extracted from the referenced document. In this case, the pertains edge conveys the meaning that the quote has the property of coming from the document referenced by \(r\): the quote is _characterised_ by its source. In contrast, in the other case, the pertains edge conveys the fact that the quote is a part of the referenced document. By convention, MMM edges used for linking a contribution to its reference can be labelled or tagged "reference".
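To make two of these patterns more tangible, here is a minimal Python sketch of how an answer and a reference annotation could be expressed as vertex and edge contributions. The class names, fields, identifiers and the choice of a supports edge for the reference are illustrative assumptions, not the JSON-MMM schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the classes below are assumptions, not the
# JSON-MMM schema. Metadata attributes are omitted for brevity.

@dataclass
class Vertex:
    id: int
    label: str                     # vertex labels cannot be empty
    ctype: str                     # question | narrative | existence | action | data
    tags: set = field(default_factory=set)

@dataclass
class Edge:
    id: int
    source: int                    # identifier of the start point (any landmark)
    target: int                    # identifier of the endpoint (any landmark)
    ctype: str                     # answers | questions | pertains | supports | ...
    label: str = ""                # edge labels may be empty
    tags: set = field(default_factory=set)

# Annotation pattern "answer a question": a narrative node plus an answers edge.
q = Vertex(1, "What colour is the sky?", "question")
a = Vertex(2, "The sky is blue.", "narrative")
answers = Edge(18, source=a.id, target=q.id, ctype="answers")

# Annotation pattern "reference a contribution": a node holding the reference,
# linked to the answer by a supports edge tagged "@reference" (the labelling/
# tagging convention mentioned above; the choice of supports is an assumption).
ref = Vertex(30, "Some external document, section 2.", "narrative")
cites = Edge(31, source=ref.id, target=a.id, ctype="supports", tags={"@reference"})

landscape = [q, a, answers, ref, cites]
```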
UI application code can promote the documentation of references in standard formats such as APA, MLA, bibtex. Annotation is recursive in the sense that any contribution that takes part in an annotation can itself be annotated. The 12 annotation patterns listed above are suggestions. I recall that except for the pennedIn edge, there are no grammar rules for assembling MMM contributions in a landscape. I propose that user interfaces of MMM editing tools be designed to facilitate the annotation patterns listed above. They may provide shortcuts to some annotating patterns, and possibly emphasise some patterns more than others depending on what annotations are more typical to a given user segment served by the given UI. The metadata of a MMM contribution (id, authorship set, mark set, status, timestamp) can't be annotated. MMM edges refer to the main informational payload of their contribution endpoints. And the main informational payload of a contribution is expressed in the contribution's label, type and possibly tag set. Consider the node with ID 2 given as an example in SS2.3.2 above. While you can comment on the statement "The sky is blue." given by the label of the node, you can't comment on the fact that the ID of that contribution is 2. Similarly, while you can comment on the fact that edge with ID \(12\) has concrete type relate (see for instance SS3.1.6) you can't comment on the fact that its documented contributor is Anne Martin. _Best practices for users:_ If you wish to use some metadata as information, then document it explicitly as information: on the MMM, this means document it as a contribution which has its own metadata. #### 3.1.3 Implanting Implanting means contributing edges between a new contribution and pre-existing contributions on the landscape. It is comparable to annotating. Annotating is aimed at improving pre-existing content by way of new content. Implanting is aimed at using old content to provide context and visibility to new contributions (cf Suppl. Mat. B.5). _Best practices for users:_ Implant much and well. Poorly implanted contributions will have less visibility as the pathways leading to them are rare and/or at risk of being red-flagged and obsoleted. Implanting is central to my proposal. In the distributed MMM network, the better a contribution is implanted (not just the _more_ it is), the more visible it is likely to be: the more likely it is that a peer will discover it by following a path to it (cf SS3.2.5). Implantation is also key to redundancy management. The better implanted a new contribution is, the easier it is to identify how the information it conveys relates to other recorded information and possibly overlaps some of it. Before a contribution \(c\) is recorded for good on the landscape, and before it propagates to other users' territories, contributions in the neighbourhood of \(c\) where \(c\) is implanted can help determine the value that \(c\) adds to the area. Good, early implantation can also spare authors the effort of documenting atomic pieces of information that have already been documented by them or by someone else. Authors can concentrate on the added value they bring. They need only link that added value to the already documented relevant information that has been found, offering all the epistemic glue they have to favour good implantation. Incentives for authors are discussed in [43]. Because they favour connectedness of the overall MMM network, implantation and annotation, also contribute to a _desirable_ form of redundancy. 
See SS3.2.6 below. #### 3.1.4 Bridging Bridging two contributions (or areas) \(c\) and _c'_ means contributing an edge between them (epistemically gluing them together) or contributing several contributions which together materialise a path between \(c\) and _c'_. Bridging is particularly important when \(c\) and _c'_ are initially "distant" on the MMM (cf SS3.2.1), i.e., when no epistemic relation has been documented between them, not even an indirect one. Following Design Bias 2, I mentioned that the MMM solution is intended to address information problems primarily rather than to address communication problems. Communication solutions help get messages across geographical distance (e.g. powerful vocal cords, emailing _etc_) and across language barriers (e.g. hand waving, dictionaries, automatic translators). But interlocutors in the same room speaking the same language sometimes still struggle to understand each other because of different epistemic cultures, viewpoints, mindsets _etc._ Increasing the likelihood of an epistemic bridge being documented between the interlocutors' MMM territories may help. Any mechanism that increases the connectedness of the MMM network favours bridging. #### 3.1.5 Documenting Traditional documents like scientific articles contain multiple pieces of information. And unless they amount to unexplained bullet point lists, traditional documents also provide epistemic links between the pieces of information they contain. Information in scientific articles is typically organised in sections, subsections _etc_ (cf Suppl. Mat. B.12). In the MMM, a traditional self-standing document can be recorded in one piece, e.g. as the label of a narrative node. Or it can be decomposed into several pieces conveyed by a network of several interlinked MMM contributions. _Best practices for users:_ Decompose the content that you want to document into atomic contributions well interlinked with each other (rather than dump it as a monolithic text in a single contribution). Limit the amount of content you convey through each single contribution and make extensive use of meaningful MMM edges. Optionally a MMMified document can be delineated using a pen - typically a _mutable_ pen to support collaborative editing (cf SS2.4.3). #### 3.1.6 Disapproving and Red-Flagging A contribution can be deemed of low quality - i.e., disapproved of - for exactly two reasons: 1. The contribution is poorly positioned in the landscape. The edges between it and other landmarks are absurd. For instance the contribution is off-topic. Or it is on-topic, but it is implanted with the wrong kind of edge, e.g. it is linked with an answers edge to a question node while not providing an answer to that specific question. 2. The contribution is well positioned, but its label is of poor quality. For instance the contribution is a narrative node conveying a statement that someone considers untrue. Figure 10: An “MMification” of a traditional linear text document. N.B.: the MMM format isn’t designed for MMMifying existing documents. It rather is conversely meant for doing the informational work in preparation of a new document composition (cf §3.1.8 and Suppl. Mat. A.3). I propose to manage the two situations differently, and neither by resorting to censorship. The second situation is a case where we especially don't want to make the low quality contribution less visible on the MMM. 
My conviction is that mistakes, misinformation, disinformation _etc_ are unavoidable if not normal on a collaborative information space (cf Suppl. Mat. B.16). Red-flagging is an annotation pattern for dealing with the first situation listed above. Red-flagging a contribution \(c\in\mathbf{C}\) consists in recording an equates edge between \(c\) and \(\bot\). Typically, red-flagging applies on contributions that are edges. Red-flagging an edge conveys the idea that the edge's endpoints should not be linked in the way the edge says they are. Consider for instance the example on the right above. A question\(q\) asks about RNA mechanisms involved in vaccines. A narrative contribution \(n\) is provided as an answer to \(q\) through the answers edge \(e\) connecting \(n\) to \(q\). Narrative \(n\) makes no mention of RNA molecules. Whatever the value of the statement \(n\) makes, whether it is true or false, whether we agree with \(n\) or not, it is easy to see that \(n\) is not an answer to \(q\) because an answer to \(q\) has to involve the notion of RNA. So independently of what one may think of \(n\), one can safely red-flag edge \(e\) (not \(n\) itself). This operation leaves all three contributions \(q\), \(n\), and \(e\) unchanged. The only change to the landscape is the addition of a new equates edge from \(e\) to \(\bot\). This new edge contribution can be labelled "An answer to the question must mention RNA mechanisms." or simply "Off topic" to specify the reason for the red-flagging of \(e\). _Best practices for users:_ Document the reason for the red-flagging of a contribution in one of the labels of the equates edge linking it to \(\bot\). In the example above the narrative contribution \(n\) about 5G doesn't need to be red-flagged. Justifying a red-flagging of \(n\) itself is more demanding than justifying the red-flagging of \(e\). If \(n\) could be standing alone disconnected of other contributions, and then "Off topic" wouldn't apply. _Best practices for users:_ Avoid red-flagging a well positioned contribution especially if it is of low quality. If it is of low quality, then annotate it: link new contributions to it that explicit in what way it is of low quality. In particular, use nuance and questions edges abundantly to nuance and question the contribution. Only red-flag poorly positioned contributions. Application code can deal with red-flagged contributions differently depending on the desired user experience. One UI could hide all contributions that have been red-flagged at least once by the community of users. This would mean that the 5G contribution above wouldn't be reachable from the question about vaccines. Another UI could wait for the contribution to have been red-flagged 5 times (5 different authorships for all equates edges between \(n\) and \(\bot\)). Another UI could highlight red-flagged contributions. #### 3.1.7 Obsoleting Obsoleting a contribution starts with marking it as obsolete. Different MMM editing tools may deal with obsolete contributions slightly differently. I describe the main idea. The end result of obsolescence is the deletion of the contribution from the user's local database state. There may be several copies of a contribution \(c\) in the distributed MMM network. By default, obsolete contributions are not propagated. If Alice has marked contribution \(c\) as obsolete, and if Bob has no copy of \(c\) in his own local database, then Bob won't get \(c\) from Alice. 
However, if Charlie already has a copy of \(c\), then Charlie may be notified of Alice's obsolescence of \(c\). So obsolete contributions tend to disappear for good from the MMM, although no single user can be responsible for the definite disappearance of a shared contribution.

When a contribution \(c\) is marked as obsolete, all edges incident on it are also marked as obsolete. The edge recursion mentioned in §2.3.3 (see footnote 14 on page 15) means that obsolescence cascades down series of incident edges. We say that we **recursively obsolete \(c\)**.

Obsolete contributions are not immediately deleted. They remain "in limbo" for a customisable amount of time. Suppose contribution \(c\) is marked as obsolete on Alice's territory. Suppose that Bob has the three contributions \(c\), \(c^{\prime}\) and edge \(e\) between \(c\) and \(c^{\prime}\) stored on his territory. None of these three contributions is marked as obsolete on Bob's territory. And suppose Alice synchronises with Bob. Having \(c\) in limbo on Alice's side allows dealing appropriately with Bob's input to Alice's territory. The fact that Bob's input is linked to a piece of information that Alice has already obsoleted may be used as a pretext to filter out everything Bob has linked to \(c\). In this case Alice gets neither \(c\), \(c^{\prime}\) nor \(e\) from Bob. Alice may, on the contrary, decide to take \(c\) out of limbo: if Bob is still interested in \(c\), she may decide that she is too, after all. Another possibility is that Alice obsoleted \(c\) because she replaced \(c\) with a new version \(c^{\prime\prime}\). If there is an equates edge connecting \(c\) to \(c^{\prime\prime}\), it suggests that \(c\) and \(c^{\prime\prime}\) are "epistemically equivalent", i.e., they convey the same idea. Alice might then be interested in Bob's annotation of \(c\), provided it is redirected onto \(c^{\prime\prime}\) as in the figure below:

Limbo periods are periods during which a user's disinterest in a contribution is remembered and can be shared.

The main reason for obsoleting a contribution is that the contribution is no longer _useful_. An obsolete contribution is not "epistemically inconvenient". Its presence in a landscape doesn't make the landscape less valuable. However, an obsolete contribution, since it is no longer relevant to a particular user, becomes _visually_ inconvenient to that user. Because obsolescence is local, a user can't retract a contribution that she has already shared. Once a contribution \(c\) labelled "_The cat is out of the box._" has propagated, it can't be deleted from the MMM by any single user. If the author of \(c\) regrets her publication, the only things she can do are (1) try to propagate her obsoleting of \(c\), and (2) direct attention away from \(c\) or change the perception of \(c\) by adding annotations around \(c\) (she could also red-flag her own contribution, but that might not be advantageous to her in some contexts).

#### 3.1.8 Drafting, Substituting and Versioning

The MMM can be used as a drafting medium: a place to organise ideas before deciding a linear order to present those ideas in a traditional document. MMM contributions are not meant to undergo significant changes after their creation. Contributions represent atomic units of work. Replacing a contribution by a new contribution is a workaround for the definiteness of contributions. As before, in the figures below, \(\mathsf{X}\) marks obsoleted contributions that will eventually be deleted.
Rather than make contributions evolve over time, I advise the following default strategy for replacing a contribution \(c\) with a new version \(c^{\prime}\) of \(c\): * Make a copy \(c^{\prime}\) of \(c\) that is a brand-new contribution with a different identifier than \(c\). * Let the user change whatever attribute she needs to change in \(c^{\prime}\). Figure 11: Versioning. An idea is reformulated. The old version is obsoleted and appropriately linked to the new version. * Link \(c\) and \(c^{\prime}\) using an equates edge. The new equates edge can be tagged "@version" and optionally labelled with the reason for the replacement of the old version by the new. In some cases, the old and new versions might be too different to justify an equates link. Another type of link should be preferred, possibly a relate link (cf Fig. 12). * Recursively obsolete \(c\). * If \(c\) and \(c^{\prime}\) are epistemically equivalent (e.g. if \(c^{\prime}\) corrects a typo in \(c\)) and have been linked with an equates edge, then make copies (with different identifiers) of all edges incident on \(c\), replacing the \(c\) endpoints of these edges with \(c^{\prime}\) endpoints. If \(c\) and \(c^{\prime}\) aren't epistemically equivalent, leave it to the user (or possibly to a trained AI) to decide how to redirect edges from \(c\) to \(c^{\prime}\). Because the obsolescence around \(c\) is recursive, recursive redirection might be necessary. * Add \(c^{\prime}\) and new incident edges to all pens to which \(c\) and old edges belong. _Best practices for users:_ To modify an existing MMMified document \(D\) - i.e. a document documented in the MMM as a network of interlinked contributions (cf Fig. 10) - first locate the exact pieces of information (contributions) in \(D\) that your modification applies to. If you want to specify one or several of those pieces of information, then preferably annotate them (cf SS3.1.2). If you want to delete them then obsolete them (cf SS2.2.4). And if you want to replace them then record their new versions and link them appropriately to the old using an equates link as detailed above. Figure 12: Versioning when the old and new versions of a contribution are not epistemically equivalent. The vertical pertains edge incoming the old version cant simply be redirected to the new version. Figure 13: Another case of versioning in which the old and new versions of the contribution aren’t epistemically equivalent. Again the vertical pertains edge incoming the old version can’t simply be redirected to the new version. #### 3.1.9 Merging Merging on the MMM only concerns contributions that are "epistemically equivalent", i.e., that convey the same idea. A contribution labelled "_The Earth is round._" will never be merged nor replaced with a contribution labelled "_The Earth is flat._" nor even with a contribution labelled "_The Earth is roundish._". The MMM remains a mostly add-only system. The atomic unit of information that can be added to, and obsoleted from, the landscape is a MMM contribution. MMM contributions don't disappear from the landscape because they are replaced _per se_, but because they are _in themselves_ no longer useful. For the sake of simplicity I ignore mark sets in this section, although a merging operation for mark sets can be defined. I assume an order can be defined on MMM contribution identifiers, for instance using timestamps. 
Let \(c\) and \(c^{\prime}\) be two MMM contributions, respectively with identifiers \(i\) and \(i^{\prime}\), with identical labels \(l=l^{\prime}\), with tag sets \(x\) and \(x^{\prime}\), with identical types \(t=t^{\prime}\), with authorship sets \(a\) and \(a^{\prime}\), and with statuses \(s\) and \(s^{\prime}\). I define the order \(\preceq\) on contributions so that \(c\preceq c^{\prime}\) holds whenever all the following holds: \(i\leq i^{\prime}\), \(l=l^{\prime}\), \(x\subseteq x^{\prime}\), \(t=t^{\prime}\), \(a\subseteq a^{\prime}\), and \(s\leq s^{\prime}\). I define function \(\texttt{m}:\texttt{C}\times\texttt{C}\rightarrow\texttt{C}\) such that for any two contributions \(c\), \(c^{\prime}\in\texttt{C}\) that share the same label and type, \(\texttt{m}(c,c^{\prime})\in\texttt{C}\) is the contribution whose identifier is \(\max\{i,i^{\prime}\}\), whose label is \(l=l^{\prime}\), whose tag set is \(x\cup x^{\prime}\), whose type is \(t=t^{\prime}\), whose authorship set is \(a\cup a^{\prime}\), and whose status is \(\max\{s,s^{\prime}\}\). Contribution \(\texttt{m}(c,c^{\prime})=c\lor c^{\prime}\) is the join of \(c\) and \(c^{\prime}\). The set of contributions that share the same label and type is a join-semilattice partially ordered by \(\preceq\). The merge operation only affects contributions \(c\), \(c^{\prime}\) that have the same label and type. The end result of the merge is that only one contribution survives, which is equal to \(\texttt{m}(c,c^{\prime})\). Merging doesn't modify the label nor the type of any MMM contributions. Typically, its effect is to complete the authorship set and to upgrade the status of a contribution. Merging can affect "homologous contributions" and "non-homologous contributions". Homologous contributions are contributions that have the same identifier. They necessarily have identical labels and types but may differ by their tag set and metadata. A merge of homologous contributions \(c_{1}\) and \(c_{2}\) typically happens when a user has a copy \(c_{1}\) of a contribution locally stored and receives another copy \(c_{2}\) of the same contribution from a peer over the distributed network. Contributions \(c\), \(c^{\prime}\) with different identifiers (non-homologues) can also be merged as long as they have the same label and type. If \(i<i^{\prime}\) are the identifiers of \(c\) and \(c^{\prime}\), merging \(c\) and \(c^{\prime}\) is called **merging \(c\)_into \(c^{\prime}\)_**. Generally merging \(c\) and \(c^{\prime}\) with identifiers \(i\preceq i^{\prime}\) consists in the following: * Update \(c^{\prime}\) to \(\texttt{m}(c,c^{\prime})\) which has the same identifier as \(c^{\prime}\). * Create an equates edge between \(c\) and \(c^{\prime}\). I recall that a bidirectional edge has three labels and three tag sets. The new equates edge can be labelled or tagged "merged" (main label/tag set), "replaced by" (direction \(c\) to \(c^{\prime}\)), and/or "replaces" (direction \(c^{\prime}\) to \(c\)). * Recursively obsolete \(c\). Figure 14: Contribution with ID 2 is merged into contribution with ID 52. * Recursively make non-homologous copies of all recently obsoleted edges around \(c\) and redirect them towards \(c^{\prime}\), as in SS3.1.8. * Add \(c^{\prime}\) and new incident edges to all pens to which \(c\) and old edges belong. As long as obsoleted contributions are in limbo, the merging operation is commutative, associative and idempotent because applying function \(\mathtt{m}:\mathbf{C}\times\mathbf{C}\rightarrow\mathbf{C}\) is. 
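As a rough illustration of the join just described (and not a normative implementation), the following Python sketch models identifiers and statuses as plain integers and checks the algebraic properties mentioned above; the creation of the equates edge and the recursive obsolescence steps are omitted.

```python
from dataclasses import dataclass

# Rough illustration of the join m(c, c') described above, not a normative
# implementation: identifiers and statuses are modelled as plain integers.

@dataclass(frozen=True)
class Contribution:
    id: int
    label: str
    ctype: str
    tags: frozenset
    authorships: frozenset
    status: int                    # e.g. 0 = local, 1 = shared, 2 = public

def m(c: Contribution, cp: Contribution) -> Contribution:
    """Join of two contributions sharing the same label and type."""
    assert c.label == cp.label and c.ctype == cp.ctype
    return Contribution(
        id=max(c.id, cp.id),                         # surviving identifier
        label=c.label,
        ctype=c.ctype,
        tags=c.tags | cp.tags,                       # union of tag sets
        authorships=c.authorships | cp.authorships,  # union of authorship sets
        status=max(c.status, cp.status),             # upper bound of statuses
    )

c1 = Contribution(2, "The sky is blue.", "narrative", frozenset(), frozenset({"Alice"}), 0)
c2 = Contribution(52, "The sky is blue.", "narrative", frozenset({"@sky"}), frozenset({"Bob"}), 2)

merged = m(c1, c2)
# Commutative and idempotent, as noted in the text.
assert m(c1, c2) == m(c2, c1) == m(merged, c2)
```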
Neither the order in which merges occur nor repetitions of the same merge have any effect on the result. Function \(\mathtt{m}:\mathbf{C}\times\mathbf{C}\rightarrow\mathbf{C}\) may be generalised to allow the merge of two pens \(p\) and \(p^{\prime}\) that have different contents but the same concrete type. The resulting/surviving pen is a pen whose identifier is \(\max\{i,i^{\prime}\}\) and whose set of contents is the union of the contents of \(p\) and the contents of \(p^{\prime}\). The merge of mutable pens follows, since merging causes all incident edges, including pennedIn edges, to be copied to the surviving pen.

Merging is essential to redundancy management on the MMM. A **relaxed version of the merge** operation can be implemented to allow for the merge of two contributions with different but epistemically equivalent labels. The regular merge described above updates the surviving contribution to \(\mathtt{m}(c,c^{\prime})\) and in doing so may modify some contribution attributes. In contrast, because "epistemic equivalence" is subjective, the relaxed merge must not modify either of the two contributions that are merged, except for marking one as obsolete. A diversity of mechanisms may be implemented to identify potential motives for merges (e.g. language similarity [23]) and submit them to the user for translation into actual merges.

Because obsolete marks are local, merges also tend to be local. Alice may merge non-homologous contributions \(c\) and \(c^{\prime}\) locally, causing, for instance, the authorship set of \(c^{\prime}\) to grow. If Bob also has copies of \(c\) and \(c^{\prime}\), Bob will not be affected by the merge. This is desirable as \(c\) and \(c^{\prime}\) may be public contributions and Alice alone should not be allowed to update public material for everyone. If Bob independently decides to merge \(c\) and \(c^{\prime}\), Bob will adopt the same merging strategy as Alice: merging the contribution with the smallest ID into the contribution with the largest. If Bob doesn't have a copy of \(c\) or of \(c^{\prime}\), by default Bob will not inherit obsolete contribution \(c\) from Alice. If Bob has a copy of \(c\), we may use Alice's equates link between \(c\) and \(c^{\prime}\) as a trigger for Bob to substitute \(c\) with \(c^{\prime}\). The more users merge \(c\) into \(c^{\prime}\), the greater the likelihood that \(c\) ends up being obsoleted from the entire distributed MMM database.

The notion of merging contributions naturally extends to a notion of **merging landscapes**. For any two landscapes \(L\) and \(L^{\prime}\), \(\mathtt{m}^{*}(L,L^{\prime})\) is the landscape that contains exactly the union of all contributions of \(L\) and all contributions of \(L^{\prime}\), where contributions \(c_{1}\in L\) have been merged with their homologous contributions \(c_{2}\in L^{\prime}\). Let us write \(L\sqsubseteq L^{\prime}\) whenever for any contribution \(c_{1}\in L\) there is a homologous contribution \(c_{2}\in L^{\prime}\). The set of landscapes partially ordered by \(\sqsubseteq\) forms a join-semilattice where the join of two landscapes is given by \(\mathtt{m}^{*}\). The MMM is the join of all landscapes. If we guaranteed delivery of every contribution to every peer on the distributed MMM network (and also immutability of contributions, which we mostly have for public contributions) then we could guarantee landscape convergence (locally, users would eventually see the same landscape) [54]. Importantly however, we _aren't_ aiming at convergence.
We don't want peers to see the same landscape _despite_ the distributed nature of the MMM, i.e., we don't want them to have the same local territory. On the contrary, I propose to leverage the distributed nature of the MMM in order to support a diversity of points of view on the record, and reduce the overall amount of digital content that any peer sees to just what is relevant to them16 (cf §3.2.5). Because not everyone is interested in the same material, the MMM need not be materialised at any point in time at any single node of the distributed network [43]. Footnote 16: The subject of echo chambers is discussed in §3.2.5 and in [43]. #### 3.1.10 Updating the Landscape Let us continue ignoring marks, including the obsolete mark. The possible ways of updating a landscape \(L\) are the following: 1. Add a contribution to \(L\) (this is the principal way of updating a landscape, cf Design Bias 4). 2. Merge duplicate contributions (same label, same type) as described in §3.1.9. 3. Add a tag to the tag set of a contribution in \(L\). 4. Add an authorship to the authorship set of a contribution in \(L\). 5. Upgrade the status of a contribution in \(L\). For any of the five kinds of updates \(u\) listed above, let us write \(L+u\) to denote the landscape obtained by applying \(u\) to \(L\). We have \(L\sqsubseteq L+u\), provided we consider obsolete contributions when non-homologous duplicates are merged. Landscapes are thus monotonically non-decreasing with updates. Updates to a user's local territory cause the territory to grow. We have a simple case of CRDT (Conflict-free Replicated Data Type) [54] because, as noted above, the set of landscapes forms a semilattice ordered by \(\sqsubseteq\) and the merge function \(\texttt{m}^{\star}\) computes the join of two landscapes (cf §3.1.9). The YEd desktop tool has recently been used as a demonstrator of the basic MMM editing functionalities. In particular, it has been used to test the design of the MMM data model on information emanating from the aeronautical engineering domain. The set of YEd "palettes" that have been used in that exercise is available on Gitlab [42]. YEd certainly isn't the only existing graph editor tool that can be adapted to act as a rudimentary MMM editor. It has the advantage of serialising graphs into the standard GraphML format. And conveniently, it allows nodes to be grouped and thus partially supports MMM pens. It does not, however, allow edges to act as endpoints of other edges. We have used a temporary workaround breaking edges with a dummy node (cf the MMM EDGE BREAK POINTS.graphml palette). ### Landscape Consuming Activities In section 3.1, we saw how to modify the MMM by adding and obsoleting content. Now, assuming the MMM is populated, we explore uses we can make of it. #### 3.2.1 Measuring A distance can be defined between any two contributions \(c\) and \(c^{\prime}\) in the MMM, for instance as the length of the shortest undirected path between \(c\) and \(c^{\prime}\). The notion of distance between contributions in the MMM network can be refined to capture a notion of "**epistemic proximity**" between contributions. Naively, the depth _underneath_ a contribution \(c\) can be defined as the length of the longest acyclic directed path ending at \(c\). A notion of **absolute contribution depth** can be introduced to characterise contributions against a referent contribution depth 0 - the depth of the most vacuous (/abstract) question possible (e.g. "What is the nature of existence?").
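To make these two basic metrics concrete, here is a naive, toy-sized sketch of the shortest-path distance and of the depth underneath a contribution. The edge list, the node names and the brute-force longest-path search are all illustrative assumptions; computing longest simple paths is expensive on large graphs, so a real implementation would restrict or approximate this.

```python
from collections import deque

# Assumed toy landscape: edges given as (source, target) pairs.
edges = [("answer-1", "question-7"), ("detail-3", "answer-1"),
         ("question-7", "existence-AI"), ("nuance-9", "answer-1")]

def distance(c, cp):
    """Length of the shortest undirected path between c and c' (breadth-first search)."""
    neighbours = {}
    for u, v in edges:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)
    queue, seen = deque([(c, 0)]), {c}
    while queue:
        node, d = queue.popleft()
        if node == cp:
            return d
        for n in neighbours.get(node, ()):
            if n not in seen:
                seen.add(n)
                queue.append((n, d + 1))
    return None  # c' is not findable from c

def depth_underneath(c, _path=frozenset()):
    """Length of the longest acyclic directed path ending at c (exponential; toy graphs only)."""
    incoming = [u for u, v in edges if v == c and u not in _path]
    if not incoming:
        return 0
    return 1 + max(depth_underneath(u, _path | {c}) for u in incoming)

print(distance("detail-3", "existence-AI"))  # 3
print(depth_underneath("existence-AI"))      # 3 (detail-3 -> answer-1 -> question-7 -> existence-AI)
```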
The 'maturity' of a contribution may be measured in terms of the number of annotations to it, especially the number of nuances, details, (answered) questions to it, and recursively, the maturity of those annotations. The 'reliability' of a contribution \(c\) can be defined in terms of maturity, in terms of the ratio of details and nuances, and in terms of the depth of \(c\) (length of directed paths outgoing \(c\) leading to mature contributions). Figure 15: Suggested scale of epistemic proximity for two statements. Other, finer metrics can be defined using the graph-theoretic properties of landmarks and of areas of the landscape in order to further qualify elements of the landscape. This form of "epistemic topography" may allow nuancing primitive binary qualification of information (e.g. correct/incorrect, true/false, consensual/not consensual, there/not there), replacing it with richer, more profound qualification (cf Suppl. Mat. B.1). The number of red-flagging edges incident on a contribution \(c\) (i.e., the number of equates edges connecting \(c\) with \(\bot\)), and the number of authors of each public red-flagging edge can also be counted to quantify the quality of \(c\) or of neighbours of \(c\). Application code may rely on metrics and thresholds to decide when to trigger the publication or display of a MMM contribution (e.g. on an external feed). #### 3.2.2 Zooming in/out An exploitable property of the MMM format is that most links are "vertical" links (unidirectional) as opposed to "horizontal" (adirectional or bidirectional), meaning that in some respects they express the idea that what is expressed at their start point is more defined, narrower or more precise than what is expressed at their endpoint, which is more abstract, more compendious or more indiscriminate. Unidirectional/vertical edges tend to go from more specific to more general. This feature can be used to implement "epistemic zooming in and out". Of course, in the MMM, directed paths can be circular, and I expect circularity will be common. This is not a problem. The "epistemic zooming" proposition is merely practical. It is to allow some interactive filtering out of content, locally (as opposed to revealing some profound property of information). A fundamental functionality that we may want MMM editors to support is "contribution collapse": all contributions on an acyclic directed path to a given contribution are hidden and displayed on demand. The YEd graph editor mentioned before partially supports this functionality assuming YEd groups are used to represent MMM nodes (cf collapsingNodes.graphml[42]). #### 3.2.3 Filtering and Highlighting The diversity of metrics that can be defined to qualify landscapes (cf §3.2.1) can be used to define a diversity of custom filters. These would allow users to experience the same landscape from different points of view. I propose that a collection of pre-defined adjustable filters be provided to users. Different areas of a landscape, and even different single contributions, may be managed by different filtering rules. Further research is needed to define and test possible default filter rules and determine the degree of stringency needed. A default filter rule could, for instance, reject any contribution that hasn't been challenged \(x\) times and nuanced \(y\) times by \(z\) different users, and/or whose annotations are shallow in the sense that the depth underneath them is less than \(d\in\mathbb{N}\).
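As a sketch only, such a default filter rule could be expressed as a simple predicate over a per-contribution annotation summary; the `AnnotationSummary` fields and the threshold defaults below are assumptions chosen for illustration, not recommended values.

```python
from dataclasses import dataclass

@dataclass
class AnnotationSummary:
    challenges: int           # number of challenging annotations (e.g. questions, red flags)
    nuances: int              # number of nuancing annotations
    distinct_annotators: int  # number of distinct users behind those annotations
    max_depth: int            # depth of the landscape underneath the annotations

def passes_default_filter(s: AnnotationSummary, x=2, y=1, z=2, d=1) -> bool:
    """True when the contribution should be displayed rather than hidden or dimmed."""
    return (s.challenges >= x and s.nuances >= y
            and s.distinct_annotators >= z and s.max_depth >= d)

# A front end could translate a rejected contribution into a local 'dim' mark:
summary = AnnotationSummary(challenges=0, nuances=3, distinct_annotators=4, max_depth=2)
mark = None if passes_default_filter(summary) else "dim"
print(mark)  # 'dim', because this contribution has never been challenged
```

A front end could evaluate such a predicate lazily, only for the contributions currently in view, and let the user adjust the thresholds interactively.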
In this case, default values for \(x,y,z\) and \(d\) need to be determined based on an understanding of the possible local and global consequences of systematically rejecting more or less content with this rule. Filter rules may also be used to trigger the local marking of contributions (cf §2.2.4) - e.g. as hidden or dim. Front-end code can use the marks to appropriately display the landscape to the user. Rules can also conversely be defined to highlight certain contributions. We may, for instance, want to highlight contributions that have high measures of quality, or that are marked as rewarded. #### 3.2.4 Aggregating Aggregation on the MMM means grouping together annotations that serve the same purpose - e.g. grouping together all yes answers to a given question \(q\), which can be identified as contributions linked to \(q\) by an answers edge tagged @yes. Aggregation can support redundancy management mechanisms. Comparing aggregable contributions can help identify overlaps between those contributions. UIs can be designed to preempt the recording of new contributions by the user that overlap with existing content, or to assist the user in narrowing down the value they can add. UIs can also be designed to systematically hide repetitive content identified as epistemically very close or equivalent to content not hidden. This way, sheer quantity and repetition of a piece of information may not directly translate into visibility of this piece of information. Aggregation can be leveraged to further improve the landscape readability by showing fewer objects at once. Neighbouring contributions may be grouped together into "macro-meta contributions" by the graphical user interface (GUI), based on their subtype and on their position in the landscape. For instance, all definitions of the same term, or all answers to the same question, may be displayed by the GUI as a single macro-meta node on which the user needs to click to access the details of each definition or answer. Macro-meta contributions are part of the view, not the data model. Figure 16: Aggregation of all documented answers to a question, in one macro-meta-node. #### 3.2.5 Navigating, Exploring, Searching and Finding On a MMM landscape, **navigating** or **exploring** means travelling down a path. Let \(c\) and \(c^{\prime}\) be two MMM contributions. Let \(a\) be a landscape area. Let \(u\) be a user and let \(T_{u}\) denote \(u\)'s territory. We say that \(c\) is **findable** or **visible** _from_ contribution \(c^{\prime}\) (resp. from area \(a\)) if there is a path between \(c\) and \(c^{\prime}\) (resp. between \(c\) and any landmark \(c^{\prime}\in a\)). We say that \(c\) is findable _by_ user \(u\) if \(c\) is findable from \(T_{u}\). Further consideration of path properties (e.g. directedness) may help refine the notion of findability. Based on appropriate notions of findability and relevant filters, various landscape exploration strategies may be defined. I call **"wayfarer exploration strategies"** strategies that explore MMM landscapes by following paths in the MMM network. Alternative **"parachuting exploration strategies"** can be used to explore the MMM. In contrast to wayfarer strategies, parachuting strategies ignore the MMM network topology and the MMM data model semantics. They explore MMM contributions in more traditional lexical ways, using natural language processing techniques to infer semantic similarities between contributions. Search over the MMM matches textual search queries to MMM landscape areas.
The search query may be _selective_ - for instance when the user is looking for a contribution with a given identifier, or for questions recorded between two given dates, or for contributions that have more than 3 authors and are tagged "@genetic regulation". In this case, a parachutist strategy is enough. The query may also be _approximate_ with an exploratory intention behind it. In this case, a wayfarer approach might be better suited to locate epistemically relevant content. Given a textual search query \(q\) formulated by the user, a wayfarer search strategy (i) picks a MMM landmark (or area) \(s\) as starting point of the search, (2) performs a wayfarer exploration of the landscape following paths (of a certain capped length) outgoing \(s\), (3) circumscribes a relevant target area \(a\) of the landscape that is to be outputted as the search result, and possibly (4) derives an epistemic link in the MMM landscape between \(q\), \(s\) and \(a\). Graph theoretic properties combined with properties of the MMM data model (cf SS3.2.1) will be useful to formalise the details of these steps. I leave this as an open problem for now. A subsequent article will propose ontology-based exploration strategies to support searching the landscape from a configurable point of view. Defining a relevant wayfarer exploration may entail specifying the semantics of the MMM data model. The semantics of the MMM data model have deliberately been left vague and informal17. Varying specifications of these semantics are possible. Different specifications may induce different notions of "consistency" for instance. One may specify that two existence nodes are consistent as long as they aren't connected by "contradicting" paths: one path being a sequence of equates edges, another path being composed only of equates edges except for one differsFrom edge. A variation of the semantics relevant to scientific researchers might restrict this definition to equates edges that are tagged with a certain tag - e.g. "@biological equivalence" or "@logical equivalence". Specifying the MMM data model semantics allows to define the purpose of a wayfarer exploration (e.g. checking the consistency of an area of landscape). Wayfarer exploration is then as close as we get to reasoning on the MMM. The semantics determines how the information conveyed by MMM contribution types should be interpreted, how they should inform decisions and in particular how they should orient the exploration. A semantics may specify that equates edges tagged "@naturalLanguageTranslation" or more specifically "@EN \(\rightarrow\) FR", don't count. Available language translations would then be ignored by the wayfarer exploration. Footnote 17: We have been using the term “epistemics” rather than ”semantics”. The wayfarer approach favours discovery of information that is epistemically related to information that has already been discovered. This might favour preparedness of the user to new information. The user's local territory grows gradually to include contributions that are epistemically close to the contributions she already understands (output area \(a\) of a search is probably directly related to input query \(q\)). _In itself_ wayfarer discovery might also favour epistemic isolation reminiscent of the actual Web's echo chambers. Epistemic proximity is however not semantic proximity. Contradictory answers to the same question are for example very close epistemically. So are the different interpretations of the same question. 
Arguably, humans can't systematically avoid being directed by confirmation bias towards the same statements supporting the same views. But if they come across questions on the MMM (e.g. "How do vaccines work?"), there is no obvious natural way for them to systematically preempt exposure to the different answers that are contributed by individuals with different mindsets. The different answers are likely to be implanted on the same MMM landmark, namely the question they answer. Epistemic locality does not necessarily entail social locality as very different people can take interest in the same questions. Furthermore, wayfarer exploration not only leverages the connectedness of the overall MMM network it may also contribute to enhancing it. By making users aware of information that is already documented, it can help preempt the documentation of redundant content and encourage the building of short epistemic bridges between previously epistemically distant contributions. #### 3.2.6 Safe Reformulating and Translating One idea or piece of information can be expressed in multiple ways. Different expressions of the same idea don't necessarily speak to the same people, if only because people understand different languages. Connectedness of the overall MMM network increases the likelihood that there is a path between the different expressions of the same idea: one expression is findable from another expression. People who understand idea \(i\) via expression \(E\) can be made aware of the annotations concerning \(i\) contributed by people who better understand an alternative expression \(E^{\prime}\) of \(i\). In the MMM, the epistemic connection between contributions \(E\) and \(E^{\prime}\) being explicitly documented, it can be used to ensure "safe passage" between \(E\) and \(E^{\prime}\). Arguably, as the formulation of a piece of information changes, the information itself changes, possibly degrades. This may have repercussions on people's understanding of the information. "Safe passage" means that the new formulation is accessed with awareness of ensuing semantic changes. The people who translate or vulgarise information to give other people access to it are sometimes not the experts who can gauge the extent of the semantic drift caused by the reformulation. But if the drift is documented in the MMM (e.g. with a labelled equates or differsFrom edge between the two formulations) by someone who can discern it, then it becomes discussable like any other piece of information. Reformulation on the MMM is like any other ordinary act of deriving information out of pre-existing information. It is a documentable process that can be challenged and justified explicitly. In addition to promoting epistemic democracy, multiple connected formulations of the same idea is _a desirable form of redundancy_ that supports mitigation of _a useless sort of redundancy_. The useless sort of redundancy occurs between two expressions that are semantically too similar to be humanly distinguishable. One of the expressions can be removed without risk of reducing, now or in the future, the number of people potentially able to understand the idea. Desirable redundancy might not be relevant to _every_ user, but globally it contributes to the connectedness of the MMM network which in turns helps compare MMM contributions and identify useless overlaps. #### 3.2.7 Epistemic Time Travelling The system architecture that we propose in the supplementary material A allows for "epistemic time travel". 
On the MMM, epistemic time travel consists in playing back and forth the history of changes to a landscape. #### 3.2.8 "Citing the future" On the MMM, citation links are conveyed by MMM edges (or paths) linking a citing contribution to a cited contribution. Through an appropriate choice of edge types, and possibly through the documentation of edge labels, MMM citation links can be made to convey epistemic glue. In traditional documents like scientific articles, citation links direct the reader to a resource from the past. In contrast, MMM links direct the reader to a durable location in the landscape. The area around this location may evolve over time as contributions (nuances, supporting details, questions, _etc._) are implanted on it. But the cited contribution location perdures. Suppose that article \(a\) published in 2023 refers to current "_RNA vaccination_" techniques and cites the latest scientific publication on that subject, namely article _a_\({}^{\prime}\). Both \(a\) and _a_\({}^{\prime}\) are implanted in the MMM. Ten years later, Alice reads \(a\). By then, \(a\) is still somewhat relevant but the contents of _a_\({}^{\prime}\) are long outdated. Contribution _a_\({}^{\prime}\) is now deep under a decade of thorough annotations conveying new understanding of "_RNA vaccination_" and new technical propositions. The link from \(a\) to _a_\({}^{\prime}\) in the MMM doesn't just point Alice in the direction of the old outdated article _a_\({}^{\prime}\), it points her towards a relevant area of the MMM that has been continually updated. Note that understanding an old article \(a\) properly might require more than updated information on the subject of the article _a_\({}^{\prime}\) that is cited by \(a\). It might require understanding the historical context in which the link between \(a\) and _a_\({}^{\prime}\) was made. Epistemic time travel (cf §3.2.7) allows one to access the state of the landscape back then and to play the succession of events that led to outdating the contents of _a_\({}^{\prime}\). Thus, while _a_\({}^{\prime}\) becomes outdated, the link (epistemic glue) between _a_\({}^{\prime}\) and \(a\) continues to convey information of actual worth, for at least as long as \(a\) doesn't also become outdated. It is the purpose of central "refrigeration mechanisms" (MMM archival) to save some contents, such as contribution \(a\), from global obsolescence (disappearance from the MMM). Refrigeration mechanisms need to be defined to implement choices of what contributions deserve to be archived and for how long, depending possibly on (the dynamics of) the contributions' implantation. #### 3.2.9 File and Folder Organising MMM contributions (e.g. existence nodes) can be used to represent bibliographical references to documents that exist outside the MMM. Similarly, MMM contributions can be used to represent resources like files and folders existing in a file hierarchy. Mapping (parts of) a device's local file hierarchy into the user's local MMM territory would allow the user to seamlessly navigate between their MMM notes and their local file hierarchy (see Fig. 17 and Suppl. Mat. B.2). All contributions resulting from this mapping should be marked with a distinct mark, say FH. And all contributions marked FH should be considered private and unpublished. ### Landscape Sharing Activities In this section, we discuss the social dimension of the MMM proposal.
Figure 17: Bottom left: a simplified version of a part of the hierarchy of files and folders that a researcher could have on one of their devices, as presented by the Tree program. Above right: the corresponding MMM landscape area. I recall that all visual representations of MMM formatted information in this article are arbitrary. Precisely, a UI could in this case provide a visual representation of the MMM to the user identical to the terminal output of the Tree program. File and folder paths can be documented as tags in contributions' tag sets. pennedIn edges can be tagged "@file contained in" or "@sub-directory of". Other MMM contributions can be added. The diversity of MMM edge types can be leveraged to complete the file system's hierarchical organisation and play a role similar to rich symbolic links. In the sequel, let us assume the following context. There exist software interfaces for human users to interact with the MMM. The software is locally installed on users' devices. Each user has their own local territory, as mentioned in §2.4.1. A user's local territory is stored on the user's device(s). Devices (belonging to the same or to different users) connect to each other (over the internet or a LAN) to exchange MMM contributions. I say these devices are _MMM hosts_, participating in the distributed MMM network. I sometimes refer to users as _peers_ in this network. I don't assume that MMM hosts operate as servers. Details of the implementation of the distributed MMM network and the software involved are given in the supplementary material A. #### 3.3.1 Rewarding Suppose that Alice publishes contribution \(c_{A}\) (e.g. a description of how she feeds the lab mice). Then Bob publishes contribution \(c_{B}\) as an annotation of \(c_{A}\) (maybe a specification of the problems Bob encounters in carrying out an experiment using Alice's mice). Later, Charles publishes \(c_{C}\) which, among many other things, builds on \(c_{B}\). And it turns out that Charles ends up greatly rewarded for \(c_{C}\) (maybe \(c_{C}\) gets published in a prestigious academic journal, maybe it is the object of a Nobel Prize). I propose to use the collectively documented train of thought \(c_{A}\longrightarrow c_{B}\longrightarrow c_{C}\) to formally and proportionately acknowledge Alice's and Bob's participation in the work that eventually produced \(c_{C}\). Charles (or the prestigious publisher or the Nobel Foundation) may locally mark \(c_{C}\) as rewarded. The rewarded mark can be parametrised with (1) data specifying or referring to the details of the reward (e.g. "Nobel Prize"), (2) data specifying the MMM distance to the rewarded contribution (0 in the case of \(c_{C}\)), and (3) the identifier of the rewarded contribution if the distance is greater than 0. Figure 18: The MMM is the reunion of all MMM contributions stored on the local territories of users. As users share MMM contributions with each other, replication of the material occurs. Some contributions are copied on multiple hosts. The distributed system propagates rewarded marks through word of mouth (successive local shares, cf §3.3.4). Charles' reward can "trickle" down to Bob and then Alice. Every trickling step increments the distance recorded as a parameter of the rewarded mark. On reception of a homologous copy \(c_{A}^{\prime}\) of Alice's contribution \(c_{A}\), a comparison can be made18 between the rewarded marks of both copies. The minimum distance to \(c_{C}\) can be kept as a parameter of the rewarded mark.
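The following minimal sketch illustrates how rewarded marks could trickle and be reconciled as just described. The `RewardedMark` structure, the helper names and the example identifiers are assumptions made for illustration, not part of the MMM specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RewardedMark:
    details: str                 # e.g. "Nobel Prize"
    distance: int                # MMM distance to the rewarded contribution
    rewarded_id: Optional[int]   # identifier of the rewarded contribution, if distance > 0

def trickle(mark: RewardedMark, rewarded_id: int) -> RewardedMark:
    """Mark the contribution one step upstream of an already-marked contribution."""
    return RewardedMark(mark.details, mark.distance + 1, rewarded_id)

def reconcile(local: Optional[RewardedMark], received: Optional[RewardedMark]) -> Optional[RewardedMark]:
    """On receiving a homologous copy, keep the mark with the smallest distance."""
    if local is None or received is None:
        return local or received
    return min(local, received, key=lambda m: m.distance)

# Charles' contribution c_C is rewarded; the reward trickles to Bob's c_B, then Alice's c_A.
mark_c = RewardedMark("Nobel Prize", 0, None)
mark_b = trickle(mark_c, rewarded_id=99)   # 99: assumed identifier of c_C
mark_a = trickle(mark_b, rewarded_id=99)
print(reconcile(mark_a, RewardedMark("Nobel Prize", 5, 99)).distance)  # 2
```

Because reconciliation keeps the minimum distance, repeated exchanges between peers can only refine, never inflate, the recorded proximity to the rewarded contribution.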
I call this mechanism "trickling reward". The topic of acknowledgement and reward is discussed in [43]. Footnote 18: By the concierge module, cf Suppl. Mat. A.2. #### 3.3.2 (Slow) Collaborating The MMM is a collective document: users can offer answers to the same questions, they can nuance each other's statements, they can add to each other's mutable pens, they can connect each other's contributions using appropriate MMM edges, _etc_. Consider a traditional document \(d\) decomposed into area \(a_{d}\) of the MMM (\(a_{d}\) is the "MMified" version of \(d\), cf §3.1.5). Several users may work simultaneously on \(a_{d}\), annotating contributions that are already in \(a_{d}\) and episodically sharing their annotations with each other. Collaborators don't have to keep copies of every contribution in \(a_{d}\). **Floating and dangling edges are possible.** Alice may keep a copy of the edge \(e\) going from her contribution \(c_{A}\) to Bob's contribution \(c_{B}\), without storing a local copy of \(c_{B}\). Alice might regard the label and type of \(e\) as enough information about \(c_{B}\). The MMM solution isn't optimised for fast communication. It is to natively support "**slow-first collaboration**", which is when collaborators can work without having fast editor access at any moment to what their collaborators are doing (real-time read access remains a possibility). Slow-first collaboration is sometimes enough and sometimes preferable (cf Suppl. Mat. B.20). Users edit _different_ contributions and then link them if relevant. Supporting real-time collaboration _within_ contributions (collaborators concurrently editing the same contributions) will require nesting finer-grained CRDTs in the native coarse-grain MMM CRDT-like structure [32]. Contributions are atomic units in the MMM just like characters are atomic units in traditional digital documents (cf Table 2). Let us expand on a recommendation made in §3.1.8, which frontend code can implement: _Best practices for users:_ Just like you don't half-way type a character in a digital document, avoid half-way documenting a contribution in the MMM. When you are done editing contribution \(c\), mark it so as to indicate that \(c\) is finished (which is different from meaning \(c\) is _definite_, because of the obsoleting and versioning mechanisms described in §3.1.8), e.g. mark it as synchronisable. Only allow synchronisable contributions to be accessed from other devices. Figure 19: Automatic assisting mechanisms for redundancy management can leverage the properties of the area surrounding Alice's contribution \(c_{A}\) and Bob's contribution \(c_{B}\) in order to detect similarity between \(c_{A}\) and \(c_{B}\). Application code may encourage Alice and Bob to merge \(c_{A}\) and \(c_{B}\), to obsolete one of them or to document an explicit relationship between them. #### 3.3.3 Defining Topics In the MMM, topics - e.g. "holiday ideas" or "Al" - are typically documented in the user's territory as the labels of existence nodes. They can also be documented as the labels of any other kind of MMM contribution, including question nodes, narrative nodes, pens and edges. When Alice records a new contribution \(c\) in her territory, she implants \(c\) (she links \(c\) to other contributions in her territory).
If this materialises a semantic path between \(c\) and the existence node labelled "holiday ideas", then \(c\) can be considered related to the topic "holiday ideas" If the implantation of \(c\) in Alice's territory materialises a path between \(c\) and the existence node labelled "Al", then \(c\) will be regarded as related to the topic of "Al". We define the **topic anchor** to be the MMM contribution whose label - e.g. "holiday ideas" or "Al" - gives the topic name. \begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Google Docs** & **MMM** \\ \hline A Google Docs necessarily belongs to the holder of the Google account that it was created from. The owner and the editors of a Google Docs have more rights than its commentators. & Users don’t have statuses, only contributions do. Any user can annotate a contribution that she has access to. Alice may annotate Bob’s contribution, Bob may reject Alice’s request to share her annotation with him, and Alice may persist her annotation and share it with Charlie even if it displeases Bob. \\ \hline The finest granularity of collaboration is possible. Editors can see the effect of each other’s keystrokes in real-time. Their keystrokes can interact with each other when they simultaneously edit the same area of the document. & Fast fine-grained collaboration isn’t native to the MMM. The MMM only supports coarse-grained collaboration where collaborators don’t simultaneously edit the same atomic unit of information. \\ \hline Coarse-grained collaboration is supported. Users can comment and make suggestions to the main document. Their annotations are spatially tied to character positions in the document. & Annotations are _epistemically_ tied to the landscape. This is a simple case of CRDT because changes to the landscape may be applied without knowledge of the original context in which the changes were made. \\ \hline Versioning allows viewing past states of the document and to compare versions of the document after changes have been applied to it. & Changes are encapsulated in contributions and logged (cf SSA.1.1). The MMM system supports fine-grained "epistemic time travel" discussed in SSA.1.3. \\ \hline \end{tabular} \end{table} Table 2: Collaboration with Google Docs and collaboration with the MMM. The **topic extent** defines an area of the landscape surrounding the topic anchor. This area contains landmarks that are considered to fall within the scope of the topic. The area can be defined in terms of distance or depth - e.g. contribution \(c\) is relevant to the topic if \(c\) is at a distance \(d\leq 3\) of the topic anchor \(a\), or at a depth \(d\leq a\) underneath \(a\). Features of the MMM format may be exploited to refine the delineation of topic extents (cf SS3.2.1). For instance, a user might want to filter out of the topic's extent, contributions connected via relate or relatesTo edges. Or she might not be interested in questions whose answers fall outside the topic extent or whose answers are shallow. Formally a MMM topic is a couple \(T=(a,e)\) where \(a\) is the topic anchor and \(e\) is the topic extent. Let \(T=(a,e)\) be a topic whose extent \(e\) circumscribes an area of radius \(n\in\mathbb{N}\) around contribution \(a\). Let \(c\) be a contribution in that area at a distance \(m\leq n\) of \(a\). Let \(e_{c}\) be the area of radius \(n-m\in\mathbb{N}\) around contribution \(c\). We say that topic \(T_{c}=(c,e_{c})\) is **inherited** from topic \(T\). 
More generally, a topic \(T_{c}=(c,e_{c})\) inherited from topic \(T=(a,e)\) is such that contribution \(c\) belongs to the area circumscribed by \(e\), and this area contains the area circumscribed by \(e_{c}\). MMM topics are for **sharing and synchronising MMM contributions** on a need-to-know basis. The "need" is captured in advance in the topic extent. #### 3.3.4 Sharing Contributions Users can send MMM contributions to each other, individually or in batches. Contributions that Alice has shared with Bob or received from Bob are marked with a parametrised sharedWith mark. When Alice receives a contribution \(c\) from Bob, \(c\) is initially marked as new on Alice's territory. A customisable amount of systematic filtering can be applied to new contributions on delivery (cf §A.2.10, §A.2.7 and §A.2.6 in §A.2). Alice might only want to see contributions that are already well implanted in the global MMM or in the sender's local territory. Figure 20: Two topics: "moving house" and "electric current". The topic _anchors_ are respectively the pen and the existence node in which each topic name is documented. Topic scopes are captured in topic _extents_ and can overlap. In this example, if the topic extents are restricted to the dashed rectangles, then no contribution yet falls into the scope of both topics at once. If the new contribution \(c\) is not automatically filtered out, it remains for Alice to reject it or to accept it. If Alice rejects \(c\), \(c\) is deleted (not obsoleted) from her territory. If Alice accepts \(c\), then the new mark on \(c\) is removed. If there already is a homologous copy of \(c\) on Alice's territory, it is merged with \(c\). Sharing maintains the CRDT-like properties of the set of landscapes mentioned in §3.1.10. When Alice accepts a new contribution, her local territory grows according to the partial order \(\sqsubseteq\) on landscapes defined in §3.1.9. I propose that **share contracts** be associated with MMM contributions when they are shared. The default share contract forbids the recipient of a MMM contribution from communicating the source's address to a third party. Only the source host can relax the contract. Possibly, in a GDPR-compliant way, the contract lists alternative hosts who have given their permission to be known as alternative hosts of the MMM material to share. The contract may contain some copyright clauses restricting what the recipient host can do with a shared contribution [43]. It may formalise a non-disclosure agreement. Research work is needed to define ways of enforcing contracts. Alice must not be able to change the contract she has with Bob. The software she uses to connect with Bob must not violate the contract. And/or peers with whom Alice connects shouldn't agree to interact with Alice in ways that violate the contracts applying to the data Alice has. #### 3.3.5 Subscribing to Topics Users can subscribe to topics. The anchors of the topics that they subscribe to must exist on their local territories. To subscribe to topic \(T=(c,e)\), the user must compose a "**subscription request**" and send it to one or several MMM hosts. The user not only specifies what information they are interested in acquiring, they also specify who they want to get the information from. The subscription request is a message with the following data: 1. A MMM topic \(T=(c,e)\) whose anchor \(c\) can be found on the user's local territory. 2. How often and until when the user wishes to receive \(T\)-related material. 3.
A host from which the user wishes to get \(T\)-related material / the host to which the subscription request is sent. This host must have the anchor contribution \(c\) on their local territory as well if they are to serve \(T\)-related material to the subscriber. 4. A subscription contract specifying if the subscription can be forwarded by the recipient to an alternative \(T\)-serving host, and by the sender to an alternative \(T\)-interested subscriber19. Footnote 19: Suppose Alice is subscribed to Bob’s \(T\)-related contributions. The subscription contract might allow Bob to forward Alice’s subscription over to Charlie whom Bob gets his \(T\)-related information from – assuming Bob’s subscription contract with Charlie allows it. And the Alice-Bob contract might specify that Alice can forward the data she has on Bob to Eve so that Eve can receive \(T\)-related contributions directly from Bob. Serving a subscription consists in sharing \(T\)-related material found on one's territory to a subscribed peer (cf SS3.3.4). Further work is needed to ensure that serving subscription material is efficient. Work is in particular needed to determine the respective responsibilities of client and server in identifying subscription material in the landscape. Identifying the contributions \(c^{\prime}\) that fall into a topic \(T\)'s scope requires running through the landscape and possibly computing inherited topics. To facilitate the task of serving MMM material to his subscribers, Bob might want to constrain the topic extents that he is willing to serve, e.g. to "_one-size-fits-all-subscribers_" penned areas. He might leave most of the measuring and filtering work to his subscribers. As suggested in SS3.3.4, users can send each other unrequested MMM contributions. I propose to deal with these spontaneous exchanges of MMM material as **subscription**_invitations_. To share contribution \(c\) with Bob, Alice invites Bob to subscribe to \(T=(c,e)\) where \(e\) is empty if Alice wants to share nothing else than \(c\) with Bob. The extent \(e\) can alternatively define an area of radius \(1\) around \(c\) if Alice wants to share \(c\) and immediate annotations to \(c\). When he receives Alice's invitation to subscribe to her \(T\)-related material, Bob can modify certain parameters of the proposed subscription. He can for instance modify the extent of the topic and reduce the frequency of \(T\)-related news he will be receiving from Alice. I suggest that conversely, when Alice shares contribution \(c\) with Bob, especially if Alice is the author of \(c\), then Alice automatically subscribes to Bob's copy of \(c\) so that she episodically receives from Bob updates and annotations concerning \(c\) Alice's copy of the anchor contribution \(c\) is marked as subscribedTo, and possibly, so are also other contributions that fall into the topic's scope \(e\). The subscribedTo mark is parametrised with some of the subscription parameters. Subscribing to a topic on the MMM is comparable to joining a Semantic Overlay Network (SON) [12, 13] Connections between peers are determined by semantics. SONs are clusters of peers that share an interest for a concept (e.g. minimal tech music) and have resources related to that concept. Resources and peers are assigned to concepts and concepts are themselves organised into a predefined taxonomy of concepts. The taxonomy is leveraged to forward queries to the right peers and optimise search performance. 
In the MMM system, the set of all peers subscribed to topic \(T=(c,e)\) is reminiscent of a SON, even if contribution \(c\) is not part of a common predefined hierarchy of concepts and might not represent a concept at all (\(c\) might be a question or something else). We don't necessarily have the potential for a peer-to-peer connection between any two peers in the cluster. #### 3.3.6 Synchronising Across Devices Synchronising an area \(a\) of the landscape with another device one owns is similar to sharing it with a peer. The local copy of contribution \(c\) on device \(d_{0}\) is marked with the syncWith mark parametrised with the list of other devises that \(c\) is copied to. In contrast to sharing, synchronising usually doesn't ignore the house-keeping marks. Typically, if a device has contribution \(c\) marked as highlighted, other devices of the same user will also have \(c\) marked as highlighted. Relay servers can be used to streamline cross-device synchronisation. Data transiting through these thin servers should be end-to-end encrypted. #### 3.3.7 Publishing Contributions I propose to promote a strong notion of publication. Design Bias #5: **Irrevocable Publicness** Information that is published can't be unpublished nor altered by any number of individuals, not even by the author or publisher of the information. A strong notion of publication requires an accompanying paradigmatic shift towards tolerance for miscommunication and misinformation. This is discussed in Suppl. Mat. B.16. Figure 21: Sharing information by value or by pointer. Contributions marked with ✗ are obsoleted contributions. Although the MMM format is not primarily designed to store and manage mutable data like phone numbers and addresses, mutable MMM contributions (cf §2.4.3) can nonetheless marginally support the sharing of mutable data by pointer or by value. N.B.: The assumption is that the MMM node storing Alice’s address is part of Alice’s local territory. Combining MMM concepts with concepts from the Solid project [52], a proposal for personal data management will be made in a follow-up article. Publishing a contribution to the MMM starts with upgrading its status to public. Importantly, if Alice didn't create \(c\) herself, if she got \(c\) from Bob, then the propagation contract she has with Bob might forbid Alice from publishing \(c\). Contributions marked as public may remain unseen by everyone except their creator. But the point of making a contribution public is to share it. Sharing a public contribution happens as described in SS3.3.4. Like other contributions, public contributions propagate through word of mouth (successive shares). So they don't necessarily propagate. A public entity like a university participating in the distributed MMM network may reject all public contributions it receives unless the contributions are from its affiliated researchers and are highly implanted in the university's local territory. From the moment a public contribution is shared, the publication is virtually irrevocable. This is because (1) the public status of a contribution can't be downgraded and (2) sharing is irrevocable. _Best practices for users_: use versioning and obsoleting mechanisms to share a correction (eg a typo correction) for a contribution that has already propagated over the distributed network. The author of a public contribution \(c\) shares equal **ownership** of **and control** over \(c\) with anyone who has a copy of \(c\). 
Anyone is free to make a local copy of \(c\), share \(c\) and annotate \(c\). No-one (not even the author) can directly modify the label and type of \(c\) (cf Table 3 below). However, anyone can modify the epistemic environment around \(c\) by (publicly) commenting on \(c\), supporting it, detailing it, nuancing it, questioning it, red-flagging it, relating it to other information, _etc_. So anyone can potentially sway the way others interpret \(c\). As mentioned before, sheer quantity and repetition of information in relation to \(c\) does not, however, necessarily translate into visibility of this information on the MMM (cf §3.2.4 and Suppl. Mat. B.5). So no single user or group of users has the exclusive power to definitively sway the interpretation of \(c\). Users can't delete public contributions from the MMM once they have been shared. They can only delete their own local copies. I recall that the MMM is not suited for the documentation of all kinds of content. It is especially ill-suited for content that _can't_ be disputed, such as feelings. It is better suited for analytical information that has some collective value, such as scientific contributions. All the current owners of a local copy of a public contribution \(c\) could mark \(c\) as obsolete and \(c\) could eventually entirely disappear from the MMM. Contributions that disappear shortly after they propagate might not be deemed of the same quality20 as persisting pervasive contributions. I propose that collective obsolescence be leveraged to inform the design of smart archiving mechanisms capable of identifying digital public information that is worth archiving (cf Suppl. Mat. A.1.3). ## 4 Conclusion I introduced the MMM data model and a notion of epistemic landscape based on it. I defined the MMM as the reunion of all epistemic landscapes. I detailed the different types of MMM contributions involved in the MMM and their attributes. I presented landscape-based activities centered around editing landscapes, consuming information and sharing information. Like the Semantic Web's underlying formalisms, the MMM's networked structure allows pieces of information to be connected to each other so that they gain in precision from context, and they can be reused and meaningfully processed - by humans in the case of the MMM. The MMM data model is assigned _loose_ semantics because, like the original Web21, the MMM is meant to accommodate a diversity of use cases involving humans at work contributing to scientific research and other evolving informational fields. Footnote 21: The CERN Web [3], not so much the Web centered around social media. I proposed to introduce rich metrics expanding our traditional definitions of informational quality. MMM metrics leverage the non-binary epistemic qualities of information captured in the MMM data model. They also account for the contextual "implantation" of information reflected by the MMM network topology. I formalised the notion of implantation and I evoked the incentives for authors to implant their contributions well. Implantation is central to my proposal. It is to promote connectedness of the MMM network, making every piece of information more likely to be findable from an arbitrary location in the MMM landscape.
\begin{table} \begin{tabular}{|l|c|c|} \hline \multicolumn{3}{|c|}{**Allowed landscape modifications depending on contribution status**} \\ \hline **Landscape modification:** & \multicolumn{2}{c|}{**Status of the concerned contribution:**} \\ & Private & Public \\ \hline Add a new contribution & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add an authorship & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add a tag & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add, modify or remove a mark & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Remove an author or authorship & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Remove a tag & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Change the endpoints of an edge & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add a label to an edge & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add a label to a pen & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Change the label of a contribution & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Change the concrete type of a contribution & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Add an author to an authorship list & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Upgrade the status & \(\blacktriangledown\) & – \\ \hline Downgrade the status & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline Change the id of a contribution & \(\blacktriangledown\) & \(\blacktriangledown\) \\ \hline \end{tabular} \end{table} Table 3: The different modifications that a user can apply to a MMM landscape, depending on the status of the contribution involved in the modification. \(\blacktriangledown\) stands for possible and \(\blacktriangledown\) stands for impossible. Landscape modifications only apply to a user's local territory. If the concerned contribution is sent to peers, the change may propagate through merges. For instance, Alice may add an authorship mentioning herself in the authorship set of contribution \(c\). Even if \(c\)'s status is public, the change only affects Alice's local copy of \(c\) at first, until \(c\) is sent to another user, Bob, and Bob accepts it. Some changes mandatorily propagate whenever there is an opportunity for them to – cf rows in dark grey. Some changes, such as changes relative to house-keeping marks, only propagate to a user's devices. Share contracts may settle how/if modifications affecting private contributions propagate. Restrictions on the conditions for modifying and propagating tags are to be determined. Connectedness can facilitate global redundancy mitigation. The supplementary material A proposes to further support it through automatic connection suggesting mechanisms (see in particular the lawless parachutist software component). The supplementary material A proposes a technological infrastructure to support the MMM. The MMM is organically distributed among peers who store the parts of the MMM network that are relevant to them and share parts that might be relevant to others. Globally, MMM contributions get replicated as they get shared and as peers decide to keep local copies for themselves. Locally, globally unique identifiers avoid duplication. I propose to encourage multiple UIs in order to accommodate the diversity of epistemic cultures. I defined a notion of local epistemic territory characteristic of a human user.
MMM contributions inputted through the various UIs used by a user are all to be funnelled to that user's territory. On the MMM, all granularities of informational resources can be documented, identified and referred to, including fine-grained informational resources like single ideas and coarse-grained informational resources like the entire contents of an article. Information consumers can _precisely_ select the pieces of information that they consume and keep copies of. Their local epistemic territories can grow without getting littered by irrelevant atomic pieces of information. Useless archival and useless exchanges of information between users can be mitigated. The "refrigeration" process mentioned in the supplementary material A is a possible MMM based alternative to the Internet Archive's Wayback Machine [6]. Valuable pieces of MMM contributions can be archived enmeshed as they are with the epistemic MMM landscape. Collective obsolescence ensures that any atomic piece of information that isn't of use to anyone, eventually disappears from the record and archiving concentrates on information deemed relevant. This proposal is primarily geared towards digital content sobriety. Enhanced epistemic democracy is expected to follow. The focus is on what people are _not_ interested in. My hypothesis is that to be well-informed one must have enough focus and time to reason and to analyse information [43]. One must not suffer from information overload. I propose to equip people with enhanced means of disregarding information, without necessarily missing out on relevant content. Alice may not need any details on how electricity works. Bob may not be interested in having an example of fungal dissemination mechanism. With our proposed MMM solution, Alice and Bob can be made aware of the availability of these resources without having to come into contact with them even if their current interests are taking them in the immediate epistemic vicinity of those resources. Assuming as I do that there is a huge amount of information that a person does _not_ want to have access to, and that different people are uninterested in different areas and different levels of information, allows us to regard each host of the MMM network as a point of view on the MMM record participating in the distributed pruning of information. The MMM proposal is the result of a compilation of ideas contributed by the scientific and entrepreneur communities over several years of discussions. Further collective participation is welcome to help address remaining questions in the aim of materialising the MMM. Indeed, a number of research questions spanning over multiple domains still require attention. Some are essential administration questions: _How should MMM identifiers be defined? How can their definition provide an indexing of MMM contributions that facilitates search over the MMM? How can share contracts be enforced? What should they regulate? Should data identifying authors be hashed into the MMM identifiers of the contributions they author in order to support authentication of shared MMM contributions? If so, should the merge operation be adapted to persist the hashed data?_ The local territories of different users, in particular different users of the same machine, may overlap. _Can we give granular access rights to MMM contributions so that the overlaps don't have to be duplicated on the disk?_ Requirements listed in SS1.5 limit future possibilities of modifying the main attributes of MMM contributions. 
However, the basic metadata attributes can and may need to be adapted to provide satisfactory answers to some of these non-epistemic questions. Other questions requiring further work concern the epistemic organisation of MMM information: _How can we nest fine-grained CRDTs in MMM contributions to support fast-paced collaboration? How is MMM subscription material identified and by whom, the serving host or the recipient host? What relevant MMM-based metrics can be defined? What relevant MMM-based filters can be defined? What are possible global repercussions of using certain filters systematically and universally? What learning mechanisms can be implemented on the MMM to enhance connectedness and promote other desirable global (topological) qualities of the MMM record?_ The architectural proposal made in the Supplementary Material proposes to persist MMM data in a graph database. _What should be the design of this database to optimize the different landscape-based activities - given that the MMM network is not exactly a graph because of pens and because of edges acting as nodes? What query language should we rely on?_ The supplementary material also mentions the possibility of documenting into the MMM the design choices and semantics of external data _schemas_ as well as possible known relations between data schemas. Because of the flexibility of the MMM data model, there is flexibility in the way external data models can be mapped to the MMM data model. _What external data models are worth mapping to the MMM data model, why and how should they be mapped?_ Perhaps most generative of questions is the interface between the MMM and formal ontologies. A preliminary RDF-MMM mapping is available on Gitlab [40]. As the MMM data model is not equivalent to RDF, different mappings are possible to serve different purposes in relation to the following questions: _How can formal ontologies thread the MMM and provide support for "long distance" MMM exploration and search? Conversely, can updated, possibly informally expressed knowledge documented on the MMM assist the design, evaluation, completion, alignment and generally the evolution of formal models? How can the MMM data model and its interface with standard ontology and knowledge graph formalisms be automatically leveraged to those ends? Are there kinds of information of interest in ontology engineering that are computationally easier to get from epistemic glue in the MMM than from formal inferences on represented knowledge?_ Answers to the latter questions would specify some additional incentives for populating the MMM. In a follow-up article, I propose to formalise a knowledge graph called the "**Socio-Economic Overlay Network**" (SEON) and its interface with the MMM. The SEON is to relay the MMM on informational content, like metadata, that is less amenable to discussion than typical MMM information. The questions listed above need to be specified, and the list is neither exhaustive nor definite. In addition to the research questions, the MMM poses a number of implementation challenges. The supplementary material A sketches a technological solution abiding by local-first principles [33]. It would be relevant to examine possible synergies and overlaps with ongoing initiatives and existing technologies. The technological solution proposed for the MMM encourages multiplicity of frontend interfaces with a standard MMM backend. Opportunities to interface existing tools with this backend are key.
The supplementary material A proposes a distributed architecture to support the MMM. The appropriate nature and conditions of network connections between hosts of this network need to be determined. _What protocols should they rely on? When can direct peer-to-peer connections be implemented? When are relay servers relevant?_
2309.04544
Intercavity polariton slows down dynamics in strongly coupled cavities
Band engineering stands as an efficient route to induce strongly correlated quantum many-body phenomena. Besides inspiring analogies among diverse physical fields, tuning on demand the group velocity is highly attractive in photonics because it allows unconventional flows of light. $\Lambda$-schemes offer a route to control the propagation of light in a lattice-free configurations, enabling exotic phases such as slow-light and allowing for highly optical non-linear systems. Here, we realize room-temperature intercavity Frenkel polaritons excited across two strongly coupled cavities. We demonstrate the formation of a tuneable heavy-polariton, akin to slow light, appearing in the absence of a periodic in-plane potential. Our photonic architecture based on a simple three-level scheme enables the unique spatial segregation of photons and excitons in different cavities and maintains a balanced degree of mixing between them. This unveils a dynamical competition between many-body scattering processes and the underlying polariton nature which leads to an increased fluorescence lifetime. The intercavity polariton features are further revealed under appropriate resonant pumping, where we observe suppression of the polariton fluorescence intensity.
Yesenia A García Jomaso, Brenda Vargas, David Ley Domínguez, Román Armenta, Huziel E. Sauceda, César L Ordoñez-Romero, Hugo A Lara-García, Arturo Camacho-Guardian, Giuseppe Pirruccio
2023-09-08T18:22:06Z
http://arxiv.org/abs/2309.04544v2
# Flatband slows down polariton dynamics in strongly coupled cavities ###### Abstract Flatbands in condensed-matter, atomic physics, and quantum optics stand as the basis for several strongly correlated quantum many-body phenomena such as Wigner crystallization, the fractional quantum Hall effect and Moire-related physics. Besides inspiring analogies among diverse physical fields, flatbands are highly sought-after in photonics because they allow unconventional light flows such as slow-light. Here, we realize room-temperature slow-light with Frenkel polaritons excited across two strongly coupled cavities. We demonstrate the formation of a tuneable flatband appearing in absence of a periodic in-plane potential. Our simple photonic architecture enables the unique spatial segregation of photons and excitons in different cavities and maintains a balanced degree of mixing between them. This unveils a dynamical competition between many-body scattering processes and the underlying polariton nature which leads to an increased fluorescence lifetime. The polariton features are further revealed under appropriate resonant pumping, where we observe suppression of the flatband polariton fluorescence intensity. In condensed matter physics, flatbands have led to the achievement of several breakthroughs, including strongly correlated electronic states [1], non-conventional superconductivity [2; 3; 4], and topological phases of matter in two-dimensional materials [5]. Similarly, within photonic systems, the presence of flat optical bands is highly desirable, as they could prompt the generation of strongly correlated photonic states [6; 7; 8; 9; 10]. This is not only fundamentally relevant, but could also have an impact on energy harvesting, sensing, and information processing. Slow-light, the phenomenon of controlling and manipulating dispersion of light [11], has received attention in quantum optics [12], atomic physics [13; 14] and condensed matter [15; 16]. The ability to tune the dispersion of light yields new opportunities to design light-matter interactions and non-linear optical devices [17; 18]. In the quantum domain, slow-light is typically understood in terms of polaritons [19], hybrid light-matter quasiparticles that arise from mixing a photon with an elementary matter excitation. The propagation of light in the form of a dark-state polariton within a medium can be slowed down and even stopped by heavily unbalancing the mixture of light and matter. In atomic gases, slow-light is typically accompanied by the phenomenon of electromagnetically induced transparency whereby, assisted by a control laser field, light propagates undamped in a typically opaque medium [20; 21]. **Slow-light and intercavity organic polaritons** Polaritons found in both organic and inorganic semiconductors have demonstrated a high degree of flexibility to control the internal energy structure and dynamics, unfolding alternative routes, [22; 23; 24; 25; 26; 27; 28] including the use of photonic lattices to craft flatbands [29; 30; 31; 32]. In this article, our proposal involves the successful implementation of a polariton flatband at room temperature through the strong coupling of a photonic cavity with a polaritonic cavity. The dispersion of the so-formed intercavity polaritons is tailored by tweaking the three-energy-level diagram. This adjustment of the diagram allows us to fine-tune the dispersion of the resulting intercavity polaritons. 
The observed flatband signals the transition from a bright to a dark polariton state characterized by a stretched polariton lifetime. The dark nature of such an intercavity flat polariton is indirectly confirmed by its suppressed light emission obtained under careful resonant pumping. Our findings suggest that reducing the polariton group velocity is akin to generating slow-light in atomic gases when the condition for electromagnetically induced transparency is fulfilled. Furthermore, inter-cavity polaritons open up quantum tomography protocols for local and independent measurements of the photonic and molecular degrees of freedom. Up until now, the exploration of strongly coupled photonic cavities has primarily centered around photonic crystal cavities [33], micro-rings [34], and whispering-gallery modes [35]. In the context of semiconductor microcavities, double-well potentials have led to the realization of phenomena like Josephson oscillations and self-trapping with intercavity polaritons [36; 37], and have been recently proposed for quantum chemistry control [38]. To realize slow-light we designed a three-level energy scheme representing two strongly coupled cavities. This set-up is reminiscent of the \(\Lambda\)-scheme commonly employed in quantum control experiments. Our system, sketched in Fig. 1(a), is composed of two cavities, labelled (Left) and (Right), coupled _via_ a thin mirror. The left cavity, represented by \(|\omega_{c}^{(L)}\rangle\), is purely photonic, whereas the right cavity hosts both photons and excitons, identified by \(|\omega_{c}^{(R)}\rangle\) and \(|\omega_{X}\rangle\), respectively. The Hamiltonian of the system is given by \[\hat{H}= \sum_{i=L,R}\omega_{c}^{i}(\theta)\hat{a}_{i}^{\dagger}\hat{a}_{i }-t(\hat{a}_{L}^{\dagger}\hat{a}_{R}+\hat{a}_{R}^{\dagger}\hat{a}_{L})+ \tag{1}\] \[+\omega_{X}\hat{x}^{\dagger}\hat{x}+\Omega\left(\hat{x}^{\dagger }\hat{a}_{R}+\hat{a}_{R}^{\dagger}\hat{x}\right)\] where \(\omega_{c}^{(L/R)}(\theta)\) indicates the energy of the left/right cavity photons, which are created with the operator \(\hat{a}_{L/R}^{\dagger}\). Here, the angle \(\theta\) corresponds to the incident angle of light injected into the right cavity. In Fig. 1(b), the photon hopping is characterised by a tunneling amplitude \(t\), while the light-matter coupling is given by the Rabi frequency \(\Omega\).
Figure 1: **Experimental setup.** (a) Representation of the hybrid photonic-polaritonic coupled system. (b) Relevant energy levels: \(|\omega_{c}^{(L)}\rangle\), \(|\omega_{c}^{(R)}\rangle\) and \(|\omega_{X}\rangle\) represent the left photon, right photon and exciton state, respectively. Photons tunnel from left to right cavity with a hopping amplitude \(t\). \(\Omega\) is the light-matter coupling.
The sample consists of two vertically stacked nanocavities fabricated on a glass substrate by a sequence of multiple sputtering and spin-coating steps. The front, middle and back mirrors are made of Ag and their thicknesses are equal to 20 nm, 20 nm and 300 nm, respectively. The middle mirror width determines the tunneling amplitude \(t\). The Left nanocavity is filled with polymethyl methacrylate (PMMA), whereas the Right one embeds a dye-doped polyvinyl alcohol (PVA) layer. The excitonic content is provided by a high concentration of homogeneously dispersed Erythrosin B (ErB) molecules [39].
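As a minimal numerical sketch (not part of the original analysis), the polariton branches of Eq. (1) can be obtained by diagonalising its single-excitation block in the (left photon, right photon, exciton) basis; the parameter values used below are purely illustrative.

```python
import numpy as np

def polariton_modes(w_L, w_R, w_X, t, Omega):
    """Diagonalise the single-excitation block of Eq. (1) in the
    (left-cavity photon, right-cavity photon, exciton) basis."""
    H = np.array([[w_L,  -t,    0.0],
                  [-t,   w_R,   Omega],
                  [0.0,  Omega, w_X]])
    E, V = np.linalg.eigh(H)        # eigenvalues sorted ascending: LP, MP, UP
    weights = np.abs(V)**2          # columns = Hopfield weights of each branch
    return E, weights

# Illustrative numbers only (eV); on resonance w_L = w_R = w_X.
w_X, t, Omega = 2.24, 0.10, 0.12
E, Z = polariton_modes(w_X, w_X, w_X, t, Omega)
print("LP/MP/UP energies:", E)
print("MP weights (L photon, R photon, exciton):", Z[:, 1])
# On resonance the MP weight on the right-cavity photon vanishes,
# reproducing the intercavity state of Eq. (2) up to a sign convention.
```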
The absorption spectrum of the active medium exhibits a main peak around \(\omega_{X}\approx 2.24\)eV associated to a principal exciton resonance and a second peak at \(\omega_{v}\approx 2.4\)eV related to the first vibron mode. The polymer thickness of the Left cavity features a slow wedge which provides us the possibility of fine tuning \(|\omega_{c}^{(L)}\rangle\) in a wide photon energy range. When the resonance frequency of the left cavity at normal incidence matches the exciton one, an intercavity polariton, \(|D(\theta=0)\rangle\), emerges \[|D(\theta=0)\rangle=\frac{\Omega}{\sqrt{\Omega^{2}+t^{2}}}|\omega_{c}^{(L)}( \theta=0)\rangle-\frac{t}{\sqrt{\Omega^{2}+t^{2}}}|\omega_{X}\rangle, \tag{2}\] solely formed by the superposition of the left cavity photon and the exciton. Importantly, the right cavity photon does not participate in the formation of this polariton state. Further details on the geometrical order of the cavities are provided in the Supplementary Information. In Fig. 2(a)-(c), we show the local band structure of the coupled cavity system measured via Fourier microscopy, which demonstrates that the \(\Lambda\)-scheme yields a middle polariton (MP) state with energies close to the bare exciton one. Fine-tuning of the MP energy is accomplished by leveraging the wedged PMMA thickness. The MP state shows a reduced dispersive character compared to the upper (UP) and lower polariton (LP) and flattens for a specific value of the thickness of the photonic cavity. To theoretically understand our results, we introduce the detuning, \(\delta=\omega_{c}^{(L)}-\omega_{X}\), and the imaginary-time Green's function of the system, \(\mathcal{G}_{\alpha,\beta}(\tau)=-\langle T_{\tau}[\hat{\psi}_{\alpha}(\tau) \hat{\psi}_{\beta}^{\dagger}(0)]\rangle\), where the subindices \(\alpha,\beta\) correspond to the left/right cavity photon and exciton, respectively. The fields \(\psi\) and \(\psi^{\dagger}\) evolve according to Eq. 1. We define the spectral function of the left cavity photon as \(A(\omega)=-2\text{Im}\mathcal{G}_{11}(\omega)\) and we plot it in Fig. 2(d)-(f) for three values of \(\delta\), showing a very good agreement with the experimentally observed reflectance. By continuously varying the left cavity thickness, its photon energy is driven in resonance with the exciton energy. Furthermore, the energy of the polariton states obtained from our theory and plotted with dashed curves on the reflectance maps, Fig. 2(a)-(c), provide an excellent quantitative understanding of the system. We obtain a reduction of the photonic dispersion of the left cavity photons \[\omega_{\text{MP}}(\theta)\approx\omega_{X}+\left[\omega_{c}^{L}(\theta)- \omega_{X}\right]\frac{1}{1+\left(\frac{t}{\Omega}\right)^{2}}, \tag{3}\] this means that the dispersion of the MP is controlled by the tunnelling ratio, \(t\), which can be tailored by means of the middle mirror thickness. On resonance, the MP emerges at the energy of the bare exciton, its dispersion reduced by a factor 4-8 compared to the energies of the upper and lower polaritons, this leads to a pronounced flatband over a wide range of incident angles. While the dispersion can be further reduced, this would be at the expense of strongly suppressing the photonic component. The character of the MP is clearly unveiled once it is written in terms of the bare photon and exciton states \(|\text{MP}\rangle=\sum_{\alpha}\mathcal{C}_{\text{MP}}^{\alpha}|\alpha\rangle\). 
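The flattening of the MP described by Eq. (3) can also be checked against the exact middle eigenvalue of Eq. (1). The short sketch below does so under the simplifying assumption of a standard planar-cavity dispersion \(\omega_{c}(\theta)=\omega_{c}(0)/\sqrt{1-\sin^{2}\theta/n^{2}}\) and with illustrative parameters, so the numbers are not those of the measured sample.

```python
import numpy as np

w_X, t, Omega, n_eff = 2.24, 0.10, 0.12, 1.5   # illustrative energies (eV) and index

def cavity(theta, w0=w_X, n=n_eff):
    """Approximate planar-cavity photon dispersion versus incidence angle."""
    return w0 / np.sqrt(1.0 - (np.sin(theta) / n)**2)

def mp_exact(w_L, w_R):
    """Middle eigenvalue of the 3x3 Hamiltonian of Eq. (1)."""
    H = np.array([[w_L, -t, 0.0], [-t, w_R, Omega], [0.0, Omega, w_X]])
    return np.sort(np.linalg.eigvalsh(H))[1]

for deg in (0, 10, 20, 30):
    th = np.radians(deg)
    w_L = w_R = cavity(th)                      # delta = 0 at normal incidence
    approx = w_X + (w_L - w_X) / (1.0 + (t / Omega)**2)   # Eq. (3)
    print(f"{deg:2d} deg: MP exact = {mp_exact(w_L, w_R):.4f} eV, Eq. (3) = {approx:.4f} eV")
```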
The amplitude of its Hopfield coefficients \(Z_{\text{MP}}^{\alpha}=|\mathcal{C}_{\text{MP}}^{\alpha}|^{2}\) in Fig. 2(g)-(i) demonstrates that the MP decouples from the right cavity photons for \(\delta=0\). Furthermore, Fig. 2(i) shows that the Hopfield coefficients remain invariant along the angular region where the MP is flat. There, the MP has non-vanishing Hopfield coefficients only for the left cavity photon and the exciton. For conventional polaritons arising in two-level schemes, suppression of the polariton dispersion can only be achieved by compromising the degree of mixing between photons and excitons. This means that only far-detuned polaritons asymptotically exhibit quasi-flat dispersion and, thus, are formed by a large exciton component and negligible photon contribution. In contrast, our three-level design allows for flatband polaritons and slow-light, retaining a significant photonic component of circa 40%.
Figure 2: **Intercavity polaritons.** (a)-(c) s-polarized reflectance as a function of the angle of incidence and photon energy for (a) \(\delta/\mathrm{eV}=-0.37\), (b) \(\delta/\mathrm{eV}=-0.18\) and (c) \(\delta/\mathrm{eV}=0\). The white, red, and black dashed curves correspond to the theoretical fitting of the energies of the lower, middle, and upper polariton, respectively. (d)-(f) Spectral function, \(A(\mathbf{k},\omega)\), calculated for the same detunings as in (a)-(c). Middle polariton Hopfield coefficients for (g) \(\delta/\mathrm{eV}=-0.37\), (h) \(\delta/\mathrm{eV}=-0.18\) and (i) \(\delta/\mathrm{eV}=0\). The black, blue and red curves correspond to the left cavity photon, right cavity photon and exciton component, respectively.
**Short-time dynamics.** Now we focus on the effect of the vanishing curvature of the MP, hinting at a mechanism analogous to the charge population trapping observed in atomic electromagnetically induced transparency. For this, we measure the prompt fluorescence decay employing the time-correlated single-photon counting technique. We explore the dynamics of the LP and MP and relate them to the corresponding photon component. The coupled cavities are pumped off-resonantly with laser pulses centered around 2.42 eV. Figure 3(a) displays decays for two of the detunings used in Fig. 2, i.e., \(\delta/\mathrm{eV}=0\) and \(\delta/\mathrm{eV}=-0.37\). In the considered time interval, all decays are well described by two time constants. For our analysis we focus on the initial dynamics illustrated in the pink shaded region. We model the evolution assuming a dynamical equation of the form \(C(t)=A\exp(-\Gamma t)+(1-A)\exp(-\eta t)\), normalized to \(C(t=0)=1\). Figure 3(a) shows that this model (solid curves) provides a very good fit to the experimental data (points). We obtain that the LP dynamics is dominated by a single exponential with \(\Gamma_{\mathrm{LP}}(\delta=0)=3.25\) ns\({}^{-1}\) and \(A_{\mathrm{LP}}=0.94\), while \(\Gamma_{\mathrm{LP}}(\delta=-0.37)=3.0\) ns\({}^{-1}\) with \(A=0.86\). On the other hand, we find a richer dynamics for the MP, characterised by \(\Gamma_{\rm MP}(\delta=0)=0.18\) ns\({}^{-1}\) and \(\eta_{\rm MP}(\delta=0)=3.25\) ns\({}^{-1}\) for the flatband condition, and \(\Gamma_{\rm MP}(\delta=-0.37)=0.19\) ns\({}^{-1}\) and \(\eta_{\rm MP}(\delta=-0.37)=4.57\) ns\({}^{-1}\) for the detuned case. In contrast to the LP, the weight of the exponentials for the MP is almost equally distributed, with \(A(\delta=0)=0.55\) and \(A(\delta=-0.37)=0.5\). The significantly slower dynamics of the MP shown in Fig.
3(a) is a consequence of the interplay between the two dynamical factors. Indeed, we observe that at very small times, in the linear regime, \(C_{\rm LP}(t)\approx 1-\Gamma_{\rm LP}t\), whereas \(C_{\rm MP}(t)\approx 1-[A\Gamma_{\rm MP}+(1-A)\eta_{\rm MP}]t=1-\gamma_{\rm eff}t\). On resonance, \(\delta=0\), we obtain \(\gamma_{\rm eff}\approx 1.7\) ns\({}^{-1}\), which is significantly smaller than the dominant decay rate of the LP. We attribute this effect to a competition between a reduced photon component of the MP, which suppresses the damping rate, and the presence of the dark-state reservoir of excitons lying close to the energy of the MP, which may favour non-radiative scattering and accelerated decay. We observe that the dynamics of the MP is slower than the LP one, even though the MP energetically lies on top of the reservoir of dark-state excitons (see Fig. 3(b)). Therefore, we conclude that the vanishing curvature of the MP and its concomitant dark nature produce a significant effect in the polariton dynamics in the nanosecond range. Finally, we see that the decay of the LP becomes slower as it approaches the energy of the ErB triplet state, \(|T_{1}\rangle\). This is evident from the asymptotic intensity value of the decay curve, which does not converge to the noise floor of the other curves. Here, intersystem crossing not only increases significantly the slow component of the decay, but also stretches the fast one, outcompeting the role played by the photonic Hopfield coefficients and producing long-lived polaritons.
Figure 3: **Short-time dynamics and dark-state polaritons.** (a) Normalized fluorescence lifetime showing the short-time dynamics of the middle and lower polaritons. The dynamics of the MP is displayed with orange and green asterisks for \(\delta/{\rm eV}=-0.37\) and \(\delta/{\rm eV}=0\), respectively. The LP decay is illustrated with blue and purple asterisks for \(\delta/{\rm eV}=-0.27\) and \(\delta/{\rm eV}=0\), respectively. Shaded pink depicts the region where the fastest decay component dominates the early polariton dynamics. (b) Sketch of the relevant energy levels. Fine-tuning the left cavity photon energy shifts the energy of the three polariton states. The grey area symbolizes the presence of the dark exciton reservoir. The triplet state of the ErB molecules, \(|T_{1}\rangle\), influences the LP dynamics via intersystem crossing. (c)-(d) s-polarized fluorescence, expressed in counts per integration time, for \(\delta/{\rm eV}=-0.37\) and \(\delta/{\rm eV}=0\), respectively. The energy of the bare exciton is represented by the dashed white line. Photons are injected at \(\omega_{\rm p}/{\rm eV}=2.6\), which lies at the energy of the upper polariton. (e) Hopfield coefficients of the UP for \(\delta/{\rm eV}=0\), showing that it is predominantly formed by right cavity photons. The black, blue and red curves correspond to the left cavity photon, right cavity photon and exciton component, respectively.
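The bi-exponential model and the short-time rate \(\gamma_{\rm eff}=A\Gamma+(1-A)\eta\) discussed above can be illustrated with the following fitting sketch; the synthetic trace and its parameters are stand-ins for a measured histogram, not the actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, Gamma, eta):
    """C(t) = A exp(-Gamma t) + (1 - A) exp(-eta t), with C(0) = 1."""
    return A * np.exp(-Gamma * t) + (1.0 - A) * np.exp(-eta * t)

rng = np.random.default_rng(1)
t_ns = np.arange(0.0, 6.0, 0.1)                      # 100 ps bins over 6 ns
trace = biexp(t_ns, 0.55, 0.2, 3.0) + 0.01 * rng.normal(size=t_ns.size)

(A, Gamma, eta), _ = curve_fit(biexp, t_ns, trace, p0=(0.5, 0.5, 3.0))
gamma_eff = A * Gamma + (1.0 - A) * eta              # initial slope: C(t) ~ 1 - gamma_eff * t
print(f"A = {A:.2f}, Gamma = {Gamma:.2f} 1/ns, eta = {eta:.2f} 1/ns, gamma_eff = {gamma_eff:.2f} 1/ns")
```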
**Photoluminescence** To further investigate the nature of the polariton states we now turn our attention to the steady-state fluorescence. The system is pumped with a CW laser emitting at \(\omega_{p}=2.62\) eV, which corresponds to an off-resonant excitation for all negative detunings. This means that we inject photons approximately equally in both cavities and produce excitons in the right cavity. For \(\delta/\mathrm{eV}=-0.37\), the MP is formed almost equally by both left and right cavity photons, while the LP is formed predominantly by left cavity photons. Thus, we observe a higher fluorescence intensity from the MP than from the LP, as seen in Fig. 3(c). In the Supplemental Information we show that, as we decrease the detuning, the LP acquires a larger fraction of the photon and exciton component of the right cavity, which consistently leads to an increased LP fluorescence. However, as we approach the condition \(\delta/\mathrm{eV}=0\), the UP energy shifts until it matches the pump energy at normal incidence. As shown in Fig. 3(e), the UP for \(\delta/\mathrm{eV}=0\) is primarily formed by right cavity photons. Therefore, this configuration preferentially injects photons into the right cavity. As we move towards the condition for the flatband polariton, the right cavity photon component of the MP decreases until it vanishes exactly. Decoupling the MP from the right cavity photons results in an inefficient excitation of the MP and, thus, a strongly suppressed MP fluorescence intensity, as demonstrated in Fig. 3(d), confirming the transition of the MP to a dark state. **Conclusions and Outlook** We have experimentally demonstrated the formation of inter-cavity polaritons composed of the admixture of photons and excitons sitting in physically separated optical cavities. The three-energy-level \(\Lambda\)-scheme underpinning the physics of this system implies, on resonance, the existence of a polariton flatband. We observed the flattening of the middle polariton branch, which transitions smoothly to a dark state upon tweaking the photon-exciton detuning. This mechanism effectively decouples the flatband polariton from free space, leading to slower short-time dynamics. On resonance, the absence of one of the photon states in the composition of the dark polariton is confirmed by the steady-state fluorescence, whose intensity is suppressed under resonant pumping of the upper polariton branch. The interplay of the polaritons with the exciton reservoir needs to be taken into account, as this introduces entropically favored scattering paths that may hamper the observation of more exotic physics. However, we stress that the modification of the short-time dynamics hints that slow-light may overcome the effect of the reservoir. This motivates further studies on the interplay between the dark-state dynamics and the possible breakdown of the quasiparticle picture [39, 40, 41]. Hybrid photonic-polariton systems are an ideal platform to explore many-body physics in multi-level coupled cavity systems. This shares an analogy with interlayer excitons observed in stacked 2D materials, where the indirect character of the interlayer excitons gives rise to strongly interacting many-body states [42], long-lived exciton-polaritons [43, 44], and new classes of polaritons [45, 46]. Inspired by how twistronics in stacked 2D materials gave rise to non-conventional superconductivity [47], we envisage that polariton flatbands can be useful to increase polariton correlations and facilitate the observation of non-trivial quantum phases in lattice-free systems. Moreover, our strategy to generate slow-light does not compromise the photonic component of the flatband polariton, making it appealing for strongly correlated, truly polaritonic states. The reduced in-plane propagation helps to spatially confine polaritons without the additional complication of fabricating physical boundaries. This may lower the threshold needed for quantum phase transitions while keeping the architecture of the system simple.
Furthermore, the quantum entanglement between the photonic and molecular degrees of freedom may be unraveled by exploiting the spatially indirect character of the intercavity polariton. **Methods** _Sample fabrication_ The sample is composed by two vertically stacked Fabry-Perot cavities fabricated on a glass substrate \(10\times 10\) mm\({}^{2}\) by multiple successive sputtering and spin-coating steps. The bottom 300 nm-thick Ag mirror was fabricated by magnetron sputtering operated at room temperature and a base pressure of approximately \(10^{-6}\) Torr which, during deposition, is pressurized with argon flow to \(3\times 10^{-3}\) Torr. We deposited 99.99% purity Ag at a rate of 0.08 nm/s. The active layer of the first cavity is obtained starting from a solution of 25 mg of polyvinyl alcohol (PVA, Mowid 44-88, 86.7-88.7% hydrolyzed, Mw \(\approx\) 205 000 g/mol) dispersed in 1 mL of distilled water. Then, 9.8 mg of Erythrosin B (ErB, Sigma Aldrich with dye content \(>\)90%) was added to the PVA/water solution, yielding a 0.5 M concentration. The ErB/PVA thin films were deposited by spin-coating at 2100 rpm using a 0.45 \(\mu\)m pore PTFE syringe filter, obtaining approximately 120 nm thickness. The first cavity is completed by fabricating a 20 nm-thick middle mirror on top of the active layer. The second cavity is formed by a Polymethyl methacrylate (PMMA, Mw \(\approx\) 120 000 g/mol) layer embedded between the middle and top mirror. This layer is obtained starting from a 25 mg/mL solution of PMMA. The solution is spin-coated at 2600 rpm for 60 s using a 0.45 \(\mu\)m pore PTFE syringe filter and provides a slow thickness gradient centered around 140 nm. Using PMMA instead of PVA avoids the formation of micro bubbles at the surface of the second cavity. _Experimental set-up_ Energy-momentum spectroscopy is performed in a homemade confocal Fourier optical microscope. Imaging the back-focal plane of a high numerical aperture microscope objective onto the entrance slit of a spectrograph (Kymera 328i, Andor) coupled to a sCMOS camera (Zyla 4.2P, Andor) is done by a Bertrand lens and provides direct access to the angular- and spectral-resolved reflectance. In our set-up the sample is illuminated through a Plan Fluor 50x/0.8 NA objective (Nikon) with white light emitted by a halogen lamp. The focal spot full-width at half-maximum equals 14 \(\mu\)m. The collected light is dispersed by a diffraction grating (150 lines mm, blazed at 500 nm). Two linear polarizers in the excitation and collection path are used to select the s- or p- polarization. Angular-resolved reflectance is obtained by replacing the cavity with a commercial mirror, which allows to normalize the spectra reflected off the cavity at each angle with those obtained with the mirror at the corresponding angles. Angular-resolved steady-state fluorescence is measured by pumping the coupled cavity with a 473 nm continuous wave laser (Excelisor 473, Spectra Physics) coupled to the Fourier microscope in epi-illumination configuration and focused down to 1 \(\mu\)m. The laser power is attenuated to 30 mW to avoid local damaging of the sample. The pump laser is filtered by a 500 nm long pass filter and its polarization is selected by the same broadband linear polarizer used for reflectance. The measurements for different detunings are collected using the same integration time to ensure comparability. Lifetime measurements are performed by a homemade time-correlated single-photon counting module coupled to the Fourier microscope. 
100 ps laser pulses centered around 513 nm (LDH-P-C-520M, PicoQuant) are focused on the sample surface by the same epi-illumination path used for the steady-state fluorescence. We use a repetition rate of 10 MHz and an intensity such that the average photon count rate at the detector is always 0.04% of the excitation rate. The focus diameter is roughly 1 \(\mu\)m. The pump beam is filtered by a 525 nm long pass filter, while an appropriate 10 nm full-width at half-maximum band pass filter selects the LP or MP normal incidence wavelength for each detuning. The emitted light follows the same optical path as reflectance and steady-state fluorescence but is directed to a single-photon avalanche photodiode (MPD). The trigger from the laser driver (PDL 800-D, PicoQuant) and the signal from the detector are sent to a time-to-digital converter (Time Tagger 20, Swabian Instruments). All histograms are built with 100 ps bin width. The instrument response function (IRF) has been measured in several ways to check consistency of the result and ensure a reliable exponential fit for the short-time dynamics. The relation between reflectance, fluorescence and decay measurements is guaranteed by the overlapping focus spots and the slow gradient slope of the PMMA layer thickness. _Detuning-resolved measurements_ In order to access experimentally a large set of detunings in a single sample, we designed the PMMA layer to exhibit a slow and almost linear gradient towards the peripheral zones. This radial gradient is controlled by the spin-coating rotation speed. A radial sample movement of one millimeter corresponds to increasing the PMMA thickness by approximately 50 nm. On the other hand, the ErB/PVA layer featured a constant thickness throughout the sample. The position of the focal spot is controlled by micrometer screws that permit shifting the sample in the focal plane of the microscope objective. _Theoretical approach.-_ The Green's functions follow the Dyson equation \(\mathcal{G}^{-1}(z)=[\mathcal{G}^{(0)}(z)]^{-1}-\Sigma(z)\), with the non-vanishing terms \[\mathcal{G}^{(0)}_{11}(z)=\frac{1}{z-\omega_{c}^{L}(\theta)},\qquad\mathcal{G}^{(0)}_{22}(z)=\frac{1}{z-\omega_{c}^{R}(\theta)},\qquad\mathcal{G}^{(0)}_{33}(z)=\frac{1}{z-\omega_{X}}, \tag{4}\] and the self-energy given by \[\Sigma_{12}(z)=\Sigma_{21}(z)=-t,\qquad\Sigma_{23}(z)=\Sigma_{32}(z)=\Omega. \tag{5}\] The Green's function can be obtained analytically in this case; in particular, we obtain \[\mathcal{G}_{11}(z)=\frac{1}{\left[\mathcal{G}^{(0)}_{11}(z)\right]^{-1}-t^{2}\,\mathcal{G}^{(0)}_{22}\!\left(z-\Omega^{2}\mathcal{G}^{(0)}_{33}(z)\right)}. \tag{6}\] After analytic continuation \(z\rightarrow\omega+i0^{+}\), the energies of the polaritons are obtained from the position of the poles of the Green's function, \[\mathrm{Re}[\mathcal{G}^{-1}_{11}(E)]=0, \tag{7}\] which has the three solutions coined lower, middle, and upper polariton. The residue \[Z=\left.\left(\frac{\partial\mathrm{Re}[\mathcal{G}^{-1}_{11}(\omega)]}{ \partial\omega}\right)^{-1}\right|_{\omega=E} \tag{8}\] gives the left-cavity-photon spectral weight (Hopfield coefficient) of each polariton branch. The dispersion of the cavity photons is given by \(\omega_{c}^{(R/L)}(\mathbf{k})=\frac{c}{n_{c}^{(R/L)}}\sqrt{k_{z}^{2}+k_{||}^ {2}}\), where the incident light is along the \(z\) axis, perpendicular to the cavity mirrors, and the angle \(\theta\) is given by \(k_{||}=n_{c}^{(R/L)}\frac{\omega}{c}\sin\theta\).
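A small numerical sketch of Eqs. (4)-(8), with made-up parameter values, shows how the polariton energies and the left-photon weights follow from the pole structure of \(\mathcal{G}_{11}\):

```python
import numpy as np

w_L = w_R = w_X = 2.24      # illustrative resonant energies (eV)
t, Omega = 0.10, 0.12       # illustrative tunneling and Rabi couplings (eV)

def g11_inv(w):
    """Re[G11^{-1}(w)] assembled from the bare propagators and self-energies, Eqs. (4)-(6)."""
    return (w - w_L) - t**2 / ((w - w_R) - Omega**2 / (w - w_X))

# Eq. (7): clearing denominators turns Re[G11^{-1}(E)] = 0 into a cubic in E.
poly = [1.0,
        -(w_L + w_R + w_X),
        w_L*w_R + w_L*w_X + w_R*w_X - Omega**2 - t**2,
        -w_L*w_R*w_X + Omega**2 * w_L + t**2 * w_X]
E_LP, E_MP, E_UP = np.sort(np.roots(poly).real)

def residue(E, h=1e-6):
    """Eq. (8): quasiparticle weight from the slope of G11^{-1} at the pole."""
    return 2.0 * h / (g11_inv(E + h) - g11_inv(E - h))

for name, E in (("LP", E_LP), ("MP", E_MP), ("UP", E_UP)):
    print(f"{name}: E = {E:.4f} eV, left-photon weight Z = {residue(E):.3f}")
```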
_Acknowledgments.-_ We thank Joel Yuen-Zhou for the critical reading of our manuscript and valuable discussions. G. P. acknowledges financial support from Grant UNAM DGAPA-PAPIIT No. IN104522 and CONACyT projects 1564464 and 1098652. H. L. G. acknowledges financial support from Grant UNAM DGAPA-PAPIIT No. IA107023. A. C. G. acknowledges financial support from Grant UNAM DGAPA-PAPIIT No. IN108620. C. L. O-R acknowledges financial support from Grant UNAM DGAPA-PAPIIT IG100521. H.E.S. acknowledges support from DGTIC-UNAM under Project LANCAD-UNAM-DGTIC-419 and from Grant UNAM DGAPA-PAPIIT No. IA106023. A.C.-G, G. P. and H. E. S acknowledge financial support of PIIF 2023. H.E.S. acknowledges Carlos Ernesto Lopez Nataren for helping with the high-performance computing infrastructure. H. A. L.-G, A.C.-G, G. P acknowledge support from Grant UNAM DGAPAPIIE No. PE101223. Contributions Y.A.G.-C, B. V., D. L. D, C. L. O.-R., H. L.-G, and G. P performed the experiments. R. A, H. E. S., and A. C.-G provided the theoretical analysis. H. A. L.-G., A. C.-G and G. P wrote the paper, with input from all authors. A. C.-G and G. P. designed the project. Correspondence to Arturo Camacho Guardian and Giuseppe Pirruccio.
2309.05882
Fluid Dynamic Simulations of Mach and Regular Reflections in Oblique Shock-Wave Configurations using Adaptive Mesh Refinement
In the context of the interaction between a moving plane shock wave and an inclined wall (wedge), it is possible to distinguish four distinct shock reflection configurations. These shock wave reflections, which depend on the characteristics of the incident shock wave and the geometry of the surface that it interacts with, are (i) regular reflection (RR), (ii) simple Mach reflection (SMR), (iii) transition Mach reflection (TMR), and (iv) double Mach reflection (DMR). The impact of these shock reflections on flow properties can be significant so understanding them is important when predicting the behavior of shock waves in more complex flow configurations. Previous research works have explored the referred shock reflections through both numerical and experimental approaches, employing various gases and different flow and geometrical configurations. The present study involves the use of a high-fidelity computational fluid dynamics (CFD) tool, known as PeleC, which is a compressible solver based on AMReX specifically designed to handle complex flow configurations. Accordingly, by solving the time-dependent Euler equations for various 2D flow configurations, this work studies shock wave reflections accounting for four different Mach-based operating conditions and compares and analyzes the resulting density profiles on the wedge wall with experimental data. To strike a balance between model accuracy and computational efficiency, adaptive mesh refinement (AMR) is incorporated, and a mesh independence study is performed by varying the number of AMR levels. The results of this study demonstrate the capabilities of the CFD tool employed as it accurately predicts the sensitivity of wave characteristics to different operating conditions.
Sebastian Valencia, Cesar Celis, Andres Mendiburu, Luis Bravo, Prashant Khare
2023-09-12T00:05:47Z
http://arxiv.org/abs/2309.05882v1
# Cob-2023-0803 ###### Abstract In the context of the interaction between a moving plane shock wave and an inclined wall (wedge), it is possible to distinguish four distinct shock reflection configurations. These shock wave reflections, which depend on the characteristics of the incident shock wave and the geometry of the surface that it interacts with, are (i) regular reflection (RR), (ii) simple Mach reflection (SMR), (iii) transition Mach reflection (TMR), and (iv) double Mach reflection (DMR). The impact of these shock reflections on flow properties can be significant so understanding them is important when predicting the behavior of shock waves in more complex flow configurations. Previous research works have explored the referred shock reflections through both numerical and experimental approaches, employing various gases and different flow and geometrical configurations. The present study involves the use of a high-fidelity computational fluid dynamics (CFD) tool, known as PeleC, which is a compressible solver based on AMReX specifically designed to handle complex flow configurations. Accordingly, by solving the time-dependent Euler equations for various 2D flow configurations, this work studies shock wave reflections accounting for four different Mach-based operating conditions and compares and analyzes the resulting density profiles on the wedge wall with experimental data. To strike a balance between model accuracy and computational efficiency, adaptive mesh refinement (AMR) is incorporated, and a mesh independence study is performed by varying the number of AMR levels. The numerical method utilized here is based on a finite volume discretization, involving approximate Riemann solvers. Temporal and spatial integration is performed using the method of lines (MOL), a second-order characteristic-based spatial method, coupled with a Runge-Kutta time integration. The time step obeys a specified Courant-Friedrichs-Lewy (CFL) condition of 0.3. The results of this study demonstrate the capabilities of the CFD tool employed as it accurately predicts the sensitivity of wave characteristics to different operating conditions. The findings of this work will serve as a foundation for future studies involving more complex flow configurations such as those featuring detonation waves. Wedge flows, Shock waves, Wave propagation and interaction, Euler equations, AMR. ## 1 Introduction Shock wave reflection is a fundamental phenomenon in gas dynamics that has attracted the attention of researchers for over a century. It occurs when a shock wave encounters a surface or another shock wave. The phenomenon was first observed by Ernst Mach in 1878, who experimentally identified two types of reflection configurations, (i) regular reflection (RR) and (ii) Mach reflection (MR) (Mach, 1878). In RR, an incident shock wave and a reflected shock one meet at a reflection point on a reflecting surface. A MR involves in turn one slipstream and three shock waves, the incident, the reflected and the Mach stem ones. Later on, von Neumann proposed the two- and three-shock theories for treating RR and MR, respectively, assuming the flow of an ideal gas to be inviscid (von Neumann, 1943). White and Smith later identified four shock reflection patterns, (i) RR, (ii) single-Mach reflection (SMR), (iii) transitional-Mach reflection (TMR), and (iv) double-Mach reflection (DMR) (White, 1952; Smith, 1945). In SMR, the point of convergence between the incident and reflected shock waves is situated above the wedge. 
At this convergence point, a third shock known as the Mach stem extends towards the surface of the wedge. Additionally, a curved shear layer called the slipstream trails behind the triple shock convergence point as the shocks propagate along the wedge. In DMR, a bend occurs in the reflected shock wave, giving rise to a second Mach stem. TMR marks the onset of double Mach reflection, where the second triple point is barely visible and manifests itself as a slight bend in the reflected shock (Hryniewicki et al., 2016). The detachment criterion between RR and MR was further investigated by Henderson and Lozzi (1975), who established that, if the transition follows the detachment criterion, a discontinuous transition pressure jump exists. Hence, another criterion named the mechanical equilibrium one is defined based on flow configurations that always fulfill the mechanical equilibrium during transition processes between Mach and regular reflections. These criteria resulted in transitional boundaries that distinguish the different shock wave reflection regions (Henderson and Lozzi, 1975). Deschambault and Glass (1983) conducted experimental investigations that demonstrated that, when compared to the ideal gas case, the use of properties of real gases does not significantly affect the four types of shock wave reflections. They indeed obtained reliable data for the four shock reflection types in air by utilizing infinite-fringe interferometric techniques (Deschambault and Glass, 1983). Ben-Dor et al. (1977) studied in turn a planar shock reflection over a plane double wedge and considered several complicated wave configurations (Ben-Dor et al., 1987). Further studies on the reflection of planar shock waves over different solid walls have been performed in the past both numerically and experimentally (Previtali et al., 2015; Zhang et al., 2016; Geva et al., 2017; Hryniewicki et al., 2017). Accordingly, to assess the capabilities of the computational tool employed here, this work carries out a detailed analysis of regular and Mach reflections using PeleC, a compressible solver based on AMReX. The numerical results obtained in this work are compared with the experimental data from Deschambault and Glass (1983), showing that the numerical model employed here is capable of accurately capturing the supersonic flow characteristics of regular and Mach reflections. ## 2 Theoretical Analysis ### Principles of Regular Reflection Normal shock waves are characterized by a sudden and significant increase in pressure and temperature, as well as a decrease in velocity, across the shock wave. The properties of normal shock waves can be calculated using the Rankine-Hugoniot relations (Houghton, 2017), which relate the upstream and downstream states of the gas to the properties of the shock wave. These relations describe the conservation of mass, momentum, and energy across the shock wave. The properties of normal shock waves depend on the Mach number of the incoming flow \(M_{i}\).
Based on the conventional Rankine-Hugoniot equations, the detailed flow characteristics are computed as follows (Houghton, 2017), \[\frac{\rho_{2}}{\rho_{1}}=\frac{\left(1+\gamma\right)M_{i}^{2}}{\left(\gamma-1\right)M_{i}^{2}+2}, \tag{1}\] \[\frac{P_{2}}{P_{1}}=\frac{2\gamma M_{i}^{2}-\left(\gamma-1\right)}{\gamma+1}, \tag{2}\] \[\frac{a_{2}}{a_{1}}=\sqrt{\frac{T_{2}}{T_{1}}}=\sqrt{\frac{P_{2}\rho_{1}}{P_{1}\rho_{2}}}, \tag{3}\] \[\frac{U_{2}-U_{1}}{a_{1}}=\frac{2\left(M_{i}^{2}-1\right)}{M_{i}\left(\gamma+1\right)}, \tag{4}\] where the flow properties temperature \(T\), density \(\rho\), pressure \(P\), sound speed \(a\), and velocity \(U\) denoted by the subscript \(1\) characterize the flow conditions before the shock, while those properties denoted by subscript \(2\) correspond to the flow conditions after the shock. \(\gamma\) is in turn the specific heat ratio. ### Analytical Boundaries for RR-MR transition The theoretical boundaries that define the RR-MR transition, i.e., the detachment boundary, the sonic boundary, and the von Neumann boundary, are obtained from the analytical solutions determined by von Neumann (1943). The work of von Neumann was later studied by Henderson and Lozzi (1975), who provided an analytical expression for these transition boundaries. Figure **1** shows that there are three regions determined by the mechanical equilibrium and detachment boundaries, (i) an upper region only for regular reflection, (ii) a dual region for either regular reflection or Mach reflection between the two boundaries, and (iii) a lower region for only Mach reflection (SMR, TMR, and DMR). The physically realizable solution that defines the detachment boundary comes from the two-shock theory (von Neumann, 1943). Expressed in terms of wedge angle \(\theta\) and incoming flow Mach number \(M_{l}\), this solution is given by, \[\cos\theta=\frac{1}{a+2e\cos(f/3)}, \tag{5}\] where, \[\begin{array}{c}a=\frac{1+(\gamma-1)d}{3},\qquad b=2d-d^{3},\qquad c=\gamma d^{2},\qquad d=\frac{2}{\gamma+1}\frac{M_{l}^{2}-1}{M_{l}^{2}},\qquad e=\sqrt{a^{2}+\frac{b}{3}},\\ f=\cos^{-1}\left(\frac{ab+2a^{3}-c}{2e^{3}}\right).\end{array} \tag{6}\] The mechanical equilibrium criterion in turn is established based on the three-shock theory (von Neumann, 1943). The transition from Mach reflection to regular reflection appears when the triple-point trajectory angle diminishes to zero. In this situation, the Mach stem decreases to an infinitesimal length and the slipstream disappears. The relation between the wedge angle \(\theta\) and the incoming flow Mach number \(M_{l}\) is shown to be, \[\cos^{2}\theta=\frac{c}{b+\sqrt{b^{2}-ac}}, \tag{7}\] where, \[\begin{array}{c}a=4d+2(\gamma-1)(\gamma+2)d^{2}-(\gamma^{2}-1)d^{3},\qquad b=\gamma+3-\frac{1}{2}(5-\gamma)(\gamma+1)d+2\gamma d^{2},\\ c=4-4d,\qquad d=\frac{2}{\gamma+1}\frac{M_{l}^{2}-1}{M_{l}^{2}}.\end{array} \tag{8}\] Finally, the sonic boundary is given by a fifth-order polynomial in terms of \(\sin^{2}\theta\) versus the inverse incident shock strength \(P_{1}/P_{2}\), and it is not considered here because this boundary lies very close to the detachment one, differing by less than half a degree for each given value. Figure 1: Regions of RR and MR patterns separated by analytical transition boundaries for air (Hryniewicki et al., 2016).
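The jump relations and the detachment boundary above can be evaluated directly. The sketch below transcribes Eqs. (1)-(6), assuming air with \(\gamma=1.4\) (a value not quoted explicitly in the text):

```python
import numpy as np

GAMMA = 1.4  # assumed specific heat ratio for air

def normal_shock(M, gamma=GAMMA):
    """Rankine-Hugoniot ratios across a normal shock, Eqs. (1)-(4)."""
    rho21 = (1.0 + gamma) * M**2 / ((gamma - 1.0) * M**2 + 2.0)
    p21 = (2.0 * gamma * M**2 - (gamma - 1.0)) / (gamma + 1.0)
    a21 = np.sqrt(p21 / rho21)                        # a2/a1 = sqrt(T2/T1)
    du_a1 = 2.0 * (M**2 - 1.0) / (M * (gamma + 1.0))  # (U2 - U1)/a1
    return rho21, p21, a21, du_a1

def detachment_angle(M, gamma=GAMMA):
    """Wedge angle (degrees) on the detachment boundary, Eqs. (5)-(6)."""
    d = 2.0 / (gamma + 1.0) * (M**2 - 1.0) / M**2
    a = (1.0 + (gamma - 1.0) * d) / 3.0
    b = 2.0 * d - d**3
    c = gamma * d**2
    e = np.sqrt(a**2 + b / 3.0)
    f = np.arccos((a * b + 2.0 * a**3 - c) / (2.0 * e**3))
    return np.degrees(np.arccos(1.0 / (a + 2.0 * e * np.cos(f / 3.0))))

for M in (2.03, 2.05, 4.70, 7.19):   # incident shock Mach numbers considered in this study
    rho21, p21, a21, du_a1 = normal_shock(M)
    print(f"M = {M}: rho2/rho1 = {rho21:.3f}, p2/p1 = {p21:.3f}, "
          f"detachment wedge angle = {detachment_angle(M):.1f} deg")
```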
## 3 Mathematical Modeling Accounting for two-dimensional flow configurations, the time-dependent Euler equations are solved here. This choice of using the Euler equations aligns with previous studies conducted by various researchers in the field. Therefore, for a 2D Cartesian coordinate system (x, y), the corresponding transport equations for mass, momentum, and energy expressed in matrix-vector form are given by (Houghton, 2017), \[\frac{\partial\mathbf{U}}{\partial t}+\frac{\partial\mathbf{F}}{\partial x}+\frac{\partial\mathbf{G}}{\partial y}=0, \tag{9}\] where \(t\) stands for physical time. The vector of solution variables, \(\mathbf{U}\), and the inviscid flux vectors, \(\mathbf{F}\) and \(\mathbf{G}\), are in turn given by, \[\mathbf{U}=\begin{pmatrix}\rho\\ \rho u\\ \rho v\\ \rho E\end{pmatrix},\qquad\mathbf{F}=\begin{pmatrix}\rho u\\ \rho u^{2}+p\\ \rho uv\\ (\rho E+p)u\end{pmatrix},\qquad\mathbf{G}=\begin{pmatrix}\rho v\\ \rho uv\\ \rho v^{2}+p\\ (\rho E+p)v\end{pmatrix}, \tag{10}\] where \(\rho\), \(u\), \(v\), and \(p\) represent gas density, velocities along the x and y coordinates, and pressure, respectively. In addition, \(E\) is the total energy including the kinetic (\(E_{k}\)) and internal (\(E_{U}\)) energies, \[E=E_{k}+E_{U}, \tag{11}\] \[E_{k}=\frac{1}{2}(u^{2}+v^{2}), \tag{12}\] \[E_{U}=\frac{R}{\gamma-1}T, \tag{13}\] where \(T\) and \(R\) stand for, respectively, flow temperature and gas constant. ## 4 Numerical Modeling This section highlights the numerical modeling approach utilized here, with specific focus on the solver and numerical schemes employed, the geometric configuration accounted for, and the boundary conditions imposed. ### Solver and numerical schemes To conduct the intended numerical simulations, an open-source AMR-based compressible reacting flow solver, named PeleC (PeleC, 2023), is employed in this work. PeleC solves transport equations for mass and momentum in the compressible flow regime. PeleC is built on top of the AMReX framework, which facilitates massively parallel block-structured adaptive mesh refinement (AMR). The validity of PeleC has been established through its previous successful applications in several standard cases, including the Sod shock tube (Henry de Frahan et al., 2022). The governing equations here are closed with the ideal gas equation of state (EoS) available in the PelePhysics submodule, which provides models and parameters associated with thermodynamics, transport properties, and chemical reactions. The system of partial differential equations solved here is spatially discretized using a second-order finite volume approach. Notice that PeleC supports two different discretization methods, (i) the unsplit piecewise parabolic method (PPM) with optional hybrid PPM WENO variants, and (ii) a second-order characteristic-based spatial method coupled with a Runge-Kutta time integration known as a method of lines (MOL). For the present study, only the latter method has been utilized, as it is suitable for complex geometries with embedded boundaries (EB). In addition, for an accurate resolution of the shock waves, adaptive mesh refinement (AMR) is enabled at locations with relatively high density gradients. ### Geometric configuration A two-dimensional computational domain has been adopted here which shares similarities with the one used by Hryniewicki et al. (2016), Figure **2**. More specifically, the computational domain spans over a spatial region with x-coordinates ranging from 0 to 4 meters, and y-coordinates ranging from 0 to 0.75 meters. The domain is discretized using a grid featuring 512 cells in the \(x-\)direction and 96 cells in the \(y-\)direction.
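As a compact illustration of the governing equations of Section 3 (and not of PeleC's internal implementation), the sketch below assembles the conserved state \(\mathbf{U}\) and the inviscid fluxes \(\mathbf{F}\) and \(\mathbf{G}\) of Eq. (10) with the ideal-gas closure of Eqs. (11)-(13):

```python
import numpy as np

GAMMA = 1.4  # assumed specific heat ratio for air

def conserved(rho, u, v, p, gamma=GAMMA):
    """Conserved state U of Eq. (10); E = E_k + E_U from Eqs. (11)-(13)."""
    E = 0.5 * (u**2 + v**2) + p / ((gamma - 1.0) * rho)   # p = rho R T => E_U = p / ((gamma - 1) rho)
    return np.array([rho, rho * u, rho * v, rho * E])

def fluxes(U, gamma=GAMMA):
    """Inviscid flux vectors F and G of Eq. (10) for a single cell state."""
    rho, mx, my, rhoE = U
    u, v = mx / rho, my / rho
    p = (gamma - 1.0) * (rhoE - 0.5 * rho * (u**2 + v**2))
    F = np.array([rho * u, rho * u**2 + p, rho * u * v, (rhoE + p) * u])
    G = np.array([rho * v, rho * u * v, rho * v**2 + p, (rhoE + p) * v])
    return F, G

# Example: quiescent pre-shock state of Case 3 (region 1 ahead of the incident shock).
U1 = conserved(rho=0.387, u=0.0, v=0.0, p=33.3306e3)
print(U1, *fluxes(U1), sep="\n")
```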
It is worth noticing that, in the two-dimensional plane, the grid cells are square shaped with an aspect ratio of unity (\(Dx/Dy=1\)). Besides, the initial position of the shockwave has been established at \(x=0.5\) meters, whereas the rigid wedge is introduced at \(x=3.0\) meters with different wedge angles. To enhance the grid resolution in specific regions of the flow, adaptive mesh refinement (AMR) processes are carried out during the computations. Both to reduce numerical errors and to ensure that grid-independent results are obtained, a mesh independence study was also conducted. The referred study, which main outcomes are summarized in Section 5.1, involved analyzing the results obtained using different AMR levels. ### Boundary and initial conditions It is of particular interest here to analyze the reflected shock waves originated under different incoming flow Mach numbers and wedge angles. To do so, for each of the four cases studied here and listed in Table 1, the initial thermochemical conditions ahead of the incident shock wave (Figure **2**, region 1) are identical to those investigated by Deschambault and Glass (1983). These initial conditions include density, temperature, and pressure of the fluid flow. Furthermore, the flow properties behind the incident shock (Figure **2**, region 2) are determined using the Rankine-Hugoniot equations, Eqs. (1)-(4). Regarding the boundary conditions, all numerical simulations carried out in this work involved the use of a first-order extrapolation (FOExtrap) based outflow boundary condition along the y-direction boundaries. In addition, the upper, lower, and wedge boundaries were set to no-slip wall conditions. Finally, notice that, as shown in Figure **3**, each of the four cases analyzed in this work correspond to a specific shock wave reflection region defined by the detachment and mechanical equilibrium criteria. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline **CASE** & \(\mathbf{\theta_{w}}\) & \(\mathbf{M_{s}}\) & \(\mathbf{P_{1}(Kpa)}\) & \(\mathbf{T_{1}(K)}\) & \(\mathbf{\rho_{1}\Big{(}\dfrac{k\mathbf{g}}{m^{3}}\Big{)}}\) \\ \hline 1 & 63.4\({}^{\circ}\) & 2.05 & 33.3306 & 298.4 & 0.387 \\ 2 & 60\({}^{\circ}\) & 4.70 & 6.13283 & 298.5 & 0.0712 \\ 3 & 27\({}^{\circ}\) & 2.03 & 33.3306 & 299.2 & 0.387 \\ 4 & 20\({}^{\circ}\) & 7.19 & 7.99934 & 298.5 & 0.0929 \\ \hline \hline \end{tabular} \end{table} Table 1: Initial conditions for the four oblique shock wave reflection cases studied. Figure 2: 2D domain for shock wave reflections of an inclined and rigid wedge with angle \(\mathbf{\theta}\)(Hryniewicki et al., 2016). ## 5 Results and Discussion The main numerical results obtained in this work are presented and discussed in this section. Both qualitative and quantitative analyses of the referred results are carried out. ### Mesh independence study A mesh independence analysis was firstly performed here to determine the requirements in terms of grid resolution for the numerical simulations. Initially, a mesh with 512x96 elements and a minimum element size of 7.81 mm was generated and used as the base mesh. On top of this initial mesh, four other meshes were generated by varying the number of AMR levels. Finally, accounting for Case 3 (Table 1), several numerical simulations were conducted using the five meshes generated and the corresponding results are shown in Figure **4**. 
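The resolutions compared in this mesh independence study follow directly from the base grid: 512 cells over 4 m give a 7.81 mm base cell size, and each AMR level is assumed here to halve it (a refinement ratio of 2, consistent with the 0.97 mm quoted for three levels):

```python
# Effective finest cell size of the base 512 x 96 mesh for n AMR levels,
# assuming a refinement ratio of 2 between consecutive levels.
dx0_mm = 4.0 / 512 * 1000.0   # base cell size in mm (7.81 mm)
for levels in range(5):
    print(f"{levels} AMR levels -> finest cell size {dx0_mm / 2**levels:.2f} mm")
```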
As illustrated in Figure **4** (left plot), in terms of wedge wall density, there are no significant differences between the results obtained with the meshes featuring 3 and 4 AMR levels. Consequently, the mesh including 3 AMR levels, featuring a minimum element size of 0.97 mm and a mesh size of about 1.5 million elements, was chosen here to carry out the intended numerical simulations. This mesh configuration was deemed adequate to achieve the desired level of accuracy in the simulations performed in this work. Figure **5** shows the computational mesh utilized here for the numerical simulations carried out. In this figure, the black and gray boxes indicate the regions where adaptive mesh refinement (AMR) was employed. More specifically, the black boxes represent the areas with the highest level of refinement, whereas the gray ones indicate regions with a lower level of refinement. It can be observed from Figure **5** that the area around the wedge surface features the highest level of refinement throughout. This occurs because this wedge region is a critical zone where the shock wave interacts with the wedge wall. As such, it is essential to have a refined and high-quality mesh to both capture the intricate features of the flow and accurately predict the wave-wall reflection interactions. Overall, the mesh was designed to ensure the accuracy of the numerical results by enforcing sufficient mesh resolution in the regions of interest.
Figure 4: Density ratio profiles along the compression ramp surface obtained with meshes featuring different numbers of AMR levels. Right plot is a zoom of the left one.
Figure 3: Oblique shock wave reflection cases studied and their relative location regarding the analytical transition boundaries.
### Density ratio distributions Figure **6** to Figure **9** show density ratio contours obtained from the numerical simulations conducted for the four cases studied here (Table 1), which allows a qualitative comparison with the experimental data presented by Deschambault and Glass (1983). To gain an insight into the observed patterns, it is imperative to delve into the theories and descriptions of regular reflection, single Mach reflection, and transitional Mach reflection. Regular reflection occurs when a shock wave encounters a solid wall at an appropriate angle. In this scenario, the resulting reflected shock wave remains attached to the wall, forming an oblique shock wave. Besides, the incident and reflected shock waves intersect, giving rise to a regular arrangement of shock waves. As supported by the revised transition boundaries theory depicted in Figure 3, Case 1 and Case 2 studied here belong to the region of regular reflection. Both the numerical results and the experimental data corroborate this finding, as no Mach stem is present, and the shock pattern includes only two propagating shock waves (Figure **6** and Figure **7**). In single Mach reflection in turn, the incident shock wave strikes the surface of the wedge, generating a curved reflected shock wave that intersects with the incident one. The point of intersection forms what is called the triple point, which lies above the surface of the wedge. At the triple point, a third shock wave called the Mach stem extends towards the surface of the wedge. This shock pattern including the triple point is clearly noticed in the numerical results obtained for Case 3 and the experimental data from Deschambault and Glass (1983) (Figure **8**).
Finally, transitional Mach reflection involves a second triple point that represents the confluence point of the incident and reflected shocks. This second triple point is slightly visible and manifest itself as a subtle kink or bend in the reflected shock wave. In this scenario, the overall shock pattern is in the process of transitioning from the simpler single Mach reflection pattern to the more intricate double Mach reflection one. Case 4 (Figure **9**) studied here exemplifies this scenario, where the second triple point is barely discernible. Both the numerical results and the experimental data exhibit this trend, which agree with the relevant theory and the von Neumann criteria. Figure 5: Details of computational mesh employed plus AMR levels included and contours of density ratio. Figure 8: Density ratio contours for Case 3 compared with experimental data from Deschambault and Glass (1983). Figure 6: Density ratio contours for Case 1 compared with experimental data from Deschambault and Glass (1983). Figure 7: Density ratio contours for Case 2 compared with experimental data from Deschambault and Glass (1983). ### Density ratio profiles Figure **10** illustrates with blue lines the density ratio profiles along the wedge wall computed for each of the four cases studied here (Table 1). This figure also includes the experimental data obtained by Deschambault and Glass (1983) as red symbols. From Figure **10a**, for Case 1, featuring a Mach number of 2.05 and an angle of incidence of 63.4\({}^{\circ}\), and exhibiting a regular reflection, the numerical results show a relatively good agreement with the experimental data, with no major differences detected in this case. This emphasizes that the numerical predictions accurately captured the supersonic flow characteristics. In Case 2 (Figure **10b**), which features a Mach number of 4.70 and an angle of incidence of 60\({}^{\circ}\), the flow is expected to be in the dual region of Regular and Mach reflections. In this case, the numerical results show a pattern similar to a regular reflection profile in the wedge wall, which is not observed in the experiments. This discrepancy could be attributed to the limitations of the numerical model employed in this work, as it may not be able to fully capture the complex physics of the flow in this particular situation. Nevertheless, the results of the numerical simulations carried out for this case still provide valuable insights into the flow characteristics and indicate the need for further research and development of more accurate models. In Case 3 (Figure **10c**), characterized by a Mach number of 2.03 and an angle of incidence of 27\({}^{\circ}\), a SMR is obtained (Figure **8**). Like Case 1, in this case the numerical results are quite similar to the experimental data and the discrepancies are insignificant. Finally, in Case 4 (Figure **10d**), featuring a Mach number of 7.19 and an angle of incidence of 20\({}^{\circ}\), a TMR is obtained. The numerical results and the experimental data show in this case some discrepancies in the region near the triple point and at the reflected shock. However, the density profile is well captured along all the wedge wall, providing valuable insights into the shockwave interactions and supersonic flow characteristics. Figure 9: Density ratio contours for Case 4 compared with experimental data from Deschambault and Glass (1983). 
## 6 Conclusions This study was particularly focused on regular (RR) and Mach (MR) reflections, which are the two main types of shock wave reflections observed in steady flows. The computational tool PeleC, which includes adaptive mesh refinement (AMR), was used to model the shock wave reflection phenomena, and the obtained numerical results were compared with experimental data available in literature. From the results obtained in this work, it can be concluded that the numerical model employed here is able to accurately capture the supersonic flow characteristics for cases such as those involving regular and single Mach reflections. However, for cases involving more complex physics, such as those featuring dual regions of regular and Mach reflections, the model seems to have limitations that affect its accuracy. Overall, the results discussed here provide valuable insights into shock wave interactions and characteristics of wedge flows. The particular supersonic flow configurations where the numerical model seems to have some limitations highlight the need for further development and use of more accurate models. Notice that the findings of this work contribute to ongoing research in the field of supersonic flows focused on the study of more complex fluid flows such as those featuring 3D configurations, viscous flows, and detonation waves. ## 7 Acknowledgements This work has been supported by the US Army Research Laboratory under Research Grant No. W911NF-22-1-0275. Luis Bravo was supported by the US Army Research Laboratory 6.1 Basic research program in propulsion sciences.